clr_keras_utils module

class clr_keras_utils.CyclicLR(base_lr=0.001, max_lr=0.006, step_size=2000.0, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle')

Bases: keras.callbacks.Callback
This callback implements a cyclical learning rate policy (CLR). The method cycles the learning rate between two boundaries with some constant frequency.

# Arguments
- base_lr: initial learning rate, which is the lower boundary in the cycle.
- max_lr: upper boundary in the cycle. Functionally, it defines the cycle amplitude (max_lr - base_lr). The lr at any cycle is the sum of base_lr and some scaling of the amplitude; therefore max_lr may not actually be reached, depending on the scaling function.
- step_size: number of training iterations per half cycle. The authors suggest setting step_size to 2-8x the number of training iterations per epoch.
- mode: one of {triangular, triangular2, exp_range}. Default 'triangular'. Values correspond to the policies detailed below. If scale_fn is not None, this argument is ignored.
- gamma: constant in the 'exp_range' scaling function: gamma**(cycle iterations).
- scale_fn: custom scaling policy defined by a single-argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. The mode parameter is ignored when scale_fn is set.
- scale_mode: one of {'cycle', 'iterations'}. Defines whether scale_fn is evaluated on the cycle number or on cycle iterations (training iterations since the start of the cycle). Default is 'cycle'.
The amplitude of the cycle can be scaled on a per-iteration or per-cycle basis. This class has three built-in policies, as put forth in the paper:

- "triangular": a basic triangular cycle with no amplitude scaling.
- "triangular2": a basic triangular cycle that scales the initial amplitude by half each cycle.
- "exp_range": a cycle that scales the initial amplitude by gamma**(cycle iterations) at each cycle iteration.

For more detail, please see the paper.
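The triangular waveform underlying these policies can be sketched directly. The following is an illustrative reconstruction, not the class's internal code; it assumes NumPy, and the helper name triangular_lr is hypothetical.

```python
import numpy as np

def triangular_lr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000.0):
    """Illustrative sketch of the basic 'triangular' CLR policy."""
    # Which cycle we are in (1-indexed); each cycle spans 2 * step_size iterations.
    cycle = np.floor(1 + iteration / (2 * step_size))
    # Position within the current cycle, mapped so that x == 0 at the peak.
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    # Linearly interpolate between base_lr and max_lr.
    return base_lr + (max_lr - base_lr) * np.maximum(0, 1 - x)

# lr rises from base_lr to max_lr over the first step_size iterations,
# then falls back to base_lr over the next step_size iterations.
print(triangular_lr(0))      # ~0.001
print(triangular_lr(1000))   # ~0.0035
print(triangular_lr(2000))   # ~0.006
```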
# Example for CIFAR-10 with batch size 100:
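The original example code did not survive extraction; the following is a minimal sketch of typical usage, assuming a compiled Keras model and training arrays named model, X_train, and Y_train.

```python
# Cycle the learning rate between 0.001 and 0.006 every 4000 iterations.
clr = CyclicLR(base_lr=0.001, max_lr=0.006,
               step_size=2000., mode='triangular')
model.fit(X_train, Y_train, callbacks=[clr])
```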
The class also supports custom scaling functions:
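Again, the original snippet is missing; below is one possible custom scaling function (a sinusoidal envelope that satisfies 0 <= scale_fn(x) <= 1 for x >= 0), with the same assumed model and data names as above.

```python
import numpy as np

# Custom per-cycle scaling: amplitude follows a sinusoidal envelope.
clr_fn = lambda x: 0.5 * (1 + np.sin(x * np.pi / 2.))
clr = CyclicLR(base_lr=0.001, max_lr=0.006,
               step_size=2000., scale_fn=clr_fn,
               scale_mode='cycle')
model.fit(X_train, Y_train, callbacks=[clr])
```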
# References
[Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186)
on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters
- epoch – Integer, index of epoch.
- logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: `{'loss': 0.2, 'accuracy': 0.7}`.
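As an illustration of this hook (not part of clr_keras_utils), a custom callback overriding on_epoch_end might look like the sketch below; the class name and the specific metric keys it reads are assumptions that depend on how the model was compiled.

```python
import keras  # or: from tensorflow import keras, depending on the Keras in use

class EpochLogger(keras.callbacks.Callback):
    """Hypothetical callback that prints selected metrics at the end of each epoch."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # 'loss' is reported for the training epoch; 'val_loss' appears only
        # when validation data is supplied to model.fit().
        msg = "epoch {}: loss={:.4f}".format(epoch, logs.get('loss', float('nan')))
        if 'val_loss' in logs:
            msg += ", val_loss={:.4f}".format(logs['val_loss'])
        print(msg)
```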