tinyms.optimizers

The optimizers module provides common optimizers for training, such as SGD, Adam and Momentum. An optimizer takes the computed gradients and uses them to update the network parameters.

class tinyms.optimizers.Optimizer(learning_rate, parameters, weight_decay=0.0, loss_scale=1.0)[source]

Base class for all optimizers.

Note

This class defines the API to add Ops to train a model. Never use this class directly, but instead instantiate one of its subclasses.

Different parameter groups can set different learning_rate, weight_decay and grad_centralization.

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight_decay is positive. For most optimizers, when not separating parameters, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

Parameters
  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float.

  • parameters (Union[list[Parameter], list[dict]]) –

    When the parameters is a list of Parameter which will be updated, the element in parameters must be class Parameter. When the parameters is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • weight_decay (Union[float, int]) – An int or a floating point value for the weight decay. It must be equal to or greater than 0. If the type of weight_decay input is int, it will be converted to float. Default: 0.0.

  • loss_scale (float) – A floating point value for the loss scale. It must be greater than 0. If the type of loss_scale input is int, it will be converted to float. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If loss_scale is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If weight_decay is less than 0.

  • ValueError – If learning_rate is a Tensor, but the dimension of tensor is greater than 1.

Supported Platforms:

Ascend GPU CPU
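
For illustration, here is a minimal, hedged sketch of the three supported learning_rate forms, using the Momentum subclass since Optimizer is never instantiated directly; net, nn and learning_rate_schedule are assumed to be available as in the examples further below, and the concrete values are arbitrary.

>>> # fixed learning rate: a float
>>> optim = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>>
>>> # dynamic learning rate: an Iterable or 1-D Tensor, the i-th step takes the i-th value
>>> optim = nn.Momentum(net.trainable_params(), learning_rate=[0.1, 0.05, 0.01], momentum=0.9)
>>>
>>> # dynamic learning rate: a LearningRateSchedule evaluated per step during training
>>> poly_decay_lr = learning_rate_schedule.PolynomialDecayLR(learning_rate=0.1, end_learning_rate=0.01,
...                                                          decay_steps=4, power=0.5)
>>> optim = nn.Momentum(net.trainable_params(), learning_rate=poly_decay_lr, momentum=0.9)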

broadcast_params(optim_result)[source]

Apply Broadcast operations in the sequential order of parameter groups.

Returns

bool, the status flag.

decay_weight(gradients)[source]

Weight decay.

An approach to reduce the overfitting of a deep learning neural network model.

Parameters

gradients (tuple[Tensor]) – The gradients of self.parameters, and have the same shape as self.parameters.

Returns

tuple[Tensor], The gradients after weight decay.

get_lr()[source]

Get the learning rate of current step.

Returns

float, the learning rate of current step.

get_lr_parameter(param)[source]

Get the learning rate of parameter.

Parameters

param (Union[Parameter, list[Parameter]]) – The Parameter or list of Parameter.

Returns

Parameter, single Parameter or list[Parameter] according to the input type.

gradients_centralization(gradients)[source]

Gradients centralization.

A method for optimizing convolutional layer parameters to improve the training speed of a deep learning neural network model.

Parameters

gradients (tuple[Tensor]) – The gradients of self.parameters, and have the same shape as self.parameters.

Returns

tuple[Tensor], The gradients after gradients centralization.

scale_grad(gradients)[source]

Loss scale for mixed precision.

An approach used in mixed precision training to improve the speed and energy efficiency of training deep neural networks.

Parameters

gradients (tuple[Tensor]) – The gradients of self.parameters, and have the same shape as self.parameters.

Returns

tuple[Tensor], The gradients after loss scale.

property target

The method is used to determine whether the parameter is updated on host or device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.

property unique

This property is used to decide whether to make the gradients unique. The input type is bool. The property is read-only.
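
To show how the helper methods above fit together, the following is a minimal, hypothetical subclass sketch (PyNative-style; real optimizers typically build the update with ops.HyperMap so it also runs in graph mode). SimpleSGD, its default values and the use of mindspore.ops.Assign are assumptions for illustration only.

>>> import mindspore.ops as ops
>>> from tinyms.optimizers import Optimizer
>>> class SimpleSGD(Optimizer):
...     def __init__(self, params, learning_rate=0.1, weight_decay=0.0, loss_scale=1.0):
...         super(SimpleSGD, self).__init__(learning_rate, params, weight_decay, loss_scale)
...         self.assign = ops.Assign()
...     def construct(self, gradients):
...         gradients = self.decay_weight(gradients)              # apply weight decay to the gradients
...         gradients = self.gradients_centralization(gradients)  # centralize gradients of convolution parameters
...         gradients = self.scale_grad(gradients)                # divide the gradients by the loss scale
...         lr = self.get_lr()                                    # learning rate of the current step
...         for param, grad in zip(self.parameters, gradients):
...             self.assign(param, param - lr * grad)             # plain gradient-descent update
...         return True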

class tinyms.optimizers.Momentum(*args, **kwargs)[source]

Implements the Momentum algorithm.

Refer to the paper on the importance of initialization and momentum in deep learning for more details.

\[v_{t+1} = v_{t} \ast u + gradients\]

If use_nesterov is True:

\[p_{t+1} = p_{t} - (grad \ast lr + v_{t+1} \ast u \ast lr)\]

If use_nesterov is False:

\[p_{t+1} = p_{t} - lr \ast v_{t+1}\]

Here, grad, lr, p, v and u denote the gradients, learning_rate, params, moments and momentum, respectively.
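
As a plain NumPy illustration of these update rules (the values are arbitrary):

>>> import numpy as np
>>> u, lr = 0.9, 0.1                                   # momentum and learning rate
>>> p, v, grad = np.array([1.0]), np.array([0.2]), np.array([0.5])
>>> v = v * u + grad                                   # v_{t+1} = v_{t} * u + gradients
>>> p_nesterov = p - (grad * lr + v * u * lr)          # update when use_nesterov=True
>>> p_plain = p - lr * v                               # update when use_nesterov=False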

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float.

  • momentum (float) – Hyperparameter of type float, means momentum for the moving average. It must be at least 0.0.

  • weight_decay (int, float) – Weight decay (L2 penalty). It must be equal to or greater than 0.0. Default: 0.0.

  • loss_scale (float) – A floating point value for the loss scale. It must be greater than 0.0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

tuple[bool]. All elements are True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If loss_scale or momentum is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_nesterov is not a bool.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If weight_decay or momentum is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)
>>> # The conv_params's parameters will use a learning rate of the default value 0.1, a weight decay of 0.01 and
>>> # grad centralization of True.
>>> # The no_conv_params's parameters will use a learning rate of 0.01, a weight decay of the default value 0.0
>>> # and grad centralization of False.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
class tinyms.optimizers.LARS(*args, **kwargs)[source]

Implements the LARS algorithm with LARSUpdate Operator.

LARS is an optimization algorithm employing a large batch optimization technique. Refer to paper LARGE BATCH TRAINING OF CONVOLUTIONAL NETWORKS.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ \lambda = \frac{\theta \text{ * } || \omega || }{|| g_{t} || \text{ + } \delta \text{ * } || \omega || } \\ \lambda = \begin{cases} \min(\frac{\lambda}{\alpha }, 1) & \text{ if } clip = True \\ \lambda & \text{ otherwise } \end{cases}\\ g_{t+1} = \lambda * (g_{t} + \delta * \omega) \end{array}\end{split}\]

\(\theta\) represents coefficient, \(\omega\) represents parameters, \(g\) represents gradients, \(t\) represents updating step, \(\delta\) represents weight_decay, \(\alpha\) represents learning_rate, \(clip\) represents use_clip.
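
A rough NumPy sketch of the layer-wise scaling that LARS applies to a gradient before handing it to the wrapped optimizer (the values are arbitrary):

>>> import numpy as np
>>> theta, delta, alpha = 0.001, 0.0001, 0.1           # coefficient, weight_decay, learning_rate
>>> w, g = np.random.randn(128), np.random.randn(128)  # one layer's parameters and gradients
>>> lam = theta * np.linalg.norm(w) / (np.linalg.norm(g) + delta * np.linalg.norm(w))
>>> lam = min(lam / alpha, 1.0)                        # clipping branch, applied when use_clip=True
>>> g_scaled = lam * (g + delta * w)                   # gradient passed on to the wrapped optimizer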

Parameters
  • optimizer (Optimizer) – MindSpore optimizer for which to wrap and modify gradients.

  • epsilon (float) – Term added to the denominator to improve numerical stability. Default: 1e-05.

  • coefficient (float) – Trust coefficient for calculating the local learning rate. Default: 0.001.

  • use_clip (bool) – Whether to use clip operation for calculating the local learning rate. Default: False.

  • lars_filter (Function) – A function to determine whether to apply the LARS algorithm. Default: lambda x: ‘LayerNorm’ not in x.name and ‘bias’ not in x.name.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params in the optimizer, the shape is the same as the params in the optimizer.

Outputs:

Union[Tensor[bool], tuple[Parameter]], it depends on the output of optimizer.

Supported Platforms:

Ascend CPU

Examples

>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> opt = nn.Momentum(net.trainable_params(), 0.1, 0.9)
>>> opt_lars = nn.LARS(opt, epsilon=1e-08, coefficient=0.02)
>>> model = Model(net, loss_fn=loss, optimizer=opt_lars, metrics=None)
class tinyms.optimizers.Adam(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \beta_2 * v_{t} + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w_{t+1} = w_{t} - l * \frac{m_{t+1}}{\sqrt{v_{t+1}} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector moment1, \(v\) represents the 2nd moment vector moment2, \(g\) represents gradients, \(l\) represents scaling factor, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents params, \(\epsilon\) represents eps.
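
The update can be traced step by step in plain NumPy (arbitrary values, a single scalar parameter):

>>> import numpy as np
>>> beta1, beta2, eps, alpha, t = 0.9, 0.999, 1e-8, 1e-3, 1
>>> w, m, v, g = np.array([0.5]), np.zeros(1), np.zeros(1), np.array([0.2])
>>> m = beta1 * m + (1 - beta1) * g                            # 1st moment
>>> v = beta2 * v + (1 - beta2) * g * g                        # 2nd moment
>>> l = alpha * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)     # scaling factor
>>> w = w - l * m / (np.sqrt(v) + eps)                         # parameter update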

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

The sparse strategy is applied when the SparseGatherV2 operator is used in the forward network. The sparse feature is under continuous development. To execute the sparse strategy on the host, set target to the CPU.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” is in the keys, the value of the corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” is in the keys, the value of the corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” is in the keys, the value must be the order of parameters and the order will be followed in the optimizer. There are no other keys in the dict and the parameters which in the ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use the dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 1e-3.

  • beta1 (float) – The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). Default: 0.9.

  • beta2 (float) – The exponential decay rate for the 2nd moment estimations. Should be in range (0.0, 1.0). Default: 0.999.

  • eps (float) – Term added to the denominator to improve numerical stability. Should be greater than 0. Default: 1e-8.

  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

  • weight_decay (float) – Weight decay (L2 penalty). It must be equal to or greater than 0. Default: 0.0.

  • loss_scale (float) – A floating point value for the loss scale. Should be greater than 0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If beta1, beta2, eps or loss_scale is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking or use_nesterov is not a bool.

  • ValueError – If loss_scale or eps is less than or equal to 0.

  • ValueError – If beta1, beta2 is not in range (0.0, 1.0).

  • ValueError – If weight_decay is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.Adam(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0 and grad
>>> # centralization of False.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
property target

The method is used to determine whether the parameter is updated on host or device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.

class tinyms.optimizers.AdamWeightDecay(params, learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-06, weight_decay=0.0)[source]

Implements the Adam algorithm with weight decay fix (decoupled weight decay).

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \beta_2 * v_{t} + (1 - \beta_2) * g * g \\ update = \frac{m_{t+1}}{\sqrt{v_{t+1}} + eps} \\ update = \begin{cases} update + weight\_decay * w_{t} & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w_{t+1} = w_{t} - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector moment1, \(v\) represents the 2nd moment vector moment2, \(g\) represents gradients, \(lr\) represents learning_rate, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(w\) represents params.
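
A NumPy sketch of a single step, showing that the weight decay acts on the parameter itself rather than entering the moment estimates (arbitrary values):

>>> import numpy as np
>>> beta1, beta2, eps, lr, wd = 0.9, 0.999, 1e-6, 1e-3, 0.01
>>> w, m, v, g = np.array([0.5]), np.zeros(1), np.zeros(1), np.array([0.2])
>>> m = beta1 * m + (1 - beta1) * g
>>> v = beta2 * v + (1 - beta2) * g * g
>>> update = m / (np.sqrt(v) + eps)
>>> update = update + wd * w                           # applied only when weight_decay > 0
>>> w = w - lr * update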

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

To improve performance when using parameter groups, a customized order of parameters is supported.

There is usually no connection between an optimizer and mixed precision. However, when FixedLossScaleManager is used and drop_overflow_update in FixedLossScaleManager is set to False, the optimizer needs to set a loss_scale. Since this optimizer has no loss_scale argument, the loss scale needs to be handled by other means; refer to the LossScale document for how to process it correctly.
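
As a hedged sketch of that advice (assuming the mindspore.Model and mindspore.FixedLossScaleManager interfaces, with net and loss defined as in the examples below), one option is to keep drop_overflow_update at True so the loss-scale manager skips overflowed updates and the optimizer itself needs no loss_scale:

>>> from mindspore import Model, FixedLossScaleManager
>>> loss_scale_manager = FixedLossScaleManager(loss_scale=1024.0, drop_overflow_update=True)
>>> optim = nn.AdamWeightDecay(net.trainable_params(), learning_rate=1e-3, weight_decay=0.01)
>>> model = Model(net, loss_fn=loss, optimizer=optim, loss_scale_manager=loss_scale_manager)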

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” is in the keys, the value of the corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” is in the keys, the value of the corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” is in the keys, the value must be the order of parameters and the order will be followed in the optimizer. There are no other keys in the dict and the parameters which in the ‘order_params’ must be in one of group parameters.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use the dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 1e-3.

  • beta1 (float) – The exponential decay rate for the 1st moment estimations. Default: 0.9. Should be in range (0.0, 1.0).

  • beta2 (float) – The exponential decay rate for the 2nd moment estimations. Default: 0.999. Should be in range (0.0, 1.0).

  • eps (float) – Term added to the denominator to improve numerical stability. Default: 1e-6. Should be greater than 0.

  • weight_decay (float) – Weight decay (L2 penalty). It must be equal to or greater than 0. Default: 0.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

tuple[bool], all elements are True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If beta1, beta2 or eps is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • ValueError – If eps is less than or equal to 0.

  • ValueError – If beta1, beta2 is not in range (0.0, 1.0).

  • ValueError – If weight_decay is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.AdamWeightDecay(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.AdamWeightDecay(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
class tinyms.optimizers.LazyAdam(*args, **kwargs)[source]

This optimizer applies a lazy Adam algorithm when the gradient is sparse.

The original Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \beta_2 * v_{t} + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w_{t+1} = w_{t} - l * \frac{m_{t+1}}{\sqrt{v_{t+1}} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector moment1, \(v\) represents the 2nd moment vector moment2, \(g\) represents gradients, \(l\) represents scaling factor, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents params, \(\epsilon\) represents eps.

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

The sparse strategy is applied when the SparseGatherV2 operator is used in the forward network. Note that the sparse behavior is not equivalent to the original Adam algorithm, as only the parameters at the current indices are updated. The sparse feature is under continuous development. To execute the sparse strategy on the host, set target to the CPU.
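
For example, a hedged sketch of moving the sparse update to the host (net assumed as in the examples below):

>>> optim = nn.LazyAdam(net.trainable_params(), learning_rate=0.1)
>>> optim.target = "CPU"    # run the sparse optimizer update on the host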

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr” and “weight_decay” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 1e-3.

  • beta1 (float) – The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). Default: 0.9.

  • beta2 (float) – The exponential decay rate for the 2nd moment estimations. Should be in range (0.0, 1.0). Default: 0.999.

  • eps (float) – Term added to the denominator to improve numerical stability. Should be greater than 0. Default: 1e-8.

  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

  • weight_decay (Union[float, int]) – Weight decay (L2 penalty). Default: 0.0.

  • loss_scale (float) – A floating point value for the loss scale. Should be equal to or greater than 1. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If beta1, beta2, eps or loss_scale is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking or use_nesterov is not a bool.

  • ValueError – If loss_scale or eps is less than or equal to 0.

  • ValueError – If beta1, beta2 is not in range (0.0, 1.0).

  • ValueError – If weight_decay is less than 0.

Supported Platforms:

Ascend GPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.LazyAdam(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.LazyAdam(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0 and grad
>>> # centralization of False.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
property target

The method is used to determine whether the parameter is updated on host or device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.

class tinyms.optimizers.AdamOffload(params, learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-08, use_locking=False, use_nesterov=False, weight_decay=0.0, loss_scale=1.0)[source]

This optimizer offloads the Adam optimizer computation to the host CPU while keeping the parameters updated on the device, in order to minimize the memory cost. Although this brings an increase in performance overhead, the optimizer can be used to run a larger model.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \beta_2 * v_{t} + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w_{t+1} = w_{t} - l * \frac{m_{t+1}}{\sqrt{v_{t+1}} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector moment1, \(v\) represents the 2nd moment vector moment2, \(g\) represents gradients, \(l\) represents scaling factor, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents params, \(\epsilon\) represents eps.

Note

This optimizer only supports GRAPH_MODE currently.

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

To improve performance when using parameter groups, a customized order of parameters is supported.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” is in the keys, the value of the corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” is in the keys, the value of the corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” is in the keys, the value must be the order of parameters and the order will be followed in the optimizer. There are no other keys in the dict and the parameters which in the ‘order_params’ must be in one of group parameters.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use the dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 1e-3.

  • beta1 (float) – The exponential decay rate for the 1st moment estimations. Should be in range (0.0, 1.0). Default: 0.9.

  • beta2 (float) – The exponential decay rate for the 2nd moment estimations. Should be in range (0.0, 1.0). Default: 0.999.

  • eps (float) – Term added to the denominator to improve numerical stability. Should be greater than 0. Default: 1e-8.

  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

  • weight_decay (float) – Weight decay (L2 penalty). It must be equal to or greater than 0. Default: 0.0.

  • loss_scale (float) – A floating point value for the loss scale. Should be greater than 0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If beta1, beta2, eps or loss_scale is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking or use_nesterov is not a bool.

  • ValueError – If loss_scale or eps is less than or equal to 0.

  • ValueError – If beta1, beta2 is not in range (0.0, 1.0).

  • ValueError – If weight_decay is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.AdamOffload(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.AdamOffload(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
class tinyms.optimizers.Lamb(*args, **kwargs)[source]

LAMB (Layer-wise Adaptive Moments optimizer for Batch training) with dynamic learning rate.

LAMB is an optimization algorithm employing a layerwise adaptive large batch optimization technique. Refer to the paper LARGE BATCH OPTIMIZATION FOR DEEP LEARNING: TRAINING BERT IN 76 MINUTES.

The LAMB optimizer aims to increase the training batch size without reducing the accuracy, and it supports adaptive element-by-element update and accurate layered correction.

The updating of parameters follows:

\[\begin{split}\begin{gather*} m_t = \beta_1 m_{t - 1}+ (1 - \beta_1)g_t\\ v_t = \beta_2 v_{t - 1} + (1 - \beta_2)g_t^2\\ m_t = \frac{m_t}{1 - \beta_1^t}\\ v_t = \frac{v_t}{1 - \beta_2^t}\\ r_t = \frac{m_t}{\sqrt{v_t}+\epsilon}\\ w_t = w_{t-1} -\eta_t \frac{\| w_{t-1} \|}{\| r_t + \lambda w_{t-1} \|} (r_t + \lambda w_{t-1}) \end{gather*}\end{split}\]

where \(m\) is the 1st moment, and \(v\) the 2nd moment, \(\eta\) the learning rate, \(\lambda\) the LAMB weight decay rate.

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

There is usually no connection between an optimizer and mixed precision. However, when FixedLossScaleManager is used and drop_overflow_update in FixedLossScaleManager is set to False, the optimizer needs to set a loss_scale. Since this optimizer has no loss_scale argument, the loss scale needs to be handled by other means; refer to the LossScale document for how to process it correctly.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float.

  • beta1 (float) – The exponential decay rate for the 1st moment estimations. Default: 0.9. Should be in range (0.0, 1.0).

  • beta2 (float) – The exponential decay rate for the 2nd moment estimations. Default: 0.999. Should be in range (0.0, 1.0).

  • eps (float) – Term added to the denominator to improve numerical stability. Default: 1e-6. Should be greater than 0.

  • weight_decay (float) – Weight decay (L2 penalty). Default: 0.0. Should be equal to or greater than 0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

tuple[bool], all elements are True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If beta1, beta2 or eps is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • ValueError – If eps is less than or equal to 0.

  • ValueError – If beta1, beta2 is not in range (0.0, 1.0).

  • ValueError – If weight_decay is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.Lamb(params=net.trainable_params(), learning_rate=0.1)
>>>
>>> #2) Use parameter groups and set different values
>>> poly_decay_lr = learning_rate_schedule.PolynomialDecayLR(learning_rate=0.1, end_learning_rate=0.01,
...                                                    decay_steps=4, power = 0.5)
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': poly_decay_lr},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.Lamb(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use dynamic learning rate of poly decay learning rate and default
>>> # weight decay of 0.0 and grad centralization of False.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
class tinyms.optimizers.SGD(*args, **kwargs)[source]

Implements stochastic gradient descent. Momentum is optional.

Introduction to SGD can be found at https://en.wikipedia.org/wiki/Stochastic_gradient_descent. Nesterov momentum is based on the formula from paper On the importance of initialization and momentum in deep learning.

\[v_{t+1} = u \ast v_{t} + gradient \ast (1-dampening)\]

If nesterov is True:

\[p_{t+1} = p_{t} - lr \ast (gradient + u \ast v_{t+1})\]

If nesterov is False:

\[p_{t+1} = p_{t} - lr \ast v_{t+1}\]

Note that for the first step, \(v_{t+1} = gradient\).

Here, p, v and u denote the parameters, accum and momentum, respectively.
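
A NumPy sketch of one update step (arbitrary values; on the very first step v is simply the gradient):

>>> import numpy as np
>>> u, lr, dampening = 0.9, 0.1, 0.0                   # momentum, learning_rate, dampening
>>> p, v, g = np.array([1.0]), np.array([0.2]), np.array([0.5])
>>> v = u * v + g * (1 - dampening)                    # v_{t+1}
>>> p_nesterov = p - lr * (g + u * v)                  # nesterov=True
>>> p_plain = p - lr * v                               # nesterov=False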

Note

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 0.1.

  • momentum (float) – A floating point value for the momentum. It must be at least 0.0. Default: 0.0.

  • dampening (float) – A floating point value of dampening for momentum. It must be at least 0.0. Default: 0.0.

  • weight_decay (float) – Weight decay (L2 penalty). It must be equal to or greater than 0. Default: 0.0.

  • nesterov (bool) – Enables Nesterov momentum. If nesterov is used, momentum must be positive and dampening must be equal to 0.0. Default: False.

  • loss_scale (float) – A floating point value for the loss scale, which must be larger than 0.0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

Tensor[bool], the value is True.

Raises

ValueError – If the momentum, dampening or weight_decay value is less than 0.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.SGD(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params,'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.SGD(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use the default learning rate of 0.1, the default weight decay of 0.0 and
>>> # grad centralization of True.
>>> # The no_conv_params's parameters will use a learning rate of 0.01, the default weight decay of 0.0 and
>>> # grad centralization of False.
>>> # The final order of the parameters that the optimizer follows is given by the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
class tinyms.optimizers.FTRL(*args, **kwargs)[source]

Implements the FTRL algorithm with ApplyFtrl Operator.

FTRL is an online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions. Refer to paper Adaptive Bound Optimization for Online Convex Optimization. Refer to paper Ad Click Prediction: a View from the Trenches for engineering document.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = m_{t} + g^2 \\ u_{t+1} = u_{t} + g - \frac{m_{t+1}^\text{-p} - m_{t}^\text{-p}}{\alpha } * \omega_{t} \\ \omega_{t+1} = \begin{cases} \frac{(sign(u_{t+1}) * l1 - u_{t+1})}{\frac{m_{t+1}^\text{-p}}{\alpha } + 2 * l2 } & \text{ if } |u_{t+1}| > l1 \\ 0.0 & \text{ otherwise } \end{cases}\\ \end{array}\end{split}\]

\(m\) represents accum, \(g\) represents grads, \(t\) represents updating step, \(u\) represents linear, \(p\) represents lr_power, \(\alpha\) represents learning_rate, \(\omega\) represents params.
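
A NumPy sketch of a single FTRL update for one scalar weight (arbitrary values; p stands for lr_power, so with the default -0.5 the term m ** (-p) is the square root of the accumulator):

>>> import numpy as np
>>> alpha, p, l1, l2 = 0.001, -0.5, 0.0, 0.0           # learning_rate, lr_power, l1, l2
>>> w, g = np.array([0.3]), np.array([0.2])
>>> m, u = np.array([0.1]), np.zeros(1)                # accum (initial_accum) and linear
>>> m_new = m + g * g
>>> u = u + g - (m_new ** (-p) - m ** (-p)) / alpha * w
>>> m = m_new
>>> w = np.where(np.abs(u) > l1,
...              (np.sign(u) * l1 - u) / (m ** (-p) / alpha + 2 * l2),
...              0.0)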

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on all of the parameters.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True. Gradient centralization can only be applied to the parameters of convolution layers; if it is set to True for the parameters of non-convolution layers, an error will be reported.

To improve performance when using parameter groups, a customized order of parameters is supported.

The sparse strategy is applied when the SparseGatherV2 operator is used in the forward network. The sparse feature is under continuous development. To execute the sparse strategy on the host, set target to the CPU.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Using different learning rate by separating parameters is currently not supported.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • initial_accum (float) – The starting value for accumulators, must be zero or positive values. Default: 0.1.

  • learning_rate (float) – The learning rate value, must be zero or positive, dynamic learning rate is currently not supported. Default: 0.001.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero. Default: -0.5.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • use_locking (bool) – If true, use locks for updating operation. Default: False.

  • loss_scale (float) – Value for the loss scale. It must be greater than 0.0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

  • weight_decay (Union[float, int]) – Weight decay value to multiply weight, must be zero or positive value. Default: 0.0.

Inputs:
  • grads (tuple[Tensor]) - The gradients of params in the optimizer, the shape is the same as the params in optimizer.

Outputs:

tuple[Parameter], the updated parameters, the shape is the same as params.

Raises
  • TypeError – If initial_accum, learning_rate, lr_power, l1, l2 or loss_scale is not a float.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking is not a bool.

  • ValueError – If lr_power is greater than 0.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If initial_accum, l1 or l2 is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.FTRL(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.FTRL(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use default weight decay of 0.0 and grad centralization of False.
>>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
property target

The method is used to determine whether the parameter is updated on host or device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.
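
As a usage sketch only (assuming net is defined as in the example above and the sparse update is wanted on the host, as described in the note), the property can be assigned directly:

>>> optim = nn.FTRL(params=net.trainable_params())
>>> optim.target = "CPU"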

class tinyms.optimizers.RMSProp(*args, **kwargs)[source]

Implements Root Mean Squared Propagation (RMSProp) algorithm.

Update params according to the RMSProp algorithm.

The equation is as follows:

\[s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2\]
\[m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} + \epsilon}} \nabla Q_{i}(w)\]
\[w = w - m_{t+1}\]

The first equation calculates the moving average of the squared gradient for each weight. The gradient is then divided by \(\sqrt{s_{t+1} + \epsilon}\).

If centered is True:

\[g_{t+1} = \rho g_{t} + (1 - \rho)\nabla Q_{i}(w)\]
\[s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2\]
\[m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} - g_{t+1}^2 + \epsilon}} \nabla Q_{i}(w)\]
\[w = w - m_{t+1}\]

where \(w\) represents params, which will be updated. \(g_{t+1}\) is mean gradients, \(g_{t}\) is the last moment of \(g_{t+1}\). \(s_{t+1}\) is the mean square gradients, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) is moment, the delta of w, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) is learning rate, represents learning_rate. \(\nabla Q_{i}(w)\) is gradients, represents gradients.
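
As a minimal NumPy sketch of the equations above (illustrative only; it is not the operator used by the optimizer, and the names s, m and g_avg are chosen here for readability), a single RMSProp step could be written as:

import numpy as np

def rmsprop_step(w, grad, s, m, g_avg=None, rho=0.9, beta=0.0, eta=0.1, eps=1e-10, centered=False):
    # s: mean square gradients, m: moment (the delta of w), g_avg: mean gradients
    s = rho * s + (1.0 - rho) * grad ** 2
    if centered:
        g_avg = rho * g_avg + (1.0 - rho) * grad
        denom = np.sqrt(s - g_avg ** 2 + eps)
    else:
        denom = np.sqrt(s + eps)
    m = beta * m + eta * grad / denom
    w = w - m
    return w, s, m, g_avg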

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True, but the gradient centralization can only be applied to the parameters of the convolution layer. If the parameters of the non convolution layer are set to True, an error will be reported.

To improve parameter groups performance, the customized order of parameters can be supported.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 0.1.

  • decay (float) – Decay rate. Should be equal to or greater than 0. Default: 0.9.

  • momentum (float) – Hyperparameter of type float, means momentum for the moving average. Should be equal to or greater than 0. Default: 0.0.

  • epsilon (float) – Term added to the denominator to improve numerical stability. Should be greater than 0. Default: 1e-10.

  • use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

  • centered (bool) – If true, gradients are normalized by the estimated variance of the gradient. Default: False.

  • loss_scale (float) – A floating point value for the loss scale. Should be greater than 0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

  • weight_decay (Union[float, int]) – Weight decay (L2 penalty). Should be equal to or greater than 0. Default: 0.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If decay, momentum, epsilon or loss_scale is not a float.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking or centered is not a bool.

  • ValueError – If epsilon is less than or equal to 0.

  • ValueError – If decay or momentum is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.RMSProp(params=net.trainable_params(), learning_rate=0.1)
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.RMSProp(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0 and grad
>>> # centralization of False.
>>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
class tinyms.optimizers.ProximalAdagrad(*args, **kwargs)[source]

Implements the ProximalAdagrad algorithm with ApplyProximalAdagrad Operator.

ProximalAdagrad is an algorithm for online learning and stochastic optimization. Refer to the paper Efficient Learning using Forward-Backward Splitting.

\[accum_{t+1} = accum_{t} + grad * grad\]
\[\text{prox_v} = var_{t} - lr * grad * \frac{1}{\sqrt{accum_{t+1}}}\]
\[var_{t+1} = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0)\]

Here grad, lr, var, accum and t denote the gradients, learning_rate, params, accumulation and the current step respectively.
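
As an illustration of the three formulas above (a plain NumPy sketch, not the ApplyProximalAdagrad operator itself), one update step could look like:

import numpy as np

def proximal_adagrad_step(var, grad, accum, lr=0.001, l1=0.0, l2=0.0):
    # accumulate squared gradients, then apply the proximal update
    accum = accum + grad * grad
    prox_v = var - lr * grad / np.sqrt(accum)
    var = np.sign(prox_v) / (1.0 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)
    return var, accum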

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True, but the gradient centralization can only be applied to the parameters of the convolution layer. If the parameters of the non convolution layer are set to True, an error will be reported.

To improve parameter groups performance, the customized order of parameters can be supported.

The sparse strategy is applied while the SparseGatherV2 operator is used in the forward network. The sparse feature is under continuous development. If you want the sparse strategy to be executed on the host, set the target to the CPU.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • accum (float) – The starting value for accumulators, must be zero or positive values. Default: 0.1.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 0.001.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • use_locking (bool) – If true, use locks for updating operation. Default: False.

  • loss_scale (float) – Value for the loss scale. It must be greater than 0.0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

  • weight_decay (Union[float, int]) – Weight decay value to multiply weight, must be zero or positive value. Default: 0.0.

Inputs:
  • grads (tuple[Tensor]) - The gradients of params in the optimizer, the shape is the same as the params in optimizer.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If accum, l1, l2 or loss_scale is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If accum, l1, l2 or weight_decay is less than 0.

Supported Platforms:

Ascend

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.ProximalAdagrad(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.ProximalAdagrad(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0 and grad
>>> # centralization of False.
>>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
property target

The method is used to determine whether the parameter is updated on host or device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.

class tinyms.optimizers.Adagrad(*args, **kwargs)[source]

Implements the Adagrad algorithm with ApplyAdagrad Operator.

Adagrad is an algorithm for online learning and stochastic optimization. Refer to the paper Efficient Learning using Forward-Backward Splitting. The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ h_{t+1} = h_{t} + g * g\\ w_{t+1} = w_{t} - lr*\frac{1}{\sqrt{h_{t+1}}}*g \end{array}\end{split}\]

\(h\) represents the cumulative sum of gradient squared, \(g\) represents gradients. \(lr\) represents learning_rate, \(w\) represents params.
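
For illustration, the update above can be written as the following NumPy sketch (not the ApplyAdagrad operator itself):

import numpy as np

def adagrad_step(w, grad, h, lr=0.001):
    # h is the cumulative sum of squared gradients
    h = h + grad * grad
    w = w - lr * grad / np.sqrt(h)
    return w, h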

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True, but the gradient centralization can only be applied to the parameters of the convolution layer. If the parameters of the non convolution layer are set to True, an error will be reported.

To improve parameter groups performance, the customized order of parameters can be supported.

Parameters
  • params (Union[list[Parameter], list[dict]]) –

    When the params is a list of Parameter which will be updated, the element in params must be class Parameter. When the params is a list of dict, the “params”, “lr”, “weight_decay” and “order_params” are the keys can be parsed.

    • params: Required. The value must be a list of Parameter.

    • lr: Optional. If “lr” in the keys, the value of corresponding learning rate will be used. If not, the learning_rate in the API will be used.

    • weight_decay: Optional. If “weight_decay” in the keys, the value of corresponding weight decay will be used. If not, the weight_decay in the API will be used.

    • order_params: Optional. If “order_params” in the keys, the value must be the order of parameters and the order will be followed in optimizer. There are no other keys in the dict and the parameters which in the value of ‘order_params’ must be in one of group parameters.

    • grad_centralization: Optional. The data type of “grad_centralization” is Bool. If “grad_centralization” is in the keys, the set value will be used. If not, the grad_centralization is False by default. This parameter only works on the convolution layer.

  • accum (float) – The starting value for accumulators, must be zero or positive values. Default: 0.1.

  • learning_rate (Union[float, Tensor, Iterable, LearningRateSchedule]) – A value or a graph for the learning rate. When the learning_rate is an Iterable or a Tensor in a 1D dimension, use dynamic learning rate, then the i-th step will take the i-th value as the learning rate. When the learning_rate is LearningRateSchedule, use dynamic learning rate, the i-th learning rate will be calculated during the process of training according to the formula of LearningRateSchedule. When the learning_rate is a float or a Tensor in a zero dimension, use fixed learning rate. Other cases are not supported. The float learning rate must be equal to or greater than 0. If the type of learning_rate is int, it will be converted to float. Default: 0.001.

  • update_slots (bool) – If true, update accumulation. Default: True.

  • loss_scale (float) – Value for the loss scale. It must be greater than 0.0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

  • weight_decay (Union[float, int]) – Weight decay value to multiply weight, must be zero or positive value. Default: 0.0.

Inputs:
  • grads (tuple[Tensor]) - The gradients of params in the optimizer, the shape is the same as the params in optimizer.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If accum or loss_scale is not a float.

  • TypeError – If update_slots is not a bool.

  • TypeError – If weight_decay is neither float nor int.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If accum or weight_decay is less than 0.

Supported Platforms:

Ascend CPU GPU

Examples

>>> net = Net()
>>> #1) All parameters use the same learning rate and weight decay
>>> optim = nn.Adagrad(params=net.trainable_params())
>>>
>>> #2) Use parameter groups and set different values
>>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
>>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
>>> group_params = [{'params': conv_params, 'weight_decay': 0.01, 'grad_centralization':True},
...                 {'params': no_conv_params, 'lr': 0.01},
...                 {'order_params': net.trainable_params()}]
>>> optim = nn.Adagrad(group_params, learning_rate=0.1, weight_decay=0.0)
>>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01 and grad
>>> # centralization of True.
>>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0 and grad
>>> # centralization of False.
>>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
>>>
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
tinyms.optimizers.thor(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, batch_size=32, use_nesterov=False, decay_filter=<function <lambda>>, split_indices=None, enable_clip_grad=False, frequency=100)[source]

Updates gradients by the second-order algorithm THOR.

Trace-based Hardware-driven layer-ORiented Natural Gradient Descent Computation (THOR) algorithm is proposed in:

THOR: Trace-based Hardware-driven layer-ORiented Natural Gradient Descent Computation

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ A_i = a_i{a_i}^T \\ G_i = D_{s_i}{ D_{s_i}}^T \\ m_i = \beta * m_i + ({G_i^{(k)}}+\lambda I)^{-1} g_i ({\overline A_{i-1}^{(k)}}+\lambda I)^{-1} \\ w_i = w_i - \alpha * m_i \\ \end{array}\end{split}\]

\(D_{s_i}\) represents the derivative of the loss function with respect to the output of the i-th layer, \(a_{i-1}\) represents the input of the i-th layer, which is the activation of the previous layer, \(\beta\) represents momentum, \(I\) represents the identity matrix, \(\overline A\) represents the transpose of matrix A, \(\lambda\) represents ‘damping’, \(g_i\) represents gradients of the i-th layer, \(\otimes\) represents the Kronecker product, and \(\alpha\) represents the ‘learning rate’.

Note

When separating parameter groups, the weight decay in each group will be applied on the parameters if the weight decay is positive. When not separating parameter groups, the weight_decay in the API will be applied on the parameters without ‘beta’ or ‘gamma’ in their names if weight_decay is positive.

When separating parameter groups, if you want to centralize the gradient, set grad_centralization to True, but the gradient centralization can only be applied to the parameters of the convolution layer. If the parameters of the non convolution layer are set to True, an error will be reported.

To improve parameter groups performance, the customized order of parameters can be supported.

Parameters
  • net (Cell) – The training network.

  • learning_rate (Tensor) – A value for the learning rate.

  • damping (Tensor) – A value for the damping.

  • momentum (float) – Hyper-parameter of type float, means momentum for the moving average. It must be at least 0.0.

  • weight_decay (int, float) – Weight decay (L2 penalty). It must be equal to or greater than 0.0. Default: 0.0.

  • loss_scale (float) – A value for the loss scale. It must be greater than 0.0. In general, use the default value. Default: 1.0.

  • batch_size (int) – The size of a batch. Default: 32

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

  • decay_filter (function) – A function to determine the layers to which weight decay is applied. It only works when weight_decay > 0. Default: lambda x: x.name not in []

  • split_indices (list) – Set the allreduce fusion strategy by A/G layer indices. Only works in distributed computing. Taking ResNet50 as an example, there are 54 layers of A/G respectively; when split_indices is set to [26, 53], A/G is divided into two groups for allreduce, one covering layers 0~26 and the other layers 27~53. Default: None

  • enable_clip_grad (bool) – Whether to clip the gradients. Default: False

  • frequency (int) – The update interval of A/G and \(A^{-1}/G^{-1}\). When frequency equals N (N is greater than 1), A/G and \(A^{-1}/G^{-1}\) will be updated every N steps, and other steps will use the stale A/G and \(A^{-1}/G^{-1}\) to update weights (see the sketch after this parameter list). Default: 100.
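
As a sketch of the frequency semantics only (plain Python, independent of the API; whether the count starts from step 0 is an assumption made here), the steps that refresh A/G and their inverses can be pictured as:

def is_refresh_step(step, frequency=100):
    # A/G and their inverses are recomputed on these steps;
    # all other steps reuse the stale factors.
    return step % frequency == 0

refresh_steps = [s for s in range(400) if is_refresh_step(s)]  # [0, 100, 200, 300]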

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

tuple[bool], all elements are True.

Raises
  • TypeError – If learning_rate is not Tensor.

  • TypeError – If loss_scale, momentum or frequency is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_nesterov is not a bool.

  • ValueError – If loss_scale is less than or equal to 0.

  • ValueError – If weight_decay or momentum is less than 0.

  • ValueError – If frequency is not int.

  • ValueError – If frequency is less than 2.

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore.nn import thor
>>> from mindspore import Model
>>> from mindspore import FixedLossScaleManager
>>> from mindspore.train.callback import LossMonitor
>>> from mindspore.train.train_thor import ConvertModelUtils
>>> from mindspore import nn
>>> from mindspore import Tensor
>>>
>>> net = Net()
>>> dataset = create_dataset()
>>> temp = Tensor([4e-4, 1e-4, 1e-5, 1e-5], mstype.float32)
>>> optim = thor(net, learning_rate=temp, damping=temp, momentum=0.9, loss_scale=128, frequency=4)
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> loss_scale = FixedLossScaleManager(128, drop_overflow_update=False)
>>> model = Model(net, loss_fn=loss, optimizer=optim, loss_scale_manager=loss_scale, metrics={'acc'},
...               amp_level="O2", keep_batchnorm_fp32=False)
>>> model = ConvertModelUtils.convert_to_thor_model(model=model, network=net, loss_fn=loss, optimizer=optim,
...                                                 loss_scale_manager=loss_scale, metrics={'acc'},
...                                                 amp_level="O2", keep_batchnorm_fp32=False)
>>> loss_cb = LossMonitor()
>>> model.train(1, dataset, callbacks=loss_cb, sink_size=4, dataset_sink_mode=True)
class tinyms.optimizers.AdaFactor(*args, **kwargs)[source]

Updates gradients by the Adaptive Learning Rates with Sublinear Memory Cost (Adafactor) algorithm.

The Adafactor algorithm is proposed in Adafactor: Adaptive Learning Rates with Sublinear Memory Cost.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Adafactor for weight vector are as follows,

\[\begin{split}\begin{array}{l} \\ \alpha_{t}=\max \left(\epsilon_{2}, \operatorname{RMS}\left(X_{t-1}\right)\right) \rho_{t} \\ G_{t}=\nabla f_{t}\left(X_{t-1}\right) \\ \hat{V}_{t}=\hat{\beta}_{2} \hat{V}_{t-1}+\left(1-\hat{\beta}_{2_{t}}\right)\left(G_{t}^{2}+\epsilon_{1} 1_{n}\right) \\ U_{t}=G_{t} / \sqrt{\hat{V}_{t}} \\ \hat{U}_{t}=U_{t} / \max \left(1, \operatorname{RMS}\left(U_{t}\right) / d\right) \\ X_{t}=X_{t-1}-\alpha_{t} \hat{U}_{t} \end{array}\end{split}\]

Adafactor for weight matrices are as follows,

\[\begin{split}\begin{array}{l} \\ \alpha_{t}=\max \left(\epsilon_{2}, \operatorname{RMS}\left(X_{t-1}\right)\right) \rho_{t} \\ G_{t}=\nabla f_{t}\left(X_{t-1}\right) \\ R_{t}=\hat{\beta}_{2 t} R_{t-1}+\left(1-\hat{\beta}_{2 t}\right)\left(G_{t}^{2}+\epsilon_{1} 1_{n} 1_{m}^{\top}\right) 1_{m} \\ C_{t}=\hat{\beta}_{2 t} C_{t-1}+\left(1-\hat{\beta}_{2 t}\right) 1_{n}^{\top}\left(G_{t}^{2}+\epsilon_{1} 1_{n} 1_{m}^{\top}\right) \\ \hat{V}_{t}=R_{t} C_{t} / 1_{n}^{\top} R_{t} \\ U_{t}=G_{t} / \sqrt{\hat{V}_{t}} \\ \hat{U}_{t}=U_{t} / \max \left(1, \operatorname{RMS}\left(U_{t}\right) / d\right) \\ X_{t}=X_{t-1}-\alpha_{t} U_{t} \end{array}\end{split}\]

Where RMS is:

\[\begin{split}\operatorname{RMS}\left(U_{t}\right)=\operatorname{RMS}_{x \in X}\left(u_{x t}\right)= \\ \sqrt{\operatorname{Mean}_{x \in X}\left(\frac{\left(g_{x t}\right)^{2}}{\hat{v}_{x t}}\right)}\end{split}\]

\(x\) is each individual parameter, \(t\) is assumed to be the current number of steps, \(a_{t}\) is the learning rate, \(f(X)\) is the loss function, \(\epsilon_1\) and \(\epsilon_2\) are small positive numbers to prevent errors, \(d\) is the clipping threshold, \(\beta_{2}\) is the moment decay, \(\rho\) is the relative step size, \(R\) is the running averages of the row sums of the squared gradient, and \(C\) is the running averages of the column sums of the squared gradient.
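
To make the factored second-moment bookkeeping concrete, the following NumPy sketch performs one step for a weight matrix according to the equations above (illustrative only; \(\rho_t\) and \(\hat{\beta}_{2t}\) are passed in as plain floats, and, as in the weight-vector case, the clipped update \(\hat{U}_t\) is applied in the last line):

import numpy as np

def adafactor_matrix_step(X, G, R, C, rho_t, beta2_t, eps1=1e-30, eps2=1e-3, d=1.0):
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    alpha_t = max(eps2, rms(X)) * rho_t                     # step size
    sq = G ** 2 + eps1                                      # squared gradient plus eps1
    R = beta2_t * R + (1.0 - beta2_t) * sq.sum(axis=1)      # running row sums
    C = beta2_t * C + (1.0 - beta2_t) * sq.sum(axis=0)      # running column sums
    V_hat = np.outer(R, C) / R.sum()                        # factored second moment
    U = G / np.sqrt(V_hat)
    U_hat = U / max(1.0, rms(U) / d)                        # clip by threshold d
    X = X - alpha_t * U_hat
    return X, R, C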

Note

The learning rate of this optimizer is controlled by the scale_parameter, relative_step and warmup_init options. To use a manual (external) learning rate schedule, set scale_parameter=False and relative_step=False (see the sketch after this note).

If a parameter is not used in the network, do not add it to the optimizer; otherwise the calculation result will be abnormal.

To improve parameter groups performance, the customized order of parameters is supported.
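
Following the note above, a minimal configuration sketch for using a manual (external) learning rate (assuming net is defined as in the Examples section below) might be:

>>> optim = nn.AdaFactor(net.trainable_params(), learning_rate=1e-3,
...                      scale_parameter=False, relative_step=False)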

Parameters
  • params (Union[list[Parameter], list[dict]]) – When the params is a list of Parameter which will be updated, the element in params must be class Parameter.

  • learning_rate (Union[float, Tensor]) – A value or a graph for the learning rate. When the learning_rate is a Tensor of one dimension, dynamic learning rate is used. If the type of learning_rate is int, it will be converted to float. Default: None.

  • eps (float) – The regularization constants for the square gradient and parameter scale respectively. Default: (1e-30, 1e-3)

  • clip_threshold (Union[float, Tensor]) – The threshold of the root mean square of the final gradient update. Default: 1.0

  • decay_rate (Union[float, Tensor]) – The coefficient used to compute the running averages of the square gradient. Default: 0.8

  • beta1 (float) – The coefficient for computing the running averages of the gradient. Should be in range (0.0, 1.0). Default: None.

  • weight_decay (float) – Weight decay (L2 penalty). It must be equal to or greater than 0. Default: 0.0.

  • scale_parameter (bool) – If True, the learning rate is scaled by the root mean square of the parameter. Default: True

  • relative_step (bool) – If True, a time-dependent learning rate is computed instead of the external learning rate. Default: True

  • warmup_init (bool) – The time-dependent learning rate computation depends on whether warm-up initialization is being used. Default: False

  • compression (bool) – If True, the data type of the running averages exponent will be compressed to float16. Default: False

  • loss_scale (float) – A floating point value for the loss scale. Should be greater than 0. In general, use the default value. Only when FixedLossScaleManager is used for training and the drop_overflow_update in FixedLossScaleManager is set to False, then this value needs to be the same as the loss_scale in FixedLossScaleManager. Refer to class mindspore.FixedLossScaleManager for more details. Default: 1.0.

Inputs:
  • gradients (tuple[Tensor]) - The gradients of params, the shape is the same as params.

Outputs:

Tensor[bool], the value is True.

Raises
  • TypeError – If learning_rate is not one of int, float, Tensor, Iterable, LearningRateSchedule.

  • TypeError – If element of parameters is neither Parameter nor dict.

  • TypeError – If beta1, beta2, eps or loss_scale is not a float.

  • TypeError – If weight_decay is neither float nor int.

  • TypeError – If use_locking or use_nesterov is not a bool.

  • ValueError – If loss_scale or eps is less than or equal to 0.

  • ValueError – If beta1 or beta2 is not in range (0.0, 1.0).

  • ValueError – If weight_decay is less than 0.

Supported Platforms:

Ascend

Examples

>>> net = Net()
>>> #1) Parameters use the default learning rate with None and weight decay with 0.
>>> optim = nn.AdaFactor(params=net.trainable_params())
>>>
>>> #2) Use parameter groups
>>> all_params = net.trainable_params()
>>> group_params = [{'params': [all_params[0]]}, {'params': [all_params[1]]}]
>>> optim = nn.AdaFactor(group_params, learning_rate=0.1, weight_decay=0.0, relative_step=False)
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=optim)
init_ada_factor_state(beta1)[source]

Initialize AdaFactor variables.

property target

The method is used to determine whether the parameter is updated on host or device. The input type is str and can only be ‘CPU’, ‘Ascend’ or ‘GPU’.