tinyms.initializers

class tinyms.initializers.Initializer(**kwargs)[source]

The abstract base class of initializers.

Parameters:

kwargs (dict) – Keyword arguments for Initializer.
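
A custom initializer can be written by subclassing Initializer. A minimal sketch, assuming MindSpore’s convention that subclasses implement an _initialize(self, arr) hook that fills a backing numpy array in place (the hook name is taken from MindSpore’s implementation, not from this page):

>>> import mindspore
>>> from mindspore.common.initializer import Initializer, initializer
>>> class NegativeOne(Initializer):
...     # Assumed hook: _initialize receives the backing numpy array.
...     def _initialize(self, arr):
...         # Fill the array in place with -1.
...         arr.fill(-1.0)
>>> tensor1 = initializer(NegativeOne(), [1, 2, 3], mindspore.float32)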

tinyms.initializers.initializer(init, shape=None, dtype=mindspore.float32)[source]

Create and initialize a tensor.

Parameters:
  • init (Union[Tensor, str, Initializer, numbers.Number]) –

    The value used to initialize the tensor.

    • str: init should be the alias of a class inheriting from Initializer; the corresponding class is instantiated in practice. The value of ‘init’ can be “normal”, “ones” or “zeros”, etc.

    • Initializer: init should be an instance of a class inheriting from Initializer, which is used to initialize the tensor.

    • numbers.Number: The Constant initializer is called to initialize the tensor.

    • Tensor: The tensor itself is used to initialize the tensor.

  • shape (Union[tuple, list, int]) – The shape of the initialized tensor. Default: None.

  • dtype (mindspore.dtype) – The data type of the initialized tensor. Default: mindspore.float32.

Returns:

Tensor, the initialized tensor.

Raises:
  • TypeError – If the type of the argument ‘init’ is invalid.

  • ValueError – If the shape of the tensor passed through ‘init’ differs from the shape passed by ‘shape’.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.common.initializer import initializer, One
>>> data = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> tensor1 = initializer(data, [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)
>>> tensor3 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor4 = initializer(0, [1, 2, 3], mindspore.float32)
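
As noted under Raises, passing an init tensor whose shape differs from the shape argument is an error. A minimal sketch, reusing data from the example above (the printed message is illustrative, not the library’s exact error text):

>>> try:
...     _ = initializer(data, [4, 5, 6], mindspore.float32)
... except ValueError:
...     print('init shape and shape argument do not match')
init shape and shape argument do not match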
class tinyms.initializers.TruncatedNormal(sigma=0.01)[source]

Generates an array with values sampled from a truncated normal distribution in order to initialize a tensor.

Parameters:

sigma (float) – The standard deviation of the truncated normal distribution. Default: 0.01.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, TruncatedNormal
>>> tensor1 = initializer(TruncatedNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('truncatedNormal', [1, 2, 3], mindspore.float32)
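
Conceptually, truncated normal sampling redraws any value that falls too far from the mean. A minimal NumPy sketch, assuming truncation at two standard deviations (a common convention; the exact bound used internally is an implementation detail):

>>> import numpy as np
>>> def truncated_normal(shape, sigma=0.01, seed=0):
...     rng = np.random.default_rng(seed)
...     samples = rng.normal(0.0, sigma, size=shape)
...     # Redraw any sample outside (-2 * sigma, 2 * sigma).
...     out = np.abs(samples) > 2 * sigma
...     while out.any():
...         samples[out] = rng.normal(0.0, sigma, size=int(out.sum()))
...         out = np.abs(samples) > 2 * sigma
...     return samples
>>> truncated_normal((1, 2, 3)).shape
(1, 2, 3)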
class tinyms.initializers.Normal(sigma=0.01, mean=0.0)[source]

Generates an array with values sampled from the normal distribution \({N}(\text{mean}, \text{sigma}^2)\) in order to initialize a tensor.

\[f(x) = \frac{1}{\sqrt{2\pi}\,\text{sigma}} \exp\left(-\frac{(x - \text{mean})^2}{2\,\text{sigma}^2}\right)\]
Parameters:
  • sigma (float) – The standard deviation of the normal distribution. Default: 0.01.

  • mean (float) – The mean of the normal distribution. Default: 0.0.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Normal
>>> tensor1 = initializer(Normal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('normal', [1, 2, 3], mindspore.float32)
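
A quick NumPy sanity check of the parameters: samples drawn with the defaults sigma=0.01 and mean=0.0 should have an empirical mean near 0 and standard deviation near sigma:

>>> import numpy as np
>>> samples = np.random.default_rng(0).normal(0.0, 0.01, size=100000)
>>> bool(abs(samples.mean()) < 0.001 and abs(samples.std() - 0.01) < 0.001)
True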
class tinyms.initializers.Uniform(scale=0.07)[source]

Generates an array with values sampled from the uniform distribution \({U}(-\text{scale}, \text{scale})\) in order to initialize a tensor.

Parameters:

scale (float) – The bound of the uniform distribution. Default: 0.07.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Uniform
>>> tensor1 = initializer(Uniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('uniform', [1, 2, 3], mindspore.float32)
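
The sampled values are bounded by scale on both sides; a NumPy sketch with the default scale=0.07:

>>> import numpy as np
>>> u = np.random.default_rng(0).uniform(-0.07, 0.07, size=10000)
>>> bool((u >= -0.07).all() and (u < 0.07).all())
True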
class tinyms.initializers.HeUniform(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Generates an array with values sampled from the HeKaiming uniform distribution \({U}(-\text{boundary}, \text{boundary})\) in order to initialize a tensor, where

\[boundary = \text{gain} \times \sqrt{\frac{3}{fan\_mode}}\]

where \(gain\) is an optional scaling factor. If \(fan\_mode\) is ‘fan_in’, it is the number of input units of the weight tensor. If \(fan\_mode\) is ‘fan_out’, it is the number of output units of the weight tensor.

For details of HeUniform algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters:
  • negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

  • mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass. Default: ‘fan_in’.

  • nonlinearity (str) – The non-linear function, recommended to use only with ‘relu’ or ‘leaky_relu’. Default: ‘leaky_relu’.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeUniform
>>> tensor1 = initializer(HeUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_uniform', [1, 2, 3], mindspore.float32)
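
The boundary can be computed directly from the formula above. A minimal sketch, assuming the common gain definition for ‘leaky_relu’, gain = sqrt(2 / (1 + negative_slope^2)) (an assumption, not stated on this page), and fan_mode = 6, i.e. the fan_in of a [1, 2, 3] weight under the usual in_channels * kernel_size convention:

>>> import math
>>> def he_uniform_boundary(fan_mode, negative_slope=0.0):
...     # Assumed 'leaky_relu' gain: sqrt(2 / (1 + negative_slope ** 2)).
...     gain = math.sqrt(2.0 / (1.0 + negative_slope ** 2))
...     return gain * math.sqrt(3.0 / fan_mode)
>>> round(he_uniform_boundary(fan_mode=6), 4)
1.0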
class tinyms.initializers.HeNormal(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Generates an array with values sampled from the HeKaiming normal distribution \({N}(0, \text{sigma}^2)\) in order to initialize a tensor, where

\[sigma = \frac{gain}{\sqrt{fan\_mode}}\]

where \(gain\) is an optional scaling factor, and \(fan\_mode\) is the number of input or output units of the weight tensor, depending on whether mode is ‘fan_in’ or ‘fan_out’.

For details of HeNormal algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters:
  • negative_slope (int, float) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

  • mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass. Default: ‘fan_in’.

  • nonlinearity (str) – The non-linear function, recommended to use only with ‘relu’ or ‘leaky_relu’. Default: ‘leaky_relu’.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeNormal
>>> tensor1 = initializer(HeNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_normal', [1, 2, 3], mindspore.float32)
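
Under the same gain assumption as in the HeUniform sketch above, sigma follows directly from the formula:

>>> import math
>>> def he_normal_sigma(fan_mode, negative_slope=0.0):
...     # Assumed 'leaky_relu' gain: sqrt(2 / (1 + negative_slope ** 2)).
...     gain = math.sqrt(2.0 / (1.0 + negative_slope ** 2))
...     return gain / math.sqrt(fan_mode)
>>> round(he_normal_sigma(fan_mode=6), 4)
0.5774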
class tinyms.initializers.XavierUniform(gain=1)[source]

Generates an array with values sampled from Xavier uniform distribution \({U}(-\text{boundary}, \text{boundary})\) in order to initialize a tensor, where

\[boundary = gain * \sqrt{\frac{6}{n_{in} + n_{out}}}\]

where \(gain\) is an optional scaling factor. \(n_{in}\) is the number of input units in the weight tensor, \(n_{out}\) is the number of output units in the weight tensor.

For details of XavierUniform algorithm, please check http://proceedings.mlr.press/v9/glorot10a.html.

Parameters:

gain (float) – An optional scaling factor. Default: 1.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, XavierUniform
>>> tensor1 = initializer(XavierUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('xavier_uniform', [1, 2, 3], mindspore.float32)
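
The boundary formula is easy to evaluate by hand. A minimal sketch with illustrative fan counts n_in=4 and n_out=2, chosen so the boundary equals gain exactly:

>>> import math
>>> def xavier_uniform_boundary(n_in, n_out, gain=1.0):
...     # boundary = gain * sqrt(6 / (n_in + n_out))
...     return gain * math.sqrt(6.0 / (n_in + n_out))
>>> xavier_uniform_boundary(n_in=4, n_out=2)
1.0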
class tinyms.initializers.One(**kwargs)[source]

Generates an array with a constant value of one in order to initialize a tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Zero(**kwargs)[source]

Generates an array with a constant value of zero in order to initialize a tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Zero
>>> tensor1 = initializer(Zero(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('zeros', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Constant(value)[source]

Generates an array with a constant value in order to initialize a tensor.

Parameters:

value (Union[int, numpy.ndarray]) – The value to initialize.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Constant
>>> tensor1 = initializer(Constant(3), [1, 2, 3], mindspore.float32)
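
Per the parameter description, value may also be a numpy.ndarray. A minimal sketch, assuming broadcast-compatible array values are accepted:

>>> import numpy as np
>>> tensor2 = initializer(Constant(np.array([3], dtype=np.float32)), [1, 2, 3], mindspore.float32)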