tinyms.initializers¶
class tinyms.initializers.Initializer(**kwargs)¶

The base class of the initializers. It handles initialization of basic tensor attributes and model weight values.

Parameters
    kwargs (dict) – Keyword arguments for Initializer.
tinyms.initializers.initializer(init, shape=None, dtype=mindspore.float32)¶

Create and initialize a tensor.

Parameters
    init (Union[Tensor, str, Initializer, numbers.Number]) – The initialization value.
        str: init should be the alias of a class inheriting from Initializer, and the corresponding class will be called. The value of init can be 'normal', 'ones', 'zeros', etc.
        Initializer: init should be a class inheriting from Initializer, used to initialize the tensor.
        numbers.Number: Constant will be called to initialize the tensor.
    shape (Union[tuple, list, int]) – An integer, a tuple of integers or a list of integers as the shape of the output. Default: None.
    dtype (mindspore.dtype) – The data type of the initialized tensor. Default: mindspore.float32.

Returns
    Tensor, the initialized tensor object.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer('ones', [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor3 = initializer(0, [1, 2, 3], mindspore.float32)
class tinyms.initializers.TruncatedNormal(sigma=0.01)¶

Initialize a truncated normal distribution, which is a bounded normal distribution within \({N}(\text{low}, \text{high})\).

Parameters
    sigma (float) – The standard deviation (sigma) of the distribution. Default: 0.01.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, TruncatedNormal
>>> tensor1 = initializer(TruncatedNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('truncatedNormal', [1, 2, 3], mindspore.float32)
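The idea behind truncated-normal initialization can be sketched in pure Python with rejection sampling. The truncation bounds of ±2·sigma used below are an assumption (a common convention in deep-learning frameworks); the exact bounds used by TruncatedNormal are an implementation detail.

```python
import random

def truncated_normal(sigma=0.01, n=1000, seed=0):
    """Draw n samples from N(0, sigma^2), re-drawing any value that
    falls outside [-2*sigma, 2*sigma] (rejection sampling).
    The 2-sigma bound is an assumed convention, not MindSpore's source."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(0.0, sigma)
        if -2.0 * sigma <= x <= 2.0 * sigma:
            out.append(x)
    return out

samples = truncated_normal(sigma=0.01, n=1000)
# Every sample is guaranteed to lie within the truncation bounds.
assert all(abs(x) <= 0.02 for x in samples)
```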
class tinyms.initializers.Normal(sigma=0.01, mean=0.0)¶

Initialize a normal array: fill the input tensor with values drawn from the normal distribution \({N}(\text{mean}, \text{sigma}^2)\), whose density is

\[f(x) = \frac{1}{\sqrt{2\pi}\,\text{sigma}} \exp\left(-\frac{(x - \text{mean})^2}{2\,\text{sigma}^2}\right)\]

Parameters
    sigma (float) – The standard deviation (sigma) of the distribution. Default: 0.01.
    mean (float) – The mean of the distribution. Default: 0.0.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, Normal
>>> tensor1 = initializer(Normal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('normal', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Uniform(scale=0.07)¶

Initialize a uniform array: fill the input tensor with values drawn from the uniform distribution \({U}(-\text{scale}, \text{scale})\).

Parameters
    scale (float) – The scale of the distribution. Default: 0.07.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, Uniform
>>> tensor1 = initializer(Uniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('uniform', [1, 2, 3], mindspore.float32)
class tinyms.initializers.HeUniform(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')¶

Initialize the array with the He Kaiming uniform algorithm: collect samples from a uniform distribution \({U}(-\text{boundary}, \text{boundary})\), where

\[\text{boundary} = \sqrt{\frac{6}{(1 + a^2) \times \text{fan\_in}}}\]

where \(-\text{boundary}\) and \(\text{boundary}\) are the lower and upper bounds of the HeUniform distribution, and \(a\) is negative_slope.

For details of the HeUniform algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters
    negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is 'leaky_relu'). Default: 0.
    mode (str) – Either 'fan_in' or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass; choosing 'fan_out' preserves the magnitudes in the backward pass. Default: 'fan_in'.
    nonlinearity (str) – The non-linear function, recommended for use only with 'relu' or 'leaky_relu'. Default: 'leaky_relu'.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeUniform
>>> tensor1 = initializer(HeUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_uniform', [1, 2, 3], mindspore.float32)
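The boundary formula can be checked with a short pure-Python sketch. This is a hand-rolled illustration under the formula above, not MindSpore's implementation; fan_in is assumed to be the number of input units of the weight tensor.

```python
import math
import random

def he_uniform_bound(fan_in, negative_slope=0.0):
    """boundary = sqrt(6 / ((1 + a^2) * fan_in)), per the HeUniform formula."""
    return math.sqrt(6.0 / ((1.0 + negative_slope ** 2) * fan_in))

def he_uniform_sample(fan_in, n, negative_slope=0.0, seed=0):
    """Draw n weights from U(-boundary, boundary)."""
    rng = random.Random(seed)
    b = he_uniform_bound(fan_in, negative_slope)
    return [rng.uniform(-b, b) for _ in range(n)]

bound = he_uniform_bound(fan_in=256)            # sqrt(6/256) ≈ 0.1531
weights = he_uniform_sample(fan_in=256, n=100)
assert all(-bound <= w <= bound for w in weights)
```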
class tinyms.initializers.HeNormal(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')¶

Initialize the array with the He Kaiming normal algorithm: collect samples from the normal distribution \({N}(0, \text{sigma}^2)\), where

\[\text{sigma} = \frac{\text{gain}}{\sqrt{\text{mode}}}\]

where \(\text{gain}\) is an optional scaling factor and \(\text{mode}\) is the number of input units or output units in the weight tensor.

For details of the HeNormal algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters
    negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is 'leaky_relu'). Default: 0.
    mode (str) – Either 'fan_in' or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass; choosing 'fan_out' preserves the magnitudes in the backward pass. Default: 'fan_in'.
    nonlinearity (str) – The non-linear function, recommended for use only with 'relu' or 'leaky_relu'. Default: 'leaky_relu'.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeNormal
>>> tensor1 = initializer(HeNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_normal', [1, 2, 3], mindspore.float32)
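The standard deviation used by He normal initialization can likewise be sketched in pure Python. The leaky_relu gain formula gain = sqrt(2 / (1 + a^2)) used below is the customary Kaiming choice and is an assumption here, not quoted from the MindSpore source.

```python
import math

def he_normal_sigma(fan, negative_slope=0.0):
    """sigma = gain / sqrt(fan), with the customary leaky_relu gain
    gain = sqrt(2 / (1 + negative_slope^2)) (an assumed convention)."""
    gain = math.sqrt(2.0 / (1.0 + negative_slope ** 2))
    return gain / math.sqrt(fan)

# With fan = 256 and negative_slope = 0 (plain relu gain of sqrt(2)):
sigma = he_normal_sigma(fan=256)  # sqrt(2)/16 ≈ 0.0884
```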
class tinyms.initializers.XavierUniform(gain=1)¶

Initialize the array with the Xavier uniform algorithm: collect samples from a uniform distribution \({U}(-\text{boundary}, \text{boundary})\), where

\[\text{boundary} = \text{gain} \times \sqrt{\frac{6}{n_{in} + n_{out}}}\]

where \(\text{gain}\) is an optional scaling factor, \(n_{in}\) is the number of input units in the weight tensor, and \(n_{out}\) is the number of output units in the weight tensor.

For details of the XavierUniform algorithm, please check http://proceedings.mlr.press/v9/glorot10a.html.

Parameters
    gain (float) – An optional scaling factor. Default: 1.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, XavierUniform
>>> tensor1 = initializer(XavierUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('xavier_uniform', [1, 2, 3], mindspore.float32)
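The Xavier boundary is easy to verify by hand. A minimal sketch under the stated formula, where n_in and n_out are assumed to be the fan-in and fan-out of the weight tensor:

```python
import math

def xavier_uniform_bound(n_in, n_out, gain=1.0):
    """boundary = gain * sqrt(6 / (n_in + n_out)), per the Xavier formula."""
    return gain * math.sqrt(6.0 / (n_in + n_out))

# A 128-in, 64-out dense layer:
bound = xavier_uniform_bound(n_in=128, n_out=64)  # sqrt(6/192) ≈ 0.1768
```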
class tinyms.initializers.One(**kwargs)¶

Fill the input array with ones.

Parameters
    arr (Array) – The array to be assigned.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Zero(**kwargs)¶

Fill the input array with zeros.

Parameters
    arr (Array) – The array to be assigned.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer, Zero
>>> tensor1 = initializer(Zero(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('zeros', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Constant(value)¶

Initialize a constant array.

Parameters
    value (Union[int, numpy.ndarray]) – The value used for initialization.

Examples
>>> import mindspore
>>> from mindspore.common.initializer import initializer
>>> tensor1 = initializer(0, [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer(5, [1, 2, 3], mindspore.float32)