tinyms.primitives

Primitives module. Operators can be used in the construct function of Layer.

Examples

>>> import tinyms as ts
>>> from tinyms.primitives import tensor_add
>>>
>>> x = ts.ones([2, 3])
>>> y = ts.ones([2, 3])
>>> print(tensor_add(x, y))
[[2. 2. 2.]
 [2. 2. 2.]]
tinyms.primitives.core(fn=None, **flags)[source]

A decorator that adds a flag to the function.

By default, the flag is set to True, so this decorator can be used to set a flag on a graph.

Parameters
  • fn (Function) – Function to add flag. Default: None.

  • flags (dict) – Flags to set, passed as keyword arguments. For example, core indicates that this is a core function; other flags can also be set. Default: None.

Supported Platforms:

Ascend GPU

Examples

>>> net = Net()
>>> net = core(net, predit=True)
>>> print(hasattr(net, '_mindspore_flags'))
True
tinyms.primitives.add_flags(fn=None, **flags)[source]

A decorator that adds a flag to the function.

Note

Only supports bool value.

Parameters
  • fn (Function) – Function or cell to add flag. Default: None.

  • flags (dict) – Flags to add, passed as keyword arguments. Default: None.

Returns

Function, the function with added flags.

Examples

>>> net = Net()
>>> net = add_flags(net, predit=True)
>>> print(hasattr(net, '_mindspore_flags'))
True
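
Because fn defaults to None, add_flags can also be used in decorator form with only the flags supplied. A minimal sketch, assuming that form behaves as described above:

>>> @add_flags(predit=True)
... def func(x):
...     return x
>>> print(hasattr(func, '_mindspore_flags'))
True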
class tinyms.primitives.MultitypeFuncGraph(name, read_value=False)[source]

Generates overloaded functions.

MultitypeFuncGraph is a class used to generate overloaded functions that consider different types as inputs. Initialize a MultitypeFuncGraph object with a name, and use register with the input types as the decorator for the function to be registered. The object can then be called with different types of inputs, and it works with HyperMap and Map.

Parameters
  • name (str) – Operator name.

  • read_value (bool) – If the registered function does not need to set values on Parameters and all inputs are passed by value, set read_value to True. Default: False.

Raises

ValueError – If it fails to find a matching function for the given arguments.

Examples

>>> # `add` is a metagraph object which will add two objects according to
>>> # input type using ".register" decorator.
>>> from mindspore import Tensor
>>> from mindspore.ops import Primitive, operations as P
>>> from mindspore import dtype as mstype
>>>
>>> tensor_add = P.Add()
>>> add = MultitypeFuncGraph('add')
>>> @add.register("Number", "Number")
... def add_scala(x, y):
...     return x + y
>>> @add.register("Tensor", "Tensor")
... def add_tensor(x, y):
...     return tensor_add(x, y)
>>> output = add(1, 2)
>>> print(output)
3
>>> output = add(Tensor([0.1, 0.6, 1.2], dtype=mstype.float32), Tensor([0.1, 0.6, 1.2], dtype=mstype.float32))
>>> print(output)
[0.2 1.2 2.4]
register(*type_names)[source]

Register a function for the given type string.

Parameters

type_names (Union[str, mindspore.dtype]) – Input type names or a list of types.

Returns

decorator, a decorator to register the function to run when called with the types described in type_names.

class tinyms.primitives.GradOperation(get_all=False, get_by_list=False, sens_param=False)[source]

A higher-order function which is used to generate the gradient function for the input function.

The gradient function generated by GradOperation higher-order function can be customized by construction arguments.

Given an input function net = Net() that takes x and y as inputs, and has a parameter z, see Net in Examples.

To generate a gradient function that returns gradients with respect to the first input (see GradNetWrtX in Examples).

  1. Construct a GradOperation higher-order function with default arguments: grad_op = GradOperation().

  2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  3. Call the gradient function with input function’s inputs to get the gradients with respect to the first input: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to all inputs (see GradNetWrtXY in Examples).

  1. Construct a GradOperation higher-order function with get_all=True which indicates getting gradients with respect to all inputs, they are x and y in example function Net(): grad_op = GradOperation(get_all=True).

  2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  3. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to given parameters (see GradNetWithWrtParams in Examples).

  1. Construct a GradOperation higher-order function with get_by_list=True: grad_op = GradOperation(get_by_list=True).

  2. Construct a ParameterTuple that will be passed along with the input function when calling the GradOperation higher-order function; it is used as a parameter filter that determines which gradients to return: params = ParameterTuple(net.trainable_params()).

  3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

  4. Call the gradient function with input function’s inputs to get the gradients with respect to given parameters: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to all inputs and given parameters in the format of ((dx, dy), (dz)) (see GradNetWrtInputsAndParams in Examples).

  1. Construct a GradOperation higher-order function with get_all=True and get_by_list=True: grad_op = GradOperation(get_all=True, get_by_list=True).

  2. Construct a ParameterTuple that will be passed along with the input function when constructing the GradOperation higher-order function: params = ParameterTuple(net.trainable_params()).

  3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

  4. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs and given parameters: gradient_function(x, y).

We can configure the sensitivity (gradient with respect to output) by setting sens_param to True and passing an extra sensitivity input to the gradient function. The sensitivity input should have the same shape and type as the input function’s output (see GradNetWrtXYWithSensParam in Examples).

  1. Construct a GradOperation higher-order function with get_all=True and sens_param=True: grad_op = GradOperation(get_all=True, sens_param=True).

  2. Define grad_wrt_output as sens_param which works as the gradient with respect to output: grad_wrt_output = Tensor(np.ones([2, 2]).astype(np.float32)).

  3. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  4. Call the gradient function with input function’s inputs and sens_param to get the gradients with respect to all inputs: gradient_function(x, y, grad_wrt_output).

Parameters
  • get_all (bool) – If True, get all the gradients with respect to inputs. Default: False.

  • get_by_list (bool) – If True, get all the gradients with respect to Parameter variables. If get_all and get_by_list are both False, get the gradient with respect to first input. If get_all and get_by_list are both True, get the gradients with respect to inputs and Parameter variables at the same time in the form of ((gradients with respect to inputs), (gradients with respect to parameters)). Default: False.

  • sens_param (bool) – Whether to append sensitivity (gradient with respect to output) as an input. If sens_param is False, a ‘ones_like(outputs)’ sensitivity will be attached automatically. If sens_param is True, the sensitivity (gradient with respect to output) needs to be passed as a positional argument or as a keyword argument; when passed by keyword, the key must be sens (a keyword-form sketch follows the examples below). Default: False.

Returns

The higher-order function which takes a function as argument and returns gradient function for it.

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> from mindspore.common import ParameterTuple
>>> from mindspore.ops import operations as P
>>> from mindspore.ops import GradOperation
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = P.MatMul()
...         self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
...     def construct(self, x, y):
...         x = x * self.z
...         out = self.matmul(x, y)
...         return out
...
>>> class GradNetWrtX(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtX, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation()
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> output = GradNetWrtX(Net())(x, y)
>>> print(output)
[[1.4100001 1.5999999 6.6      ]
 [1.4100001 1.5999999 6.6      ]]
>>>
>>> class GradNetWrtXY(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXY, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXY(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 4.50999975e+00,  2.70000005e+00,  3.60000014e+00],
 [ 4.50999975e+00,  2.70000005e+00,  3.60000014e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 2.59999990e+00,  2.59999990e+00,  2.59999990e+00],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]]))
>>>
>>> class GradNetWrtXYWithSensParam(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXYWithSensParam, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True, sens_param=True)
...         self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y, self.grad_wrt_output)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXYWithSensParam(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 2.21099997e+00,  5.09999990e-01,  1.49000001e+00],
 [ 5.58799982e+00,  2.68000007e+00,  4.07000017e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 1.51999998e+00,  2.81999993e+00,  2.14000010e+00],
 [ 1.09999990e+00,  2.04999971e+00,  1.54999995e+00],
 [ 9.00000036e-01,  1.54999995e+00,  1.25000000e+00]]))
>>>
>>> class GradNetWithWrtParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWithWrtParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = GradOperation(get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWithWrtParams(Net())(x, y)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
>>>
>>> class GradNetWrtInputsAndParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtInputsAndParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = GradOperation(get_all=True, get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.1, 0.6, 1.2], [0.5, 1.3, 0.1]], dtype=mstype.float32)
>>> y = Tensor([[0.12, 2.3, 1.1], [1.3, 0.2, 2.4], [0.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtInputsAndParams(Net())(x, y)
>>> print(output)
((Tensor(shape=[2, 3], dtype=Float32, value=
[[ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00],
 [ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 6.00000024e-01,  6.00000024e-01,  6.00000024e-01],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]])), (Tensor(shape=[1], dtype=Float32, value=
 [ 1.29020004e+01]),))
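
As noted in the sens_param description, the sensitivity can also be passed by keyword, and the key must be sens. A minimal sketch (assuming the same Net, imports, and inputs as the examples above), changing only how the sensitivity is passed:

>>> class GradNetWrtXYWithSensParamByKeyword(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXYWithSensParamByKeyword, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True, sens_param=True)
...         self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         # Pass the sensitivity by keyword; the key must be `sens`.
...         return gradient_function(x, y, sens=self.grad_wrt_output)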
class tinyms.primitives.HyperMap(ops=None)[source]

HyperMap applies the set operation to the input sequences.

Applies the operation to every element of the sequence or nested sequence. Unlike Map, HyperMap supports applying the operation to nested structures.

Parameters

ops (Union[MultitypeFuncGraph, None]) – ops is the operation to apply. If ops is None, the operation should be passed as the first input when the instance is called.

Inputs:
  • args (Tuple[sequence]) - If ops is None, all the inputs should be sequences with the same length, and each row of the sequences will be the inputs of the operation.

    If ops is not None, the first input is the operation, and the others are inputs.

Outputs:

Sequence or nested sequence, the sequence of output after applying the function. e.g. operation(args[0][i], args[1][i]).

Examples

>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> from mindspore.ops import MultitypeFuncGraph, HyperMap
>>> from mindspore.ops import functional as F
>>> nest_tensor_list = ((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),
... (Tensor(3, mstype.float32), Tensor(4, mstype.float32)))
>>> # square all the tensor in the nested list
>>>
>>> square = MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
...     return F.square(x)
>>>
>>> common_map = HyperMap()
>>> output = common_map(square, nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
>>> square_map = HyperMap(square)
>>> output = square_map(nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
tinyms.primitives.normal(shape, mean, stddev, seed=None)[source]

Generates random numbers according to the Normal (or Gaussian) random number distribution.

Parameters
  • shape (tuple) – The shape of random tensor to be generated.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak, with data type in [int8, int16, int32, int64, float16, float32].

  • stddev (Tensor) – The deviation σ distribution parameter. It should be greater than 0, with data type in [int8, int16, int32, int64, float16, float32].

  • seed (int) – Seed is used as an entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of mean and stddev. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> shape = (3, 1, 2)
>>> mean = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> stddev = Tensor(1.0, mstype.float32)
>>> output = C.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.laplace(shape, mean, lambda_param, seed=None)[source]

Generates random numbers according to the Laplace random number distribution. It is defined as:

\[\text{f}(x;μ,λ) = \frac{1}{2λ}\exp(-\frac{|x-μ|}{λ}),\]
Parameters
  • shape (tuple) – The shape of random tensor to be generated.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak. With float32 data type.

  • lambda_param (Tensor) – The parameter used for controlling the variance of this random distribution. The variance of Laplace distribution is equal to twice the square of lambda_param. With float32 data type.

  • seed (int) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be the broadcasted shape of the input shape and the shapes of mean and lambda_param. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> shape = (2, 3)
>>> mean = Tensor(1.0, mstype.float32)
>>> lambda_param = Tensor(1.0, mstype.float32)
>>> output = C.laplace(shape, mean, lambda_param, seed=5)
>>> print(output.shape)
(2, 3)
tinyms.primitives.uniform(shape, minval, maxval, seed=None, dtype=mindspore.float32)[source]

Generates random numbers according to the Uniform random number distribution.

Note

The number in tensor minval should be strictly less than maxval at any position after broadcasting.

Parameters
  • shape (tuple) – The shape of random tensor to be generated.

  • minval (Tensor) – The distribution parameter a. It defines the minimum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • maxval (Tensor) – The distribution parameter b. It defines the maximum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – type of the Uniform distribution. If it is int32, it generates numbers from discrete uniform distribution; if it is float32, it generates numbers from continuous uniform distribution. It only supports these two data types. Default: mstype.float32.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of minval and maxval. The dtype is designated as the input dtype.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> # For discrete uniform distribution, only one number is allowed for both minval and maxval:
>>> shape = (4, 2)
>>> minval = Tensor(1, mstype.int32)
>>> maxval = Tensor(2, mstype.int32)
>>> output = C.uniform(shape, minval, maxval, seed=5, dtype=mstype.int32)
>>>
>>> # For continuous uniform distribution, minval and maxval can be multi-dimensional:
>>> shape = (3, 1, 2)
>>> minval = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> maxval = Tensor([8.0, 10.0], mstype.float32)
>>> output = C.uniform(shape, minval, maxval, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.gamma(shape, alpha, beta, seed=None)[source]

Generates random numbers according to the Gamma random number distribution.

Parameters
  • shape (tuple) – The shape of random tensor to be generated.

  • alpha (Tensor) – The alpha α distribution parameter. It should be greater than 0 with float32 data type.

  • beta (Tensor) – The beta β distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input “shape” and shapes of alpha and beta. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> beta = Tensor(np.array([1.0]), mstype.float32)
>>> output = C.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.poisson(shape, mean, seed=None)[source]

Generates random numbers according to the Poisson random number distribution.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters
  • shape (tuple) – The shape of random tensor to be generated.

  • mean (Tensor) – The mean μ distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input “shape” and shapes of mean. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mstype.float32)
>>> output = C.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(4, 2)
tinyms.primitives.multinomial(inputs, num_sample, replacement=True, seed=None)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters
  • inputs (Tensor) – The input tensor containing probabilities, must be 1 or 2 dimensions, with float32 data type.

  • num_sample (int) – Number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not, default True.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: 0.

Outputs:

Tensor, has the same number of rows as the input. The number of sampled indices in each row is num_sample. The dtype is float32.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = C.multinomial(input, 2, True)
>>> print(output)
[1 1]
tinyms.primitives.clip_by_value(x, clip_value_min, clip_value_max)[source]

Clips tensor values to a specified min and max.

Limits the value of \(x\) to a range, whose lower limit is ‘clip_value_min’ and upper limit is ‘clip_value_max’.

\[\begin{split}out_i= \left\{ \begin{array}{align} clip\_value_{max} & \text{ if } x_i\ge clip\_value_{max} \\ x_i & \text{ if } clip\_value_{min} \lt x_i \lt clip\_value_{max} \\ clip\_value_{min} & \text{ if } x_i \le clip\_value_{min} \\ \end{array}\right.\end{split}\]

Note

‘clip_value_min’ needs to be less than or equal to ‘clip_value_max’.

Parameters
  • x (Tensor) – Input data.

  • clip_value_min (Tensor) – The minimum value.

  • clip_value_max (Tensor) – The maximum value.

Returns

Tensor, a clipped Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> import mindspore.common.dtype as mstype
>>> min_value = Tensor(5, mstype.float32)
>>> max_value = Tensor(20, mstype.float32)
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mstype.float32)
>>> output = C.clip_by_value(x, min_value, max_value)
>>> print(output)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
tinyms.primitives.clip_by_global_norm(x, clip_norm=1.0, use_norm=None)[source]

Clips tensor values by the ratio of the sum of their norms.
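
Concretely, this follows the standard global-norm clipping rule (stated here as a sketch; it is consistent with the example output below): each tensor \(t_i\) in x is rescaled as

\[t_i \leftarrow t_i * \frac{clip\_norm}{\max(global\_norm,\ clip\_norm)}, \qquad global\_norm = \sqrt{\sum_i \|t_i\|_2^2}\]

In the example below, \(global\_norm = \sqrt{45} \approx 6.708\), so with clip_norm=1.0 every element is scaled by roughly \(0.1491\).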

Note

Input ‘x’ should be a tuple or list of tensors. Otherwise, it will raise an error.

Parameters
  • x (Union(tuple[Tensor], list[Tensor])) – Input data to clip.

  • clip_norm (Union(float, int)) – The clipping ratio, it should be greater than 0. Default: 1.0

  • use_norm (None) – The global norm. Default: None. Currently only None is supported.

Returns

tuple[Tensor], the clipped tensors.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms.primitives import clip_by_global_norm
>>> x1 = np.array([[2., 3.], [1., 2.]]).astype(np.float32)
>>> x2 = np.array([[1., 4.], [3., 1.]]).astype(np.float32)
>>> input_x = (Tensor(x1), Tensor(x2))
>>> out = clip_by_global_norm(input_x, 1.0)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.98142403e-01,  4.47213590e-01],
 [ 1.49071202e-01,  2.98142403e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.49071202e-01,  5.96284807e-01],
 [ 4.47213590e-01,  1.49071202e-01]]))
tinyms.primitives.count_nonzero(x, axis=(), keep_dims=False, dtype=mindspore.int32)[source]

Counts the number of nonzero elements across the given axis of the input tensor.

Parameters
  • x (Tensor) – Input data is used to count non-zero numbers.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Only constant value is allowed. Default: (), reduce all dimensions.

  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

  • dtype (Union[Number, mstype.bool_]) – The data type of the output tensor. Only constant value is allowed. Default: mstype.int32

Returns

Tensor, number of nonzero element. The data type is dtype.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> from tinyms.primitives import count_nonzero
>>> input_x = Tensor(np.array([[0, 1, 0], [1, 1, 0]]).astype(np.float32))
>>> nonzero_num = count_nonzero(x=input_x, axis=[0, 1], keep_dims=True, dtype=mstype.int32)
>>> print(nonzero_num)
[[3]]
tinyms.primitives.tensor_dot(x1, x2, axes)[source]

Computation of Tensor contraction on arbitrary axes between tensors a and b.

Contraction allows for the summation of products of elements of a and b on specified axes. The same number of axes must be specified for both x1 and x2, and values must be within range of number of dims of both a and b.

Selected dims in both inputs must also match.

axes = 0 leads to an outer product. axes = 1 leads to normal matrix multiplication when both inputs are 2D. axes = 1 is the same as axes = ((1,), (0,)) where both a and b are 2D. axes = 2 is the same as axes = ((1, 2), (0, 1)) where both a and b are 3D (a short sketch of the single-integer form follows the example below).

Inputs:
  • x1 (Tensor) - First tensor in tensor_dot with datatype float16 or float32

  • x2 (Tensor) - Second tensor in tensor_dot with datatype float16 or float32

  • axes (Union[int, tuple(int), tuple(tuple(int)), list(list(int))]) - Single value or tuple/list of length 2 with the dimensions specified for a and b each. If a single value N is passed, it automatically picks the last N dims from the a input shape and the first N dims from the b input shape, in order, as the axes for each respectively.

Outputs:

Tensor, the shape of the output tensor is \((N + M)\), where \(N\) and \(M\) are the free axes not contracted in either input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> input_x1 = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 1, 2]), mindspore.float32)
>>> output = C.tensor_dot(input_x1, input_x2, ((0,1),(1,2)))
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]]
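
As described above, passing a single integer for axes contracts the last N dims of x1 with the first N dims of x2. A minimal sketch (assuming the same imports as the example above), where axes = 1 reduces to ordinary matrix multiplication for 2-D inputs:

>>> a = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> b = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> output = C.tensor_dot(a, b, 1)
>>> print(output.shape)
(2, 4)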
tinyms.primitives.repeat_elements(x, rep, axis=0)[source]

Repeat elements of a tensor along an axis, like np.repeat.

Parameters
  • x (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • rep (int) – The number of times to repeat, must be positive, required.

  • axis (int) – The axis along which to repeat, default 0.

Outputs:

One tensor with values repeated along the specified axis. If x has shape (s1, s2, …, sn) and axis is i, the output will have shape (s1, s2, …, si * rep, …, sn). The output type will be the same as the type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = C.repeat_elements(x, rep = 2, axis = 0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
tinyms.primitives.sequence_mask(lengths, maxlen)[source]

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape [d_1, d_2, …, d_n], then the resulting mask tensor has shape [d_1, d_2, …, d_n, maxlen], with mask[i_1, i_2, …, i_n, j] = (j < lengths[i_1, i_2, …, i_n])

Inputs:
  • lengths (Tensor) - Tensor to calculate the mask for. All values in this tensor should be less than or equal to maxlen. Values greater than maxlen will be treated as maxlen. Must be type int32 or int64.

  • maxlen (int) - size of the last dimension of returned tensor. Must be positive and same type as elements in lengths.

Outputs:

One mask tensor of shape lengths.shape + (maxlen,).

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import composite as C
>>> x = Tensor(np.array([[1, 3], [2, 0]]))
>>> output = C.sequence_mask(x, 3)
>>> print(output)
[[[True, False, False],
  [True, True, True]],
 [[True, True, False],
  [False, False, False]]]
class tinyms.primitives.ACos(*args, **kwargs)[source]

Computes arccosine of input tensors element-wise.

\[out_i = cos^{-1}(x_i)\]
Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> acos = ops.ACos()
>>> input_x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = acos(input_x)
>>> print(output)
[0.7377037 1.5307858 1.2661037 0.97641146]
class tinyms.primitives.Abs(*args, **kwargs)[source]

Returns absolute value of a tensor element-wise.

\[out_i = |x_i|\]
Inputs:
  • input_x (Tensor) - The input tensor. The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> abs = ops.Abs()
>>> output = abs(input_x)
>>> print(output)
[1. 1. 0.]
class tinyms.primitives.AccumulateNV2(*args, **kwargs)[source]

Computes accumulation of all input tensors element-wise.

AccumulateNV2 is similar to AddN, but there is a significant difference among them: AccumulateNV2 will not wait for all of its inputs to be ready before summing. That is to say, AccumulateNV2 is able to save memory when inputs are ready at different time since the minimum temporary storage is proportional to the output size rather than the input size.

Inputs:
  • input_x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors whose dtype is number to be added together.

Outputs:

Tensor, has the same shape and dtype as each entry of the input_x.

Supported Platforms:

Ascend

Examples

>>> class NetAccumulateNV2(nn.Cell):
...     def __init__(self):
...         super(NetAccumulateNV2, self).__init__()
...         self.accumulateNV2 = ops.AccumulateNV2()
...
...     def construct(self, *z):
...         return self.accumulateNV2(z)
...
>>> net = NetAccumulateNV2()
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> input_y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(input_x, input_y, input_x, input_y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.Acosh(*args, **kwargs)[source]

Computes inverse hyperbolic cosine of the input element-wise.

\[out_i = cosh^{-1}(input_i)\]
Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type should be one of the following types: float16, float32.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> acosh = ops.Acosh()
>>> input_x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = acosh(input_x)
>>> print(output)
[0. 0.9624236 1.7627472 5.298292]
class tinyms.primitives.Adam(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • m (Tensor) - The 1st moment vector in the updating formula, has the same type as var.

  • v (Tensor) - the 2nd moment vector in the updating formula. Mean square gradients with the same type as var.

  • beta1_power (float) - \(beta_1^t\) in the updating formula.

  • beta2_power (float) - \(beta_2^t\) in the updating formula.

  • lr (float) - \(l\) in the updating formula.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations.

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • gradient (Tensor) - Gradient, has the same type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adam = ops.Adam()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                               epsilon, grad)
...
...         return out
>>> np.random.seed(0)
>>> net = Net()
>>> gradient = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, gradient)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.99697924e-01,  9.99692678e-01],
 [ 9.99696255e-01,  9.99698043e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.54881310e-01,  9.71518934e-01],
 [ 9.60276306e-01,  9.54488277e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.99301195e-01,  9.99511480e-01],
 [ 9.99363303e-01,  9.99296904e-01]]))
class tinyms.primitives.AdamNoUpdateParam(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. This operator does not update the parameter; instead, it calculates the value that should be added to the parameter.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ \Delta{w} = - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents the parameter to be updated, \(\epsilon\) represents epsilon.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • m (Tensor) - The 1st moment vector in the updating formula. The data type must be float32.

  • v (Tensor) - the 2nd moment vector in the updating formula. The shape must be the same as m. The data type must be float32.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula. The data type must be float32.

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula. The data type must be float32.

  • lr (Tensor) - \(l\) in the updating formula. The data type must be float32.

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations. The data type must be float32.

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations. The data type must be float32.

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability. The data type must be float32.

  • gradient (Tensor) - Gradient, the shape must be the same as m, the data type must be float32.

Outputs:

Tensor, whose shape and data type are the same with gradient, is a value that should be added to the parameter to be updated.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam = ops.AdamNoUpdateParam()
...         self.m = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="m")
...         self.v = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.adam(self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad)
...         return out
>>> net = Net()
>>> beta1_power = Tensor(0.9, ms.float32)
>>> beta2_power = Tensor(0.999, ms.float32)
>>> lr = Tensor(0.001, ms.float32)
>>> beta1 = Tensor(0.9, ms.float32)
>>> beta2 = Tensor(0.999, ms.float32)
>>> epsilon = Tensor(1e-8, ms.float32)
>>> gradient = Tensor(np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]).astype(np.float32))
>>>
>>> result = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient)
>>> print(result)
[[-0.00010004 -0.00010004 -0.00010004]
 [-0.00013441 -0.00013441 -0.00013441]]
class tinyms.primitives.Add(*args, **kwargs)[source]

Adds two input tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} + y_{i}\]
Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> add = ops.Add()
>>> input_x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> input_y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(input_x, input_y)
>>> print(output)
[5. 7. 9.]
class tinyms.primitives.AddN(*args, **kwargs)[source]

Computes addition of all input tensors element-wise.

All input tensors must have the same shape.

Inputs:
  • input_x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors whose dtype is number or bool to be added together.

Outputs:

Tensor, has the same shape and dtype as each entry of the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class NetAddN(nn.Cell):
...     def __init__(self):
...         super(NetAddN, self).__init__()
...         self.addN = ops.AddN()
...
...     def construct(self, *z):
...         return self.addN(z)
...
>>> net = NetAddN()
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> input_y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(input_x, input_y, input_x, input_y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.AllGather(*args, **kwargs)[source]

Gathers tensors from the specified communication group.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters

group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If group is not a string.

  • ValueError – If the local rank id of the calling process in the group is larger than the group’s rank size.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the number of devices in the group is N, then the shape of output is \((N, x_1, x_2, ..., x_R)\).

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with two devices. Refer to the tutorial > Distributed Training on mindspore.cn.
>>> import numpy as np
>>> import mindspore.ops.operations as ops
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import Tensor, context
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allgather = ops.AllGather()
...
...     def construct(self, x):
...         return self.allgather(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]]
class tinyms.primitives.AllReduce(*args, **kwargs)[source]

Reduces the tensor data across all devices in such a way that all devices will get the same final result.

Note

The operation of AllReduce does not support “prod” currently. The tensors must have the same shape and format in all processes of the collection.

Parameters
  • op (str) – Specifies an operation used for element-wise reductions, like sum, max, and min. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If op or group is not a string, fusion is not an integer, or the input’s dtype is bool.

  • ValueError – If the operation is “prod”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore.ops.operations.comm_ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as ops
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM, group="nccl_world_group")
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[4. 5. 6. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0.]]
class tinyms.primitives.AllSwap(*args, **kwargs)[source]

AllSwap is a collective operation.

AllSwap sends data from all processes to all processes in the specified group. It has two phases:

  • The scatter phase: On each process, the operand is split into blocks along the 0-th axis according to the send size, and the blocks are scattered to all processes, e.g., the i-th block is sent to the i-th process.

  • The gather phase: Each process concatenates the received blocks along the 0-th axis.

Note

The tensors must have the same format in all processes of the collection.

Parameters

group (str) – The communication group name.

Inputs:

  • tensor_in (tensor) - A 2-D tensor. On each process, it is divided into blocks according to the send size.

  • send_size (tensor) - A 1-D int64 tensor. The element is the send data size for each process.

  • recv_size (tensor) - A 1-D int64 tensor. The element is the receive data size for each process.

Returns

The result tensor.

Return type

tensor_out (tensor)

Raises

TypeError – If group is not a string.
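
This entry has no example; the following is only a minimal sketch of how AllSwap could be wrapped in a Cell, assuming a communication group initialized with init() (as in the AllGather example) and send_size / recv_size tensors that follow the Inputs description above:

>>> import mindspore.nn as nn
>>> from tinyms.primitives import AllSwap
>>> class NetAllSwap(nn.Cell):
...     def __init__(self):
...         super(NetAllSwap, self).__init__()
...         # The communication group name may need to be passed explicitly for your environment.
...         self.allswap = AllSwap()
...     def construct(self, x, send_size, recv_size):
...         return self.allswap(x, send_size, recv_size)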

class tinyms.primitives.ApplyAdaMax(*args, **kwargs)[source]

Updates relevant entries according to the adamax scheme.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t} = \beta_1 * m_{t-1} + (1 - \beta_1) * g \\ v_{t} = \max(\beta_2 * v_{t-1}, \left| g \right|) \\ var = var - \frac{l}{1 - \beta_1^t} * \frac{m_{t}}{v_{t} + \epsilon} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t-1}\) is the last moment of \(m_{t}\), \(v\) represents the 2nd moment vector, \(v_{t-1}\) is the last moment of \(v_{t}\), \(l\) represents scaling factor lr, \(g\) represents grad, \(\beta_1, \beta_2\) represent beta1 and beta2, \(beta_1^t\) represents beta1_power, \(var\) represents the variable to be updated, \(\epsilon\) represents epsilon.

Inputs of var, m, v and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable to be updated. With float32 or float16 data type.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and type as var. With float32 or float16 data type.

  • v (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients with the same shape and type as var. With float32 or float16 data type.

  • beta1_power (Union[Number, Tensor]) - \(beta_1^t\) in the updating formula, must be scalar. With float32 or float16 data type.

  • lr (Union[Number, Tensor]) - Learning rate, \(l\) in the updating formula, must be scalar. With float32 or float16 data type.

  • beta1 (Union[Number, Tensor]) - The exponential decay rate for the 1st moment estimations, must be scalar. With float32 or float16 data type.

  • beta2 (Union[Number, Tensor]) - The exponential decay rate for the 2nd moment estimations, must be scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient, has the same shape and type as var. With float32 or float16 data type.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_ada_max = ops.ApplyAdaMax()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="v")
...     def construct(self, beta1_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_ada_max(self.var, self.m, self.v, beta1_power, lr, beta1, beta2, epsilon, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> beta1_power =Tensor(0.9, mstype.float32)
>>> lr = Tensor(0.001, mstype.float32)
>>> beta1 = Tensor(0.9, mstype.float32)
>>> beta2 = Tensor(0.99, mstype.float32)
>>> epsilon = Tensor(1e-10, mstype.float32)
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(beta1_power, lr, beta1, beta2, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.44221461e-01,  7.07908988e-01],
 [ 5.97648144e-01,  5.29388547e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.38093781e-01,  6.73864365e-01],
 [ 4.00932074e-01,  8.11308622e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.54026103e-01,  9.25596654e-01],
 [ 7.83807814e-01,  5.23605943e-01]]))
class tinyms.primitives.ApplyAdadelta(*args, **kwargs)[source]

Updates relevant entries according to the adadelta scheme.

\[accum = \rho * accum + (1 - \rho) * grad^2\]
\[\text{update} = \sqrt{\text{accum_update} + \epsilon} * \frac{grad}{\sqrt{accum + \epsilon}}\]
\[\text{accum_update} = \rho * \text{accum_update} + (1 - \rho) * update^2\]
\[var -= lr * update\]

Inputs of var, accum, accum_update and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Weights to be updated. With float32 or float16 data type.

  • accum (Parameter) - Accumulation to be updated, has the same shape and type as var. With float32 or float16 data type.

  • accum_update (Parameter) - Accum_update to be updated, has the same shape and type as var. With float32 or float16 data type.

  • lr (Union[Number, Tensor]) - Learning rate, must be scalar. With float32 or float16 data type.

  • rho (Union[Number, Tensor]) - Decay rate, must be scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be scalar. With float32 or float16 data type.

  • grad (Tensor) - Gradients, has the same shape and type as var. With float32 or float16 data type.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

  • accum_update (Tensor) - The same shape and data type as accum_update.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adadelta = ops.ApplyAdadelta()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum")
...         self.accum_update = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum_update")
...     def construct(self, lr, rho, epsilon, grad):
...         out = self.apply_adadelta(self.var, self.accum, self.accum_update, lr, rho, epsilon, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> lr = Tensor(0.001, mstype.float32)
>>> rho = Tensor(0.0, mstype.float32)
>>> epsilon = Tensor(1e-6, mstype.float32)
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(lr, rho, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.47831833e-01,  7.14570105e-01],
 [ 6.01873636e-01,  5.44156015e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.22674602e-01,  8.56729150e-01],
 [ 5.04612131e-03,  7.59151531e-03]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.63660717e-01,  3.83442074e-01],
 [ 7.91569054e-01,  5.28826237e-01]]))
class tinyms.primitives.ApplyAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme.

\[accum += grad * grad\]
\[var -= lr * grad * \frac{1}{\sqrt{accum}}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float32 or float16 data type.

  • accum (Parameter) - Accumulation to be updated. The shape and dtype must be the same as var. With float32 or float16 data type.

  • lr (Union[Number, Tensor]) - The learning rate value, must be scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient. The shape and dtype must be the same as var. With float32 or float16 data type.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Supported Platforms:

Ascend CPU GPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad = ops.ApplyAdagrad()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad(self.var, self.accum, lr, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> lr = Tensor(0.001, mstype.float32)
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.47984838e-01,  7.14758754e-01],
 [ 6.01995945e-01,  5.44394553e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.35230064e+00,  7.92921484e-01],
 [ 1.06441569e+00,  1.17150283e+00]]))
class tinyms.primitives.ApplyAdagradV2(*args, **kwargs)[source]

Updates relevant entries according to the adagradv2 scheme.

\[accum += grad * grad\]
\[var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • epsilon (float) – A small value added for numerical stability.

  • update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float16 or float32 data type.

  • accum (Parameter) - Accumulation to be updated. The shape and dtype must be the same as var. With float16 or float32 data type.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. The shape and dtype must be the same as var. With float16 or float32 data type.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad_v2 = ops.ApplyAdagradV2(epsilon=1e-6)
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad_v2(self.var, self.accum, lr, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> lr = Tensor(0.001, mstype.float32)
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.47984838e-01,  7.14758754e-01],
 [ 6.01995945e-01,  5.44394553e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.35230064e+00,  7.92921484e-01],
 [ 1.06441569e+00,  1.17150283e+00]]))
class tinyms.primitives.ApplyAddSign(*args, **kwargs)[source]

Updates relevant entries according to the AddSign algorithm.

\[\begin{split}\begin{array}{ll} \\ m_{t} = \beta * m_{t-1} + (1 - \beta) * g \\ \text{update} = (\alpha + \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t} * \text{update} \end{array}\end{split}\]

\(t\) represents the updating step while \(m\) represents the 1st moment vector, \(m_{t-1}\) is the last moment of \(m_{t}\), \(lr\) represents the scaling factor lr, \(g\) represents grad.

Inputs of var, m and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type.

  • m (Parameter) - Variable tensor to be updated, has the same dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float32 or float16 data type.

  • alpha (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • sign_decay (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor of the same type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_add_sign = ops.ApplyAddSign()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.alpha = 1.0
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_add_sign(self.var, self.m, self.lr, self.alpha, self.sign_decay, self.beta, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.46895862e-01,  7.14426279e-01],
 [ 6.01187825e-01,  5.43830693e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.77655590e-01,  6.19648814e-01],
 [ 4.73001003e-01,  8.55485201e-01]]))
class tinyms.primitives.ApplyCenteredRMSProp(*args, **kwargs)[source]

Optimizer that implements the centered RMSProp algorithm. Please refer to the usage in source code of nn.RMSProp.

Note

Update var according to the centered RMSProp algorithm.

\[g_{t} = \rho g_{t-1} + (1 - \rho)\nabla Q_{i}(w)\]
\[s_{t} = \rho s_{t-1} + (1 - \rho)(\nabla Q_{i}(w))^2\]
\[m_{t} = \beta m_{t-1} + \frac{\eta} {\sqrt{s_{t} - g_{t}^2 + \epsilon}} \nabla Q_{i}(w)\]
\[w = w - m_{t}\]

where \(w\) represents var, which will be updated. \(g_{t}\) represents mean_gradient, \(g_{t-1}\) is the last moment of \(g_{t}\). \(s_{t}\) represents mean_square, \(s_{t-1}\) is the last moment of \(s_{t}\), \(m_{t}\) represents moment, \(m_{t-1}\) is the last moment of \(m_{t}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Parameters

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • mean_gradient (Tensor) - Mean gradients, must have the same type as var.

  • mean_square (Tensor) - Mean square gradients, must have the same type as var.

  • moment (Tensor) - Delta of var, must have the same type as var.

  • grad (Tensor) - Gradient, must have the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • decay (float) - Decay rate.

  • momentum (float) - Momentum.

  • epsilon (float) - Ridge term.

Outputs:

Tensor, parameters to be updated.

Supported Platforms:

Ascend GPU

Examples

>>> centered_rms_prop = ops.ApplyCenteredRMSProp()
>>> input_x = Tensor(np.arange(-2, 2).astype(np.float32).reshape(2, 2), mindspore.float32)
>>> mean_grad = Tensor(np.arange(4).astype(np.float32).reshape(2, 2), mindspore.float32)
>>> mean_square = Tensor(np.arange(-3, 1).astype(np.float32).reshape(2, 2), mindspore.float32)
>>> moment = Tensor(np.arange(4).astype(np.float32).reshape(2, 2), mindspore.float32)
>>> grad = Tensor(np.arange(4).astype(np.float32).reshape(2, 2), mindspore.float32)
>>> learning_rate = Tensor(0.9, mindspore.float32)
>>> decay = 0.0
>>> momentum = 1e-10
>>> epsilon = 0.05
>>> output = centered_rms_prop(input_x, mean_grad, mean_square, moment, grad,
...                            learning_rate, decay, momentum, epsilon)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[-2.00000000e+00, -5.02492237e+00],
 [-8.04984474e+00, -1.10747662e+01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 0.00000000e+00,  1.00000000e+00],
 [ 2.00000000e+00,  3.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 0.00000000e+00,  1.00000000e+00],
 [ 4.00000000e+00,  9.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 0.00000000e+00,  4.02492237e+00],
 [ 8.04984474e+00,  1.20747662e+01]]))
class tinyms.primitives.ApplyFtrl(*args, **kwargs)[source]

Updates relevant entries according to the FTRL scheme.

Parameters

use_locking (bool) – Use locks for the updating operation if true. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32.

  • accum (Parameter) - The accumulation to be updated, must be same type and shape as var.

  • linear (Parameter) - The linear coefficient to be updated, must be same type and shape as var.

  • grad (Tensor) - Gradient. The data type must be float16 or float32.

  • lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • lr_power (Union[Number, Tensor]) - Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero. Default: -0.5. It must be a float number or a scalar tensor with float16 or float32 data type.

Outputs:

There are three outputs for Ascend environment.

  • var (Tensor) - represents the updated var.

  • accum (Tensor) - represents the updated accum.

  • linear (Tensor) - represents the updated linear.

There is only one output for GPU environment.

  • var (Tensor) - This value is always zero and the input parameters have been updated in-place.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> import mindspore.context as context
>>> from mindspore.ops import operations as ops
>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> np.random.seed(0)
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.random.randint(-4, 4, (2, 2)), mindspore.float32)
>>> output = net(input_x)
>>> is_tbe = context.get_context("device_target") == "Ascend"
>>> if is_tbe:
...     print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.61418092e-01,  5.30964255e-01],
 [ 2.68715084e-01,  3.82065028e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.64236546e+01,  9.64589405e+00],
 [ 1.43758726e+00,  9.89177322e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[-1.86994812e+03, -1.64906018e+03],
 [-3.22187836e+02, -1.20163989e+03]]))
>>> if not is_tbe:
...     print(net.var.asnumpy())
...     print(net.accum.asnumpy())
...     print(net.linear.asnumpy())
[[0.4614181  0.5309642 ]
 [0.2687151  0.38206503]]
[[16.423655  9.645894 ]
 [ 1.4375873 9.891773 ]]
[[-1869.9479 -1649.0599]
 [ -322.1879 -1201.6399]]
class tinyms.primitives.ApplyGradientDescent(*args, **kwargs)[source]

Updates relevant entries according to the following.

\[var = var - \alpha * \delta\]

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change, has the same type as var.

Outputs:

Tensor, represents the updated var.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_gradient_descent = ops.ApplyGradientDescent()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.alpha = 0.001
...     def construct(self, delta):
...         out = self.apply_gradient_descent(self.var, self.alpha, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(delta)
>>> print(output.shape)
(2, 2)
class tinyms.primitives.ApplyMomentum(*args, **kwargs)[source]

Optimizer that implements the Momentum algorithm.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. Data type conversion of Parameter is not supported. RuntimeError exception will be thrown.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

  • gradient_scale (float) – The scale of the gradient. Default: 1.0.

Inputs:
  • variable (Parameter) - Weights to be updated. Data type must be float.

  • accumulation (Parameter) - Accumulated gradient value by moment weight. Has the same data type with variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float number or a scalar tensor with float data type.

Outputs:

Tensor, parameters to be updated.

Raises

TypeError – If the use_locking or use_nesterov is not a bool or gradient_scale is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

Please refer to the usage in mindspore.nn.Momentum.
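In addition, a minimal sketch of calling the operator directly is given below (illustrative only; the cell name, shapes, and hyperparameter values are assumptions, not taken from mindspore.nn.Momentum). The returned tensor is the parameter to be updated, so it has the same shape as variable.

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class MomentumNet(nn.Cell):
...     def __init__(self):
...         super(MomentumNet, self).__init__()
...         self.apply_momentum = ops.ApplyMomentum()
...         self.variable = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="variable")
...         self.accumulation = Parameter(Tensor(np.zeros([2, 2]).astype(np.float32)), name="accumulation")
...         self.learning_rate = 0.1
...         self.momentum = 0.9
...     def construct(self, gradient):
...         return self.apply_momentum(self.variable, self.accumulation,
...                                    self.learning_rate, gradient, self.momentum)
...
>>> net = MomentumNet()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(gradient)
>>> print(output.shape)
(2, 2)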

class tinyms.primitives.ApplyPowerSign(*args, **kwargs)[source]

Updates relevant entries according to the PowerSign algorithm.

\[\begin{split}\begin{array}{ll} \\ m_{t} = \beta * m_{t-1} + (1 - \beta) * g \\ \text{update} = \exp(\text{logbase} * \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t} * \text{update} \end{array}\end{split}\]

\(t\) represents the updating step while \(m\) represents the 1st moment vector, \(m_{t-1}\) is the last moment of \(m_{t}\), \(lr\) represents the scaling factor lr, \(g\) represents grad.

All of inputs comply with the implicit type conversion rules to make the data types consistent. If lr, logbase, sign_decay or beta is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. If inputs are tensors and have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. If data type of var is float16, all inputs must have the same data type as var.

  • m (Parameter) - Variable tensor to be updated, has the same dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float32 or float16 data type.

  • logbase (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • sign_decay (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor of the same type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_power_sign = ops.ApplyPowerSign()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.logbase = np.e
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_power_sign(self.var, self.m, self.lr, self.logbase,
...                                        self.sign_decay, self.beta, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.34601569e-01,  7.09534407e-01],
 [ 5.91087103e-01,  5.37083089e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.77655590e-01,  6.19648814e-01],
 [ 4.73001003e-01,  8.55485201e-01]]))
class tinyms.primitives.ApplyProximalAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the proximal adagrad algorithm.

\[accum += grad * grad\]
\[\text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}}\]
\[var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0)\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – If true, the var and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32.

  • accum (Parameter) - Accumulation to be updated. Must have the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be scalar. The data type must be float16 or float32.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be scalar. The data type must be float16 or float32.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be scalar. The data type must be float16 or float32.

  • grad (Tensor) - Gradient with the same shape and dtype as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_adagrad = ops.ApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="accum")
...         self.lr = 0.01
...         self.l1 = 0.0
...         self.l2 = 0.0
...     def construct(self, grad):
...         out = self.apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1, self.l2, grad)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> grad = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.40526688e-01,  7.10883260e-01],
 [ 5.95089436e-01,  5.39996684e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.35230064e+00,  7.92921484e-01],
 [ 1.06441569e+00,  1.17150283e+00]]))
class tinyms.primitives.ApplyProximalGradientDescent(*args, **kwargs)[source]

Updates relevant entries according to the FOBOS(Forward Backward Splitting) algorithm.

\[\text{prox_v} = var - \alpha * \delta\]
\[var = \frac{sign(\text{prox_v})}{1 + \alpha * l2} * \max(\left| \text{prox_v} \right| - \alpha * l1, 0)\]

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be scalar. With float32 or float16 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change, has the same type as var.

Outputs:

Tensor, represents the updated var.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_gradient_descent = ops.ApplyProximalGradientDescent()
...         self.var = Parameter(Tensor(np.random.rand(2, 2).astype(np.float32)), name="var")
...         self.alpha = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...     def construct(self, delta):
...         out = self.apply_proximal_gradient_descent(self.var, self.alpha, self.l1, self.l2, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.random.rand(2, 2).astype(np.float32))
>>> output = net(delta)
>>> print(output.shape)
(2, 2)
class tinyms.primitives.ApplyRMSProp(*args, **kwargs)[source]

Optimizer that implements the Root Mean Square prop(RMSProp) algorithm. Please refer to the usage in source code of nn.RMSProp.

Note

Update var according to the RMSProp algorithm.

\[s_{t} = \rho s_{t-1} + (1 - \rho)(\nabla Q_{i}(w))^2\]
\[m_{t} = \beta m_{t-1} + \frac{\eta} {\sqrt{s_{t} + \epsilon}} \nabla Q_{i}(w)\]
\[w = w - m_{t}\]

where \(w\) represents var, which will be updated. \(s_{t}\) represents mean_square, \(s_{t-1}\) is the last moment of \(s_{t}\), \(m_{t}\) represents moment, \(m_{t-1}\) is the last moment of \(m_{t}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Parameters

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • mean_square (Tensor) - Mean square gradients, must have the same type as var.

  • moment (Tensor) - Delta of var, must have the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - Gradient, must have the same type as var.

  • decay (float) - Decay rate. Only constant value is allowed.

  • momentum (float) - Momentum. Only constant value is allowed.

  • epsilon (float) - Ridge term. Only constant value is allowed.

Outputs:

Tensor, parameters to be updated.

Supported Platforms:

Ascend GPU

Examples

>>> apply_rms = ops.ApplyRMSProp()
>>> input_x = Tensor(1., mindspore.float32)
>>> mean_square = Tensor(2., mindspore.float32)
>>> moment = Tensor(1., mindspore.float32)
>>> grad = Tensor(2., mindspore.float32)
>>> learning_rate = Tensor(0.9, mindspore.float32)
>>> decay = 0.0
>>> momentum = 1e-10
>>> epsilon = 0.001
>>> output = apply_rms(input_x, mean_square, moment, learning_rate, grad, decay, momentum, epsilon)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 0.100112), Tensor(shape=[], dtype=Float32, value= 4),
Tensor(shape=[], dtype=Float32, value= 0.899888))
class tinyms.primitives.ApproximateEqual(*args, **kwargs)[source]

Returns True if abs(x1-x2) is smaller than tolerance element-wise, otherwise False.

Inputs of x1 and x2 comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

tolerance (float) – The maximum deviation that two elements can be considered equal. Default: 1e-05.

Inputs:
  • x1 (Tensor) - A tensor. Must be one of the following types: float32, float16.

  • x2 (Tensor) - A tensor of the same type and shape as ‘x1’.

Outputs:

Tensor, the shape is the same as the shape of ‘x1’, and the data type is bool.

Supported Platforms:

Ascend

Examples

>>> x1 = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> x2 = Tensor(np.array([2, 4, 6]), mindspore.float32)
>>> approximate_equal = ops.ApproximateEqual(2.)
>>> output = approximate_equal(x1, x2)
>>> print(output)
[ True  True  False]
class tinyms.primitives.ArgMaxWithValue(*args, **kwargs)[source]

Calculates the maximum value with the corresponding index.

Calculates the maximum value along with the given axis for the input tensor. It returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Parameters
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true, the output will keep same dimension with the input, the output will reduce dimension if false. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input tensor.

  • index (Tensor) - The index for the maximum value of the input tensor. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

  • output_x (Tensor) - The maximum value of input tensor, with the same shape as index.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> index, output = ops.ArgMaxWithValue()(input_x)
>>> print(index, output)
3 0.7
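
To make the keep_dims shape rule above concrete, a small sketch follows (the input values are chosen here for illustration and are not from the original example); with axis=1 and keep_dims=True, the reduced axis is kept with size 1:

>>> input_x = Tensor(np.array([[1.0, 20.0, 5.0], [67.0, 8.0, 9.0]]), mindspore.float32)
>>> index, output = ops.ArgMaxWithValue(axis=1, keep_dims=True)(input_x)
>>> print(index.shape, output.shape)
(2, 1) (2, 1)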
class tinyms.primitives.ArgMinWithValue(*args, **kwargs)[source]

Calculates the minimum value with corresponding index, and returns indices and values.

Calculates the minimum value along with the given axis for the input tensor. It returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Parameters
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input tensor.

  • index (Tensor) - The index for the minimum value of the input tensor. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

  • output_x (Tensor) - The minimum value of input tensor, with the same shape as index.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output = ops.ArgMinWithValue()(input_x)
>>> print(output)
(Tensor(shape=[], dtype=Int32, value= 0), Tensor(shape=[], dtype=Float32, value= 0.0))
class tinyms.primitives.Argmax(*args, **kwargs)[source]

Returns the indices of the maximum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor will be \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters
  • axis (int) – Axis where the Argmax operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, indices of the max value of input tensor across the axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
>>> output = ops.Argmax(output_type=mindspore.int32)(input_x)
>>> print(output)
[1 0 0]
class tinyms.primitives.Argmin(*args, **kwargs)[source]

Returns the indices of the minimum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters
  • axis (int) – Axis where the Argmin operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, indices of the min value of input tensor across the axis.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = ops.Argmin()(input_x)
>>> print(index)
2
class tinyms.primitives.Asin(*args, **kwargs)[source]

Computes arcsine of input tensors element-wise.

\[out_i = sin^{-1}(x_i)\]
Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> asin = ops.Asin()
>>> input_x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = asin(input_x)
>>> print(output)
[0.8330927  0.04001068  0.30469266  0.59438497]
class tinyms.primitives.Asinh(*args, **kwargs)[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = sinh^{-1}(input_i)\]
Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type should be one of the following types: float16, float32.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> asinh = ops.Asinh()
>>> input_x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = asinh(input_x)
>>> print(output)
[-2.3124385  1.1947632  1.8184465  5.298342 ]
class tinyms.primitives.Assert(*args, **kwargs)[source]

Asserts that the given condition is true. If the condition evaluates to false, prints the list of tensors in input_data.

Parameters

summarize (int) – Print this many entries of each tensor.

Inputs:
  • condition (Union[Tensor[bool], bool]) - The condition to evaluate.

  • input_data (Union(tuple[Tensor], list[Tensor])) - The tensors to print out when condition is false.

Examples

>>> class AssertDemo(nn.Cell):
...     def __init__(self):
...         super(AssertDemo, self).__init__()
...         self.assert1 = ops.Assert(summarize=10)
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         data = self.add(x, y)
...         self.assert1(True, [data])
...         return data
...
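A possible invocation of the AssertDemo cell defined above (illustrative only; the input values and the printed result are assumptions based on the Add semantics, not part of the original example):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> net = AssertDemo()
>>> output = net(x, y)
>>> print(output)
[5. 7. 9.]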
class tinyms.primitives.Assign(*args, **kwargs)[source]

Assigns Parameter with a value.

Inputs of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • variable (Parameter) - The Parameter.

  • value (Tensor) - The value to be assigned.

Outputs:

Tensor, has the same type as original variable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.y = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="y")
...
...     def construct(self, x):
...         ops.Assign()(self.y, x)
...         return self.y
...
>>> x = Tensor([2.0], mindspore.float32)
>>> net = Net()
>>> output = net(x)
>>> print(output)
Parameter (name=y)
class tinyms.primitives.AssignAdd(*args, **kwargs)[source]

Updates a Parameter by adding a value to it.

Inputs of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • variable (Parameter) - The Parameter.

  • value (Union[numbers.Number, Tensor]) - The value to be added to the variable. It must have the same shape as variable if it is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignAdd = ops.AssignAdd()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int64), name="global_step")
...
...     def construct(self, x):
...         self.AssignAdd(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int64)*100)
>>> output = net(value)
>>> print(output)
Parameter (name=global_step)
class tinyms.primitives.AssignSub(*args, **kwargs)[source]

Updates a Parameter by subtracting a value from it.

Inputs of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • variable (Parameter) - The Parameter.

  • value (Union[numbers.Number, Tensor]) - The value to be subtracted from the variable. It must have the same shape as variable if it is a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignSub = ops.AssignSub()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
...
...     def construct(self, x):
...         self.AssignSub(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int32)*100)
>>> output = net(value)
>>> print(output)
Parameter (name=global_step)
class tinyms.primitives.Atan(*args, **kwargs)[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = tan^{-1}(x_i)\]
Inputs:
  • input_x (Tensor): The input tensor. The data type should be one of the following types: float16, float32.

Outputs:

A Tensor, has the same type as the input.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> atan = ops.Atan()
>>> output = atan(input_x)
>>> print(output)
[0.7853982 0.       ]
class tinyms.primitives.Atan2(*args, **kwargs)[source]

Returns arctangent of input_x/input_y element-wise.

It returns \(\theta\ \in\ [-\pi, \pi]\) such that \(x = r*\sin(\theta), y = r*\cos(\theta)\), where \(r = \sqrt{x^2 + y^2}\).

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • input_x (Tensor) - The input tensor.

  • input_y (Tensor) - The input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the same as input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([0, 1]), mindspore.float32)
>>> input_y = Tensor(np.array([1, 1]), mindspore.float32)
>>> atan2 = ops.Atan2()
>>> output = atan2(input_x, input_y)
>>> print(output)
[0.        0.7853982]
class tinyms.primitives.Atanh(*args, **kwargs)[source]

Computes inverse hyperbolic tangent of the input element-wise.

Inputs:
  • input_x (Tensor): The input tensor.

Outputs:

A Tensor, has the same type as the input.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([1.047, 0.785]), mindspore.float32)
>>> atanh = ops.Atanh()
>>> output = atanh(input_x)
>>> print(output)
[1.8869909 1.058268 ]
class tinyms.primitives.AvgPool(*args, **kwargs)[source]

Average pooling operation.

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), AvgPool2d outputs regional average in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \frac{1}{h_{ker} * w_{ker}} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents height and width are both kernel_size, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str) – The format of input and output data, ‘NCHW’ or ‘NHWC’. Default: ‘NCHW’.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.avgpool_op = ops.AvgPool(pad_mode="VALID", kernel_size=2, strides=1)
...
...     def construct(self, x):
...         result = self.avgpool_op(x)
...         return result
...
>>> input_x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
[[[[ 2.5   3.5   4.5]
   [ 6.5   7.5   8.5]]
  [[14.5  15.5  16.5]
   [18.5  19.5  20.5]]
  [[26.5  27.5  28.5]
   [30.5  31.5  32.5]]]]
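For comparison, a sketch of the same input with pad_mode="same" (illustrative only; just the output shape is shown). Because the strides are 1, "same" padding preserves the spatial size of the input:

>>> avgpool_same = ops.AvgPool(pad_mode="same", kernel_size=2, strides=1)
>>> output_same = avgpool_same(input_x)
>>> print(output_same.shape)
(1, 3, 3, 4)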
class tinyms.primitives.BNTrainingReduce(*args, **kwargs)[source]

For the BatchNorm operation, this operator updates the moving averages for training and is used in conjunction with BNTrainingUpdate.

Inputs:
  • x (Tensor) - A 4-D Tensor with float16 or float32 data type. Tensor of shape \((N, C, A, B)\).

Outputs:
  • sum (Tensor) - A 1-D Tensor with float32 data type. Tensor of shape \((C,)\).

  • square_sum (Tensor) - A 1-D Tensor with float32 data type. Tensor of shape \((C,)\).

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.ones([128, 3, 32, 3]), mindspore.float32)
>>> bn_training_reduce = ops.BNTrainingReduce()
>>> output = bn_training_reduce(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value=
[ 1.22880000e+04,  1.22880000e+04,  1.22880000e+04]), Tensor(shape=[3], dtype=Float32, value=
[ 1.22880000e+04,  1.22880000e+04,  1.22880000e+04]))
class tinyms.primitives.BNTrainingUpdate(*args, **kwargs)[source]

For the BatchNorm operation, this operator updates the moving averages for training and is used in conjunction with BNTrainingReduce.

Parameters
  • isRef (bool) – If a ref. Default: True.

  • epsilon (float) – A small value added to variance to avoid dividing by zero. Default: 1e-5.

  • factor (float) – A weight for updating the mean and variance. Default: 0.1.

Inputs:
  • x (Tensor) - A 4-D Tensor with float16 or float32 data type. Tensor of shape \((N, C, A, B)\).

  • sum (Tensor) - A 1-D Tensor with float16 or float32 data type for the output of operator BNTrainingReduce. Tensor of shape \((C,)\).

  • square_sum (Tensor) - A 1-D Tensor with float16 or float32 data type for the output of operator BNTrainingReduce. Tensor of shape \((C,)\).

  • scale (Tensor) - A 1-D Tensor with float16 or float32, for the scaling factor. Tensor of shape \((C,)\).

  • offset (Tensor) - A 1-D Tensor with float16 or float32, for the scaling offset. Tensor of shape \((C,)\).

  • mean (Tensor) - A 1-D Tensor with float16 or float32, for the scaling mean. Tensor of shape \((C,)\).

  • variance (Tensor) - A 1-D Tensor with float16 or float32, for the update variance. Tensor of shape \((C,)\).

Outputs:
  • y (Tensor) - Tensor, has the same shape and data type as x.

  • mean (Tensor) - Tensor for the updated mean, with float32 data type. Has the same shape as variance.

  • variance (Tensor) - Tensor for the updated variance, with float32 data type. Has the same shape as variance.

  • batch_mean (Tensor) - Tensor for the mean of x, with float32 data type. Has the same shape as variance.

  • batch_variance (Tensor) - Tensor for the variance of x, with float32 data type. Has the same shape as variance.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.ones([1, 2, 2, 2]), mindspore.float32)
>>> sum = Tensor(np.ones([2]), mindspore.float32)
>>> square_sum = Tensor(np.ones([2]), mindspore.float32)
>>> scale = Tensor(np.ones([2]), mindspore.float32)
>>> offset = Tensor(np.ones([2]), mindspore.float32)
>>> mean = Tensor(np.ones([2]), mindspore.float32)
>>> variance = Tensor(np.ones([2]), mindspore.float32)
>>> bn_training_update = ops.BNTrainingUpdate()
>>> output = bn_training_update(input_x, sum, square_sum, scale, offset, mean, variance)
>>> print(output)
(Tensor(shape=[1, 2, 2, 2], dtype=Float32, value=
[[[[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]],
  [[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]]]]), Tensor(shape=[1, 2, 2, 2], dtype=Float32, value=
[[[[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]],
  [[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]]]]), Tensor(shape=[1, 2, 2, 2], dtype=Float32, value=
[[[[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]],
  [[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]]]]), Tensor(shape=[2], dtype=Float32, value=
   [ 2.50000000e-01,  2.50000000e-01]), Tensor(shape=[2], dtype=Float32, value=
   [ 1.87500000e-01,  1.87500000e-01]))
class tinyms.primitives.BasicLSTMCell(*args, **kwargs)[source]

Applies the long short-term memory (LSTM) to the input.

\[\begin{split}\begin{array}{ll} \\ i_t = \sigma(W_{ix} x_t + b_{ix} + W_{ih} h_{(t-1)} + b_{ih}) \\ f_t = \sigma(W_{fx} x_t + b_{fx} + W_{fh} h_{(t-1)} + b_{fh}) \\ \tilde{c}_t = \tanh(W_{cx} x_t + b_{cx} + W_{ch} h_{(t-1)} + b_{ch}) \\ o_t = \sigma(W_{ox} x_t + b_{ox} + W_{oh} h_{(t-1)} + b_{oh}) \\ c_t = f_t * c_{(t-1)} + i_t * \tilde{c}_t \\ h_t = o_t * \tanh(c_t) \\ \end{array}\end{split}\]

Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ix}, b_{ix}\) are the weight and bias used to transform from input \(x\) to \(i\). Details can be found in paper LONG SHORT-TERM MEMORY and Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling.

Parameters
  • keep_prob (float) – If not 1.0, append Dropout layer on the outputs of each LSTM layer except the last layer. Default 1.0. The range of dropout is [0.0, 1.0].

  • forget_bias (float) – Add forget bias to forget gate biases in order to decrease former scale. Default: 1.0.

  • state_is_tuple (bool) – If true, the state is a tuple of 2 tensors, containing h and c; If false, the state is a tensor and it needs to be split first. Default: True.

  • activation (str) – Activation. Default: “tanh”. Only “tanh” is currently supported.

Inputs:
  • x (Tensor) - Current words. Tensor of shape (batch_size, input_size). The data type must be float16 or float32.

  • h (Tensor) - Hidden state last moment. Tensor of shape (batch_size, hidden_size). The data type must be float16 or float32.

  • c (Tensor) - Cell state last moment. Tensor of shape (batch_size, hidden_size). The data type must be float16 or float32.

  • w (Tensor) - Weight. Tensor of shape (input_size + hidden_size, 4 x hidden_size). The data type must be float16 or float32.

  • b (Tensor) - Bias. Tensor of shape (4 x hidden_size). The data type must be the same as c.

Outputs:
  • ct (Tensor) - Forward \(c_t\) cache at moment t. Tensor of shape (batch_size, hidden_size). Has the same type with input c.

  • ht (Tensor) - Cell output. Tensor of shape (batch_size, hidden_size). With data type of float16.

  • it (Tensor) - Forward \(i_t\) cache at moment t. Tensor of shape (batch_size, hidden_size). Has the same type with input c.

  • jt (Tensor) - Forward \(j_t\) cache at moment t. Tensor of shape (batch_size, hidden_size). Has the same type with input c.

  • ft (Tensor) - Forward \(f_t\) cache at moment t. Tensor of shape (batch_size, hidden_size). Has the same type with input c.

  • ot (Tensor) - Forward \(o_t\) cache at moment t. Tensor of shape (batch_size, hidden_size). Has the same type with input c.

  • tanhct (Tensor) - Forward \(tanh c_t\) cache at moment t. Tensor of shape (batch_size, hidden_size), has the same type with input c.

Supported Platforms:

Ascend

Examples

>>> np.random.seed(0)
>>> x = Tensor(np.random.rand(1, 32).astype(np.float16))
>>> h = Tensor(np.random.rand(1, 2).astype(np.float16))
>>> c = Tensor(np.random.rand(1, 2).astype(np.float16))
>>> w = Tensor(np.random.rand(34, 8).astype(np.float16))
>>> b = Tensor(np.random.rand(8, ).astype(np.float16))
>>> lstm = ops.BasicLSTMCell(keep_prob=1.0, forget_bias=1.0, state_is_tuple=True, activation='tanh')
>>> output = lstm(x, h, c, w, b)
>>> print(output)
(Tensor(shape=[1, 2], dtype=Float16, value=
([[7.6953e-01, 9.2432e-01]]), Tensor(shape=[1, 2], dtype=Float16, value=
 [[1.0000e+00, 1.0000e+00]]), Tensor(shape=[1, 2], dtype=Float16, value=
 [[1.0000e+00, 1.0000e+00]]), Tensor(shape=[1, 2], dtype=Float16, value=
 [[1.0000e+00, 1.0000e+00]]), Tensor(shape=[1, 2], dtype=Float16, value=
 [[1.0000e+00, 1.0000e+00]]), Tensor(shape=[1, 2], dtype=Float16, value=
 [[7.6953e-01, 9.2432e-01]]), Tensor(shape=[1, 2], dtype=Float16, value=
 [[0.0000e+00, 0.0000e+00]]))
class tinyms.primitives.BatchMatMul(*args, **kwargs)[source]

Computes matrix multiplication between two tensors by batch:

result[…, :, :] = tensor(a[…, :, :]) * tensor(b[…, :, :]).

The two input tensors must have the same rank and the rank must be not less than 3.

Parameters
  • transpose_a (bool) – If true, the last two dimensions of a are transposed before multiplication. Default: False.

  • transpose_b (bool) – If true, the last two dimensions of b are transposed before multiplication. Default: False.

Inputs:
  • input_x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions. If transpose_a is True, its shape must be \((*B, C, N)\).

  • input_y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\). If transpose_b is True, its shape must be \((*B, M, C)\).

Outputs:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.ones(shape=[2, 4, 1, 3]), mindspore.float32)
>>> input_y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul()
>>> output = batmatmul(input_x, input_y)
>>> print(output)
[[[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]],
 [[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]]]
>>>
>>> input_x = Tensor(np.ones(shape=[2, 4, 3, 1]), mindspore.float32)
>>> input_y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul(transpose_a=True)
>>> output = batmatmul(input_x, input_y)
>>> print(output)
[[[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]],
 [[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]]]
class tinyms.primitives.BatchNorm(*args, **kwargs)[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over input to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters which can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • is_training (bool) – If is_training is True, mean and variance are computed during training. If is_training is False, they’re loaded from checkpoint during inference. Default: False.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: “NCHW”.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Tensor) - Tensor of shape \((C,)\), with float16 or float32 data type.

  • bias (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • mean (Tensor) - Tensor of shape \((C,)\), with float16 or float32 data type.

  • variance (Tensor) - Tensor of shape \((C,)\), has the same data type with mean.

Outputs:

Tuple of 5 Tensors, the normalized inputs and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

  • updated_scale (Tensor) - Tensor of shape \((C,)\).

  • updated_bias (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_1 (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_2 (Tensor) - Tensor of shape \((C,)\).

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.ones([2, 2]), mindspore.float32)
>>> scale = Tensor(np.ones([2]), mindspore.float32)
>>> bias = Tensor(np.ones([2]), mindspore.float32)
>>> mean = Tensor(np.ones([2]), mindspore.float32)
>>> variance = Tensor(np.ones([2]), mindspore.float32)
>>> batch_norm = ops.BatchNorm()
>>> output = batch_norm(input_x, scale, bias, mean, variance)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00, 1.00000000e+00],
 [ 1.00000000e+00, 1.00000000e+00]]), Tensor(shape=[2], dtype=Float32, value=
 [ 1.00000000e+00, 1.00000000e+00]), Tensor(shape=[2], dtype=Float32, value=
 [ 1.00000000e+00, 1.00000000e+00]), Tensor(shape=[2], dtype=Float32, value=
 [ 1.00000000e+00, 1.00000000e+00]), Tensor(shape=[2], dtype=Float32, value=
 [ 1.00000000e+00, 1.00000000e+00]))
class tinyms.primitives.BatchToSpace(*args, **kwargs)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation will divide batch dimension N into blocks with block_size; the output tensor's N dimension is the corresponding number of blocks after division. The output tensor's H and W dimensions are the product of the original H and W dimensions and block_size, minus the corresponding crop amounts, respectively.

Parameters
  • block_size (int) – The block size of division, with a value not less than 2.

  • crops (Union[list(int), tuple(int)]) – The crop values for the H and W dimensions, containing 2 subtraction lists. Each list contains 2 integers. All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which corresponds to the input dimension i+2. It is required that input_shape[i+2]*block_size >= crops[i][0]+crops[i][1].

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor, and dimension 0 must be divisible by the square of block_size.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_size and crops. The output shape will be (n’, c’, h’, w’), where

\(n' = n//(block\_size*block\_size)\)

\(c' = c\)

\(h' = h*block\_size-crops[0][0]-crops[0][1]\)

\(w' = w*block\_size-crops[1][0]-crops[1][1]\)

Supported Platforms:

Ascend

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space = ops.BatchToSpace(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BatchToSpaceND(*args, **kwargs)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation will divide batch dimension N into blocks with block_shape; the output tensor's N dimension is the corresponding number of blocks after division. The output tensor's H and W dimensions are the product of the original H and W dimensions and the corresponding block_shape values, minus the corresponding crop amounts, respectively.

Parameters
  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing block, with all values greater than 1. If block_shape is a tuple or list, the length of block_shape is M, corresponding to the number of spatial dimensions. If block_shape is an int, the block size of all M dimensions is the same, equal to block_shape. M must be 2.

  • crops (Union[list(int), tuple(int)]) – The crop values for the H and W dimensions, containing 2 subtraction lists, each containing 2 integers. All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]*block_shape[i] > crops[i][0]+crops[i][1].

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by product of block_shape.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_shape and crops. The output shape will be (n’, c’, h’, w’), where

\(n' = n//(block\_shape[0]*block\_shape[1])\)

\(c' = c\)

\(h' = h*block\_shape[0]-crops[0][0]-crops[0][1]\)

\(w' = w*block\_shape[1]-crops[1][0]-crops[1][1]\)

Supported Platforms:

Ascend

Examples

>>> block_shape = [2, 2]
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space_nd = ops.BatchToSpaceND(block_shape, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space_nd(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BesselI0e(*args, **kwargs)[source]

Computes BesselI0e of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). Data type must be float16 or float32.

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> bessel_i0e = ops.BesselI0e()
>>> input_x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i0e(input_x)
>>> print(output)
[0.7979961  0.5144438  0.75117415  0.9157829 ]
class tinyms.primitives.BesselI1e(*args, **kwargs)[source]

Computes BesselI1e of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). Data type must be float16 or float32.

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> bessel_i1e = ops.BesselI1e()
>>> input_x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i1e(input_x)
>>> print(output)
[0.09507662 0.19699717 0.11505538 0.04116856]
class tinyms.primitives.BiasAdd(*args, **kwargs)[source]

Returns sum of input and bias tensor.

Adds the 1-D bias tensor to the input tensor, and broadcasts the shape on all axes except for the channel axis.

Inputs:
  • input_x (Tensor) - The input tensor. The shape can be 2-4 dimensions.

  • bias (Tensor) - The bias tensor, with shape \((C)\).

  • data_format (str) - The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’. The shape of bias must be the same as input_x in the second dimension.

Outputs:

Tensor, with the same shape and type as input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3,)), mindspore.float32)
>>> bias_add = ops.BiasAdd()
>>> output = bias_add(input_x, bias)
>>> print(output.shape)
(2, 3)
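Because the example above uses random bias values, it only prints the output shape. A small value-level sketch (with a hand-picked bias, chosen here purely for illustration) makes the channel-wise broadcast visible:

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.array([1., 2., 3.]), mindspore.float32)
>>> bias_add = ops.BiasAdd()
>>> output = bias_add(input_x, bias)
>>> print(output)
[[1. 3. 5.]
 [4. 6. 8.]]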
class tinyms.primitives.BinaryCrossEntropy(*args, **kwargs)[source]

Computes the binary cross entropy between the target and the output.

Sets input as \(x\), input label as \(y\), output as \(\ell(x, y)\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters

reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

Inputs:
  • input_x (Tensor) - The input Tensor. The data type must be float16 or float32.

  • input_y (Tensor) - The label Tensor which has same shape and data type as input_x.

  • weight (Tensor, optional) - A rescaling weight applied to the loss of each batch element. It must have the same shape and data type as input_x. Default: None.

Outputs:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as input_x. Otherwise, the output is a scalar.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.binary_cross_entropy = ops.BinaryCrossEntropy()
...     def construct(self, x, y, weight):
...         result = self.binary_cross_entropy(x, y, weight)
...         return result
...
>>> net = Net()
>>> input_x = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> input_y = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = net(input_x, input_y, weight)
>>> print(output)
0.38240486
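As a sanity check, the scalar above can be reproduced with plain NumPy directly from the formula (a reference sketch, not part of the operator API):

>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1], dtype=np.float32)
>>> y = np.array([0., 1., 0.], dtype=np.float32)
>>> w = np.array([1., 2., 2.], dtype=np.float32)
>>> loss = -w * (y * np.log(x) + (1 - y) * np.log(1 - x))
>>> print(np.allclose(loss.mean(), 0.38240486))
True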
class tinyms.primitives.BitwiseAnd(*args, **kwargs)[source]

Returns bitwise and of two tensors element-wise.

Inputs of input_x1 and input_x2 comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively higher-priority data type. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Inputs:
  • input_x1 (Tensor) - The input tensor with int16, int32 or uint16 data type.

  • input_x2 (Tensor) - The input tensor with same type as the input_x1.

Outputs:

Tensor, has the same type as the input_x1.

Supported Platforms:

Ascend

Examples

>>> input_x1 = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> input_x2 = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_and = ops.BitwiseAnd()
>>> output = bitwise_and(input_x1, input_x2)
>>> print(output)
[ 0  0  1 -1  1  0  1]
class tinyms.primitives.BitwiseOr(*args, **kwargs)[source]

Returns bitwise or of two tensors element-wise.

Inputs of input_x1 and input_x2 comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively higher-priority data type. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Inputs:
  • input_x1 (Tensor) - The input tensor with int16, int32 or uint16 data type.

  • input_x2 (Tensor) - The input tensor with same type as the input_x1.

Outputs:

Tensor, has the same type as the input_x1.

Supported Platforms:

Ascend

Examples

>>> input_x1 = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> input_x2 = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_or = ops.BitwiseOr()
>>> output = bitwise_or(input_x1, input_x2)
>>> print(output)
[ 0  1  1 -1 -1  3  3]
class tinyms.primitives.BitwiseXor(*args, **kwargs)[source]

Returns bitwise xor of two tensors element-wise.

Inputs of input_x1 and input_x2 comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively higher-priority data type. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Inputs:
  • input_x1 (Tensor) - The input tensor with int16, int32 or uint16 data type.

  • input_x2 (Tensor) - The input tensor with same type as the input_x1.

Outputs:

Tensor, has the same type as the input_x1.

Supported Platforms:

Ascend

Examples

>>> input_x1 = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> input_x2 = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_xor = ops.BitwiseXor()
>>> output = bitwise_xor(input_x1, input_x2)
>>> print(output)
[ 0  1  0  0 -2  3  2]
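For reference, the three bitwise operators above behave like their NumPy counterparts on the same int16 data (a cross-check sketch):

>>> import numpy as np
>>> a = np.array([0, 0, 1, -1, 1, 1, 1], dtype=np.int16)
>>> b = np.array([0, 1, 1, -1, -1, 2, 3], dtype=np.int16)
>>> print(np.bitwise_and(a, b))
[ 0  0  1 -1  1  0  1]
>>> print(np.bitwise_or(a, b))
[ 0  1  1 -1 -1  3  3]
>>> print(np.bitwise_xor(a, b))
[ 0  1  0  0 -2  3  2]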
class tinyms.primitives.BoundingBoxDecode(*args, **kwargs)[source]

Decodes bounding boxes locations.

Parameters
  • means (tuple) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • max_shape (tuple) – The max size limit for decoding box calculation.

  • wh_ratio_clip (float) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be (n, 4).

  • deltas (Tensor) - Delta of boxes, which has the same shape as anchor_box.

Outputs:

Tensor, decoded boxes.

Supported Platforms:

Ascend GPU

Examples

>>> anchor_box = Tensor([[4, 1, 2, 1], [2, 2, 2, 3]], mindspore.float32)
>>> deltas = Tensor([[3, 1, 2, 2], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_decode = ops.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0),
...                                          max_shape=(768, 1280), wh_ratio_clip=0.016)
>>> output = boundingbox_decode(anchor_box, deltas)
>>> print(output)
[[ 4.1953125  0.         0.         5.1953125]
 [ 2.140625   0.         3.859375  60.59375  ]]
class tinyms.primitives.BoundingBoxEncode(*args, **kwargs)[source]

Encodes bounding boxes locations.

Parameters
  • means (tuple) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be (n, 4).

  • groundtruth_box (Tensor) - Ground truth boxes, which have the same shape as anchor_box.

Outputs:

Tensor, encoded bounding boxes.

Supported Platforms:

Ascend GPU

Examples

>>> anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]], mindspore.float32)
>>> groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_encode = ops.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
>>> output = boundingbox_encode(anchor_box, groundtruth_box)
>>> print(output)
[[ -1.  0.25  0.  0.40551758]
 [ -1.  0.25  0.  0.40551758]]
class tinyms.primitives.Broadcast(*args, **kwargs)[source]

Broadcasts the tensor to the whole group.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters
  • root_rank (int) – Source rank. Required in all processes except the one that is sending the data.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the data of the root_rank device.

Raises

TypeError – If root_rank is not an integer or group is not a string.

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with multiple processes.
>>> # Please refer to the tutorial > Distributed Training on mindspore.cn.
>>> from mindspore import Tensor
>>> from mindspore import context
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as ops
>>> import numpy as np
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.broadcast = ops.Broadcast(1)
...
...     def construct(self, x):
...         return self.broadcast((x,))
...
>>> input_ = Tensor(np.ones([2, 4]).astype(np.int32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
(Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1],
 [1, 1, 1, 1]]),)
class tinyms.primitives.BroadcastTo(*args, **kwargs)[source]

Broadcasts input tensor to a given shape.

The input shape can be broadcast to the target shape if, for each dimension pair, they are either equal, or the input dimension is 1, or the target dimension is -1. When the target shape contains -1, it is replaced by the input shape’s value in that dimension.

When input shape is broadcast to target shape, it starts with the trailing dimensions.

Parameters

shape (tuple) – The target shape to broadcast. Can be fully specified, or have -1 in one position where it will be substituted by the input tensor’s shape in that position, see example.

Inputs:
  • input_x (Tensor) - The input tensor. The data type should be one of the following types: float16, float32, int32, int8, uint8.

Outputs:

Tensor, with the given shape and the same data type as input_x.

Raises

ValueError – Given a shape tuple, if it contains more than one -1; or if the -1 is in an invalid position, such as one that does not have an opposing dimension in the input tensor; or if the target and input shapes are incompatible.

Supported Platforms:

Ascend GPU

Examples

>>> shape = (2, 3)
>>> input_x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> broadcast_to = ops.BroadcastTo(shape)
>>> output = broadcast_to(input_x)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>> shape = (2, -1)
>>> input_x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> broadcast_to = ops.BroadcastTo(shape)
>>> output = broadcast_to(input_x)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
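For fully specified target shapes, the broadcasting rule matches NumPy’s; the first example above can be reproduced as follows (a reference sketch; the -1 substitution is specific to this operator):

>>> import numpy as np
>>> print(np.broadcast_to(np.array([1, 2, 3]).astype(np.float32), (2, 3)))
[[1. 2. 3.]
 [1. 2. 3.]]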
class tinyms.primitives.CTCGreedyDecoder(*args, **kwargs)[source]

Performs greedy decoding on the logits given in inputs.

Parameters

merge_repeated (bool) – If true, merge repeated classes in output. Default: True.

Inputs:
  • inputs (Tensor) - The input Tensor must be a 3-D tensor whose shape is (max_time, batch_size, num_classes). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float32 or float64.

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of (batch_size). The type must be int32. Each value in the tensor must be equal to or less than max_time.

Outputs:
  • decoded_indices (Tensor) - A tensor with shape of (total_decoded_outputs, 2). Data type is int64.

  • decoded_values (Tensor) - A tensor with shape of (total_decoded_outputs), it stores the decoded classes. Data type is int64.

  • decoded_shape (Tensor) - The value of the tensor is [batch_size, max_decoded_length]. Data type is int64.

  • log_probability (Tensor) - A tensor with shape of (batch_size, 1), containing sequence log-probability, has the same type as inputs.

Examples

>>> class CTCGreedyDecoderNet(nn.Cell):
...    def __init__(self):
...        super(CTCGreedyDecoderNet, self).__init__()
...        self.ctc_greedy_decoder = ops.CTCGreedyDecoder()
...        self.assert_op = ops.Assert(300)
...
...    def construct(self, inputs, sequence_length):
...        out = self.ctc_greedy_decoder(inputs, sequence_length)
...        self.assert_op(True, (out[0], out[1], out[2], out[3]))
...        return out[2]
...
>>> inputs = Tensor(np.random.random((2, 2, 3)), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> net = CTCGreedyDecoderNet()
>>> output = net(inputs, sequence_length)
>>> print(output)
class tinyms.primitives.CTCLoss(*args, **kwargs)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

Parameters
  • preprocess_collapse_repeated (bool) – If true, repeated labels will be collapsed prior to the CTC calculation. Default: False.

  • ctc_merge_repeated (bool) – If false, during CTC calculation, repeated non-blank labels will not be merged and these labels will be interpreted as individual ones. This is a simplified version of CTC. Default: True.

  • ignore_longer_outputs_than_inputs (bool) – If true, sequences with longer outputs than inputs will be ignored. Default: False.

Inputs:
  • inputs (Tensor) - The input Tensor must be a 3-D tensor whose shape is (max_time, batch_size, num_classes). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float16, float32 or float64.

  • labels_indices (Tensor) - The indices of labels. labels_indices[i, :] == [b, t] means labels_values[i] stores the id for (batch b, time t). The type must be int64 and rank must be 2.

  • labels_values (Tensor) - A 1-D input tensor. The values are associated with the given batch and time. The type must be int32. labels_values[i] must be in the range of [0, num_classes).

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of (batch_size). The type must be int32. Each value in the tensor must not be greater than max_time.

Outputs:
  • loss (Tensor) - A tensor containing log-probabilities, the shape is (batch_size). The tensor has the same type with inputs.

  • gradient (Tensor) - The gradient of loss, has the same type and shape with inputs.

Supported Platforms:

Ascend GPU

Examples

>>> np.random.seed(0)
>>> inputs = Tensor(np.random.random((2, 2, 3)), mindspore.float32)
>>> labels_indices = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int64)
>>> labels_values = Tensor(np.array([2, 2]), mindspore.int32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> ctc_loss = ops.CTCLoss()
>>> loss, gradient = ctc_loss(inputs, labels_indices, labels_values, sequence_length)
>>> print(loss)
[ 0.7864997  0.720426 ]
>>> print(gradient)
[[[ 0.30898064  0.36491138  -0.673892  ]
  [ 0.33421117  0.2960548  -0.63026595 ]]
 [[ 0.23434742  0.36907154  0.11261538 ]
  [ 0.27316454  0.41090325  0.07584976 ]]]
class tinyms.primitives.Cast(*args, **kwargs)[source]

Returns a tensor with the new specified data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The tensor to be cast.

  • type (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is the same as input_x, \((x_1, x_2, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input_x = Tensor(input_np)
>>> type_dst = mindspore.int32
>>> cast = ops.Cast()
>>> output = cast(input_x, type_dst)
>>> print(output.dtype)
Int32
>>> print(output.shape)
(2, 3, 4, 5)
class tinyms.primitives.Ceil(*args, **kwargs)[source]

Rounds a tensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil = \lfloor x_i \rfloor + 1\]
Inputs:
  • input_x (Tensor) - The input tensor. It’s element data type must be float16 or float32.

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> ceil_op = ops.Ceil()
>>> output = ceil_op(input_x)
>>> print(output)
[ 2.  3. -1.]
class tinyms.primitives.CheckBprop(*args, **kwargs)[source]

Checks whether the data type and the shape of corresponding elements from tuples x and y are the same.

Raises

TypeError – If tuples x and y are not the same.

Inputs:
  • input_x (tuple[Tensor]) - The input_x contains the outputs of bprop to be checked.

  • input_y (tuple[Tensor]) - The input_y contains the inputs of bprop to check against.

Outputs:

(tuple[Tensor]), the input_x, if data type and shape of corresponding elements from input_x and input_y are the same.

Examples

>>> input_x = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> input_y = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> out = ops.CheckBprop()(input_x, input_y)
class tinyms.primitives.CheckValid(*args, **kwargs)[source]

Checks bounding box.

Checks whether the bounding box cross data and data border are valid.

Inputs:
  • bboxes (Tensor) - Bounding boxes tensor with shape (N, 4). Data type must be float16 or float32.

  • img_metas (Tensor) - Raw image size information with the format of (height, width, ratio). Data type must be float16 or float32.

Outputs:

Tensor, with shape of (N,) and dtype of bool.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.check_valid = ops.CheckValid()
...     def construct(self, x, y):
...         valid_result = self.check_valid(x, y)
...         return valid_result
...
>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> net = Net()
>>> output = net(bboxes, img_metas)
>>> print(output)
[ True False False]
class tinyms.primitives.ComputeAccidentalHits(*args, **kwargs)[source]

Compute accidental hits of sampled classes which happen to match target classes.

When a target class matches the sample class, we call it “accidental hit”. The result of calculating accidental hit contains three parts (index, id, weight), where index represents the row number in true_classes, and id represents the position in sampled_candidates, the weight is -FLOAT_MAX.

Parameters

num_true (int) – The number of target classes per training example.

Inputs:
  • true_classes (Tensor) - The target classes. With data type of int32 or int64 and shape [batch_size, num_true].

  • sampled_candidates (Tensor) - The sampled_candidates output of CandidateSampler, with data type of int32 or int64 and shape [num_sampled].

Outputs:

Tuple of 3 Tensors.

  • indices (Tensor) - A Tensor with shape (num_accidental_hits,), with the same type as true_classes.

  • ids (Tensor) - A Tensor with shape (num_accidental_hits,), with the same type as true_classes.

  • weights (Tensor) - A Tensor with shape (num_accidental_hits,), with the type float32.

Supported Platforms:

Ascend

Examples

>>> x = np.array([[1, 2], [0, 4], [3, 3]])
>>> y = np.array([0, 1, 2, 3, 4])
>>> sampler = ops.ComputeAccidentalHits(2)
>>> output1, output2, output3 = sampler(Tensor(x), Tensor(y))
>>> print(output1, output2, output3)
[0 0 1 1 2 2]
[1 2 0 4 3 3]
[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
class tinyms.primitives.Concat(*args, **kwargs)[source]

Connects tensors along the specified axis.

Connects input tensors along the given axis.

The input data is a tuple of tensors. These tensors have the same rank R. Set the given axis as m, and \(0 \le m < R\). Set the number of input tensors as N. For the \(i\)-th tensor \(t_i\), it has the shape of \((x_1, x_2, ..., x_{mi}, ..., x_R)\). \(x_{mi}\) is the \(m\)-th dimension of the \(i\)-th tensor. Then, the shape of the output tensor is

\[(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\]
Parameters

axis (int) – The specified axis. Default: 0.

Inputs:
  • input_x (tuple, list) - A tuple or a list of input tensors.

Outputs:

Tensor, the shape is \((x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> data1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> data2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> op = ops.Concat()
>>> output = op((data1, data2))
>>> print(output)
[[0. 1.]
 [2. 1.]
 [0. 1.]
 [2. 1.]]
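Concatenating the same inputs along axis 1 instead illustrates the shape formula above, with \((2, 2)\) and \((2, 2)\) giving \((2, 4)\) (a short additional sketch):

>>> op_axis1 = ops.Concat(axis=1)
>>> output = op_axis1((data1, data2))
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]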
class tinyms.primitives.ConfusionMatrix(*args, **kwargs)[source]

Calculates the confusion matrix from labels and predictions.

Parameters
  • num_classes (int) – The num of classes.

  • dtype (str) – Data type of confusion matrix. Default: ‘int32’.

Inputs:
  • labels (Tensor) - Real labels, a 1-D tensor. The dtype must be a non-negative integer type.

  • predictions (Tensor) - The labels from prediction, a 1-D tensor with the same shape as labels. The dtype must be a non-negative integer type.

  • weights (Tensor) - A 1-D tensor with the same shape as predictions.

Outputs:

Tensor, the confusion matrix, with shape (num_classes, num_classes).

Examples

>>> confusion_matrix = ops.ConfusionMatrix(4)
>>> labels = Tensor([0, 1, 1, 3], mindspore.int32)
>>> predictions = Tensor([1, 2, 1, 3], mindspore.int32)
>>> output = confusion_matrix(labels, predictions)
>>> print(output)
[[0 1 0 0]
 [0 1 1 0]
 [0 0 0 0]
 [0 0 0 1]]
class tinyms.primitives.ControlDepend(**kwargs)[source]

Adds control dependency relation between source and destination operations.

In many cases, we need to control the execution order of operations. ControlDepend is designed for this. ControlDepend instructs the execution engine to run operations in a specific order. ControlDepend tells the engine that the destination operations must depend on the source operations, which means the source operations must be executed before the destination operations.

Note

This operation does not work in PYNATIVE_MODE. ControlDepend is deprecated from version 1.1 and will be removed in a future version, use Depend instead.

Parameters

depend_mode (int) – Use 0 for a normal dependency relation and 1 for a user-defined dependency relation. Default: 0.

Inputs:
  • src (Any) - The source input. It can be a tuple of operation outputs or a single operation output. We are not concerned with the input data itself, but with the operation that generates it. If depend_mode is 1 and the source input is a Parameter, we will try to find the operations that used the parameter as input.

  • dst (Any) - The destination input. It can be a tuple of operation outputs or a single operation output. We are not concerned with the input data itself, but with the operation that generates it. If depend_mode is 1 and the destination input is a Parameter, we will try to find the operations that used the parameter as input.

Outputs:

This operation has no actual data output, it will be used to setup the order of relative operations.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.control_depend = ops.ControlDepend()
...         self.softmax = ops.Softmax()
...
...     def construct(self, x, y):
...         mul = x * y
...         softmax = self.softmax(x)
...         ret = self.control_depend(mul, softmax)
...         return ret
...
>>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
[[1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]]
class tinyms.primitives.Conv2D(*args, **kwargs)[source]

2D convolution layer.

Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size and \(C_{in}\) is channel number. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel and it has shape \((\text{ks_h}, \text{ks_w})\), where \(\text{ks_h}\) and \(\text{ks_w}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} // \text{group}, \text{ks_h}, \text{ks_w})\), where group is the group number to split the input in the channel dimension.

If the ‘pad_mode’ is set to be “valid”, the output height and width will be \(\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} - (\text{ks_h} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} - (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) respectively.
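For instance, with the example below (\(H_{in} = W_{in} = 32\), \(\text{ks_h} = \text{ks_w} = 3\), padding = 0, dilation = 1, stride = 1), this gives \(\left \lfloor{1 + \frac{32 + 0 - 3 - 0}{1}} \right \rfloor = 30\) for both the height and the width, matching the output shape (10, 32, 30, 30).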

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition. More detailed introduction can be found here: http://cs231n.github.io/convolutional-networks/.

Parameters
  • out_channel (int) – The dimension of the output.

  • kernel_size (Union[int, tuple[int]]) – The kernel size of the 2D convolution.

  • mode (int) – Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution, 2 deconvolution, 3 depthwise convolution. Default: 1.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

  • stride (Union(int, tuple[int])) – The stride to be applied to the convolution filter. Default: 1.

  • dilation (Union(int, tuple[int])) – Specifies the space to use between kernel elements. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: “NCHW”.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set size of kernel is \((K_1, K_2)\), then the shape is \((C_{out}, C_{in}, K_1, K_2)\).

Outputs:

Tensor, the result of the 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> conv2d = ops.Conv2D(out_channel=32, kernel_size=3)
>>> output = conv2d(input, weight)
>>> print(output.shape)
(10, 32, 30, 30)
class tinyms.primitives.Conv2DBackpropInput(*args, **kwargs)[source]

Computes the gradients of convolution with respect to the input.

Parameters
  • out_channel (int) – The dimensionality of the output space.

  • kernel_size (Union[int, tuple[int]]) – The size of the convolution window.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”.

  • pad (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

  • mode (int) – Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution, 2 deconvolution, 3 depthwise convolution. Default: 1.

  • stride (Union[int, tuple[int]]) – The stride to be applied to the convolution filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to be used for the dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • dout (Tensor) - the gradients w.r.t the output of the convolution. The shape conforms to the default data_format \((N, C_{out}, H_{out}, W_{out})\).

  • weight (Tensor) - Set size of kernel is \((K_1, K_2)\), then the shape is \((C_{out}, C_{in}, K_1, K_2)\).

  • input_size (Tensor) - A tuple describes the shape of the input which conforms to the format \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the gradients w.r.t the input of convolution. It has the same shape as the input.

Supported Platforms:

Ascend GPU

Examples

>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> x = Tensor(np.ones([10, 32, 32, 32]))
>>> conv2d_backprop_input = ops.Conv2DBackpropInput(out_channel=32, kernel_size=3)
>>> output = conv2d_backprop_input(dout, weight, F.shape(x))
>>> print(output.shape)
(10, 32, 32, 32)
class tinyms.primitives.Cos(*args, **kwargs)[source]

Computes cosine of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> cos = ops.Cos()
>>> input_x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cos(input_x)
>>> print(output)
[0.971338 0.67487574 0.95233357 0.9959527 ]
class tinyms.primitives.Cosh(*args, **kwargs)[source]

Computes hyperbolic cosine of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> cosh = ops.Cosh()
>>> input_x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cosh(input_x)
>>> print(output)
[1.0289385 1.364684 1.048436 1.0040528]
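For reference, the outputs of Cos and Cosh above agree with NumPy on the same inputs (a cross-check sketch):

>>> import numpy as np
>>> x = np.array([0.24, 0.83, 0.31, 0.09], dtype=np.float32)
>>> print(np.allclose(np.cos(x), [0.971338, 0.67487574, 0.95233357, 0.9959527], atol=1e-4))
True
>>> print(np.allclose(np.cosh(x), [1.0289385, 1.364684, 1.048436, 1.0040528], atol=1e-4))
True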
class tinyms.primitives.CropAndResize(*args, **kwargs)[source]

Extracts crops from the input image tensor and resizes them.

Note

In case that the output shape depends on crop_size, the crop_size must be constant.

Parameters
  • method (str) – An optional string that specifies the sampling method for resizing. It can be “bilinear”, “nearest” or “bilinear_v2”. The option “bilinear” stands for the standard bilinear interpolation algorithm, while “bilinear_v2” may produce better results in some cases. Default: “bilinear”.

  • extrapolation_value (float) – An optional float value used for extrapolation, if applicable. Default: 0.

Inputs:
  • x (Tensor) - The input image must be a 4-D tensor of shape [batch, image_height, image_width, depth]. Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • boxes (Tensor) - A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so as the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values. Types allowed: float32.

  • box_index (Tensor) - A 1-D tensor of shape [num_boxes] with int32 values in [0, batch). The value of box_ind[i] specifies the image that the i-th box refers to. Types allowed: int32.

  • crop_size (Tuple[int]) - A tuple of two int32 elements: (crop_height, crop_width). Only constant value is allowed. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both crop_height and crop_width need to be positive.

Outputs:

A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth] with type: float32.

Supported Platforms:

Ascend

Examples

>>> class CropAndResizeNet(nn.Cell):
...     def __init__(self, crop_size):
...         super(CropAndResizeNet, self).__init__()
...         self.crop_and_resize = ops.CropAndResize()
...         self.crop_size = crop_size
...
...     def construct(self, x, boxes, box_index):
...         return self.crop_and_resize(x, boxes, box_index, self.crop_size)
...
>>> BATCH_SIZE = 1
>>> NUM_BOXES = 5
>>> IMAGE_HEIGHT = 256
>>> IMAGE_WIDTH = 256
>>> CHANNELS = 3
>>> image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
>>> boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
>>> box_index = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
>>> crop_size = (24, 24)
>>> crop_and_resize = CropAndResizeNet(crop_size=crop_size)
>>> output = crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_index))
>>> print(output.shape)
(5, 24, 24, 3)
class tinyms.primitives.CumProd(*args, **kwargs)[source]

Computes the cumulative product of the tensor x along axis.

Parameters
  • exclusive (bool) – If true, perform exclusive cumulative product. Default: False.

  • reverse (bool) – If true, reverse the result along axis. Default: False

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (int) - The dimension along which to compute the cumulative product. Only constant value is allowed.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Supported Platforms:

Ascend

Examples

>>> a, b, c, = 1, 2, 3
>>> input_x = Tensor(np.array([a, b, c]).astype(np.float32))
>>> op0 = ops.CumProd()
>>> output0 = op0(input_x, 0) # output=[a, a * b, a * b * c]
>>> op1 = ops.CumProd(exclusive=True)
>>> output1 = op1(input_x, 0) # output=[1, a, a * b]
>>> op2 = ops.CumProd(reverse=True)
>>> output2 = op2(input_x, 0) # output=[a * b * c, b * c, c]
>>> op3 = ops.CumProd(exclusive=True, reverse=True)
>>> output3 = op3(input_x, 0) # output=[b * c, c, 1]
>>> print(output0)
[1. 2. 6.]
>>> print(output1)
[1. 1. 2.]
>>> print(output2)
[6. 6. 3.]
>>> print(output3)
[6. 3. 1.]
class tinyms.primitives.CumSum(*args, **kwargs)[source]

Computes the cumulative sum of input tensor along axis.

\[y_i = x_1 + x_2 + x_3 + ... + x_i\]
Parameters
  • exclusive (bool) – If true, perform exclusive mode. Default: False.

  • reverse (bool) – If true, perform inverse cumulative sum. Default: False.

Inputs:
  • input (Tensor) - The input tensor to accumulate.

  • axis (int) - The axis to accumulate the tensor’s value. Only constant value is allowed. Must be in the range [-rank(input), rank(input)).

Outputs:

Tensor, the shape of the output tensor is consistent with the input tensor’s.

Supported Platforms:

Ascend GPU

Examples

>>> input = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> cumsum = ops.CumSum()
>>> output = cumsum(input, 1)
>>> print(output)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
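The same accumulation can be reproduced with NumPy for reference (a sketch of the formula above, accumulating along axis 1):

>>> import numpy as np
>>> a = np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]], dtype=np.float32)
>>> print(np.cumsum(a, axis=1))
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]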
class tinyms.primitives.DType(*args, **kwargs)[source]

Returns the data type of the input tensor as mindspore.dtype.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

mindspore.dtype, the data type of a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.DType()(input_tensor)
>>> print(output)
Float32
class tinyms.primitives.DataFormatDimMap(*args, **kwargs)[source]

Returns the dimension index in the destination data format given in the source data format.

Parameters
  • src_format (string) – An optional value for source data format. Default: ‘NHWC’.

  • dst_format (string) – An optional value for destination data format. Default: ‘NCHW’.

Inputs:
  • input_x (Tensor) - A Tensor with each element as a dimension index in the source data format. The suggested values are in the range [-4, 4). Its type is int32.

Outputs:

Tensor, has the same type as the input_x.

Supported Platforms:

Ascend

Examples

>>> x = Tensor([0, 1, 2, 3], mindspore.int32)
>>> dfdm = ops.DataFormatDimMap()
>>> output = dfdm(x)
>>> print(output)
[0 3 1 2]
class tinyms.primitives.Depend(*args, **kwargs)[source]

Depend is used for processing dependency operations.

In some side-effect scenarios, we need to ensure the execution order of operators. In order to ensure that operator A is executed before operator B, it is recommended to insert the Depend operator between operators A and B.

Previously, the ControlDepend operator was used to control the execution order. Since the ControlDepend operator is deprecated from version 1.1, it is recommended to use the Depend operator instead. The replacement method is as follows:

a = A(x)                --->        a = A(x)
b = B(y)                --->        y = Depend(y, a)
ControlDepend(a, b)     --->        b = B(y)
Inputs:
  • value (Tensor) - the real value to return for depend operator.

  • expr (Expression) - the expression to execute with no outputs.

Outputs:

Tensor, the value passed by last operator.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.softmax = P.Softmax()
...         self.depend = P.Depend()
...
...     def construct(self, x, y):
...         mul = x * y
...         y = self.depend(y, mul)
...         ret = self.softmax(y)
...         return ret
...
>>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
[[0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]]
class tinyms.primitives.DepthToSpace(*args, **kwargs)[source]

Rearranges blocks of depth data into spatial dimensions.

This is the reverse operation of SpaceToDepth.

The depth of output tensor is \(input\_depth / (block\_size * block\_size)\).

The output tensor’s height dimension is \(height * block\_size\).

The output tensor’s width dimension is \(width * block\_size\).

The input tensor’s depth must be divisible by block_size * block_size. The data format is “NCHW”.
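For instance, with an input of shape (1, 12, 1, 1) and block_size = 2 (as in the example below), the output depth is \(12 / (2 * 2) = 3\) and the height and width are each \(1 * 2 = 2\), giving an output shape of (1, 3, 2, 2).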

Parameters

block_size (int) – The block size used to divide depth data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. It must be a 4-D tensor with shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor of shape \((N, C_{in} / \text{block_size}, H_{in} * \text{block_size}, W_{in} * \text{block_size})\).

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(1, 12, 1, 1), mindspore.float32)
>>> block_size = 2
>>> depth_to_space = ops.DepthToSpace(block_size)
>>> output = depth_to_space(x)
>>> print(output.shape)
(1, 3, 2, 2)
class tinyms.primitives.DepthwiseConv2dNative(*args, **kwargs)[source]

Returns the depth-wise convolution value for the input.

Applies depthwise conv2d for the input, which will generate more channels with channel_multiplier. Given an input tensor of shape \((N, C_{in}, H_{in}, W_{in})\) where \(N\) is the batch size and a filter tensor with kernel size \((ks_{h}, ks_{w})\), containing \(C_{in} * \text{channel_multiplier}\) convolutional filters of depth 1; it applies different filters to each input channel (channel_multiplier output channels per input channel, where channel_multiplier defaults to 1), then concatenates the results together. The output has \(\text{in_channels} * \text{channel_multiplier}\) channels.

Parameters
  • channel_multiplier (int) – The multiplier for the original output convolution. Its value must be greater than 0.

  • kernel_size (Union[int, tuple[int]]) – The size of the convolution kernel.

  • mode (int) – Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution , 2 deconvolution, 3 depthwise convolution. Default: 3.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”.

  • pad (Union[int, tuple[int]]) – The pad value to be filled. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly. Default: 0.

  • stride (Union[int, tuple[int]]) – The stride to be applied to the convolution filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to be used for the dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set the size of kernel as \((K_1, K_2)\), then the shape is \((K, C_{in}, K_1, K_2)\), K must be 1.

Outputs:

Tensor of shape \((N, C_{in} * \text{channel_multiplier}, H_{out}, W_{out})\).

Supported Platforms:

Ascend

Examples

>>> input = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([1, 32, 3, 3]), mindspore.float32)
>>> depthwise_conv2d = ops.DepthwiseConv2dNative(channel_multiplier=3, kernel_size=(3, 3))
>>> output = depthwise_conv2d(input, weight)
>>> print(output.shape)
(10, 96, 30, 30)
class tinyms.primitives.Diag(*args, **kwargs)[source]

Constructs a diagonal tensor with a given diagonal values.

Assume input_x has dimensions \([D_1,... D_k]\), the output is a tensor of rank 2k with dimensions \([D_1,..., D_k, D_1,..., D_k]\) where: \(output[i_1,..., i_k, i_1,..., i_k] = input_x[i_1,..., i_k]\) and 0 everywhere else.

Inputs:
  • input_x (Tensor) - The input tensor. The input shape must be less than 5d.

Outputs:

Tensor, has the same dtype as the input_x.

Examples

>>> input_x = Tensor([1, 2, 3, 4])
>>> diag = ops.Diag()
>>> output = diag(input_x)
>>> print(output)
[[1, 0, 0, 0],
 [0, 2, 0, 0],
 [0, 0, 3, 0],
 [0, 0, 0, 4]]
class tinyms.primitives.DiagPart(*args, **kwargs)[source]

Extracts the diagonal part from given tensor.

Assume input has dimensions \([D_1,..., D_k, D_1,..., D_k]\), the output is a tensor of rank k with dimensions \([D_1,..., D_k]\) where: \(output[i_1,..., i_k] = input[i_1,..., i_k, i_1,..., i_k]\).

Inputs:
  • input_x (Tensor) - tensor of rank k where k is even and not zero.

Outputs:

Tensor, the extracted diagonal has the same dtype as the input_x.

Examples
>>> input_x = Tensor([[1, 0, 0, 0],
...                   [0, 2, 0, 0],
...                   [0, 0, 3, 0],
...                   [0, 0, 0, 4]])
>>> diag_part = ops.DiagPart()
>>> output = diag_part(input_x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.Div(*args, **kwargs)[source]

Computes the quotient of dividing the first input tensor by the second input tensor element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

\[out_{i} = \frac{x_i}{y_i}\]
Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - When the first input is a tensor, The second input could be a number, a bool, or a tensor whose data type is number or bool. When the first input is a number or a bool, the second input must be a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> div = ops.Div()
>>> output = div(input_x, input_y)
>>> print(output)
[-1.3333334  2.5        2.        ]
class tinyms.primitives.DivNoNan(*args, **kwargs)[source]

Computes a safe divide and returns 0 if the y is zero.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([-1.0, 0., 1.0, 5.0, 6.0]), mindspore.float32)
>>> input_y = Tensor(np.array([0., 0., 0., 2.0, 3.0]), mindspore.float32)
>>> div_no_nan = ops.DivNoNan()
>>> output = div_no_nan(input_x, input_y)
>>> print(output)
[0.  0.  0.  2.5 2. ]
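The safe-divide behaviour can be mirrored with NumPy for reference (a sketch; positions where y is zero are left as 0):

>>> import numpy as np
>>> x = np.array([-1.0, 0., 1.0, 5.0, 6.0], dtype=np.float32)
>>> y = np.array([0., 0., 0., 2.0, 3.0], dtype=np.float32)
>>> print(np.divide(x, y, out=np.zeros_like(x), where=(y != 0)))
[0.  0.  0.  2.5 2. ]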
class tinyms.primitives.Dropout(*args, **kwargs)[source]

During training, randomly zeroes some of the elements of the input tensor with probability 1 - keep_prob.

Parameters

keep_prob (float) – The keep rate, between 0 and 1, e.g. keep_prob = 0.9, means dropping out 10% of input units.

Inputs:
  • input (Tensor) - The input tensor.

Outputs:
  • output (Tensor) - with the same shape as the input tensor.

  • mask (Tensor) - with the same shape as the input tensor.

Examples

>>> dropout = ops.Dropout(keep_prob=0.5)
>>> x = Tensor((20, 16, 50, 50), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output)
[0. 32. 0. 0.]
>>> print(mask)
[0. 1. 0. 0.]
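The two outputs are related by the scaling applied to the kept elements; assuming kept values are scaled by 1/keep_prob (as the printed values above suggest), a sketch of that relation using the tensors from the example is:

>>> import numpy as np
>>> print(np.allclose(output.asnumpy(), x.asnumpy() * mask.asnumpy() / 0.5))
True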
class tinyms.primitives.DropoutDoMask(*args, **kwargs)[source]

Applies dropout mask on the input tensor.

Takes the mask output of DropoutGenMask as input, and applies dropout to the input.

Inputs:
  • input_x (Tensor) - The input tensor.

  • mask (Tensor) - The mask to be applied to input_x, which is the output of DropoutGenMask. The shape of input_x must be the same as the shape value passed to DropoutGenMask. If a wrong mask is given, the output of DropoutDoMask is unpredictable.

  • keep_prob (Union[Tensor, float]) - The keep rate, greater than 0 and less than or equal to 1, e.g. keep_prob = 0.9, means dropping out 10% of input units. The value of keep_prob must be the same as the keep_prob input of DropoutGenMask.

Outputs:

Tensor, the result after the dropout mask has been applied.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> shape = (2, 2, 3)
>>> keep_prob = Tensor(0.5, mindspore.float32)
>>> dropout_gen_mask = ops.DropoutGenMask()
>>> dropout_do_mask = ops.DropoutDoMask()
>>> mask = dropout_gen_mask(shape, keep_prob)
>>> output = dropout_do_mask(x, mask, keep_prob)
>>> print(output.shape)
(2, 2, 3)
class tinyms.primitives.DropoutGenMask(*args, **kwargs)[source]

Generates the mask value for the input shape.

Parameters
  • Seed0 (int) – Seed0 value for random generating. Default: 0.

  • Seed1 (int) – Seed1 value for random generating. Default: 0.

Inputs:
  • shape (tuple[int]) - The shape of target mask.

  • keep_prob (Tensor) - The keep rate, greater than 0 and less than or equal to 1, e.g. keep_prob = 0.9, means dropping out 10% of input units.

Outputs:

Tensor, the value of generated mask for input shape.

Supported Platforms:

Ascend

Examples

>>> dropout_gen_mask = ops.DropoutGenMask()
>>> shape = (2, 4, 5)
>>> keep_prob = Tensor(0.5, mindspore.float32)
>>> output = dropout_gen_mask(shape, keep_prob)
>>> print(output.shape)
(16,)
class tinyms.primitives.DynamicGRUV2(*args, **kwargs)[source]

Applies a single-layer gated recurrent unit (GRU) to an input sequence.

\[\begin{split}\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array}\end{split}\]

where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, \(h_{(t-1)}\) is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and \(r_t\), \(z_t\), \(n_t\) are the reset, update, and new gates, respectively. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.

Parameters
  • direction (str) – A string identifying the direction in the op. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the op. Default: 1.

  • keep_prob (float) – A float identifying the keep prob in the op. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the op. Default: -1.0.

  • num_proj (int) – An integer identifying the num proj in the op. Default: 0.

  • time_major (bool) – A bool identifying the time major in the op. Default: True.

  • activation (str) – A string identifying the type of activation function in the op. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • gate_order (str) – A string identifying the gate order in weight and bias. Default: ‘rzh’. ‘zrh’ is another option.

  • reset_after (bool) – A bool identifying whether to apply reset gate after matrix multiplication. Default: True.

  • is_training (bool) – A bool identifying is training in the op. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((\text{num_step}, \text{batch_size}, \text{input_size})\). The data type must be float16.

  • weight_input (Tensor) - Input-hidden weight. Tensor of shape \((\text{input_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • weight_hidden (Tensor) - Hidden-hidden weight. Tensor of shape \((\text{hidden_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • bias_input (Tensor) - Input-hidden bias. Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type with input init_h.

  • bias_hidden (Tensor) - Hidden-hidden bias. Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type with input init_h.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((\text{batch_size})\). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((\text{batch_size}, \text{hidden_size})\). The data type must be float16 or float32.

Outputs:
  • y (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \min(\text{hidden_size}, \text{num_proj}))\) if num_proj > 0, or \((\text{num_step}, \text{batch_size}, \text{hidden_size})\) if num_proj == 0. Has the same data type as bias_type.

  • output_h (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • update (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • reset (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • hidden_new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

A note about bias_type:

  • If bias_input and bias_hidden are both None, bias_type is the data type of init_h.

  • If bias_input is not None, bias_type is the data type of bias_input.

  • If bias_input is None and bias_hidden is not None, bias_type is the data type of bias_hidden.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 8, 64).astype(np.float16))
>>> weight_i = Tensor(np.random.rand(64, 48).astype(np.float16))
>>> weight_h = Tensor(np.random.rand(16, 48).astype(np.float16))
>>> bias_i = Tensor(np.random.rand(48).astype(np.float16))
>>> bias_h = Tensor(np.random.rand(48).astype(np.float16))
>>> init_h = Tensor(np.random.rand(8, 16).astype(np.float16))
>>> dynamic_gru_v2 = ops.DynamicGRUV2()
>>> output = dynamic_gru_v2(x, weight_i, weight_h, bias_i, bias_h, None, init_h)
>>> print(output[0].shape)
(2, 8, 16)
class tinyms.primitives.DynamicRNN(*args, **kwargs)[source]

DynamicRNN Operator.

Parameters
  • cell_type (str) – A string identifying the cell type in the op. Default: ‘LSTM’. Only ‘LSTM’ is currently supported.

  • direction (str) – A string identifying the direction in the op. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the op. Default: 1.

  • use_peephole (bool) – A bool identifying if use peephole in the op. Default: False.

  • keep_prob (float) – A float identifying the keep prob in the op. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the op. Default: -1.0.

  • num_proj (int) – An integer identifying the num proj in the op. Default: 0.

  • time_major (bool) – A bool identifying the time major in the op. Default: True. Only True is currently supported.

  • activation (str) – A string identifying the type of activation function in the op. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • forget_bias (float) – A float identifying the forget bias in the op. Default: 0.0.

  • is_training (bool) – A bool identifying is training in the op. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape (num_step, batch_size, input_size). The data type must be float16.

  • w (Tensor) - Weight. Tensor of shape (input_size + hidden_size, 4 x hidden_size). The data type must be float16.

  • b (Tensor) - Bias. Tensor of shape (4 x hidden_size). The data type must be float16 or float32.

  • seq_length (Tensor) - The length of each batch. Tensor of shape (batch_size). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape (1, batch_size, hidden_size). The data type must be float16.

  • init_c (Tensor) - Cell state of initial time. Tensor of shape (1, batch_size, hidden_size). The data type must be float16.

Outputs:
  • y (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

  • output_h (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). With data type of float16.

  • output_c (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

  • i (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

  • j (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

  • f (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

  • o (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

  • tanhct (Tensor) - A Tensor of shape (num_step, batch_size, hidden_size). Has the same type with input b.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 16, 64).astype(np.float16))
>>> w = Tensor(np.random.rand(96, 128).astype(np.float16))
>>> b = Tensor(np.random.rand(128).astype(np.float16))
>>> init_h = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> init_c = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> dynamic_rnn = ops.DynamicRNN()
>>> output = dynamic_rnn(x, w, b, None, init_h, init_c)
>>> print(output[0].shape)
(2, 16, 32)
class tinyms.primitives.DynamicShape(*args, **kwargs)[source]

Returns the shape of the input tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor[int], 1-dim Tensor of type int32

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.DynamicShape()
>>> output = shape(input_tensor)
>>> print(output)
[3 2 1]
class tinyms.primitives.EditDistance(*args, **kwargs)[source]

Computes the Levenshtein Edit Distance. It is used to measure the similarity of two sequences. The inputs are variable-length sequences provided by SparseTensors (hypothesis_indices, hypothesis_values, hypothesis_shape) and (truth_indices, truth_values, truth_shape).

Parameters

normalize (bool) – If true, edit distances are normalized by length of truth. Default: True.

Inputs:
  • hypothesis_indices (Tensor) - The indices of the hypothesis list SparseTensor. With int64 data type. The shape of tensor is \((N, R)\).

  • hypothesis_values (Tensor) - The values of the hypothesis list SparseTensor. Must be 1-D vector with length of N.

  • hypothesis_shape (Tensor) - The shape of the hypothesis list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

  • truth_indices (Tensor) - The indices of the truth list SparseTensor. With int64 data type. The shape of tensor is \((M, R)\).

  • truth_values (Tensor) - The values of the truth list SparseTensor. Must be 1-D vector with length of M.

  • truth_shape (Tensor) - The shape of the truth list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

Outputs:

Tensor, a dense tensor with rank R-1 and float32 data type.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> from mindspore import context
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as ops
>>> context.set_context(mode=context.GRAPH_MODE)
>>> class EditDistance(nn.Cell):
...     def __init__(self, hypothesis_shape, truth_shape, normalize=True):
...         super(EditDistance, self).__init__()
...         self.edit_distance = ops.EditDistance(normalize)
...         self.hypothesis_shape = hypothesis_shape
...         self.truth_shape = truth_shape
...
...     def construct(self, hypothesis_indices, hypothesis_values, truth_indices, truth_values):
...         return self.edit_distance(hypothesis_indices, hypothesis_values, self.hypothesis_shape,
...                                   truth_indices, truth_values, self.truth_shape)
...
>>> hypothesis_indices = Tensor(np.array([[0, 0, 0], [1, 0, 1], [1, 1, 1]]).astype(np.int64))
>>> hypothesis_values = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> hypothesis_shape = Tensor(np.array([1, 1, 2]).astype(np.int64))
>>> truth_indices = Tensor(np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]).astype(np.int64))
>>> truth_values = Tensor(np.array([1, 3, 2, 1]).astype(np.float32))
>>> truth_shape = Tensor(np.array([2, 2, 2]).astype(np.int64))
>>> edit_distance = EditDistance(hypothesis_shape, truth_shape)
>>> output = edit_distance(hypothesis_indices, hypothesis_values, truth_indices, truth_values)
>>> print(output)
[[1. 1.]
 [1. 1.]]
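
For reference, the quantity being computed is the classic Levenshtein edit distance, optionally normalized by the truth length. Below is a minimal pure-Python sketch on dense sequences; the operator itself consumes the SparseTensor components shown above, and levenshtein_reference is a hypothetical helper name used only for illustration:

def levenshtein_reference(hypothesis, truth, normalize=True):
    # Dynamic-programming edit distance between two sequences.
    m, n = len(hypothesis), len(truth)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hypothesis[i - 1] == truth[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    dist = dp[m][n]
    return dist / n if normalize and n else dist

print(levenshtein_reference([1, 2], [1, 3]))  # 0.5: one substitution, normalized by truth length 2
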
class tinyms.primitives.Elu(*args, **kwargs)[source]

Computes exponential linear:

\[\begin{split}\text{ELU}(x)= \left\{ \begin{array}{align} \alpha(e^{x} - 1) & \text{if } x \le 0\\ x & \text{if } x \gt 0\\ \end{array}\right.\end{split}\]

The data type of input tensor must be float.

Parameters

alpha (float) – The coefficient of the negative factor, whose type is float; only 1.0 is currently supported. Default: 1.0.

Inputs:
  • input_x (Tensor) - The input tensor whose data type must be float.

Outputs:

Tensor, has the same shape and data type as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> elu = ops.Elu()
>>> output = elu(input_x)
>>> print(output)
[[-0.63212055  4.         -0.99966455]
 [ 2.         -0.99326205  9.        ]]
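
A minimal NumPy transcription of the formula above (illustrative only, with the default alpha=1.0) reproduces the example output up to float rounding:

import numpy as np

def elu_reference(x, alpha=1.0):
    # alpha * (exp(x) - 1) for x <= 0, x otherwise.
    return np.where(x <= 0, alpha * np.expm1(x), x)

x = np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]], dtype=np.float32)
print(elu_reference(x))
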
class tinyms.primitives.EmbeddingLookup(*args, **kwargs)[source]

Returns a slice of input tensor based on the specified indices.

This primitive has similar functionality to GatherV2 operating on axis = 0, but has one more input: offset.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). This represents a Tensor slice, instead of the entire Tensor. Currently, the dimension is restricted to be 2.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Values can be out of the range of input_params, and the exceeding part will be filled with 0 in the output. Negative values are not supported, and the result is undefined if values are negative.

  • offset (int) - Specifies the offset value of this input_params slice. Thus the real indices are equal to input_indices minus offset.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Supported Platforms:

Ascend CPU

Examples

>>> input_params = Tensor(np.array([[8, 9], [10, 11], [12, 13], [14, 15]]), mindspore.float32)
>>> input_indices = Tensor(np.array([[5, 2], [8, 5]]), mindspore.int32)
>>> offset = 4
>>> output = ops.EmbeddingLookup()(input_params, input_indices, offset)
>>> print(output)
[[[10. 11.]
  [ 0.  0.]]
 [[ 0.  0.]
  [10. 11.]]]
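
The offset semantics above amount to: the real row index is input_indices - offset, and any index that falls outside input_params is filled with zeros. A small NumPy sketch for illustration only (embedding_lookup_reference is a hypothetical helper name):

import numpy as np

def embedding_lookup_reference(params, indices, offset):
    # Real row index is indices - offset; out-of-range rows are filled with zeros.
    real = indices - offset
    out = np.zeros(indices.shape + params.shape[1:], dtype=params.dtype)
    valid = (real >= 0) & (real < params.shape[0])
    out[valid] = params[real[valid]]
    return out

params = np.array([[8, 9], [10, 11], [12, 13], [14, 15]], dtype=np.float32)
indices = np.array([[5, 2], [8, 5]], dtype=np.int32)
print(embedding_lookup_reference(params, indices, 4))  # matches the example output above
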
class tinyms.primitives.Eps(*args, **kwargs)[source]

Creates a tensor filled with input_x dtype minimum value.

Inputs:
  • input_x (Tensor) - Input tensor. The data type must be float16 or float32.

Outputs:

Tensor, has the same type and shape as input_x, but filled with the input_x dtype minimum value.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor([4, 1, 2, 3], mindspore.float32)
>>> output = ops.Eps()(input_x)
>>> print(output)
[1.5258789e-05 1.5258789e-05 1.5258789e-05 1.5258789e-05]
class tinyms.primitives.Equal(*args, **kwargs)[source]

Computes the equivalence between two tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a number or a tensor whose data type is number.

  • input_y (Union[Tensor, Number]) - The second input is a number when the first input is a tensor or a tensor whose data type is number. The data type is the same as the first input.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> equal = ops.Equal()
>>> output = equal(input_x, 2.0)
>>> print(output)
[False True False]
>>>
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal = ops.Equal()
>>> output = equal(input_x, input_y)
>>> print(output)
[ True  True False]
class tinyms.primitives.EqualCount(*args, **kwargs)[source]

Computes the number of the same elements of two tensors.

The two input tensors must have the same data type and shape.

Inputs:
  • input_x (Tensor) - The first input tensor.

  • input_y (Tensor) - The second input tensor.

Outputs:

Tensor, with the same type as the input tensors and shape (1,).

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal_count = ops.EqualCount()
>>> output = equal_count(input_x, input_y)
>>> print(output)
[2]
class tinyms.primitives.Erf(*args, **kwargs)[source]

Computes the Gauss error function of input_x element-wise.

\[erf(x)=\frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Inputs:
  • input_x (Tensor) - The input tensor. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erf = ops.Erf()
>>> output = erf(input_x)
>>> print(output)
[-0.8427168   0.          0.8427168   0.99530876  0.99997765]
class tinyms.primitives.Erfc(*args, **kwargs)[source]

Computes the complementary error function of input_x element-wise.

\[erfc(x) = 1 - \frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Inputs:
  • input_x (Tensor) - The input tensor. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erfc = ops.Erfc()
>>> output = erfc(input_x)
>>> print(output)
[1.8427168e+00 1.0000000e+00 1.5728319e-01 4.6912432e-03 2.2351742e-05]
class tinyms.primitives.Exp(*args, **kwargs)[source]

Returns exponential of a tensor element-wise.

\[out_i = e^{x_i}\]
Inputs:
  • input_x (Tensor) - The input tensor. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> exp = ops.Exp()
>>> output = exp(input_x)
>>> print(output)
[ 2.718282  7.389056 54.598152]
class tinyms.primitives.ExpandDims(*args, **kwargs)[source]

Adds an additional dimension at the given axis.

Note

If the specified axis is a negative number, the index is counted backward from the end and starts at 1.

Raises

ValueError – If axis is not an integer or not in the valid range.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (int) - Specifies the dimension index at which to expand the shape of input_x. The value of axis must be in the range [-input_x.ndim-1, input_x.ndim]. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((1, x_1, x_2, ..., x_R)\) if the value of axis is 0. It has the same type as input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> expand_dims = ops.ExpandDims()
>>> output = expand_dims(input_tensor, 0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
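
The negative-axis rule in the note above follows the usual NumPy convention, shown here with numpy.expand_dims purely for illustration: axis 0 inserts the new dimension at the front, axis -1 at the end.

import numpy as np

a = np.array([[2, 2], [2, 2]], dtype=np.float32)
print(np.expand_dims(a, 0).shape)    # (1, 2, 2): new axis at the front
print(np.expand_dims(a, -1).shape)   # (2, 2, 1): new axis at the end
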
class tinyms.primitives.Expm1(*args, **kwargs)[source]

Returns exponential then minus 1 of a tensor element-wise.

\[out_i = e^{x_i} - 1\]
Inputs:
  • input_x (Tensor) - The input tensor. With float16 or float32 data type.

Outputs:

Tensor, has the same shape as the input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([0.0, 1.0, 2.0, 4.0]), mindspore.float32)
>>> expm1 = ops.Expm1()
>>> output = expm1(input_x)
>>> print(output)
[ 0.        1.718282  6.389056 53.598152]
class tinyms.primitives.Eye(*args, **kwargs)[source]

Creates a tensor with ones on the diagonal and zeros elsewhere.

Inputs:
  • n (int) - The number of rows of the returned tensor.

  • m (int) - The number of columns of the returned tensor.

  • t (mindspore.dtype) - MindSpore’s dtype, the data type of the returned tensor.

Outputs:

Tensor, a tensor with ones on the diagonal and the rest of elements are zero.

Supported Platforms:

Ascend GPU CPU

Examples

>>> eye = ops.Eye()
>>> output = eye(2, 2, mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
class tinyms.primitives.FastGeLU(*args, **kwargs)[source]

Fast Gaussian Error Linear Units activation function.

FastGeLU is defined as follows:

\[\text{output} = \frac {x} {1 + \exp(-1.702 * \left| x \right|)} * \exp(0.851 * (x - \left| x \right|)),\]

where \(x\) is the element of the input.

Inputs:
  • input_x (Tensor) - Input to compute the FastGeLU with data type of float16 or float32.

Outputs:

Tensor, with the same type and shape as input.

Supported Platforms:

Ascend

Examples

>>> tensor = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> fast_gelu = ops.FastGeLU()
>>> output = fast_gelu(tensor)
>>> print(output)
[[-1.5420423e-01  3.9955849e+00 -9.7664278e-06]
 [ 1.9356585e+00 -1.0070159e-03  8.9999981e+00]]
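
A direct NumPy transcription of the FastGeLU formula above (illustrative only) reproduces the example output up to float rounding:

import numpy as np

def fast_gelu_reference(x):
    # x * exp(0.851 * (x - |x|)) / (1 + exp(-1.702 * |x|))
    return x * np.exp(0.851 * (x - np.abs(x))) / (1 + np.exp(-1.702 * np.abs(x)))

x = np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]], dtype=np.float32)
print(fast_gelu_reference(x))
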
tinyms.primitives.FastGelu()[source]

Fast Gaussian Error Linear Units activation function.

The usage of FastGelu is deprecated. Please use FastGeLU.

class tinyms.primitives.Fill(*args, **kwargs)[source]

Creates a tensor filled with a scalar value.

Creates a tensor with the shape described by the shape input and fills it with the given value.

Inputs:
  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

  • shape (tuple) - The specified shape of output tensor. Only constant value is allowed.

  • value (scalar) - Value to fill the returned tensor. Only constant value is allowed.

Outputs:

Tensor, with the specified shape, filled with value and cast to the specified type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> fill = ops.Fill()
>>> output = fill(mindspore.float32, (2, 2), 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.Flatten(*args, **kwargs)[source]

Flattens a tensor without changing its batch size on the 0-th axis.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\) to be flattened.

Outputs:

Tensor, the shape of the output tensor is \((N, X)\), where \(X\) is the product of the remaining dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> flatten = ops.Flatten()
>>> output = flatten(input_tensor)
>>> print(output.shape)
(1, 24)
class tinyms.primitives.FloatStatus(*args, **kwargs)[source]

Determines if the elements contain Not a Number (NaN), positive infinity or negative infinity. 0 for normal, 1 for overflow.

Inputs:
  • input_x (Tensor) - The input tensor. The data type must be float16 or float32.

Outputs:

Tensor, has the shape of (1,), and the same dtype as the input, which is mindspore.dtype.float32 or mindspore.dtype.float16.

Supported Platforms:

GPU

Examples

>>> float_status = ops.FloatStatus()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = float_status(input_x)
>>> print(result)
[1.]
class tinyms.primitives.Floor(*args, **kwargs)[source]

Rounds a tensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Inputs:
  • input_x (Tensor) - The input tensor. Its element data type must be float.

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> floor = ops.Floor()
>>> output = floor(input_x)
>>> print(output)
[ 1.  2. -2.]
class tinyms.primitives.FloorDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_div = ops.FloorDiv()
>>> output = floor_div(input_x, input_y)
>>> print(output)
[ 0  1 -1]
class tinyms.primitives.FloorMod(*args, **kwargs)[source]

Computes the remainder of division element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_mod = ops.FloorMod()
>>> output = floor_mod(input_x, input_y)
>>> print(output)
[2 1 2]
class tinyms.primitives.FusedBatchNorm(*args, **kwargs)[source]

FusedBatchNorm is a BatchNorm in which the moving mean and moving variance are computed instead of being loaded.

Batch Normalization is widely used in convolutional networks. This operation applies Batch Normalization over input to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • mode (int) – Mode of batch normalization, value is 0 or 1. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • momentum (float) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = momentum * running\_mean + (1 - momentum) * current\_mean\)). Momentum value must be [0, 1]. Default: 0.9.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, C)\).

  • scale (Parameter) - Tensor of shape \((C,)\).

  • bias (Parameter) - Tensor of shape \((C,)\).

  • mean (Parameter) - Tensor of shape \((C,)\).

  • variance (Parameter) - Tensor of shape \((C,)\).

Outputs:

Tuple of 5 Tensors, the normalized input and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x.

  • updated_scale (Tensor) - Tensor of shape \((C,)\).

  • updated_bias (Tensor) - Tensor of shape \((C,)\).

  • updated_moving_mean (Tensor) - Tensor of shape \((C,)\).

  • updated_moving_variance (Tensor) - Tensor of shape \((C,)\).

Supported Platforms:

CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class FusedBatchNormNet(nn.Cell):
...     def __init__(self):
...         super(FusedBatchNormNet, self).__init__()
...         self.fused_batch_norm = ops.FusedBatchNorm()
...         self.scale = Parameter(Tensor(np.ones([64]), mindspore.float32), name="scale")
...         self.bias = Parameter(Tensor(np.ones([64]), mindspore.float32), name="bias")
...         self.mean = Parameter(Tensor(np.ones([64]), mindspore.float32), name="mean")
...         self.variance = Parameter(Tensor(np.ones([64]), mindspore.float32), name="variance")
...
...     def construct(self, input_x):
...         out = self.fused_batch_norm(input_x, self.scale, self.bias, self.mean, self.variance)
...         return out
...
>>> input_x = Tensor(np.ones([128, 64, 32, 64]), mindspore.float32)
>>> net = FusedBatchNormNet()
>>> output = net(input_x)
>>> result = output[0].shape
>>> print(result)
(128, 64, 32, 64)
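
A rough NumPy sketch of the training-mode computation described above, normalizing over everything except the channel axis (NCHW layout assumed) and updating the moving statistics with the given momentum; this is illustrative only, and the operator itself may differ in details such as variance bias correction:

import numpy as np

def batch_norm_train_reference(x, gamma, beta, moving_mean, moving_var,
                               momentum=0.9, eps=1e-5):
    axes = (0, 2, 3)                      # reduce over N, H, W; keep C
    mean = x.mean(axis=axes)
    var = x.var(axis=axes)
    shape = (1, -1, 1, 1)
    x_hat = (x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + eps)
    y = gamma.reshape(shape) * x_hat + beta.reshape(shape)
    new_mean = momentum * moving_mean + (1 - momentum) * mean
    new_var = momentum * moving_var + (1 - momentum) * var
    return y, new_mean, new_var
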
class tinyms.primitives.FusedBatchNormEx(*args, **kwargs)[source]

FusedBatchNormEx is an extension of FusedBatchNorm. It has one more output (the reserve output) than FusedBatchNorm, which is used in the backpropagation phase. FusedBatchNorm is a BatchNorm in which the moving mean and moving variance are computed instead of being loaded.

Batch Normalization is widely used in convolutional networks. This operation applies Batch Normalization over input to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • mode (int) – Mode of batch normalization, value is 0 or 1. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • momentum (float) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = momentum * running\_mean + (1 - momentum) * current\_mean\)). Momentum value must be [0, 1]. Default: 0.9.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: “NCHW”.

Inputs:
  • input_x (Tensor) - The input of FusedBatchNormEx, Tensor of shape \((N, C)\), data type: float16 or float32.

  • scale (Parameter) - Parameter scale, same with gamma above-mentioned, Tensor of shape \((C,)\), data type: float32.

  • bias (Parameter) - Parameter bias, same with beta above-mentioned, Tensor of shape \((C,)\), data type: float32.

  • mean (Parameter) - mean value, Tensor of shape \((C,)\), data type: float32.

  • variance (Parameter) - variance value, Tensor of shape \((C,)\), data type: float32.

Outputs:

Tuple of 6 Tensors, the normalized input, the updated parameters and reserve.

  • output_x (Tensor) - The output of FusedBatchNormEx, same type and shape as the input_x.

  • updated_scale (Tensor) - Updated parameter scale, Tensor of shape \((C,)\), data type: float32.

  • updated_bias (Tensor) - Updated parameter bias, Tensor of shape \((C,)\), data type: float32.

  • updated_moving_mean (Tensor) - Updated mean value, Tensor of shape \((C,)\), data type: float32.

  • updated_moving_variance (Tensor) - Updated variance value, Tensor of shape \((C,)\), data type: float32.

  • reserve (Tensor) - reserve space, Tensor of shape \((C,)\), data type: float32.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class FusedBatchNormExNet(nn.Cell):
...     def __init__(self):
...         super(FusedBatchNormExNet, self).__init__()
...         self.fused_batch_norm_ex = ops.FusedBatchNormEx()
...         self.scale = Parameter(Tensor(np.ones([64]), mindspore.float32), name="scale")
...         self.bias = Parameter(Tensor(np.ones([64]), mindspore.float32), name="bias")
...         self.mean = Parameter(Tensor(np.ones([64]), mindspore.float32), name="mean")
...         self.variance = Parameter(Tensor(np.ones([64]), mindspore.float32), name="variance")
...
...     def construct(self, input_x):
...         out = self.fused_batch_norm_ex(input_x, self.scale, self.bias, self.mean, self.variance)
...         return out
...
>>> input_x = Tensor(np.ones([128, 64, 32, 64]), mindspore.float32)
>>> net = FusedBatchNormExNet()
>>> output = net(input_x)
>>> result = output[0].shape
>>> print(result)
(128, 64, 32, 64)
class tinyms.primitives.FusedSparseAdam(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. This operator is used when the gradient is sparse.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same type as var with float32 data type.

  • v (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as var with float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type.

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type.

  • lr (Tensor) - \(l\) in the updating formula. With float32 data type.

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type.

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type.

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type.

  • gradient (Tensor) - Gradient value with float32 data type.

  • indices (Tensor) - Gradient indices with int32 data type.

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape (1,).

  • m (Tensor) - A Tensor with shape (1,).

  • v (Tensor) - A Tensor with shape (1,).

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adam = ops.FusedSparseAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                                      epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mstype.float32)
>>> beta2_power = Tensor(0.999, mstype.float32)
>>> lr = Tensor(0.001, mstype.float32)
>>> beta1 = Tensor(0.9, mstype.float32)
>>> beta2 = Tensor(0.999, mstype.float32)
>>> epsilon = Tensor(1e-8, mstype.float32)
>>> gradient = Tensor(np.random.rand(2, 1, 2), mstype.float32)
>>> indices = Tensor([0, 1], mstype.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(output)
class tinyms.primitives.FusedSparseFtrl(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the FTRL-proximal scheme.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – Use locks for updating operation if true . Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float32.

  • accum (Parameter) - The accumulation to be updated, must be same type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be same type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, for the gradient.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The shape of indices must be the same as grad in first dimension. The type must be int32.

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape (1,).

  • accum (Tensor) - A Tensor with shape (1,).

  • linear (Tensor) - A Tensor with shape (1,).

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.FusedSparseFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.random.rand(3, 1, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(3, 1, 2).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(3, 1, 2).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.random.rand(2, 1, 2).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [0.00000000e+00]),
 Tensor(shape=[1], dtype=Float32, value= [0.00000000e+00]),
 Tensor(shape=[1], dtype=Float32, value= [0.00000000e+00]))
class tinyms.primitives.FusedSparseLazyAdam(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (LazyAdam) algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the original Adam algorithm, as only the current indices parameters will be updated.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same type as var with float32 data type.

  • v (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as var with float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type.

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type.

  • lr (Tensor) - \(l\) in the updating formula with float32 data type.

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type.

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type.

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type.

  • gradient (Tensor) - Gradient value with float32 data type.

  • indices (Tensor) - Gradient indices with int32 data type.

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape (1,).

  • m (Tensor) - A Tensor with shape (1,).

  • v (Tensor) - A Tensor with shape (1,).

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_lazyadam = ops.FusedSparseLazyAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_lazyadam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1,
...                                          beta2, epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mstype.float32)
>>> beta2_power = Tensor(0.999, mstype.float32)
>>> lr = Tensor(0.001, mstype.float32)
>>> beta1 = Tensor(0.9, mstype.float32)
>>> beta2 = Tensor(0.999, mstype.float32)
>>> epsilon = Tensor(1e-8, mstype.float32)
>>> gradient = Tensor(np.random.rand(2, 1, 2), mstype.float32)
>>> indices = Tensor([0, 1], mstype.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(output)
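
To make the sparse ("lazy") behaviour above concrete: only the rows named by indices are touched, and each of them receives a standard Adam step. Below is a rough NumPy sketch, for illustration only; sparse_lazy_adam_reference is a hypothetical helper name, not the operator's implementation:

import numpy as np

def sparse_lazy_adam_reference(var, m, v, beta1_power, beta2_power, lr,
                               beta1, beta2, eps, grad, indices):
    # l = lr * sqrt(1 - beta2^t) / (1 - beta1^t), per the formula above.
    lr_t = lr * np.sqrt(1 - beta2_power) / (1 - beta1_power)
    for g, i in zip(grad, indices):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        var[i] -= lr_t * m[i] / (np.sqrt(v[i]) + eps)
    return var, m, v

var = np.ones((3, 1, 2), dtype=np.float32)
m = np.ones_like(var)
v = np.ones_like(var)
grad = np.full((2, 1, 2), 0.5, dtype=np.float32)
sparse_lazy_adam_reference(var, m, v, 0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, grad, [0, 1])
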
class tinyms.primitives.FusedSparseProximalAdagrad(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the proximal adagrad algorithm.

\[accum += grad * grad\]
\[\text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}}\]
\[var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0)\]

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – If true, the variable and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float32.

  • accum (Parameter) - Variable tensor to be updated, has the same dtype as var.

  • lr (Tensor) - The learning rate value. The data type must be float32.

  • l1 (Tensor) - l1 regularization strength. The data type must be float32.

  • l2 (Tensor) - l2 regularization strength. The data type must be float32.

  • grad (Tensor) - A tensor of the same type as var, for the gradient. The data type must be float32.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The data type must be int32.

Outputs:

Tuple of 2 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape (1,).

  • accum (Tensor) - A Tensor with shape (1,).

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.FusedSparseProximalAdagrad()
...         self.var = Parameter(Tensor(np.random.rand(3, 1, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(3, 1, 2).astype(np.float32)), name="accum")
...         self.lr = Tensor(0.01, mstype.float32)
...         self.l1 = Tensor(0.0, mstype.float32)
...         self.l2 = Tensor(0.0, mstype.float32)
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.random.rand(2, 1, 2).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [0.00000000e+00]),
 Tensor(shape=[1], dtype=Float32, value= [0.00000000e+00]))
class tinyms.primitives.Gamma(*args, **kwargs)[source]

Produces random positive floating-point values x, distributed according to probability density function:

\[\text{P}(x|α,β) = \frac{\exp(-x/β)}{{β^α}\cdot{\Gamma(α)}}\cdot{x^{α-1}},\]
Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • alpha (Tensor) - The α distribution parameter. It must be greater than 0. It is also known as the shape parameter with float32 data type.

  • beta (Tensor) - The β distribution parameter. It must be greater than 0. It is also known as the scale parameter with float32 data type.

Outputs:

Tensor. The shape must be the broadcasted shape of the input shape and the shapes of alpha and beta. The dtype is float32.

Supported Platforms:

Ascend

Examples

>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> beta = Tensor(np.array([1.0]), mstype.float32)
>>> gamma = ops.Gamma(seed=3)
>>> output = gamma(shape, alpha, beta)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
class tinyms.primitives.Gather(*args, **kwargs)[source]

Returns a slice of the input tensor based on the specified indices and axis.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The original Tensor.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Must be in the range [0, input_param.shape[axis]).

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[ 2.  7.]
 [ 4. 54.]
 [ 2. 55.]]
class tinyms.primitives.GatherD(*args, **kwargs)[source]

Gathers values along an axis specified by dim.

Inputs:
  • x (Tensor) - The source tensor.

  • dim (int) - The axis along which to index. It must be int32. Only constant value is allowed.

  • index (Tensor) - The indices of elements to gather. It can be one of the following data types: int32, int64. The value range of each index element is [-x_rank[dim], x_rank[dim]).

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = ops.GatherD()(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
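
The gathering rule illustrated by the example above is output[i][j] = x[i][index[i][j]] when dim is 1 (and output[i][j] = x[index[i][j]][j] when dim is 0), which NumPy exposes as take_along_axis. An illustrative check, not the operator itself:

import numpy as np

x = np.array([[1, 2], [3, 4]], dtype=np.int32)
index = np.array([[0, 0], [1, 0]], dtype=np.int64)
# output[i][j] = x[i][index[i][j]] for dim == 1
print(np.take_along_axis(x, index, axis=1))
# [[1 1]
#  [4 3]]
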
class tinyms.primitives.GatherNd(*args, **kwargs)[source]

Gathers slices from a tensor by indices.

Uses the given indices to gather slices from a tensor with the specified shape.

Inputs:
  • input_x (Tensor) - The target tensor to gather values.

  • indices (Tensor) - The index tensor, with int data type.

Outputs:

Tensor, has the same type as input_x and the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> op = ops.GatherNd()
>>> output = op(input_x, indices)
>>> print(output)
[-0.1  0.5]
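
The output-shape rule above, indices_shape[:-1] + x_shape[indices_shape[-1]:], can be checked with a small NumPy sketch; gather_nd_reference is a hypothetical helper name used only for illustration:

import numpy as np

def gather_nd_reference(x, indices):
    # Output shape: indices.shape[:-1] + x.shape[indices.shape[-1]:]
    out_shape = indices.shape[:-1] + x.shape[indices.shape[-1]:]
    flat_indices = indices.reshape(-1, indices.shape[-1])
    gathered = np.stack([x[tuple(idx)] for idx in flat_indices])
    return gathered.reshape(out_shape)

x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
indices = np.array([[0, 0], [1, 1]], dtype=np.int32)
print(gather_nd_reference(x, indices))   # [-0.1  0.5], matching the example above
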
tinyms.primitives.GatherV2()[source]

Returns a slice of the input tensor based on the specified indices and axis.

The usage of GatherV2 is deprecated. Please use Gather.

class tinyms.primitives.GeLU(*args, **kwargs)[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). And also please refer to BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

GeLU is defined as follows:

\[\text{output} = 0.5 * x * (1 + erf(x / \sqrt{2})),\]

where \(erf\) is the “Gauss error function” .

Inputs:
  • input_x (Tensor) - Input to compute the GeLU with data type of float16 or float32.

Outputs:

Tensor, with the same type and shape as input.

Supported Platforms:

Ascend GPU

Examples

>>> tensor = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> gelu = ops.GeLU()
>>> result = gelu(tensor)
>>> print(result)
[0.841192  1.9545976  2.9963627]
class tinyms.primitives.GeSwitch(*args, **kwargs)[source]

Adds control switch to data.

Data flows into the false or true branch depending on the condition. If the condition is true, the true branch will be activated, or vice versa.

Inputs:
  • data (Union[Tensor, Number]) - The data to be used for switch control.

  • pred (Tensor) - It must be a scalar whose type is bool and shape is (). It is used as the condition for switch control.

Outputs:

tuple. The output is a tuple (false_output, true_output). The elements in the tuple have the same shape as the input data. The false_output connects with the false_branch and the true_output connects with the true_branch.

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.square = ops.Square()
...         self.add = ops.Add()
...         self.value = Tensor(np.full((1), 3), mindspore.float32)
...         self.switch = ops.GeSwitch()
...         self.merge = ops.Merge()
...         self.less = ops.Less()
...
...     def construct(self, x, y):
...         cond = self.less(x, y)
...         st1, sf1 = self.switch(x, cond)
...         st2, sf2 = self.switch(y, cond)
...         add_ret = self.add(st1, st2)
...         st3, sf3 = self.switch(self.value, cond)
...         sq_ret = self.square(sf3)
...         ret = self.merge((add_ret, sq_ret))
...         return ret[0]
...
>>> x = Tensor(10.0, dtype=mindspore.float32)
>>> y = Tensor(5.0, dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
tinyms.primitives.Gelu()[source]

Gaussian Error Linear Units activation function.

The usage of Gelu is deprecated. Please use GeLU.

class tinyms.primitives.GetNext(*args, **kwargs)[source]

Returns the next element in the dataset queue.

Note

The GetNext operation needs to be associated with network and it also depends on the init_dataset interface, it can’t be used directly as a single operation. For details, please refer to connect_network_with_dataset source code.

Parameters
  • types (list[mindspore.dtype]) – The type of the outputs.

  • shapes (list[tuple[int]]) – The dimensionality of the outputs.

  • output_num (int) – The output number, length of types and shapes.

  • shared_name (str) – The queue name of init_dataset interface.

Inputs:

No inputs.

Outputs:

tuple[Tensor], the output of Dataset. The shape is described in shapes and the type is described in types.

Supported Platforms:

Ascend GPU

Examples

>>> train_dataset = create_custom_dataset()
>>> dataset_helper = mindspore.DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> dataset = dataset_helper.iter.dataset
>>> dataset_types, dataset_shapes = dataset_helper.types_shapes()
>>> queue_name = dataset.__transfer_dataset__.queue_name
>>> get_next = ops.GetNext(dataset_types, dataset_shapes, len(dataset_types), queue_name)
>>> data, label = get_next()
>>> relu = ops.ReLU()
>>> result = relu(data).asnumpy()
>>> print(result.shape)
(32, 1, 32, 32)
class tinyms.primitives.Greater(*args, **kwargs)[source]

Computes the boolean value of \(x > y\) element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater = ops.Greater()
>>> output = greater(input_x, input_y)
>>> print(output)
[False  True False]
class tinyms.primitives.GreaterEqual(*args, **kwargs)[source]

Computes the boolean value of \(x >= y\) element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater_equal = ops.GreaterEqual()
>>> output = greater_equal(input_x, input_y)
>>> print(output)
[ True  True False]
class tinyms.primitives.HSigmoid(*args, **kwargs)[source]

Hard sigmoid activation function.

Applies hard sigmoid activation element-wise. The input is a Tensor with any valid shape.

Hard sigmoid is defined as:

\[\text{hsigmoid}(x_{i}) = max(0, min(1, \frac{x_{i} + 3}{6})),\]

where \(x_{i}\) is the \(i\)-th slice in the given dimension of the input Tensor.

Inputs:
  • input_data (Tensor) - The input of HSigmoid, data type must be float16 or float32.

Outputs:

Tensor, with the same type and shape as the input_data.

Supported Platforms:

GPU

Examples

>>> hsigmoid = ops.HSigmoid()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hsigmoid(input_x)
>>> print(result)
[0.3333  0.1666  0.5  0.833  0.6665]
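A minimal NumPy transcription of the hard sigmoid formula above, for illustration only:

import numpy as np

def hsigmoid_reference(x):
    # max(0, min(1, (x + 3) / 6))
    return np.clip((x + 3) / 6, 0, 1)

print(hsigmoid_reference(np.array([-1, -2, 0, 2, 1], dtype=np.float32)))
# roughly [0.333 0.167 0.5 0.833 0.667]; the example above shows float16 rounding
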
class tinyms.primitives.HSwish(*args, **kwargs)[source]

Hard swish activation function.

Applies hswish-type activation element-wise. The input is a Tensor with any valid shape.

Hard swish is defined as:

\[\text{hswish}(x_{i}) = x_{i} * \frac{ReLU6(x_{i} + 3)}{6},\]

where \(x_{i}\) is the \(i\)-th slice in the given dimension of the input Tensor.

Inputs:
  • input_data (Tensor) - The input of HSwish, data type must be float16 or float32.

Outputs:

Tensor, with the same type and shape as the input_data.

Supported Platforms:

GPU

Examples

>>> hswish = ops.HSwish()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hswish(input_x)
>>> print(result)
[-0.3333  -0.3333  0  1.666  0.6665]
class tinyms.primitives.HistogramFixedWidth(*args, **kwargs)[source]

Returns a rank 1 histogram counting the number of entries in values that fall into every bin. The bins are equal width and determined by the arguments range and nbins.

Parameters
  • dtype (str) – An optional attribute. Must be one of the following types: “int32”, “int64”. Default: “int32”.

  • nbins (int) – The number of histogram bins; must be a positive integer.

Inputs:
  • x (Tensor) - Numeric Tensor. Must be one of the following types: int32, float32, float16.

  • range (Tensor) - Must have the same data type as x, and the shape is [2]. x <= range[0] will be mapped to hist[0], x >= range[1] will be mapped to hist[-1].

Outputs:

Tensor, the type is int32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor([-1.0, 0.0, 1.5, 2.0, 5.0, 15], mindspore.float16)
>>> range = Tensor([0.0, 5.0], mindspore.float16)
>>> hist = ops.HistogramFixedWidth(5)
>>> output = hist(x, range)
>>> print(output)
[2 1 1 0 2]
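
The binning rule above (equal-width bins over range, with out-of-range values clamped into the first and last bins) can be reproduced with a small NumPy sketch, shown for illustration only:

import numpy as np

def histogram_fixed_width_reference(values, value_range, nbins):
    lo, hi = value_range
    width = (hi - lo) / nbins
    # Values below lo fall into bin 0, values at or above hi into the last bin.
    bins = np.clip(((np.asarray(values) - lo) / width).astype(int), 0, nbins - 1)
    return np.bincount(bins, minlength=nbins)

print(histogram_fixed_width_reference([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0], (0.0, 5.0), 5))
# [2 1 1 0 2], matching the example above
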
class tinyms.primitives.HistogramSummary(*args, **kwargs)[source]

Outputs the tensor to protocol buffer through histogram summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.HistogramSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
...
class tinyms.primitives.HookBackward(hook_fn, cell_id='')[source]

This operation is used as a tag to hook gradient in intermediate variables. Note that this function is only supported in Pynative Mode.

Note

The hook function must be defined like hook_fn(grad) -> Tensor or None, where grad is the gradient passed to the primitive and gradient may be modified and passed to next primitive. The difference between a hook function and callback of InsertGradientOf is that a hook function is executed in the python environment while callback will be parsed and added to the graph.

Parameters

hook_fn (Function) – Python hook function.

Inputs:
  • inputs (Tensor) - The variable to hook.

Examples

>>> def hook_fn(grad_out):
...     print(grad_out)
...
>>> grad_all = GradOperation(get_all=True)
>>> hook = ops.HookBackward(hook_fn)
>>> def hook_test(x, y):
...     z = x * y
...     z = hook(z)
...     z = z * y
...     return z
...
>>> def backward(x, y):
...     return grad_all(hook_test)(x, y)
...
>>> output = backward(1, 2)
>>> print(output)
class tinyms.primitives.IOU(*args, **kwargs)[source]

Calculates intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

\[ \begin{align}\begin{aligned}\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\\\text{IOF} = \frac{\text{Area of Overlap}}{\text{Area of Ground Truth}}\end{aligned}\end{align} \]
Parameters

mode (string) – The mode is used to specify the calculation method, now supporting ‘iou’ (intersection over union) or ‘iof’ (intersection over foreground) mode. Default: ‘iou’.

Inputs:
  • anchor_boxes (Tensor) - Anchor boxes, tensor of shape (N, 4). “N” indicates the number of anchor boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be float16 or float32.

  • gt_boxes (Tensor) - Ground truth boxes, tensor of shape (M, 4). “M” indicates the number of ground truth boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be float16 or float32.

Outputs:

Tensor, the ‘iou’ values, tensor of shape (M, N), with the same data type as anchor_boxes.

Raises

KeyError – When mode is not ‘iou’ or ‘iof’.

Supported Platforms:

Ascend GPU

Examples

>>> iou = ops.IOU()
>>> anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> output = iou(anchor_boxes, gt_boxes)
>>> print(output.shape)
(3, 3)
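
A straightforward NumPy sketch of the pairwise IOU formula above, for illustration only; the operator may use a slightly different box-area convention (for example, adding 1 to widths and heights):

import numpy as np

def iou_reference(anchor_boxes, gt_boxes):
    # Returns an (M, N) matrix of IOU values for M ground-truth and N anchor boxes (x0, y0, x1, y1).
    out = np.zeros((len(gt_boxes), len(anchor_boxes)), dtype=np.float32)
    for i, g in enumerate(gt_boxes):
        for j, a in enumerate(anchor_boxes):
            iw = max(0.0, min(a[2], g[2]) - max(a[0], g[0]))
            ih = max(0.0, min(a[3], g[3]) - max(a[1], g[1]))
            inter = iw * ih
            union = (a[2] - a[0]) * (a[3] - a[1]) + (g[2] - g[0]) * (g[3] - g[1]) - inter
            out[i, j] = inter / union if union > 0 else 0.0
    return out
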
class tinyms.primitives.Identity(*args, **kwargs)[source]

Returns a Tensor with the same shape and contents as input.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, the shape of tensor is the same as input_x, \((x_1, x_2, ..., x_R)\).

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> output = ops.Identity()(x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.ImageSummary(*args, **kwargs)[source]

Outputs the image tensor to protocol buffer through image summary operator.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of image, the rank of tensor must be 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.summary = ops.ImageSummary()
...
...     def construct(self, x):
...         name = "image"
...         out = self.summary(name, x)
...         return out
...
class tinyms.primitives.InTopK(*args, **kwargs)[source]

Determines whether the targets are in the top k predictions.

Parameters

k (int) – Specifies the number of top elements to be used for computing precision.

Inputs:
  • x1 (Tensor) - A 2D Tensor defines the predictions of a batch of samples with float16 or float32 data type.

  • x2 (Tensor) - A 1D Tensor defines the labels of a batch of samples with int32 data type. The size of x2 must be equal to x1’s first dimension. The values of x2 can not be negative and must be equal to or less than index of x1’s second dimension.

Outputs:

A 1-D Tensor of type bool with the same shape as x2. For each sample i, if the label x2[i] is among the first k predictions in x1 for that sample, the value is True; otherwise it is False.

Supported Platforms:

Ascend

Examples

>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
>>> x2 = Tensor(np.array([1, 3]), mindspore.int32)
>>> in_top_k = ops.InTopK(3)
>>> output = in_top_k(x1, x2)
>>> print(output)
[ True  False]
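
The check above amounts to: is x2[i] among the indices of the k largest values of x1[i]? A small NumPy sketch for illustration only (in_top_k_reference is a hypothetical helper name; ties may be broken differently by the operator):

import numpy as np

def in_top_k_reference(predictions, targets, k):
    topk = np.argsort(-predictions, axis=1)[:, :k]   # indices of the k largest scores per row
    return np.array([t in row for t, row in zip(targets, topk)])

x1 = np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]], dtype=np.float32)
x2 = np.array([1, 3], dtype=np.int32)
print(in_top_k_reference(x1, x2, 3))   # [ True False]
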
class tinyms.primitives.InplaceAdd(*args, **kwargs)[source]

Adds v into specified rows of x. Computes y = x; y[i, :] += v.

Parameters

indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to add with v. It is an integer or a tuple, whose value is in [0, the first dimension size of x).

Inputs:
  • input_x (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.

  • input_v (Tensor) - The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as the size of indices. It has the same data type as input_x.

Outputs:

Tensor, has the same shape and dtype as input_x.

Supported Platforms:

Ascend

Examples

>>> indices = (0, 1)
>>> input_x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceAdd = ops.InplaceAdd(indices)
>>> output = inplaceAdd(input_x, input_v)
>>> print(output)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
class tinyms.primitives.InplaceSub(*args, **kwargs)[source]

Subtracts v from specified rows of x. Computes y = x; y[i, :] -= v.

Parameters

indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to subtract with v. It is an int or a tuple, whose value is in [0, the first dimension size of x).

Inputs:
  • input_x (Tensor) - The first input is a tensor whose data type is float16, float32 or int32.

  • input_v (Tensor) - The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as the size of indices. It has the same data type as input_x.

Outputs:

Tensor, has the same shape and dtype as input_x.

Supported Platforms:

Ascend

Examples

>>> indices = (0, 1)
>>> input_x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceSub = ops.InplaceSub(indices)
>>> output = inplaceSub(input_x, input_v)
>>> print(output)
[[0.5 1. ]
 [2.  2.5]
 [5.  6. ]]
class tinyms.primitives.InplaceUpdate(*args, **kwargs)[source]

Updates specified rows with values in v.

Parameters

indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to update with v. It is an int or a tuple, whose value is in [0, the first dimension size of x).

Inputs:
  • x (Tensor) - The tensor to be updated in place. It can be one of the following data types: float32, float16 and int32.

  • v (Tensor) - A tensor with the same type as x and the same dimension size as x except the first dimension, which must be the same as the size of indices.

Outputs:

Tensor, with the same type and shape as the input x.

Supported Platforms:

Ascend

Examples

>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplace_update = ops.InplaceUpdate(indices)
>>> output = inplace_update(x, v)
>>> print(output)
[[0.5 1. ]
 [1.  1.5]
 [5.  6. ]]
class tinyms.primitives.InsertGradientOf(*args, **kwargs)[source]

Attaches callback to the graph node that will be invoked on the node’s gradient.

Parameters

f (Function) – MindSpore’s Function. Callback function.

Inputs:
  • input_x (Any) - The graph node to attach to.

Outputs:

Tensor, returns input_x directly. InsertGradientOf does not affect the forward result.

Supported Platforms:

Ascend GPU CPU

Examples

>>> def clip_gradient(dx):
...     ret = dx
...     if ret > 1.0:
...         ret = 1.0
...
...     if ret < 0.2:
...         ret = 0.2
...
...     return ret
...
>>> from mindspore import ms_function
>>> clip = ops.InsertGradientOf(clip_gradient)
>>> grad_all = ops.GradOperation(get_all=True)
>>> def InsertGradientOfClipDemo():
...     def clip_test(x, y):
...         x = clip(x)
...         y = clip(y)
...         c = x * y
...         return c
...
...     @ms_function
...     def f(x, y):
...         return clip_test(x, y)
...
...     def fd(x, y):
...         return grad_all(clip_test)(x, y)
...
...     print("forward: ", f(1.1, 0.1))
...     print("clip_gradient:", fd(1.1, 0.1))
...
class tinyms.primitives.Inv(*args, **kwargs)[source]

Computes the reciprocal (Inv) of the input tensor element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). Must be one of the following types: float16, float32, int32.

Outputs:

Tensor, has the same shape and data type as input_x.

Supported Platforms:

Ascend

Examples

>>> inv = ops.Inv()
>>> input_x = Tensor(np.array([0.25, 0.4, 0.31, 0.52]), mindspore.float32)
>>> output = inv(input_x)
>>> print(output)
[4.        2.5       3.2258065 1.923077 ]
class tinyms.primitives.Invert(*args, **kwargs)[source]

Flips all bits of input tensor element-wise.

Inputs:
  • input_x (Tensor[int16], Tensor[uint16]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> invert = ops.Invert()
>>> input_x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> output = invert(input_x)
>>> print(output)
[-26 -5 -14 -10]
class tinyms.primitives.InvertPermutation(*args, **kwargs)[source]

Computes the inverse of an index permutation.

This operation calculates the inverse of an index permutation. It requires a 1-dimensional tuple x, which represents a permutation of the integers starting at zero, and swaps each value with its index position. In other words, for the output tuple y and the input tuple x, this operation calculates the following: \(y[x[i]] = i, \quad i \in [0, 1, \ldots, \text{len}(x)-1]\).

Note

These values must include 0. There must be no duplicate values and the values can not be negative.

Inputs:
  • input_x (Union[tuple[int], list[int]]) - The input is constructed from multiple integers, i.e., \((y_1, y_2, ..., y_S)\), representing the indices. The values must include 0, and there can be no duplicate values or negative values. Only constant values are allowed. The maximum value must be equal to the length of input_x minus 1.

Outputs:

tuple[int]. It has the same length as the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> invert = ops.InvertPermutation()
>>> input_data = (3, 4, 0, 2, 1)
>>> output = invert(input_data)
>>> print(output)
(2, 4, 3, 0, 1)
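
The defining relation \(y[x[i]] = i\) can be checked directly in plain Python (a quick sanity check, not an operator call):

>>> x = (3, 4, 0, 2, 1)
>>> y = (2, 4, 3, 0, 1)
>>> all(y[x[i]] == i for i in range(len(x)))
True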
class tinyms.primitives.IsFinite(*args, **kwargs)[source]

Determines which elements are finite for each position.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input, and the dtype is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_finite = ops.IsFinite()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_finite(input_x)
>>> print(output)
[False  True False]
class tinyms.primitives.IsInf(*args, **kwargs)[source]

Determines which elements are inf or -inf for each position.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input, and the dtype is bool.

Supported Platforms:

GPU

Examples

>>> is_inf = ops.IsInf()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_inf(input_x)
>>> print(output)
[False False True]
class tinyms.primitives.IsInstance(*args, **kwargs)[source]

Checks whether an object is an instance of a target type.

Inputs:
  • inst (Any Object) - The instance to be checked. Only constant value is allowed.

  • type_ (mindspore.dtype) - The target type. Only constant value is allowed.

Outputs:

bool, the check result.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = 1
>>> output = ops.IsInstance()(a, mindspore.int32)
>>> print(output)
False
class tinyms.primitives.IsNan(*args, **kwargs)[source]

Determines which elements are NaN for each position.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input, and the dtype is bool.

Supported Platforms:

GPU

Examples

>>> is_nan = ops.IsNan()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_nan(input_x)
>>> print(output)
[True False False]
class tinyms.primitives.IsSubClass(*args, **kwargs)[source]

Checks whether this type is a sub-class of another type.

Inputs:
  • sub_type (mindspore.dtype) - The type to be checked. Only constant value is allowed.

  • type_ (mindspore.dtype) - The target type. Only constant value is allowed.

Outputs:

bool, the check result.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.IsSubClass()(mindspore.int32,  mindspore.intc)
>>> print(output)
True
class tinyms.primitives.KLDivLoss(*args, **kwargs)[source]

Computes the Kullback-Leibler divergence between the target and the output.

Note

Sets input as \(x\), input label as \(y\), output as \(\ell(x, y)\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = y_n \cdot (\log y_n - x_n)\]

Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters

reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

Inputs:
  • input_x (Tensor) - The input Tensor. The data type must be float32.

  • input_y (Tensor) - The label Tensor which has the same shape as input_x. The data type must be float32.

Outputs:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as input_x. Otherwise it is a scalar.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.kldiv_loss = ops.KLDivLoss()
...     def construct(self, x, y):
...         result = self.kldiv_loss(x, y)
...         return result
...
>>> net = Net()
>>> input_x = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> input_y = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> output = net(input_x, input_y)
>>> print(output)
-0.23333333
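
For reference, the ‘mean’ reduction above can be reproduced with plain NumPy (a verification sketch, not part of the operator API; the term \(y_n \log y_n\) is taken as 0 where \(y_n = 0\)):

>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1])
>>> y = np.array([0., 1., 0.])
>>> log_y = np.zeros_like(y)
>>> log_y[y > 0] = np.log(y[y > 0])    # y_n * log(y_n) is taken as 0 where y_n == 0
>>> ref = np.mean(y * (log_y - x))     # ~ -0.2333, matching the operator output above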
class tinyms.primitives.L2Loss(*args, **kwargs)[source]

Calculates half of the squared L2 norm of a tensor, i.e. the L2 norm without taking the square root.

Set input_x as x and output as loss.

\[loss = sum(x ** 2) / 2\]
Inputs:
  • input_x (Tensor) - A input Tensor. Data type must be float16 or float32.

Outputs:

Tensor, has the same dtype as input_x. The output tensor is the value of loss which is a scalar tensor.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float16)
>>> l2_loss = ops.L2Loss()
>>> output = l2_loss(input_x)
>>> print(output)
7.0
class tinyms.primitives.L2Normalize(*args, **kwargs)[source]

L2 normalization Operator.

This operator will normalize the input using the given axis. The function is shown as follows:

\[\text{output} = \frac{x}{\sqrt{\text{max}(\text{sum} (\text{input_x}^2), \epsilon)}},\]

where \(\epsilon\) is epsilon.

Parameters
  • axis (Union[list(int), tuple(int), int]) – The starting axis for the input to apply the L2 normalization. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-4.

Inputs:
  • input_x (Tensor) - Input to compute the normalization. Data type must be float16 or float32.

Outputs:

Tensor, with the same type and shape as the input.

Supported Platforms:

Ascend

Examples

>>> l2_normalize = ops.L2Normalize()
>>> input_x = Tensor(np.random.randint(-256, 256, (2, 3, 4)), mindspore.float32)
>>> output = l2_normalize(input_x)
>>> print(output.shape)
(2, 3, 4)
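
Since normalization is applied along the starting axis (0 by default, as assumed here), each slice taken along that axis ends up with approximately unit L2 norm; a quick NumPy check of the example output:

>>> import numpy as np
>>> norms = np.linalg.norm(output.asnumpy(), axis=0)   # values are ~1.0 wherever the slice is non-zero
>>> print(norms.shape)
(3, 4)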
class tinyms.primitives.LARSUpdate(*args, **kwargs)[source]

Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient.

Parameters
  • epsilon (float) – Term added to the denominator to improve numerical stability. Default: 1e-05.

  • hyperpara (float) – Trust coefficient for calculating the local learning rate. Default: 0.001.

  • use_clip (bool) – Whether to use clip operation for calculating the local learning rate. Default: False.

Inputs:
  • weight (Tensor) - The weight to be updated.

  • gradient (Tensor) - The gradient of weight, which has the same shape and dtype with weight.

  • norm_weight (Tensor) - A scalar tensor, representing the sum of squares of weight.

  • norm_gradient (Tensor) - A scalar tensor, representing the sum of squares of gradient.

  • weight_decay (Union[Number, Tensor]) - Weight decay. It must be a scalar tensor or number.

  • learning_rate (Union[Number, Tensor]) - Learning rate. It must be a scalar tensor or number.

Outputs:

Tensor, represents the new gradient.

Supported Platforms:

Ascend

Examples

>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> import mindspore.nn as nn
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.lars = ops.LARSUpdate()
...         self.reduce = ops.ReduceSum()
...         self.square = ops.Square()
...     def construct(self, weight, gradient):
...         w_square_sum = self.reduce(self.square(weight))
...         grad_square_sum = self.reduce(self.square(gradient))
...         grad_t = self.lars(weight, gradient, w_square_sum, grad_square_sum, 0.0, 1.0)
...         return grad_t
...
>>> np.random.seed(0)
>>> weight = np.random.random(size=(2, 3)).astype(np.float32)
>>> gradient = np.random.random(size=(2, 3)).astype(np.float32)
>>> net = Net()
>>> output = net(Tensor(weight), Tensor(gradient))
>>> print(output)
[[0.00036534 0.00074454 0.00080456]
 [0.00032014 0.00066101 0.00044157]]
class tinyms.primitives.LRN(*args, **kwargs)[source]

Local Response Normalization.

\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]
Parameters
  • depth_radius (int) – Half-width of the 1-D normalization window (a scalar attribute).

  • bias (float) – An offset (usually positive to avoid dividing by 0).

  • alpha (float) – A scale factor, usually positive.

  • beta (float) – An exponent.

  • norm_region (str) – Specifies normalization region. Options: “ACROSS_CHANNELS”. Default: “ACROSS_CHANNELS”.

Inputs:
  • x (Tensor) - A 4D Tensor with float16 or float32 data type.

Outputs:

Tensor, with the same shape and data type as the input tensor.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(1, 2, 2, 2), mindspore.float32)
>>> lrn = ops.LRN()
>>> output = lrn(x)
>>> print(output.shape)
(1, 2, 2, 2)
class tinyms.primitives.LSTM(*args, **kwargs)[source]

Performs the Long Short-Term Memory (LSTM) on the input.

For detailed information, please refer to nn.LSTM.

Supported Platforms:

GPU CPU

Examples

>>> input_size = 10
>>> hidden_size = 2
>>> num_layers = 1
>>> seq_len = 5
>>> batch_size = 2
>>>
>>> net = ops.LSTM(input_size, hidden_size, num_layers, True, False, 0.0)
>>> input = Tensor(np.ones([seq_len, batch_size, input_size]).astype(np.float32))
>>> h0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> c0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> w = Tensor(np.ones([112, 1, 1]).astype(np.float32))
>>> output, hn, cn, _, _ = net(input, h0, c0, w)
>>> print(output)
[[[0.9640267  0.9640267 ]
  [0.9640267  0.9640267 ]]
 [[0.9950539  0.9950539 ]
  [0.9950539  0.9950539 ]]
 [[0.99932843 0.99932843]
  [0.99932843 0.99932843]]
 [[0.9999084  0.9999084 ]
  [0.9999084  0.9999084 ]]
 [[0.9999869  0.9999869 ]
  [0.9999869  0.9999869 ]]]
class tinyms.primitives.LayerNorm(*args, **kwargs)[source]

Applies the Layer Normalization to the input tensor.

This operator will normalize the input tensor on given axis. LayerNorm is described in the paper Layer Normalization.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • begin_norm_axis (int) – The begin axis of the input_x to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • begin_params_axis (int) – The begin axis of the parameter input (gamma, beta) to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • epsilon (float) – A value added to the denominator for numerical stability. Default: 1e-7.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\). The input of LayerNorm.

  • gamma (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter gamma as the scale on norm.

  • beta (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter beta as the offset on norm.

Outputs:

tuple[Tensor], tuple of 3 tensors: the normalized input, the mean, and the variance.

  • output_x (Tensor) - The normalized input, has the same type and shape as the input_x. The shape is \((N, C)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> gamma = Tensor(np.ones([3]), mindspore.float32)
>>> beta = Tensor(np.ones([3]), mindspore.float32)
>>> layer_norm = ops.LayerNorm()
>>> output, mean, variance = layer_norm(input_x, gamma, beta)
>>> print(output)
[[-0.2247448  1.         2.2247448]
 [-0.2247448  1.         2.2247448]]
>>> print(mean)
[[2.]
 [2.]]
>>> print(variance)
[[0.6666667]
 [0.6666667]]
class tinyms.primitives.Less(*args, **kwargs)[source]

Computes the boolean value of \(x < y\) element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less = ops.Less()
>>> output = less(input_x, input_y)
>>> print(output)
[False False  True]
class tinyms.primitives.LessEqual(*args, **kwargs)[source]

Computes the boolean value of \(x <= y\) element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool , and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less_equal = ops.LessEqual()
>>> output = less_equal(input_x, input_y)
>>> print(output)
[ True False  True]
class tinyms.primitives.LinSpace(*args, **kwargs)[source]

Generates num evenly spaced values in an interval (inclusive of start and stop) and returns them as a 1-D tensor.

Inputs:
  • start (Tensor[float32]) - Start value of interval, With shape of 0-D.

  • stop (Tensor[float32]) - Last value of interval, With shape of 0-D.

  • num (int) - Number of ticks in the interval, inclusive of start and stop.

Outputs:

Tensor, has the same dtype as start, with shape (num,).

Supported Platforms:

Ascend GPU

Examples

>>> linspace = ops.LinSpace()
>>> start = Tensor(1, mindspore.float32)
>>> stop = Tensor(10, mindspore.float32)
>>> num = 5
>>> output = linspace(start, stop, num)
>>> print(output)
[ 1.    3.25  5.5   7.75 10.  ]
class tinyms.primitives.Log(*args, **kwargs)[source]

Returns the natural logarithm of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor. With float16 or float32 data type. The value must be greater than 0.

Outputs:

Tensor, has the same shape as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log = ops.Log()
>>> output = log(input_x)
>>> print(output)
[0.        0.6931472 1.3862944]
class tinyms.primitives.Log1p(*args, **kwargs)[source]

Returns the natural logarithm of one plus the input tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor. With float16 or float32 data type. The value must be greater than -1.

Outputs:

Tensor, has the same shape as the input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log1p = ops.Log1p()
>>> output = log1p(input_x)
>>> print(output)
[0.6931472 1.0986123 1.609438 ]
class tinyms.primitives.LogSoftmax(*args, **kwargs)[source]

Log Softmax activation function.

Applies the Log Softmax function to the input tensor on the specified axis. Supposing a slice along the given axis is \(x\), then for each element \(x_i\) the Log Softmax function is shown as follows:

\[\text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),\]

where \(N\) is the length of the Tensor.

Parameters

axis (int) – The axis to perform the Log softmax operation. Default: -1.

Inputs:
  • logits (Tensor) - The input of Log Softmax, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the logits.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> log_softmax = ops.LogSoftmax()
>>> output = log_softmax(input_x)
>>> print(output)
[-4.4519143 -3.4519143 -2.4519143 -1.4519144 -0.4519144]
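
The same values can be reproduced with NumPy by subtracting the log-sum-exp of the logits (a verification sketch, not an operator API):

>>> import numpy as np
>>> logits = np.array([1., 2., 3., 4., 5.], np.float32)
>>> ref = logits - np.log(np.sum(np.exp(logits)))   # matches the operator output above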
class tinyms.primitives.LogUniformCandidateSampler(*args, **kwargs)[source]

Generates random labels with a log-uniform distribution for sampled_candidates.

Randomly samples a tensor of classes from the range of integers [0, range_max).

Parameters
  • num_true (int) – The number of target classes per training example. Default: 1.

  • num_sampled (int) – The number of classes to randomly sample. Default: 5.

  • unique (bool) – Determines whether to sample with rejection. If unique is True, all sampled classes in a batch are unique. Default: True.

  • range_max (int) – The number of possible classes. When unique is True, range_max must be greater than or equal to num_sampled. Default: 5.

  • seed (int) – Random seed, must be non-negative.

Inputs:
  • true_classes (Tensor) - The target classes. With data type of int64 and shape [batch_size, num_true].

Outputs:

Tuple of 3 Tensors.

  • sampled_candidates (Tensor) - A Tensor with shape (num_sampled,) and the same type as true_classes.

  • true_expected_count (Tensor) - A Tensor with the same shape as true_classes and type float32.

  • sampled_expected_count (Tensor) - A Tensor with the same shape as sampled_candidates and type float32.

Supported Platforms:

Ascend

Examples

>>> sampler = ops.LogUniformCandidateSampler(2, 5, True, 5)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1, 7], [0, 4], [3, 3]])))
>>> print(output1, output2, output3)
[3 2 0 4 1]
[[0.92312991 0.49336370]
 [0.99248987 0.65806371]
 [0.73553443 0.73553443]]
[0.73553443 0.82625800 0.99248987 0.65806371 0.92312991]
class tinyms.primitives.LogicalAnd(*args, **kwargs)[source]

Computes the “logical AND” of two tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

Inputs:
  • input_x (Union[Tensor, bool]) - The first input is a bool or a tensor whose data type is bool.

  • input_y (Union[Tensor, bool]) - The second input is a bool when the first input is a tensor or a tensor whose data type is bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> input_y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_and = ops.LogicalAnd()
>>> output = logical_and(input_x, input_y)
>>> print(output)
[ True False False]
class tinyms.primitives.LogicalNot(*args, **kwargs)[source]

Computes the “logical NOT” of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is bool.

Outputs:

Tensor, the shape is the same as the input_x, and the dtype is bool.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> logical_not = ops.LogicalNot()
>>> output = logical_not(input_x)
>>> print(output)
[False  True False]
class tinyms.primitives.LogicalOr(*args, **kwargs)[source]

Computes the “logical OR” of two tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

Inputs:
  • input_x (Union[Tensor, bool]) - The first input is a bool or a tensor whose data type is bool.

  • input_y (Union[Tensor, bool]) - The second input is a bool when the first input is a tensor or a tensor whose data type is bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> input_y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_or = ops.LogicalOr()
>>> output = logical_or(input_x, input_y)
>>> print(output)
[ True  True  True]
class tinyms.primitives.MakeRefKey(*args, **kwargs)[source]

Makes a RefKey instance from a string. RefKey stores the name of a Parameter; it can be passed through functions and used as the target of Assign.

Parameters

tag (str) – Parameter name to make the RefKey.

Inputs:

No inputs.

Outputs:

RefKeyType, made from the Parameter name.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.y = Parameter(Tensor(np.ones([2, 3]), mstype.int32), name="y")
...         self.make_ref_key = ops.MakeRefKey("y")
...
...     def construct(self, x):
...         key = self.make_ref_key()
...         ref = ops.make_ref(key, x, self.y)
...         return ref * x
...
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.int32)
>>> net = Net()
>>> output = net(x)
>>> print(output)
[[ 1  4  9]
 [16 25 36]]
class tinyms.primitives.MatMul(*args, **kwargs)[source]

Multiplies matrix a and matrix b.

The rank of input tensors must be equal to 2.

Parameters
  • transpose_a (bool) – If true, a is transposed before multiplication. Default: False.

  • transpose_b (bool) – If true, b is transposed before multiplication. Default: False.

Inputs:
  • input_x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((N, C)\). If transpose_a is True, its shape must be \((N, C)\) after transpose.

  • input_y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((C, M)\). If transpose_b is True, its shape must be \((C, M)\) after transpose.

Outputs:

Tensor, the shape of the output tensor is \((N, M)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> matmul = ops.MatMul()
>>> output = matmul(input_x1, input_x2)
>>> print(output)
[[3. 3. 3. 3.]]
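
When transpose_b is True, the second matrix is supplied already transposed; a small sketch reusing input_x1 from the example above:

>>> matmul_tb = ops.MatMul(transpose_b=True)
>>> input_y = Tensor(np.ones(shape=[4, 3]), mindspore.float32)   # (M, C); transposed to (C, M) internally
>>> print(matmul_tb(input_x1, input_y).shape)
(1, 4)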
class tinyms.primitives.MaxPool(*args, **kwargs)[source]

Max pooling operation.

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width are both kernel_size, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_op = ops.MaxPool(pad_mode="VALID", kernel_size=2, strides=1)
>>> output = maxpool_op(input_tensor)
>>> print(output)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
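
To see how the two pad modes affect the output size, here is a shape-only sketch reusing input_tensor from the example above (with kernel_size=2 and strides=1, “same” preserves the spatial size while “valid” drops the border):

>>> same_pool = ops.MaxPool(pad_mode="SAME", kernel_size=2, strides=1)
>>> print(same_pool(input_tensor).shape)
(1, 3, 3, 4)
>>> valid_pool = ops.MaxPool(pad_mode="VALID", kernel_size=2, strides=1)
>>> print(valid_pool(input_tensor).shape)
(1, 3, 2, 3)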
class tinyms.primitives.MaxPoolWithArgmax(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

Performs max pooling on the input Tensor and returns both max values and indices.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents height and width are both kernel_size, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\). Data type must be float16 or float32.

Outputs:

Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N, C_{out}, H_{out}, W_{out})\). It has the same data type as input.

  • mask (Tensor) - Max values’ index represented by the mask. Data type is int32.

Raises

TypeError – If the input data type is not float16 or float32.

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_arg_op = ops.MaxPoolWithArgmax(pad_mode="VALID", kernel_size=2, strides=1)
>>> output_tensor, argmax = maxpool_arg_op(input_tensor)
>>> print(output_tensor)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
class tinyms.primitives.Maximum(*args, **kwargs)[source]

Computes the maximum of input tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> maximum = ops.Maximum()
>>> output = maximum(input_x, input_y)
>>> print(output)
[4. 5. 6.]
class tinyms.primitives.Merge(*args, **kwargs)[source]

Merges all input data to one.

One and only one of the inputs must be selected as the output.

Inputs:
  • inputs (Union(Tuple, List)) - The data to be merged. All tuple elements must have the same data type.

Outputs:

tuple. The output is a tuple (data, output_index). The data has the same shape as the elements of inputs.

Examples

>>> merge = ops.Merge()
>>> input_x = Tensor(np.linspace(0, 8, 8).reshape(2, 4), mindspore.float32)
>>> input_y = Tensor(np.random.randint(-4, 4, (2, 4)), mindspore.float32)
>>> result = merge((input_x, input_y))
class tinyms.primitives.Meshgrid(*args, **kwargs)[source]

Generates coordinate matrices from given coordinate tensors.

Given N one-dimensional coordinate tensors, returns a tuple outputs of N N-D coordinate tensors for evaluating expressions on an N-D grid.

Parameters

indexing (str) – Either ‘xy’ or ‘ij’. Default: ‘xy’. When the indexing argument is set to ‘xy’ (the default), the broadcasting instructions for the first two dimensions are swapped.

Inputs:
  • input (Union[tuple]) - A tuple of N 1-D Tensor objects. The length of input should be greater than 1.

Outputs:

Tensors, a tuple of N N-D Tensor objects.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
>>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
>>> inputs = (x, y, z)
>>> meshgrid = ops.Meshgrid(indexing="xy")
>>> output = meshgrid(inputs)
>>> print(output)
(Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5]],
  [[6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6]],
  [[7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]]]))
class tinyms.primitives.Minimum(*args, **kwargs)[source]

Computes the minimum of input tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> minimum = ops.Minimum()
>>> output = minimum(input_x, input_y)
>>> print(output)
[1. 2. 3.]
class tinyms.primitives.MirrorPad(*args, **kwargs)[source]

Pads the input tensor according to the paddings and mode.

Parameters

mode (str) – Specifies the padding mode. The optional values are “REFLECT” and “SYMMETRIC”. Default: “REFLECT”.

Inputs:
  • input_x (Tensor) - The input tensor.

  • paddings (Tensor) - The paddings tensor. The value of paddings is a matrix(list), and its shape is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in the D th dimension, paddings[D, 0] indicates how many sizes to be extended ahead of the input tensor in the D th dimension, and paddings[D, 1] indicates how many sizes to be extended behind the input tensor in the D th dimension.

Outputs:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it uses a way of symmetrical copying through the axis of symmetry to fill in. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]].

  • If mode is “SYMMETRIC”, the filling method is similar to the “REFLECT”. It is also copied according to the symmetry axis, except that it includes the symmetry axis. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]].

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> import mindspore.nn as nn
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.pad = ops.MirrorPad(mode="REFLECT")
...     def construct(self, x, paddings):
...         return self.pad(x, paddings)
...
>>> x = np.random.random(size=(2, 3)).astype(np.float32)
>>> paddings = Tensor([[1, 1], [2, 2]])
>>> pad = Net()
>>> output = pad(Tensor(x), paddings)
>>> print(output.shape)
(4, 7)
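
The worked REFLECT case from the description above can be reproduced with the same Net (a sketch; only the shape is printed here, the padded values are listed in the description):

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> paddings = Tensor([[1, 1], [2, 2]])
>>> output = pad(x, paddings)     # reuses the Net instance from the example above
>>> print(output.shape)
(5, 7)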
class tinyms.primitives.Mod(*args, **kwargs)[source]

Computes the remainder of dividing the first input tensor by the second input tensor element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a number or a tensor whose data type is number.

  • input_y (Union[Tensor, Number]) - When the first input is a tensor, The second input could be a number or a tensor whose data type is number. When the first input is a number, the second input must be a tensor whose data type is number.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

ValueError – When input_x and input_y are not the same dtype.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> mod = ops.Mod()
>>> output = mod(input_x, input_y)
>>> print(output)
[-1.  1.  0.]
class tinyms.primitives.Mul(*args, **kwargs)[source]

Multiplies two tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} * y_{i}\]
Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> mul = ops.Mul()
>>> output = mul(input_x, input_y)
>>> print(output)
[ 4. 10. 18.]
class tinyms.primitives.Multinomial(*args, **kwargs)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of tensor input.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • input (Tensor[float32]) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions.

  • num_samples (int32) - number of samples to draw.

Outputs:

Tensor with the same number of rows as input; each row has num_samples sampled indices.

Supported Platforms:

GPU

Examples

>>> input = Tensor([0., 9., 4., 0.], mstype.float32)
>>> multinomial = ops.Multinomial(seed=10)
>>> output = multinomial(input, 2)
>>> print(output)
[2 1]
class tinyms.primitives.NMSWithMask(*args, **kwargs)[source]

Selects some bounding boxes in descending order of score.

Parameters

iou_threshold (float) – Specifies the threshold of overlap boxes with respect to IOU. Default: 0.5.

Raises

ValueError – If the iou_threshold is not a float number, or if the first dimension of input Tensor is less than or equal to 0, or if the data type of the input Tensor is not float16 or float32.

Inputs:
  • bboxes (Tensor) - The shape of tensor is \((N, 5)\). Input bounding boxes. N is the number of input bounding boxes. Every bounding box contains 5 values, the first 4 values are the coordinates of bounding box, and the last value is the score of this bounding box. The data type must be float16 or float32.

Outputs:

tuple[Tensor], tuple of three tensors, they are selected_boxes, selected_idx and selected_mask.

  • selected_boxes (Tensor) - The shape of tensor is \((N, 5)\). The list of bounding boxes after non-max suppression calculation.

  • selected_idx (Tensor) - The shape of tensor is \((N,)\). The indexes list of valid input bounding boxes.

  • selected_mask (Tensor) - The shape of tensor is \((N,)\). A mask list of valid output bounding boxes.

Supported Platforms:

Ascend GPU

Examples

>>> bbox = np.array([[0.4, 0.2, 0.4, 0.3, 0.1], [0.4, 0.3, 0.6, 0.8, 0.7]])
>>> bbox[:, 2] += bbox[:, 0]
>>> bbox[:, 3] += bbox[:, 1]
>>> inputs = Tensor(bbox, mindspore.float32)
>>> nms = ops.NMSWithMask(0.5)
>>> output_boxes, indices, mask = nms(inputs)
>>> indices_np = indices.asnumpy()
>>> print(indices_np[mask.asnumpy()])
[0 1]
class tinyms.primitives.NPUAllocFloatStatus(*args, **kwargs)[source]

Allocates a flag to store the overflow status.

The flag is a tensor whose shape is (8,) and data type is mindspore.dtype.float32.

Note

Examples: see NPUGetFloatStatus.

Outputs:

Tensor, has the shape of (8,).

Supported Platforms:

Ascend

Examples

>>> alloc_status = ops.NPUAllocFloatStatus()
>>> output = alloc_status()
>>> print(output)
[0. 0. 0. 0. 0. 0. 0. 0.]
class tinyms.primitives.NPUClearFloatStatus(*args, **kwargs)[source]

Clears the flag which stores the overflow status.

Note

The flag is in the register on the Ascend device. It will be reset and can not be reused again after the NPUClearFloatStatus is called.

Examples: see NPUGetFloatStatus.

Inputs:
  • input_x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape as input_x. All the elements in the tensor will be zero.

Supported Platforms:

Ascend

Examples

>>> alloc_status = ops.NPUAllocFloatStatus()
>>> get_status = ops.NPUGetFloatStatus()
>>> clear_status = ops.NPUClearFloatStatus()
>>> init = alloc_status()
>>> flag = get_status(init)
>>> clear_status(init)
Tensor(shape=[8], dtype=Float32, value= [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00])
>>> print(init)
[1. 1. 1. 1. 1. 1. 1. 1.]
class tinyms.primitives.NPUGetFloatStatus(*args, **kwargs)[source]

Updates the flag which is the output tensor of NPUAllocFloatStatus with the latest overflow status.

The flag is a tensor whose shape is (8,) and data type is mindspore.dtype.float32. If the sum of the flag equals 0, no overflow has happened. If the sum of the flag is greater than 0, an overflow has happened.

Inputs:
  • input_x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape as input_x. All the elements in the tensor will be zero.

Supported Platforms:

Ascend

Examples

>>> alloc_status = ops.NPUAllocFloatStatus()
>>> get_status = ops.NPUGetFloatStatus()
>>> init = alloc_status()
>>> get_status(init)
Tensor(shape=[8], dtype=Float32, value= [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00])
>>> print(init)
[1. 1. 1. 1. 1. 1. 1. 1.]
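
In practice the three status operators are used together: allocate the flag, run the computation, read the flag with NPUGetFloatStatus, and then reduce it to decide whether an overflow occurred. A hedged sketch of that pattern (the reduction below is one common choice, not the only one):

>>> reduce_sum = ops.ReduceSum(keep_dims=False)
>>> flag_sum = reduce_sum(init, ())          # sum of the 8 status values
>>> overflow = flag_sum > 0                  # overflow occurred if the sum is positive
>>> clear_status = ops.NPUClearFloatStatus()
>>> _ = clear_status(init)                   # reset the flag before the next step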
class tinyms.primitives.Neg(*args, **kwargs)[source]

Returns a tensor with negative values of the input tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is number.

Outputs:

Tensor, has the same shape and dtype as input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> neg = ops.Neg()
>>> input_x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = neg(input_x)
>>> print(output)
[-1.  -2.   1.  -2.   0.   3.5]
class tinyms.primitives.NoRepeatNGram(*args, **kwargs)[source]

Updates log_probs to avoid repeated n-grams.

Parameters

ngram_size (int) – Size of n-grams, must be greater than 0. Default: 1.

Inputs:
  • state_seq (Tensor) - A 3-D tensor with shape: (batch_size, beam_width, m).

  • log_probs (Tensor) - A 3-D tensor with shape: (batch_size, beam_width, vocab_size). The value of log_probs will be replaced with -FLOAT_MAX when the corresponding n-grams are repeated.

Outputs:
  • log_probs (Tensor) - The output Tensor with same shape and type as original log_probs.

Supported Platforms:

Ascend

Examples

>>> no_repeat_ngram = ops.NoRepeatNGram(ngram_size=3)
>>> state_seq = Tensor([[[1, 2, 1, 2, 5, 1, 2],
...                      [9, 3, 9, 5, 4, 1, 5]],
...                     [[4, 8, 6, 4, 5, 6, 4],
...                      [4, 8, 8, 4, 3, 4, 8]]], dtype=mindspore.int32)
>>> log_probs = Tensor([[[0.75858542, 0.8437121 , 0.69025469, 0.79379992, 0.27400691,
...                       0.84709179, 0.78771346, 0.68587179, 0.22943851, 0.17682976]],
...                     [[0.99401879, 0.77239773, 0.81973878, 0.32085208, 0.59944118,
...                       0.3125177, 0.52604189, 0.77111461, 0.98443699, 0.71532898]]], dtype=mindspore.float32)
>>> output = no_repeat_ngram(state_seq, log_probs)
>>> print(output)
[[[0.75858542 -3.4028235e+38 0.69025469 0.79379992 0.27400691
   -3.4028235e+38 0.78771346 0.68587179 0.22943851 0.17682976]]
 [[0.99401879 0.77239773 0.81973878 0.32085208 0.59944118
   -3.4028235e+38 0.52604189 0.77111461 0.98443699 0.71532898]]]
class tinyms.primitives.NotEqual(*args, **kwargs)[source]

Computes the non-equivalence of two tensors element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(input_x, 2.0)
>>> print(output)
[ True False  True]
>>>
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(input_x, input_y)
>>> print(output)
[False False  True]
class tinyms.primitives.OneHot(*args, **kwargs)[source]

Computes a one-hot tensor.

Makes a new tensor: the locations specified by indices take the value on_value, while all other locations take the value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters

axis (int) – Position to insert the value. e.g. If indices shape is [n, c], and axis is -1 the output shape will be [n, c, depth], If axis is 0 the output shape will be [depth, n, c]. Default: -1.

Inputs:
  • indices (Tensor) - A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\). Data type must be int32.

  • depth (int) - A scalar defining the depth of the one hot dimension.

  • on_value (Tensor) - A value to fill in output when indices[j] = i. With data type of float16 or float32.

  • off_value (Tensor) - A value to fill in output when indices[j] != i. It has the same data type as on_value.

Outputs:

Tensor, one-hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> onehot = ops.OneHot()
>>> output = onehot(indices, depth, on_value, off_value)
>>> print(output)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
class tinyms.primitives.Ones(*args, **kwargs)[source]

Creates a tensor filled with value ones.

Creates a tensor with the shape described by the first argument and fills it with ones of the type given by the second argument.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor. Only constant positive int is allowed.

  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

Outputs:

Tensor, whose shape is given by the shape input and whose dtype is given by the type input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore.ops import operations as ops
>>> ones = ops.Ones()
>>> output = ones((2, 2), mindspore.float32)
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.OnesLike(*args, **kwargs)[source]

Creates a new tensor. The values of all elements are 1.

Returns a tensor of ones with the same shape and type as the input.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x but filled with ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> oneslike = ops.OnesLike()
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = oneslike(x)
>>> print(output)
[[1 1]
 [1 1]]
class tinyms.primitives.PReLU(*args, **kwargs)[source]

Parametric Rectified Linear Unit activation function.

PReLU is described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Defined as follows:

\[prelu(x_i)= \max(0, x_i) + \min(0, w * x_i),\]

where \(x_i\) is an element of a channel of the input.

Note

1-dimensional input_x is not supported.

Inputs:
  • input_x (Tensor) - Float tensor, representing the output of the previous layer. With data type of float16 or float32.

  • weight (Tensor) - Float Tensor, w > 0; only two shapes are legitimate: 1, or the number of channels of the input. With data type of float16 or float32.

Outputs:

Tensor, with the same type as input_x.

For detailed information, please refer to nn.PReLU.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.prelu = ops.PReLU()
...     def construct(self, input_x, weight):
...         result = self.prelu(input_x, weight)
...         return result
...
>>> input_x = Tensor(np.random.randint(-3, 3, (2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> net = Net()
>>> output = net(input_x, weight)
>>> print(output.shape)
(2, 3, 2)
tinyms.primitives.Pack(axis=0)[source]

Packs a list of tensors in specified axis.

The usage of Pack is deprecated. Please use Stack.

class tinyms.primitives.Pad(*args, **kwargs)[source]

Pads the input tensor according to the paddings.

Parameters

paddings (tuple) – The shape of parameter paddings is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in D th dimension, paddings[D, 0] indicates how many sizes to be extended ahead of the input tensor in the D th dimension, and paddings[D, 1] indicates how many sizes to be extended behind the input tensor in the D th dimension.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, the tensor after padding.

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> pad_op = ops.Pad(((1, 2), (2, 1)))
>>> output = pad_op(input_tensor)
>>> print(output)
[[ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.  -0.1  0.3  3.6  0. ]
 [ 0.   0.   0.4  0.5 -3.2  0. ]
 [ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.   0.   0.   0.   0. ]]
class tinyms.primitives.Padding(*args, **kwargs)[source]

Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0.

Parameters

pad_dim_size (int) – The value of the last dimension of x to be extended, which must be positive.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The rank of x must be at least 2. The last dimension of x must be 1.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
>>> pad_dim_size = 4
>>> output = ops.Padding(pad_dim_size)(x)
>>> print(output)
[[ 8.  0.  0.  0.]
 [10.  0.  0.  0.]]
class tinyms.primitives.ParallelConcat(*args, **kwargs)[source]

Concatenates input tensors along the first dimension.

Note

The input tensors are all required to have size 1 in the first dimension.

Inputs:
  • values (tuple, list) - A tuple or a list of input tensors. The data type and shape of these tensors must be the same.

Outputs:

Tensor, data type is the same as values.

Supported Platforms:

Ascend

Examples

>>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
>>> data2 = Tensor(np.array([[2, 1]]).astype(np.int32))
>>> op = ops.ParallelConcat()
>>> output = op((data1, data2))
>>> print(output)
[[0 1]
 [2 1]]
class tinyms.primitives.Partial(*args, **kwargs)[source]

Makes a partial function instance, used for pynative mode.

Inputs:
  • args (Union[FunctionType, Tensor]) - The function and bind arguments.

Outputs:

FunctionType, partial function bound with the given arguments.
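
A hedged usage sketch (pynative mode assumed; the function and argument names here are illustrative only):

>>> from mindspore import Tensor, context
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> def add3(x, y, z):
...     return x + y + z
>>> partial = ops.Partial()
>>> add_1_2 = partial(add3, Tensor(1), Tensor(2))  # bind the function and its first two arguments
>>> output = add_1_2(Tensor(3))                    # equivalent to add3(Tensor(1), Tensor(2), Tensor(3))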

class tinyms.primitives.Poisson(*args, **kwargs)[source]

Produces random non-negative integer values i, distributed according to the discrete probability function:

\[\text{P}(i|\mu) = \frac{\exp(-\mu)\mu^{i}}{i!},\]
Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • mean (Tensor) - The \(\mu\) parameter of the distribution, which defines the mean number of occurrences of the event. It must be greater than 0. With float32 data type.

Outputs:

Tensor. Its shape must be the broadcasted shape of shape and the shape of mean. The dtype is int32.

Supported Platforms:

Ascend

Examples

>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mstype.float32)
>>> poisson = ops.Poisson(seed=5)
>>> output = poisson(shape, mean)
>>> result = output.shape
>>> print(result)
(4, 2)
class tinyms.primitives.PopulationCount(*args, **kwargs)[source]

Calculates the population count, i.e. the number of 1-bits in the binary representation of each element.

Inputs:
  • input (Tensor) - The data type must be int16 or uint16.

Outputs:

Tensor, with the same shape as the input.

Supported Platforms:

Ascend

Examples

>>> population_count = ops.PopulationCount()
>>> x_input = Tensor([0, 1, 3], mindspore.int16)
>>> output = population_count(x_input)
>>> print(output)
[0 1 2]
class tinyms.primitives.Pow(*args, **kwargs)[source]

Computes a tensor to the power of the second input.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> input_y = 3.0
>>> pow = ops.Pow()
>>> output = pow(input_x, input_y)
>>> print(output)
[ 1.  8. 64.]
>>>
>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> input_y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> pow = ops.Pow()
>>> output = pow(input_x, input_y)
>>> print(output)
[ 1. 16. 64.]
class tinyms.primitives.Print(*args, **kwargs)[source]

Outputs the tensor or string to stdout.

Note

In pynative mode, please use the Python print function.

Inputs:
  • input_x (Union[Tensor, str]) - The graph node to attach to. The input supports multiple strings and tensors which are separated by ‘,’.

Supported Platforms:

Ascend

Examples

>>> class PrintDemo(nn.Cell):
...     def __init__(self):
...         super(PrintDemo, self).__init__()
...         self.print = ops.Print()
...
...     def construct(self, x, y):
...         self.print('Print Tensor x and Tensor y:', x, y)
...         return x
...
>>> x = Tensor(np.ones([2, 1]).astype(np.int32))
>>> y = Tensor(np.ones([2, 2]).astype(np.int32))
>>> net = PrintDemo()
>>> result = net(x, y)
Print Tensor x and Tensor y:
[[1]
 [1]]
[[1 1]
 [1 1]]
class tinyms.primitives.Pull(*args, **kwargs)[source]

Pulls weight from parameter server.

Inputs:
  • key (Tensor) - The key of the weight.

  • weight (Tensor) - The weight to be updated.

Outputs:

None.

class tinyms.primitives.Push(*args, **kwargs)[source]

Pushes the inputs of the corresponding optimizer to parameter server.

Parameters
  • optim_type (string) – The optimizer type. Default: ‘ApplyMomentum’.

  • only_shape_indices (list) – The indices of the inputs for which only the shape will be pushed to the parameter server. Default: None.

Inputs:
  • optim_inputs (tuple) - The inputs for this kind of optimizer.

  • optim_input_shapes (tuple) - The shapes of the inputs.

Outputs:

Tensor, the key of the weight which needs to be updated.

class tinyms.primitives.RNNTLoss(*args, **kwargs)[source]

Computes the RNNTLoss and its gradient with respect to the softmax outputs.

Parameters

blank_label (int) – blank label. Default: 0.

Inputs:
  • acts (Tensor) - Tensor of shape \((B, T, U, V)\). Data type must be float16 or float32.

  • labels (Tensor[int32]) - Tensor of shape \((B, U-1)\).

  • input_lengths (Tensor[int32]) - Tensor of shape \((B,)\).

  • label_lengths (Tensor[int32]) - Tensor of shape \((B,)\).

Outputs:
  • costs (Tensor[int32]) - Tensor of shape \((B,)\).

  • grads (Tensor[int32]) - Has the same shape as acts.

Supported Platforms:

Ascend

Examples

>>> B, T, U, V = 1, 2, 3, 5
>>> blank = 0
>>> acts = np.random.random((B, T, U, V)).astype(np.float32)
>>> labels = np.array([[1, 2]]).astype(np.int32)
>>> input_length = np.array([T] * B).astype(np.int32)
>>> label_length = np.array([len(l) for l in labels]).astype(np.int32)
>>> rnnt_loss = ops.RNNTLoss(blank_label=0)
>>> costs, grads = rnnt_loss(Tensor(acts), Tensor(labels), Tensor(input_length), Tensor(label_length))
>>> print(costs.shape)
(1,)
>>> print(grads.shape)
(1, 2, 3, 5)
class tinyms.primitives.ROIAlign(*args, **kwargs)[source]

Computes the Region of Interest (RoI) Align operator.

The operator computes the value of each sampling point by bilinear interpolation from the nearby grid points on the feature map. No quantization is performed on any coordinates involved in the RoI, its bins, or the sampling points. The details of (RoI) Align operator are described in Mask R-CNN.

Parameters
  • pooled_height (int) – The output features height.

  • pooled_width (int) – The output features width.

  • spatial_scale (float) – A scaling factor that maps the raw image coordinates to the input feature map coordinates. Suppose the height of a RoI is ori_h in the raw image and fea_h in the input feature map, the spatial_scale must be fea_h / ori_h.

  • sample_num (int) – Number of sampling points. Default: 2.

  • roi_end_mode (int) – Number must be 0 or 1. Default: 1.

Inputs:
  • features (Tensor) - The input features, whose shape must be (N, C, H, W).

  • rois (Tensor) - The shape is (rois_n, 5). With data type of float16 or float32. rois_n represents the number of RoIs. The size of the second dimension must be 5 and the 5 columns are (image_index, top_left_x, top_left_y, bottom_right_x, bottom_right_y). image_index represents the index of image. top_left_x and top_left_y represent the x, y coordinates of the top left corner of corresponding RoI, respectively. bottom_right_x and bottom_right_y represent the x, y coordinates of the bottom right corner of corresponding RoI, respectively.

Outputs:

Tensor, the shape is (rois_n, C, pooled_height, pooled_width).

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor(np.array([[[[1., 2.], [3., 4.]]]]), mindspore.float32)
>>> rois = Tensor(np.array([[0, 0.2, 0.3, 0.2, 0.3]]), mindspore.float32)
>>> roi_align = ops.ROIAlign(2, 2, 0.5, 2)
>>> output = roi_align(input_tensor, rois)
>>> print(output)
[[[[1.775 2.025]
   [2.275 2.525]]]]
class tinyms.primitives.RandomCategorical(*args, **kwargs)[source]

Generates random samples from a given categorical distribution tensor.

Parameters

dtype (mindspore.dtype) – The type of output. Its value must be one of mindspore.int16, mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Inputs:
  • logits (Tensor) - The input tensor. 2-D Tensor with shape [batch_size, num_classes].

  • num_sample (int) - Number of samples to be drawn. Only constant values are allowed.

  • seed (int) - Random seed. Default: 0. Only constant values are allowed.

Outputs:
  • output (Tensor) - The output Tensor with shape [batch_size, num_samples].

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...   def __init__(self, num_sample):
...     super(Net, self).__init__()
...     self.random_categorical = ops.RandomCategorical(mindspore.int64)
...     self.num_sample = num_sample
...   def construct(self, logits, seed=0):
...     return self.random_categorical(logits, self.num_sample, seed)
...
>>> x = np.random.random((10, 5)).astype(np.float32)
>>> net = Net(8)
>>> output = net(Tensor(x))
>>> result = output.shape
>>> print(result)
(10, 8)
class tinyms.primitives.RandomChoiceWithMask(*args, **kwargs)[source]

Generates a random sample as index tensor with a mask tensor from a given tensor.

The input must be a tensor of rank not less than 1. If its rank is greater than or equal to 2, the first dimension specifies the number of samples. The index tensor and the mask tensor have the fixed shapes. The index tensor denotes the index of the nonzero sample, while the mask tensor denotes which elements in the index tensor are valid.

Parameters
  • count (int) – Number of items expected to get and the number must be greater than 0. Default: 256.

  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • input_x (Tensor[bool]) - The input tensor. The input tensor rank must be greater than or equal to 1 and less than or equal to 5.

Outputs:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output shape is 2-D.

  • mask (Tensor) - The output shape is 1-D.

Supported Platforms:

Ascend GPU

Examples

>>> rnd_choice_mask = ops.RandomChoiceWithMask()
>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool_))
>>> output_y, output_mask = rnd_choice_mask(input_x)
>>> result = output_y.shape
>>> print(result)
(256, 2)
>>> result = output_mask.shape
>>> print(result)
(256,)
class tinyms.primitives.Randperm(*args, **kwargs)[source]

Generates random samples from 0 to n-1.

Parameters
  • max_length (int) – Number of items expected to get and the number must be greater than 0. Default: 1.

  • pad (int) – The pad value to be filled. Default: -1.

  • dtype (mindspore.dtype) – The type of output. Default: mindspore.int32.

Inputs:
  • n (Tensor[int]) - The input tensor with shape: (1,) and the number must be in (0, max_length]. Default: 1.

Outputs:
  • output (Tensor) - The output Tensor with shape: (max_length,) and type: dtype.

Supported Platforms:

Ascend

Examples

>>> randperm = ops.Randperm(max_length=30, pad=-1)
>>> n = Tensor([20], dtype=mindspore.int32)
>>> output = randperm(n)
>>> print(output)
[15 6 11 19 14 16 9 5 13 18 4 10 8 0 17 2 1 12 3 7
 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
class tinyms.primitives.Rank(*args, **kwargs)[source]

Returns the rank of a tensor.

Returns a 0-D int32 Tensor representing the rank of input; the rank of a tensor is the number of indices required to uniquely select each element of the tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. 0-D int32 Tensor representing the rank of input, i.e., \(R\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> rank = ops.Rank()
>>> output = rank(input_tensor)
>>> print(output)
2
class tinyms.primitives.ReLU(*args, **kwargs)[source]

Computes ReLU (Rectified Linear Unit) of input tensors element-wise.

It returns \(\max(x,\ 0)\) element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu = ops.ReLU()
>>> output = relu(input_x)
>>> print(output)
[[0. 4. 0.]
 [2. 0. 9.]]
class tinyms.primitives.ReLU6(*args, **kwargs)[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Inputs:
  • input_x (Tensor) - The input tensor, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu6 = ops.ReLU6()
>>> result = relu6(input_x)
>>> print(result)
[[0. 4. 0.]
 [2. 0. 6.]]
class tinyms.primitives.ReLUV2(*args, **kwargs)[source]

Computes ReLU (Rectified Linear Unit) of input tensors element-wise.

It returns \(\max(x,\ 0)\) element-wise.

Inputs:
  • input_x (Tensor) - The input tensor must be a 4-D tensor.

Outputs:
  • output (Tensor) - Has the same type and shape as the input_x.

  • mask (Tensor) - A tensor whose data type must be uint8.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[[[1, -2], [-3, 4]], [[-5, 6], [7, -8]]]]), mindspore.float32)
>>> relu_v2 = ops.ReLUV2()
>>> output, mask= relu_v2(input_x)
>>> print(output)
[[[[1. 0.]
   [0. 4.]]
  [[0. 6.]
   [7. 0.]]]]
>>> print(mask)
[[[[[1 0]
    [2 0]]
   [[2 0]
    [1 0]]]]]
class tinyms.primitives.RealDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor in floating-point type element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> realdiv = ops.RealDiv()
>>> output = realdiv(input_x, input_y)
>>> print(output)
[0.25 0.4  0.5 ]
class tinyms.primitives.Reciprocal(*args, **kwargs)[source]

Returns reciprocal of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> reciprocal = ops.Reciprocal()
>>> output = reciprocal(input_x)
>>> print(output)
[1.   0.5  0.25]
class tinyms.primitives.ReduceAll(*args, **kwargs)[source]

Reduces a dimension of a tensor by the “logical AND” of all elements in the dimension.

The dtype of the tensor to be reduced is bool.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[bool]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical and” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAll(keep_dims=True)
>>> output = op(input_x, 1)
>>> print(output)
[[False]
 [ True]]
class tinyms.primitives.ReduceAny(*args, **kwargs)[source]

Reduces a dimension of a tensor by the “logical OR” of all elements in the dimension.

The dtype of the tensor to be reduced is bool.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[bool]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical or” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAny(keep_dims=True)
>>> output = op(input_x, 1)
>>> print(output)
[[ True]
 [ True]]
class tinyms.primitives.ReduceMax(*args, **kwargs)[source]

Reduces a dimension of a tensor by the maximum value in this dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, has the same dtype as the input_x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMax(keep_dims=True)
>>> output = op(input_x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
class tinyms.primitives.ReduceMean(*args, **kwargs)[source]

Reduces a dimension of a tensor by averaging all elements in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, has the same dtype as the input_x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMean(keep_dims=True)
>>> output = op(input_x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
class tinyms.primitives.ReduceMin(*args, **kwargs)[source]

Reduces a dimension of a tensor by the minimum value in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, has the same dtype as the input_x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMin(keep_dims=True)
>>> output = op(input_x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
class tinyms.primitives.ReduceOp[source]

Operation options for reducing tensors.

There are four kinds of operation options, “SUM”, “MAX”, “MIN”, and “PROD”.

  • SUM: Take the sum.

  • MAX: Take the maximum.

  • MIN: Take the minimum.

  • PROD: Take the product.

Supported Platforms:

Ascend GPU
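
The enum values are passed to the communication operators; a minimal hedged sketch (running it requires a multi-device setup and init() of the communication backend, as in the ReduceScatter example below):

>>> from mindspore.ops.operations.comm_ops import ReduceOp
>>> import mindspore.ops.operations as ops
>>> all_reduce = ops.AllReduce(ReduceOp.SUM)          # sum the tensor across all devices in the group
>>> reduce_scatter = ops.ReduceScatter(ReduceOp.MAX)  # element-wise maximum, then scatter to each device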

class tinyms.primitives.ReduceProd(*args, **kwargs)[source]

Reduces a dimension of a tensor by multiplying all elements in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, has the same dtype as the input_x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceProd(keep_dims=True)
>>> output = op(input_x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
class tinyms.primitives.ReduceScatter(*args, **kwargs)[source]

Reduces and scatters tensors from the specified communication group.

Note

The back propagation of the op is not supported yet. Stay tuned for more. The tensors must have the same shape and format in all processes of the collection.

Parameters
  • op (str) – Specifies an operation used for element-wise reductions, like SUM, MAX, AVG. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If any of operation and group is not a string.

  • ValueError – If the first dimension of the input cannot be divided by the rank size.

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with two devices. Refer to the tutorial > Distributed Training on mindspore.cn.
>>> from mindspore import Tensor, context
>>> from mindspore.communication import init
>>> from mindspore.ops.operations.comm_ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as ops
>>> import numpy as np
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.reducescatter = ops.ReduceScatter(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.reducescatter(x)
...
>>> input_ = Tensor(np.ones([8, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.ReduceSum(*args, **kwargs)[source]

Reduces a dimension of a tensor by summing all elements in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(input_x), rank(input_x)).

Outputs:

Tensor, has the same dtype as the input_x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceSum(keep_dims=True)
>>> output = op(input_x, 1)
>>> output.shape
(3, 1, 5, 6)
class tinyms.primitives.Reshape(*args, **kwargs)[source]

Reshapes the input tensor with the same values based on a given shape tuple.

Raises

ValueError – Given a shape tuple, if it contains more than one -1; or if the product of its elements is less than or equal to 0 or does not evenly divide the number of elements of the input tensor; or if the shape does not match the input’s array size.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_shape (tuple[int]) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((y_1, y_2, ..., y_S)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> reshape = ops.Reshape()
>>> output = reshape(input_tensor, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
class tinyms.primitives.ResizeBilinear(*args, **kwargs)[source]

Resizes an image to a certain size using the bilinear interpolation.

The resizing only affects the lower two dimensions which represent the height and width. The input images can be represented by different data types, but the data types of output images are always float32.

Parameters
  • size (Union[tuple[int], list[int]]) – A tuple or list of 2 int elements (new_height, new_width), the new size of the images.

  • align_corners (bool) – If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Default: False.

Inputs:
  • input (Tensor) - Image to be resized. Input images must be a 4-D tensor with shape \((batch, channels, height, width)\), with data type of float32 or float16.

Outputs:

Tensor, resized image. 4-D with shape [batch, channels, new_height, new_width] in float32.

Supported Platforms:

Ascend

Examples

>>> tensor = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.float32)
>>> resize_bilinear = ops.ResizeBilinear((5, 5))
>>> output = resize_bilinear(tensor)
>>> print(output)
[[[[1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]]]]
class tinyms.primitives.ResizeNearestNeighbor(*args, **kwargs)[source]

Resizes the input tensor by using the nearest neighbor algorithm.

Resizes the input tensor to a given size by using the nearest neighbor algorithm. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters
  • size (Union[tuple, list]) – The target size. The dimension of size must be 2.

  • align_corners (bool) – Whether the centers of the 4 corner pixels of the input and output tensors are aligned. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor. The shape of the tensor is \((N, C, H, W)\).

Outputs:

Tensor, the shape of the output tensor is \((N, C, NEW\_H, NEW\_W)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor(np.array([[[[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]]]), mindspore.float32)
>>> resize = ops.ResizeNearestNeighbor((2, 2))
>>> output = resize(input_tensor)
>>> print(output)
[[[[-0.1  0.3]
   [ 0.4  0.5]]]]
class tinyms.primitives.ReverseSequence(*args, **kwargs)[source]

Reverses variable length slices.

Parameters
  • seq_dim (int) – The dimension where reversal is performed. Required.

  • batch_dim (int) – The input is sliced in this dimension. Default: 0.

Inputs:
  • x (Tensor) - The input to reverse, supporting all number types including bool.

  • seq_lengths (Tensor) - Must be a 1-D vector with int32 or int64 types.

Outputs:

Reversed tensor with the same shape and data type as input.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 2. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
class tinyms.primitives.ReverseV2(*args, **kwargs)[source]

Reverses specific dimensions of a tensor.

Parameters

axis (Union[tuple(int), list(int)]) – The indices of the dimensions to reverse.

Inputs:
  • input_x (Tensor) - The target tensor.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> op = ops.ReverseV2(axis=[1])
>>> output = op(input_x)
>>> print(output)
[[4 3 2 1]
 [8 7 6 5]]
class tinyms.primitives.Rint(*args, **kwargs)[source]

Returns an integer that is closest to x element-wise.

Inputs:
  • input_x (Tensor) - The target tensor, which must be one of the following types: float16, float32.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([-1.6, -0.1, 1.5, 2.0]), mindspore.float32)
>>> op = ops.Rint()
>>> output = op(input_x)
>>> print(output)
[-2.  0.  2.  2.]
class tinyms.primitives.Round(*args, **kwargs)[source]

Rounds each element of a tensor to the nearest integer element-wise, with halfway cases rounded to the nearest even integer (round half to even).

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and type as the input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> round = ops.Round()
>>> output = round(input_x)
>>> print(output)
[ 1.  2.  2.  2. -4.]
class tinyms.primitives.Rsqrt(*args, **kwargs)[source]

Computes reciprocal of square root of input tensor element-wise.

Inputs:
  • input_x (Tensor) - The input of Rsqrt. Each element must be a non-negative number.

Outputs:

Tensor, has the same type and shape as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor([[4, 4], [9, 9]], mindspore.float32)
>>> rsqrt = ops.Rsqrt()
>>> output = rsqrt(input_tensor)
>>> print(output)
[[0.5        0.5       ]
 [0.33333334 0.33333334]]
class tinyms.primitives.SGD(*args, **kwargs)[source]

Computes the stochastic gradient descent. Momentum is optional.

Nesterov momentum is based on the formula from paper On the importance of initialization and momentum in deep learning.

Note

For details, please refer to nn.SGD source code.

Parameters
  • dampening (float) – The dampening for momentum. Default: 0.0.

  • weight_decay (float) – Weight decay (L2 penalty). Default: 0.0.

  • nesterov (bool) – Enable Nesterov momentum. Default: False.

Inputs:
  • parameters (Tensor) - Parameters to be updated. With float16 or float32 data type.

  • gradient (Tensor) - Gradient, with float16 or float32 data type.

  • learning_rate (Tensor) - Learning rate, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32)

  • accum (Tensor) - Accum(velocity) to be updated. With float16 or float32 data type.

  • momentum (Tensor) - Momentum, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32).

  • stat (Tensor) - States to be updated with the same shape as gradient, with float16 or float32 data type.

Outputs:

Tensor, parameters to be updated.

Supported Platforms:

Ascend GPU

Examples

>>> sgd = ops.SGD()
>>> parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
>>> gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
>>> learning_rate = Tensor(0.01, mindspore.float32)
>>> accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
>>> momentum = Tensor(0.1, mindspore.float32)
>>> stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
>>> output = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
>>> print(output[0])
(Tensor(shape=[4], dtype=Float32, value= [ 1.98989999e+00, -4.90300000e-01,  1.69520009e+00,  3.98009992e+00]),)
class tinyms.primitives.SameTypeShape(*args, **kwargs)[source]

Checks whether the data type and shape of two tensors are the same.

Raises
  • TypeError – If the data types of two tensors are not the same.

  • ValueError – If the shapes of two tensors are not the same.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_y (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_S)\).

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_R)\), if data type and shape of input_x and input_y are the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> input_y = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.SameTypeShape()(input_x, input_y)
>>> print(output)
[[2. 2.]
 [2. 2.]]
class tinyms.primitives.ScalarCast(*args, **kwargs)[source]

Casts the input scalar to another type.

Inputs:
  • input_x (scalar) - The input scalar. Only constant value is allowed.

  • input_y (mindspore.dtype) - The type to be cast. Only constant value is allowed.

Outputs:

Scalar. The type is the same as the python type corresponding to input_y.

Supported Platforms:

Ascend GPU CPU

Examples

>>> scalar_cast = ops.ScalarCast()
>>> output = scalar_cast(255.0, mindspore.int32)
>>> print(output)
255
class tinyms.primitives.ScalarSummary(*args, **kwargs)[source]

Outputs a scalar to a protocol buffer through a scalar summary operator.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of scalar, and the shape of value must be [] or [1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.ScalarSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         name = "x"
...         self.summary(name, x)
...         x = self.add(x, y)
...         return x
...
class tinyms.primitives.ScalarToArray(*args, **kwargs)[source]

Converts a scalar to a Tensor.

Inputs:
  • input_x (Union[int, float]) - The input is a scalar. Only constant value is allowed.

Outputs:

Tensor. 0-D Tensor and the content is the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScalarToArray()
>>> data = 1.0
>>> output = op(data)
>>> print(output)
1.0
class tinyms.primitives.ScalarToTensor(*args, **kwargs)[source]

Converts a scalar to a Tensor, and converts the data type to the specified type.

Inputs:
  • input_x (Union[int, float]) - The input is a scalar. Only constant value is allowed.

  • dtype (mindspore.dtype) - The target data type. Default: mindspore.float32. Only constant value is allowed.

Outputs:

Tensor. 0-D Tensor and the content is the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScalarToTensor()
>>> data = 1
>>> output = op(data, mindspore.float32)
>>> print(output)
1.0
class tinyms.primitives.ScatterAdd(*args, **kwargs)[source]

Updates the value of the input tensor through the addition operation.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] \mathrel{+}= updates[i, ..., j, :] \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do add operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the add operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[1. 1. 1.]
 [3. 3. 3.]]
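
To make the accumulation explicit, here is a plain NumPy sketch of the update rule above (not the operator itself): index 1 appears three times in indices, so row 1 receives three unit updates.

>>> import numpy as np
>>> x = np.zeros((2, 3), dtype=np.float32)
>>> indices = np.array([[0, 1], [1, 1]])
>>> updates = np.ones((2, 2, 3), dtype=np.float32)
>>> for i in range(indices.shape[0]):          # for each i, j in indices.shape
...     for j in range(indices.shape[1]):
...         x[indices[i, j], :] += updates[i, j, :]
>>> print(x)
[[1. 1. 1.]
 [3. 3. 3.]]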
class tinyms.primitives.ScatterDiv(*args, **kwargs)[source]

Updates the value of the input tensor through the divide operation.

Using given values to update tensor value through the div operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] \mathrel{/}= updates[i, ..., j, :] \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do div operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the div operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
class tinyms.primitives.ScatterMax(*args, **kwargs)[source]

Updates the value of the input tensor through the maximum operation.

Using given values to update tensor value through the max operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] = max(input\_x[indices[i, ..., j], :], updates[i, ..., j, :]) \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do max operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the maximum operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> scatter_max = ops.ScatterMax()
>>> output = scatter_max(input_x, indices, updates)
>>> print(output)
[[88. 88. 88.]
 [88. 88. 88.]]
class tinyms.primitives.ScatterMin(*args, **kwargs)[source]

Updates the value of the input tensor through the minimum operation.

Using given values to update tensor value through the min operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] = min(input\_x[indices[i, ..., j], :], updates[i, ..., j, :]) \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do min operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the minimum operation with input_x, the data type is the same as input_x, and the shape is indices_shape + x_shape[1:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_min = ops.ScatterMin()
>>> output = scatter_min(input_x, indices, update)
>>> print(output)
[[0. 1. 1.]
 [0. 0. 0.]]
class tinyms.primitives.ScatterMul(*args, **kwargs)[source]

Updates the value of the input tensor through the multiply operation.

Using given values to update tensor value through the mul operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] \mathrel{*}= updates[i, ..., j, :] \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do mul operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the multiplication operation with input_x, the data type is the same as input_x, and the shape is indices_shape + x_shape[1:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
 [4. 4. 4.]]
class tinyms.primitives.ScatterNd(*args, **kwargs)[source]

Scatters a tensor into a new tensor depending on the specified indices.

Creates an empty tensor with the given shape, and set values by scattering the update tensor depending on indices.

The empty tensor has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of the empty tensor.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, shape_N, ..., shape_{P-1})\).

Inputs:
  • indices (Tensor) - The index of scattering in the new tensor with int32 data type. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The source Tensor to be scattered. It has shape indices_shape[:-1] + shape[indices_shape[-1]:].

  • shape (tuple[int]) - Define the shape of the output tensor, has the same type as indices.

Outputs:

Tensor, the new tensor, has the same type as updates and the same shape as shape.

Supported Platforms:

Ascend GPU

Examples

>>> op = ops.ScatterNd()
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = op(indices, updates, shape)
>>> print(output)
[[0.  3.2 0. ]
 [0.  1.1 0. ]
 [0.  0.  0. ]]
class tinyms.primitives.ScatterNdAdd(*args, **kwargs)[source]

Applies sparse addition to individual values or slices in a tensor.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do add operation whose data type must be mindspore.int32. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The tensor that performs the addition operation with input_x, the data type is the same as input_x, and the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_add = ops.ScatterNdAdd()
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
class tinyms.primitives.ScatterNdSub(*args, **kwargs)[source]

Applies sparse subtraction to individual values or slices in a tensor.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do the subtraction operation whose data type must be mindspore.int32. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The tensor that performs the subtraction operation with input_x, the data type is the same as input_x, the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_sub = ops.ScatterNdSub()
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[ 1. -6. -3.  4. -2.  6.  7. -1.]
class tinyms.primitives.ScatterNdUpdate(*args, **kwargs)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter.

  • indices (Tensor) - The index of input tensor, with int32 data type. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The tensor to be updated to the input tensor, has the same type as input. The shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.ScatterNdUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]
class tinyms.primitives.ScatterNonAliasingAdd(*args, **kwargs)[source]

Applies sparse addition to the input using individual values or slices.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • input_x (Parameter) - The target parameter. The data type must be float16, float32 or int32.

  • indices (Tensor) - The index to perform the addition operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the addition operation with input_x, the data type is the same as input_x, the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_non_aliasing_add = ops.ScatterNonAliasingAdd()
>>> output = scatter_non_aliasing_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
class tinyms.primitives.ScatterSub(*args, **kwargs)[source]

Updates the value of the input tensor through the subtraction operation.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] \mathrel{-}= updates[i, ..., j, :] \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to perform the subtraction operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the subtraction operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Parameter, the updated input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[-1. -1. -1.]
 [-1. -1. -1.]]
class tinyms.primitives.ScatterUpdate(*args, **kwargs)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

\[\begin{split}\begin{array}{l} \text {for each i, ..., j in indices.shape:} \\ input\_x[indices[i, ..., j], :] = updates[i, ..., j, :] \end{array}\end{split}\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter.

  • indices (Tensor) - The index of input tensor. With int32 data type. If there are duplicates in indices, the order for updating is undefined.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape = indices.shape + input_x.shape[1:].

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
>>> updates = Tensor(np_updates, mindspore.float32)
>>> op = ops.ScatterUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[2.  1.2 1. ]
 [3.  1.2 1. ]]
class tinyms.primitives.Select(*args, **kwargs)[source]

Returns the selected elements, either from input \(x\) or input \(y\), depending on the condition.

If both \(x\) and \(y\) are None, the operation returns the coordinates of the true elements in the condition as a two-dimensional tensor, where the first dimension (rows) is the number of true elements and the second dimension (columns) holds their coordinates. Keep in mind that the shape of this output varies with how many true values are in the condition; indexes are output in row-first order.

If neither is None, \(x\) and \(y\) must have the same shape. If \(x\) and \(y\) are scalars, the conditional tensor must be a scalar. If \(x\) and \(y\) are higher-dimensional tensors, the condition must be a vector whose size matches the first dimension of \(x\), or must have the same shape as \(y\).

The conditional tensor acts as a mask that determines, based on the value of each of its elements, whether the corresponding element / row of the output is selected from \(x\) (if true) or from \(y\) (if false).

If the condition is a vector while \(x\) and \(y\) are higher-dimensional matrices, it chooses which rows (the outermost dimension) to copy from \(x\) or \(y\). If the condition has the same shape as \(x\) and \(y\), the choice is made element by element.

Inputs:
  • input_cond (Tensor[bool]) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The condition tensor, decides which element is chosen.

  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The first input tensor.

  • input_y (Tensor) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The second input tensor.

Outputs:

Tensor, has the same shape as input_x. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> select = ops.Select()
>>> input_cond = Tensor([True, False])
>>> input_x = Tensor([2,3], mindspore.float32)
>>> input_y = Tensor([1,2], mindspore.float32)
>>> output = select(input_cond, input_x, input_y)
>>> print(output)
[2. 2.]
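
The same element-wise selection rule can be sketched with NumPy's np.where (an illustrative analogue, not the operator itself), here on 2-D inputs:

>>> import numpy as np
>>> cond = np.array([[True, False], [False, True]])
>>> x = np.array([[1, 2], [3, 4]], np.float32)
>>> y = np.array([[5, 6], [7, 8]], np.float32)
>>> print(np.where(cond, x, y))
[[1. 6.]
 [7. 4.]]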
class tinyms.primitives.Shape(*args, **kwargs)[source]

Returns the shape of the input tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[int], the output tuple is constructed by multiple integers, \((x_1, x_2, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.Shape()
>>> output = shape(input_tensor)
>>> print(output)
(3, 2, 1)
class tinyms.primitives.Sigmoid(*args, **kwargs)[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)},\]

where \(x_i\) is the element of the input.

Inputs:
  • input_x (Tensor) - The input of Sigmoid, data type must be float16 or float32.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> sigmoid = ops.Sigmoid()
>>> output = sigmoid(input_x)
>>> print(output)
[0.7310586  0.880797   0.95257413 0.98201376 0.9933072 ]
class tinyms.primitives.SigmoidCrossEntropyWithLogits(*args, **kwargs)[source]

Uses the given logits to compute sigmoid cross entropy.

Sets input logits as X, input label as Y, output as loss. Then,

\[p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}}\]
\[loss_{ij} = -[Y_{ij} * ln(p_{ij}) + (1 - Y_{ij})ln(1 - p_{ij})]\]
Inputs:
  • logits (Tensor) - Input logits.

  • label (Tensor) - Ground truth label.

Outputs:

Tensor, with the same shape and type as input logits.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> labels = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> sigmoid = ops.SigmoidCrossEntropyWithLogits()
>>> output = sigmoid(logits, labels)
>>> print(output)
[[ 0.6111007   0.5032824   0.26318604]
 [ 0.58439666  0.5530153  -0.4368139 ]]
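
As a quick check, the formulas above can be reproduced in NumPy (an illustrative sketch, not the operator implementation); the first entry of the example works out to:

>>> import numpy as np
>>> logits = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], np.float32)
>>> labels = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]], np.float32)
>>> p = 1.0 / (1.0 + np.exp(-logits))
>>> loss = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
>>> print(round(float(loss[0, 0]), 4))
0.6111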
class tinyms.primitives.Sign(*args, **kwargs)[source]

Performs sign on the tensor element-wise.

Note

\[sign(x) = \begin{cases} -1, &if\ x < 0 \cr 0, &if\ x = 0 \cr 1, &if\ x > 0\end{cases}\]
Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and type as the input_x.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[2.0, 0.0, -1.0]]), mindspore.float32)
>>> sign = ops.Sign()
>>> output = sign(input_x)
>>> print(output)
[[ 1.  0. -1.]]
class tinyms.primitives.Sin(*args, **kwargs)[source]

Computes sine of the input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend GPU

Examples

>>> sin = ops.Sin()
>>> input_x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sin(input_x)
>>> print(output)
[0.5810352  0.27635565 0.41687083 0.5810352 ]
class tinyms.primitives.Sinh(*args, **kwargs)[source]

Computes hyperbolic sine of the input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> sinh = ops.Sinh()
>>> input_x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sinh(input_x)
>>> print(output)
[0.6604918  0.28367308 0.44337422 0.6604918 ]
class tinyms.primitives.Size(*args, **kwargs)[source]

Returns the size of a tensor.

Returns an int scalar representing the elements size of input, the total number of elements in the tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

int, a scalar representing the number of elements in input_x, \(size=x_1*x_2*...x_R\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> size = ops.Size()
>>> output = size(input_tensor)
>>> print(output)
4
class tinyms.primitives.Slice(*args, **kwargs)[source]

Slices a tensor in the specified shape.

Inputs:
  • x (Tensor): The target tensor.

  • begin (tuple, list): The beginning of the slice. Only constant value is allowed.

  • size (tuple, list): The size of the slice. Only constant value is allowed.

Outputs:

Tensor, the shape is the same as the input size, and the data type is the same as the input x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
...                         [[3, 3, 3], [4, 4, 4]],
...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> slice = ops.Slice()
>>> output = slice(data, (1, 0, 0), (1, 1, 3))
>>> print(output)
[[[3 3 3]]]
class tinyms.primitives.SmoothL1Loss(*args, **kwargs)[source]

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a Loss similar to MSELoss but less sensitive to outliers as described in the Fast R-CNN by Ross Girshick.

Note

Sets input prediction as X, input target as Y, output as loss. Then,

\[\text{SmoothL1Loss} = \begin{cases} \frac{0.5 x^{2}}{\text{beta}}, &if \left |x \right | < \text{beta} \cr \left |x \right|-0.5 \text{beta}, &\text{otherwise}\end{cases}\]
Parameters

beta (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.

Inputs:
  • prediction (Tensor) - Predict data. Data type must be float16 or float32.

  • target (Tensor) - Ground truth data, with the same type and shape as prediction.

Outputs:

Tensor, with the same type and shape as prediction.

Supported Platforms:

Ascend GPU

Examples

>>> loss = ops.SmoothL1Loss()
>>> input_data = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> target_data = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(input_data, target_data)
>>> print(output)
[0.  0.  0.5]
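
With the default beta of 1.0, the piecewise formula above reproduces this result in plain NumPy (illustrative sketch only):

>>> import numpy as np
>>> beta = 1.0
>>> diff = np.array([1, 2, 3], np.float32) - np.array([1, 2, 2], np.float32)
>>> loss = np.where(np.abs(diff) < beta, 0.5 * diff * diff / beta, np.abs(diff) - 0.5 * beta)
>>> print(loss)
[0.  0.  0.5]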
class tinyms.primitives.Softmax(*args, **kwargs)[source]

Softmax operation.

Applies the Softmax operation to the input tensor on the specified axis. Suppose a slice along the given axis is \(x\); then for each element \(x_i\), the Softmax function is shown as follows:

\[\text{output}(x_i) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)},\]

where \(N\) is the length of the tensor.

Parameters

axis (Union[int, tuple]) – The axis to perform the Softmax operation. Default: -1.

Inputs:
  • logits (Tensor) - The input of Softmax, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the logits.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softmax = ops.Softmax()
>>> output = softmax(input_x)
>>> print(output)
[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
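
The same values can be obtained directly from the formula with a short NumPy sketch (illustrative only; subtracting the maximum is a common numerical-stability trick and does not change the result):

>>> import numpy as np
>>> x = np.array([1, 2, 3, 4, 5], np.float32)
>>> e = np.exp(x - x.max())
>>> probs = e / e.sum()   # approximately [0.0117 0.0317 0.0861 0.2341 0.6364]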
class tinyms.primitives.SoftmaxCrossEntropyWithLogits(*args, **kwargs)[source]

Gets the softmax cross-entropy value between logits and labels with one-hot encoding.

Note

Sets input logits as X, input label as Y, output as loss. Then,

\[p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}\]
\[loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})}\]
Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N, C)\), and has the same data type as logits.

Outputs:

Tuple of 2 tensors, the loss shape is (N,), and the dlogits with the same shape as logits.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
>>> softmax_cross = ops.SoftmaxCrossEntropyWithLogits()
>>> loss, dlogits = softmax_cross(logits, labels)
>>> print(loss)
[0.5899297  0.52374405]
>>> print(dlogits)
[[ 0.02760027  0.20393994  0.01015357  0.20393994 -0.44563377]
 [ 0.08015892  0.02948882  0.08015892 -0.4077012   0.21789455]]
class tinyms.primitives.Softplus(*args, **kwargs)[source]

Softplus activation function.

Softplus is a smooth approximation to the ReLU function. The function is shown as follows:

\[\text{output} = \log(1 + \exp(\text{input_x})),\]
Inputs:
  • input_x (Tensor) - The input tensor whose data type must be float.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softplus = ops.Softplus()
>>> output = softplus(input_x)
>>> print(output)
[1.3132615 2.126928  3.0485873 4.01815   5.0067153]
class tinyms.primitives.Softsign(*args, **kwargs)[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{ 1 + |x|}\]
Inputs:
  • input_x (Tensor) - The input tensor whose data type must be float16 or float32.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> softsign = ops.Softsign()
>>> output = softsign(input_x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
class tinyms.primitives.Sort(*args, **kwargs)[source]

Sorts the elements of the input tensor along a given dimension in ascending order by value.

Parameters
  • axis (int) – The dimension to sort along. Default: -1.

  • descending (bool) – Controls the sorting order. If descending is True then the elements are sorted in descending order by value. Default: False.

Inputs:
  • x (Tensor) - The input to sort, with float16 or float32 data type.

Outputs:
  • y1 (Tensor) - A tensor whose values are the sorted values, with the same shape and data type as input.

  • y2 (Tensor) - The indices of the elements in the original input tensor. Data type is int32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> sort = ops.Sort()
>>> output = sort(x)
>>> print(output)
(Tensor(shape=[3, 3], dtype=Float16, value=
[[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
 [ 3.0000e+00,  5.0000e+00,  9.0000e+00],
 [ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
[[2, 1, 0],
 [2, 0, 1],
 [0, 1, 2]]))
class tinyms.primitives.SpaceToBatch(*args, **kwargs)[source]

Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide spatial dimensions (H, W) into blocks with block_size, the output tensor’s H and W dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and the square of block_size. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters
  • block_size (int) – The block size of dividing blocks, with value greater than or equal to 2.

  • paddings (Union[tuple, list]) – The padding values for the H and W dimensions, containing 2 lists. Each list contains 2 integers. All values must be greater than or equal to 0. paddings[i] specifies the paddings for the spatial dimension i, which corresponds to the input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_size.

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor.

Outputs:

Tensor, the output tensor with the same data type as input. Assume input shape is \((n, c, h, w)\) with \(block\_size\) and \(paddings\). The shape of the output tensor will be \((n', c', h', w')\), where

\(n' = n*(block\_size*block\_size)\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_size\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_size\)
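
A plain Python sketch of these shape formulas (the helper name is illustrative, not part of the operator API) confirms the example below, where a \((1, 1, 2, 2)\) input with block_size 2 and zero paddings becomes a \((4, 1, 1, 1)\) output:

>>> def space_to_batch_shape(n, c, h, w, block_size, paddings):
...     return (n * block_size * block_size, c,
...             (h + paddings[0][0] + paddings[0][1]) // block_size,
...             (w + paddings[1][0] + paddings[1][1]) // block_size)
...
>>> print(space_to_batch_shape(1, 1, 2, 2, 2, [[0, 0], [0, 0]]))
(4, 1, 1, 1)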

Supported Platforms:

Ascend

Examples

>>> block_size = 2
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch = ops.SpaceToBatch(block_size, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
class tinyms.primitives.SpaceToBatchND(*args, **kwargs)[source]

Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide spatial dimensions (H, W) into blocks with block_shape, the output tensor’s H and W dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and the product of block_shape. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters
  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing blocks, with all values greater than 1. If block_shape is a tuple or list, its length M corresponds to the number of spatial dimensions. If block_shape is an int, the block size of all M dimensions is the same and equal to block_shape. M must be 2.

  • paddings (Union[tuple, list]) – The padding values for the H and W dimensions, containing 2 lists. Each list contains 2 integers. All values must be greater than or equal to 0. paddings[i] specifies the paddings for the spatial dimension i, which corresponds to the input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_shape[i].

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor.

Outputs:

Tensor, the output tensor with the same data type as input. Assume input shape is \((n, c, h, w)\) with \(block\_shape\) and \(paddings\). The shape of the output tensor will be \((n', c', h', w')\), where

\(n' = n*(block\_shape[0]*block\_shape[1])\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_shape[0]\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_shape[1]\)

Supported Platforms:

Ascend

Examples

>>> block_shape = [2, 2]
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch_nd = ops.SpaceToBatchND(block_shape, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch_nd(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
class tinyms.primitives.SpaceToDepth(*args, **kwargs)[source]

Rearranges blocks of spatial data into depth.

The output tensor’s height dimension is \(height / block\_size\).

The output tensor’s width dimension is \(width / block\_size\).

The depth of output tensor is \(block\_size * block\_size * input\_depth\).

The input tensor’s height and width must be divisible by block_size. The data format is “NCHW”.

Parameters

block_size (int) – The block size used to divide spatial data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor.

Outputs:

Tensor, a 4-D tensor with the same data type as x.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(1,3,2,2), mindspore.float32)
>>> block_size = 2
>>> space_to_depth = ops.SpaceToDepth(block_size)
>>> output = space_to_depth(x)
>>> print(output.shape)
(1, 12, 1, 1)
class tinyms.primitives.SparseApplyAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme.

\[accum += grad * grad\]
\[var -= lr * grad * (1 / sqrt(accum))\]
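
Only the rows of var and accum selected by the indices input are touched; a minimal NumPy sketch of this update rule (illustrative only, the helper name is hypothetical, not the operator implementation) is:

>>> import numpy as np
>>> def sparse_adagrad_update(var, accum, grad, indices, lr):
...     for i, row in enumerate(indices):
...         accum[row] += grad[i] * grad[i]
...         var[row] -= lr * grad[i] / np.sqrt(accum[row])
...     return var, accum
...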

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the data type of lower priority will be converted to the data type of relatively higher priority. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Parameters
  • lr (float) – Learning rate.

  • update_slots (bool) – If True, accum will be updated. Default: True.

  • use_locking (bool) – If true, the var and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • grad (Tensor) - Gradient. The shape must be the same as var’s shape except the first dimension. The gradient has the same data type as var.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The shape of indices must be the same as grad in first dimension, the type must be int32.

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adagrad = ops.SparseApplyAdagrad(lr=1e-8)
...         self.var = Parameter(Tensor(np.ones([1, 1, 1]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([1, 1, 1]).astype(np.float32)), name="accum")
...     def construct(self, grad, indices):
...         out = self.sparse_apply_adagrad(self.var, self.accum, grad, indices)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> grad = Tensor(np.random.rand(1, 1, 1).astype(np.float32))
>>> indices = Tensor([0], mstype.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1, 1], dtype=Float32, value=
[[[1.00000000e+00]]]), Tensor(shape=[1, 1, 1], dtype=Float32, value=
[[[1.00000000e+00]]]))
class tinyms.primitives.SparseApplyAdagradV2(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme, one more epsilon attribute than SparseApplyAdagrad.

\[accum += grad * grad\]
\[var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the data type of lower priority will be converted to the data type of relatively higher priority. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Parameters
  • lr (float) – Learning rate.

  • epsilon (float) – A small value added for numerical stability.

  • use_locking (bool) – If True, the var and accum tensors will be protected from being updated. Default: False.

  • update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • grad (Tensor) - Gradient. The shape must be the same as var’s shape except the first dimension. The gradient has the same data type as var.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The shape of indices must be the same as grad in first dimension, the type must be int32.

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> import mindspore.common.dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adagrad_v2 = ops.SparseApplyAdagradV2(lr=1e-8, epsilon=1e-6)
...         self.var = Parameter(Tensor(np.ones([1, 1, 1]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([1, 1, 1]).astype(np.float32)), name="accum")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_adagrad_v2(self.var, self.accum, grad, indices)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> grad = Tensor(np.random.rand(1, 1, 1).astype(np.float32))
>>> indices = Tensor([0], mstype.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1, 1], dtype=Float32, value=
[[[1.00000000e+00]]]), Tensor(shape=[1, 1, 1], dtype=Float32, value=
[[[1.30119634e+00]]]))
class tinyms.primitives.SparseApplyFtrl(*args, **kwargs)[source]

Updates relevant entries according to the FTRL-proximal scheme.

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the data type of lower priority will be converted to the data type of relatively higher priority. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Parameters
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – Use locks for the updating operation if true. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32.

  • accum (Parameter) - The accumulation to be updated, must be same data type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be the same data type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, for the gradient.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. The shape of indices must be the same as grad in the first dimension. If there are duplicates in indices, the behavior is undefined. The type must be int32 or int64.

Outputs:
  • var (Tensor) - Tensor, has the same shape and data type as var.

  • accum (Tensor) - Tensor, has the same shape and data type as accum.

  • linear (Tensor) - Tensor, has the same shape and data type as linear.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.SparseApplyFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.random.rand(1, 1).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(1, 1).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(1, 1).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> np.random.seed(0)
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.random.rand(1, 1).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[5.48813522e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[7.15189338e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[6.02763355e-01]]))
class tinyms.primitives.SparseApplyFtrlV2(*args, **kwargs)[source]

Updates relevant entries according to the FTRL-proximal scheme. This class has one more attribute, named l2_shrinkage, than class SparseApplyFtrl.

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the data type of lower priority will be converted to the data type of relatively higher priority. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Parameters
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • l2_shrinkage (float) – L2 shrinkage regularization.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – If True, the var and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32.

  • accum (Parameter) - The accumulation to be updated, must be same data type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be same data type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, for the gradient.

  • indices (Tensor) - A vector of indices in the first dimension of var and accum. The shape of indices must be the same as grad in the first dimension. The type must be int32.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - Tensor, has the same shape and data type as var.

  • accum (Tensor) - Tensor, has the same shape and data type as accum.

  • linear (Tensor) - Tensor, has the same shape and data type as linear.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> class SparseApplyFtrlV2Net(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlV2Net, self).__init__()
...         self.sparse_apply_ftrl_v2 = ops.SparseApplyFtrlV2(lr=0.01, l1=0.0, l2=0.0,
...                                                         l2_shrinkage=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.random.rand(1, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(1, 2).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(1, 2).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl_v2(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> np.random.seed(0)
>>> net = SparseApplyFtrlV2Net()
>>> grad = Tensor(np.random.rand(1, 2).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 2], dtype=Float32, value=
[[ 5.48813522e-01,  7.15189338e-01]]), Tensor(shape=[1, 2], dtype=Float32, value=
[[ 6.02763355e-01,  5.44883192e-01]]), Tensor(shape=[1, 2], dtype=Float32, value=
[[ 4.23654795e-01,  6.45894110e-01]]))
class tinyms.primitives.SparseApplyProximalAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the proximal adagrad algorithm. Compared with ApplyProximalAdagrad, an additional index tensor is input.

\[accum += grad * grad\]
\[\text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}}\]
\[var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0)\]
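
For a single indexed row, the three formulas above amount to the following NumPy sketch (illustrative only; the helper name is hypothetical and this is not the operator implementation):

>>> import numpy as np
>>> def proximal_adagrad_row(var_row, accum_row, grad_row, lr, l1, l2):
...     accum_row = accum_row + grad_row * grad_row
...     prox_v = var_row - lr * grad_row / np.sqrt(accum_row)
...     var_row = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0)
...     return var_row, accum_row
...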

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the data type of lower priority will be converted to the data type of relatively higher priority. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Parameters

use_locking (bool) – If true, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float16 or float32.

  • accum (Parameter) - Variable tensor to be updated, has the same dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - A tensor of the same type as var, for the gradient.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64.

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.SparseApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.random.rand(1, 2).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(1, 2).astype(np.float32)), name="accum")
...         self.lr = 0.01
...         self.l1 = 0.0
...         self.l2 = 0.0
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> np.random.seed(0)
>>> net = Net()
>>> grad = Tensor(np.random.rand(1, 2).astype(np.float32))
>>> indices = Tensor(np.ones((1,), np.int32))
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 2], dtype=Float32, value=
[[ 5.48813522e-01,  7.15189338e-01]]), Tensor(shape=[1, 2], dtype=Float32, value=
[[ 6.02763355e-01,  5.44883192e-01]]))
class tinyms.primitives.SparseGatherV2(*args, **kwargs)[source]

Returns a slice of input tensor based on the specified indices and axis.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The original Tensor.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor, must be in the range [0, input_param.shape[axis]).

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> out = ops.SparseGatherV2()(input_params, input_indices, axis)
>>> print(out)
[[2. 7.]
 [4. 54.]
 [2. 55.]]
class tinyms.primitives.SparseSoftmaxCrossEntropyWithLogits(*args, **kwargs)[source]

Computes the softmax cross-entropy value between logits and sparse encoding labels.

Note

Sets input logits as X, input label as Y, output as loss. Then,

\[p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}\]
\[loss_{ij} = \begin{cases} -ln(p_{ij}), &j = y_i \cr -ln(1 - p_{ij}), & j \neq y_i \end{cases}\]
\[loss = \sum_{ij} loss_{ij}\]
Parameters

is_grad (bool) – If true, this operation returns the computed gradient. Default: False.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N)\). Data type must be int32 or int64.

Outputs:

Tensor, if is_grad is False, the output tensor is the value of loss which is a scalar tensor; if is_grad is True, the output tensor is the gradient of input with the same shape as logits.

Supported Platforms:

GPU CPU

Examples

Please refer to the usage in nn.SoftmaxCrossEntropyWithLogits source code.

class tinyms.primitives.SparseToDense(*args, **kwargs)[source]

Converts a sparse representation into a dense tensor.

Inputs:
  • indices (Tensor) - The indices of sparse representation.

  • values (Tensor) - Values corresponding to each row of indices.

  • dense_shape (tuple) - An int tuple which specifies the shape of dense tensor.

Returns

Tensor, the shape of tensor is dense_shape.

Examples

>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=mindspore.float32)
>>> dense_shape = (3, 4)
>>> out = ops.SparseToDense()(indices, values, dense_shape)
>>> print(out)
[[0. 1. 0. 0.]
 [0. 0. 2. 0.]
 [0. 0. 0. 0.]]
class tinyms.primitives.Split(*args, **kwargs)[source]

Splits the input tensor into output_num tensors along the given axis.

Parameters
  • axis (int) – Index of the split position. Default: 0.

  • output_num (int) – The number of output tensors. Must be positive int. Default: 1.

Raises

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)), or if the output_num is less than or equal to 0.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], the shape of each output tensor is the same, which is \((y_1, y_2, ..., y_S)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> split = ops.Split(1, 2)
>>> x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]), Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]))
class tinyms.primitives.Sqrt(*args, **kwargs)[source]

Returns square root of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is number.

Outputs:

Tensor, has the same shape as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> sqrt = ops.Sqrt()
>>> output = sqrt(input_x)
>>> print(output)
[1. 2. 3.]
class tinyms.primitives.Square(*args, **kwargs)[source]

Returns square of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is number.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> square = ops.Square()
>>> output = square(input_x)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.SquareSumAll(*args, **kwargs)[source]

Returns the sum of squares of all elements in each of the two input tensors.

Inputs:
  • input_x1 (Tensor) - The input tensor. The data type must be float16 or float32.

  • input_x2 (Tensor) - The input tensor has the same type and shape as the input_x1.

Note

SquareSumAll only supports float16 and float32 data type.

Outputs:
  • output_y1 (Tensor) - The same type as the input_x1.

  • output_y2 (Tensor) - The same type as the input_x1.

Supported Platforms:

Ascend

Examples

>>> input_x1 = Tensor(np.array([0, 0, 2, 0]), mindspore.float32)
>>> input_x2 = Tensor(np.array([0, 0, 2, 4]), mindspore.float32)
>>> square_sum_all = ops.SquareSumAll()
>>> output = square_sum_all(input_x1, input_x2)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4),
 Tensor(shape=[], dtype=Float32, value= 20))
class tinyms.primitives.SquaredDifference(*args, **kwargs)[source]

Subtracts the second input tensor from the first input tensor element-wise and returns the square of the difference.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is float16, float32, int32 or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is float16, float32, int32 or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([2.0, 4.0, 6.0]), mindspore.float32)
>>> squared_difference = ops.SquaredDifference()
>>> output = squared_difference(input_x, input_y)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.Squeeze(*args, **kwargs)[source]

Returns a tensor with the same type but dimensions of 1 are removed based on axis.

Note

The dimension index starts at 0 and must be in the range [-input.ndim, input.ndim).

Raises

ValueError – If the corresponding dimension of the specified axis does not equal to 1.

Parameters

axis (Union[int, tuple(int)]) – Specifies the dimension indexes of shape to be removed, which will remove all the dimensions that are equal to 1. If specified, it must be int32 or int64. Default: (), an empty tuple.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_S)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> squeeze = ops.Squeeze(2)
>>> output = squeeze(input_tensor)
>>> print(output)
[[1. 1.]
 [1. 1.]
 [1. 1.]]
class tinyms.primitives.Stack(*args, **kwargs)[source]

Stacks a list of tensors along the specified axis.

Stacks the list of input tensors with the same rank R, output is a tensor of rank (R+1).

Given input tensors of shape \((x_1, x_2, ..., x_R)\). Set the number of input tensors as N. If \(0 \le axis\), the shape of the output tensor is \((x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)\).

Parameters

axis (int) – Dimension to stack. Default: 0. Negative values wrap around. The range is [-(R+1), R+1).

Inputs:
  • input_x (Union[tuple, list]) - A Tuple or list of Tensor objects with the same shape and type.

Outputs:

Tensor. A stacked Tensor with the same type as input_x.

Raises
  • TypeError – If the data types of elements in input_x are not the same.

  • ValueError – If the length of input_x is not greater than 1; or if axis is out of the range [-(R+1), R+1); or if the shapes of elements in input_x are not the same.

Supported Platforms:

Ascend GPU

Examples

>>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> stack = ops.Stack()
>>> output = stack([data1, data2])
>>> print(output)
[[0. 1.]
 [2. 3.]]
class tinyms.primitives.StandardLaplace(*args, **kwargs)[source]

Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). It is defined as:

\[\text{f}(x;0,1) = \frac{1}{2}\exp(-|x|),\]
Parameters
  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor, with the shape denoted by the input ‘shape’. The dtype is float32.

Supported Platforms:

Ascend

Examples

>>> shape = (4, 16)
>>> stdlaplace = ops.StandardLaplace(seed=2)
>>> output = stdlaplace(shape)
>>> result = output.shape
>>> print(result)
(4, 16)
class tinyms.primitives.StandardNormal(*args, **kwargs)[source]

Generates random numbers according to the standard Normal (or Gaussian) random number distribution.

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape is the same as the input shape. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (4, 16)
>>> stdnormal = ops.StandardNormal(seed=2)
>>> output = stdnormal(shape)
>>> result = output.shape
>>> print(result)
(4, 16)
class tinyms.primitives.StridedSlice(*args, **kwargs)[source]

Extracts a strided slice of a tensor.

Given an input tensor, this operation extracts a fragment of size (end-begin)/stride from it. Starting from the beginning position, the fragment continues adding stride to the index until all dimensions are not less than the ending position.

Note

The stride may be negative value, which causes reverse slicing. The shape of begin, end and strides must be the same.

Parameters
  • begin_mask (int) – Starting index of the slice. Default: 0.

  • end_mask (int) – Ending index of the slice. Default: 0.

  • ellipsis_mask (int) – An int mask. Default: 0.

  • new_axis_mask (int) – An int mask. Default: 0.

  • shrink_axis_mask (int) – An int mask. Default: 0.

Inputs:
  • input_x (Tensor) - The input Tensor.

  • begin (tuple[int]) - A tuple which represents the location where to start. Only constant value is allowed.

  • end (tuple[int]) - A tuple which represents the maximum location where to end. Only constant value is allowed.

  • strides (tuple[int]) - A tuple which represents the stride is continuously added before reaching the maximum location. Only constant value is allowed.

Outputs:

Tensor. The output is explained by following example.

  • In the 0th dimension, begin is 1, end is 2, and strides is 1, because \(1+1=2\geq2\), the interval is \([1,2)\). Thus, return the element with \(index = 1\) in 0th dimension, i.e., [[3, 3, 3], [4, 4, 4]].

  • In the 1st dimension, similarly, the interval is \([0,1)\). Based on the return value of the 0th dimension, return the element with \(index = 0\), i.e., [3, 3, 3].

  • In the 2nd dimension, similarly, the interval is \([0,3)\). Based on the return value of the 1st dimension, return the element with \(index = 0,1,2\), i.e., [3, 3, 3].

  • Finally, the output is [3, 3, 3].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> slice = ops.StridedSlice()
>>> output = slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> print(output)
[[[3. 3. 3.]]]
class tinyms.primitives.Sub(*args, **kwargs)[source]

Subtracts the second input tensor from the first input tensor element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> sub = ops.Sub()
>>> output = sub(input_x, input_y)
>>> print(output)
[-3 -3 -3]
class tinyms.primitives.Tan(*args, **kwargs)[source]

Computes tangent of input_x element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). Data type must be float16, float32 or int32.

Outputs:

Tensor, has the same shape as input_x.

Supported Platforms:

Ascend

Examples

>>> tan = ops.Tan()
>>> input_x = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)
>>> output = tan(input_x)
>>> print(output)
[-1.5574081 0. 1.5574081]
class tinyms.primitives.Tanh(*args, **kwargs)[source]

Tanh activation function.

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • input_x (Tensor) - The input of Tanh.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> tanh = ops.Tanh()
>>> output = tanh(input_x)
>>> print(output)
[0.7615941 0.9640276 0.9950547 0.9993293 0.9999092]
tinyms.primitives.TensorAdd()[source]

Adds two input tensors element-wise.

The usage of TensorAdd is deprecated. Please use Add.

class tinyms.primitives.TensorScatterUpdate(*args, **kwargs)[source]

Updates tensor values using given values, along with the input indices.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be equal to indices.shape[-1].

  • indices (Tensor) - The index of input tensor whose data type is int32.

  • update (Tensor) - The tensor to update the input tensor, has the same type as input, and update.shape = indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.TensorScatterUpdate()
>>> output = op(input_x, indices, update)
>>> print(output)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]
class tinyms.primitives.TensorSummary(*args, **kwargs)[source]

Outputs a tensor to a protocol buffer through a tensor summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.TensorSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
...
class tinyms.primitives.Tile(*args, **kwargs)[source]

Replicates a tensor the number of times given by multiples.

Creates a new tensor by replicating input multiples times. The dimension of output tensor is the larger of the input tensor dimension and the length of multiples.

Inputs:
  • input_x (Tensor) - 1-D or higher Tensor. Set the shape of input tensor as \((x_1, x_2, ..., x_S)\).

  • multiples (tuple[int]) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). The length of multiples cannot be smaller than the length of the shape of input_x. Only constant value is allowed.

Outputs:

Tensor, has the same data type as the input_x.

  • If the length of multiples is the same as the length of the shape of input_x, then the shape of their corresponding positions can be multiplied, and the shape of Outputs is \((x_1*y_1, x_2*y_2, ..., x_S*y_S)\) (see the NumPy check after this list).

  • If the length of multiples is larger than the length of shape of input_x, fill in multiple 1 in the length of the shape of input_x until their lengths are consistent. Such as set the shape of input_x as \((1, ..., x_1, x_2, ..., x_S)\), then the shape of their corresponding positions can be multiplied, and the shape of Outputs is \((1*y_1, ..., x_S*y_R)\).
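
NumPy's np.tile follows the same shape rule, so it can serve as a quick, purely illustrative check of both cases above:

>>> import numpy as np
>>> x = np.array([[1, 2], [3, 4]], np.float32)
>>> print(np.tile(x, (2, 3)).shape)
(4, 6)
>>> print(np.tile(x, (2, 2, 3)).shape)
(2, 4, 6)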

Supported Platforms:

Ascend GPU CPU

Examples

>>> tile = ops.Tile()
>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> output = tile(input_x, multiples)
>>> print(output)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
class tinyms.primitives.TopK(*args, **kwargs)[source]

Finds values and indices of the k largest entries along the last dimension.

Parameters

sorted (bool) – If true, the obtained elements will be sorted by the values in descending order. Default: False.

Inputs:
  • input_x (Tensor) - Input to be computed, data type must be float16, float32 or int32.

  • k (int) - The number of top elements to be computed along the last dimension, constant input is needed.

Outputs:

Tuple of 2 tensors, the values and the indices.

  • values (Tensor) - The k largest elements in each slice of the last dimension.

  • indices (Tensor) - The indices of values within the last dimension of input.

Supported Platforms:

Ascend GPU

Examples

>>> topk = ops.TopK(sorted=True)
>>> input_x = Tensor([1, 2, 3, 4, 5], mindspore.float16)
>>> k = 3
>>> values, indices = topk(input_x, k)
>>> print((values, indices))
(Tensor(shape=[3], dtype=Float16, value= [ 5.0000e+00,  4.0000e+00,  3.0000e+00]), Tensor(shape=[3],
  dtype=Int32, value= [4, 3, 2]))
class tinyms.primitives.Transpose(*args, **kwargs)[source]

Permutes the dimensions of the input tensor according to input permutation.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_perm (tuple[int]) - The permutation to be converted. The input tuple is constructed by multiple indexes. The length of input_perm must be the same as the rank of input_x. Only constant value is allowed. Must be in the range [0, rank(input_x)).

Outputs:

Tensor, the type of output tensor is the same as input_x and the shape of output tensor is decided by the shape of input_x and the value of input_perm.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> perm = (0, 2, 1)
>>> transpose = ops.Transpose()
>>> output = transpose(input_tensor, perm)
>>> print(output)
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
class tinyms.primitives.TruncateDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise for integer types; negative numbers will round fractional quantities towards zero.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_div = ops.TruncateDiv()
>>> output = truncate_div(input_x, input_y)
>>> print(output)
[0 1 0]
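
The rounding-towards-zero behaviour matches truncating an ordinary division, for example with NumPy (an illustrative check only):

>>> import numpy as np
>>> print(np.trunc(np.array([2, 4, -1]) / np.array([3, 3, 3])).astype(np.int32))
[0 1 0]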
class tinyms.primitives.TruncateMod(*args, **kwargs)[source]

Returns the remainder of division element-wise.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_mod = ops.TruncateMod()
>>> output = truncate_mod(input_x, input_y)
>>> print(output)
[ 2  1 -1]
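
This matches NumPy's np.fmod, which also keeps the sign of the dividend (an illustrative check only):

>>> import numpy as np
>>> print(np.fmod(np.array([2, 4, -1]), np.array([3, 3, 3])))
[ 2  1 -1]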
class tinyms.primitives.TruncatedNormal(*args, **kwargs)[source]

Returns a tensor of the specified shape filled with truncated normal values.

The generated values follow a normal distribution.

Parameters
  • seed (int) – An integer number used to create a random seed. Default: 0.

  • dtype (mindspore.dtype) – Data type. Default: mindspore.float32.

Inputs:
  • shape (tuple[int]) - The shape of the output tensor; it is a tuple of positive integers.

Outputs:

Tensor, the data type of output tensor is the same as attribute dtype.

Examples

>>> shape = (1, 2, 3)
>>> truncated_normal = ops.TruncatedNormal()
>>> output = truncated_normal(shape)
class tinyms.primitives.TupleToArray(*args, **kwargs)[source]

Converts a tuple to a tensor.

If the type of the first number in the tuple is integer, the data type of the output tensor is int. Otherwise, the data type of the output tensor is float.

Inputs:
  • input_x (tuple) - A tuple of numbers. These numbers have the same type. Only constant value is allowed.

Outputs:

Tensor, if the input tuple contains N numbers, then the shape of the output tensor is (N,).

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.TupleToArray()((1,2,3))
>>> print(output)
[1 2 3]
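
A companion sketch for the floating-point rule stated above (assuming a tuple whose first element is a float yields a float tensor):

>>> output = ops.TupleToArray()((1.0, 2.0, 3.0))
>>> print(output)
[1. 2. 3.]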
class tinyms.primitives.UniformCandidateSampler(*args, **kwargs)[source]

Uniform candidate sampler.

This function samples a set of classes (sampled_candidates) from [0, range_max-1] based on a uniform distribution. If unique=True, candidates are drawn without replacement; otherwise they are drawn with replacement.

Parameters
  • num_true (int) – The number of target classes in each training example.

  • num_sampled (int) – The number of classes to randomly sample. The sampled_candidates will have a shape of num_sampled. If unique=True, num_sampled must be less than or equal to range_max.

  • unique (bool) – Whether all sampled classes in a batch are unique.

  • range_max (int) – The number of possible classes, must be non-negative.

  • seed (int) – Used for random number generation, must be non-negative. If seed has a value of 0, seed will be replaced with a randomly generated value. Default: 0.

  • remove_accidental_hits (bool) – Whether accidental hit is removed. Default: False.

Inputs:
  • true_classes (Tensor) - A Tensor. The target classes with a Tensor shape of (batch_size, num_true).

Outputs:
  • sampled_candidates (Tensor) - The sampled_candidates is independent of the true classes. Shape: (num_sampled, ).

  • true_expected_count (Tensor) - The expected counts under the sampling distribution of each of true_classes. Shape: (batch_size, num_true).

  • sampled_expected_count (Tensor) - The expected counts under the sampling distribution of each of sampled_candidates. Shape: (num_sampled, ).

Supported Platforms:

GPU

Examples

>>> sampler = ops.UniformCandidateSampler(1, 3, False, 4)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1], [3], [4], [6], [3]], dtype=np.int32)))
>>> print(output1, output2, output3)
[1, 1, 3], [[0.75], [0.75], [0.75], [0.75], [0.75]], [0.75, 0.75, 0.75]
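
As a rough sanity check on the printed expected counts: with uniform sampling with replacement over range_max = 4 classes and num_sampled = 3 draws, the expected count of any single class is num_sampled / range_max = 3 / 4 = 0.75, which matches both expected-count outputs above.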
class tinyms.primitives.UniformInt(*args, **kwargs)[source]

Produces random integer values i, uniformly distributed on the half-open interval [minval, maxval), that is, distributed according to the discrete probability function:

\[\text{P}(i|a,b) = \frac{1}{b-a},\]

Note

The number in tensor minval must be strictly less than maxval at any position after broadcasting.

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • minval (Tensor) - The distribution parameter, a. It defines the minimum possibly generated value, with int32 data type. Only one number is supported.

  • maxval (Tensor) - The distribution parameter, b. It defines the upper bound (exclusive) of the generated values, with int32 data type. Only one number is supported.

Outputs:

Tensor. The shape is the same as the input ‘shape’, and the data type is int32.

Supported Platforms:

Ascend GPU

Examples

>>> shape = (2, 4)
>>> minval = Tensor(1, mstype.int32)
>>> maxval = Tensor(5, mstype.int32)
>>> uniform_int = ops.UniformInt(seed=10)
>>> output = uniform_int(shape, minval, maxval)
>>> result = output.shape
>>> print(result)
(2, 4)
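
Since the values are random, the reference example only checks the shape. A further sanity check (usage sketch, assuming the values lie in [minval, maxval) as described above):

>>> values = output.asnumpy()
>>> print(bool(((values >= 1) & (values < 5)).all()))
True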
class tinyms.primitives.UniformReal(*args, **kwargs)[source]

Produces random floating-point values i, uniformly distributed on the interval [0, 1).

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape is determined by the input 'shape', and the dtype is float32.

Supported Platforms:

Ascend GPU

Examples

>>> shape = (2, 2)
>>> uniformreal = ops.UniformReal(seed=2)
>>> output = uniformreal(shape)
>>> result = output.shape
>>> print(result)
(2, 2)
class tinyms.primitives.Unique(*args, **kwargs)[source]

Returns the unique elements of the input tensor, along with a tensor containing the index in the unique output for each element of the input tensor.

Inputs:
  • x (Tensor) - The input tensor.

Outputs:

Tuple, containing Tensor objects (y, idx). y is a tensor with the same type as x that contains the unique elements in x, sorted in ascending order. idx is a tensor containing the index of each element of the input in the output tensor y.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> output = ops.Unique()(x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
>>>
>>> # note that for GPU, this operator must be wrapped inside a model, and executed in graph mode.
>>> class UniqueNet(nn.Cell):
...     def __init__(self):
...         super(UniqueNet, self).__init__()
...         self.unique_op = ops.Unique()
...
...     def construct(self, x):
...         output, indices = self.unique_op(x)
...         return output, indices
...
>>> x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
>>> net = UniqueNet()
>>> output = net(x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
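
A useful property of the (y, idx) pair, shown here as an illustrative sketch, is that indexing the unique values with idx reconstructs the original input:

>>> y, idx = output
>>> print(y.asnumpy()[idx.asnumpy()])
[1 2 5 2]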
class tinyms.primitives.UniqueWithPad(*args, **kwargs)[source]

Returns the unique elements of a 1-D tensor and the index of each element, with the unique-element output padded to the length of the input using pad_num.

Inputs:
  • x (Tensor) - The tensor need to be unique. Must be 1-D vector with types: int32, int64.

  • pad_num (int) - The value used to pad the unique-element output y to the same length as x.

Outputs:

tuple(Tensor), a tuple of 2 tensors, y and idx.
  • y (Tensor) - The unique elements padded with pad_num, with the same shape and data type as x.

  • idx (Tensor) - The index of each value of x in the unique output y, with the same shape and data type as x.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([1, 1, 5, 5, 4, 4, 3, 3, 2, 2,]), mindspore.int32)
>>> pad_num = 8
>>> output = ops.UniqueWithPad()(x, pad_num)
>>> print(output)
(Tensor(shape=[10], dtype=Int32, value= [1, 5, 4, 3, 2, 8, 8, 8, 8, 8]),
 Tensor(shape=[10], dtype=Int32, value= [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]))
tinyms.primitives.Unpack(axis=0)[source]

Unpacks a tensor along the specified axis.

The usage of Unpack is deprecated. Please use Unstack.

class tinyms.primitives.UnsortedSegmentMax(*args, **kwargs)[source]

Computes the maximum along segments of a tensor.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). The data type must be float16, float32 or int32.

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\), the value must be >= 0. The data type must be int32.

  • num_segments (int) - The value specifies the number of distinct segment_ids.

Note

If the segment_id i is absent in the segment_ids, then output[i] will be filled with the minimum value of the input_x’s type.

Outputs:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 5. 6.]]
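
To see where the second output row comes from: segment 1 collects rows 1 and 2 of input_x, and its value is their element-wise maximum. A minimal NumPy check (illustration only):

>>> print(np.maximum(np.array([4, 5, 6]), np.array([4, 2, 1])))
[4 5 6]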
class tinyms.primitives.UnsortedSegmentMin(*args, **kwargs)[source]

Computes the minimum of a tensor along segments.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). The data type must be float16, float32 or int32.

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\), the value must be >= 0. The data type must be int32.

  • num_segments (int) - The value specifies the number of distinct segment_ids.

Note

If the segment_id i is absent in the segment_ids, then output[i] will be filled with the maximum value of the input_x’s type.

Outputs:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_min = ops.UnsortedSegmentMin()
>>> output = unsorted_segment_min(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 2. 1.]]
class tinyms.primitives.UnsortedSegmentProd(*args, **kwargs)[source]

Computes the product of a tensor along segments.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\), the value must be >= 0. Data type must be int32.

  • num_segments (int) - The value specifies the number of distinct segment_ids, must be greater than 0.

Outputs:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_prod = ops.UnsortedSegmentProd()
>>> output = unsorted_segment_prod(input_x, segment_ids, num_segments)
>>> print(output)
[[4. 4. 3.]
 [4. 5. 6.]]
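
Here segment 0 collects rows 0 and 2 of input_x, so the first output row is their element-wise product. A minimal NumPy check (illustration only):

>>> print(np.array([1, 2, 3]) * np.array([4, 2, 1]))
[4 4 3]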
class tinyms.primitives.UnsortedSegmentSum(*args, **kwargs)[source]

Computes the sum of a tensor along segments.

Calculates a tensor such that \(\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]\), where \(j\) is a tuple describing the index of an element in data. segment_ids selects which elements of data to sum up. The segment_ids do not need to be sorted, and they do not need to cover all values in the entire valid range.

Note

If the segment_id i is absent in the segment_ids, then output[i] will be filled with 0.

If a given segment id \(i\) has no corresponding elements, then \(\text{output}[i] = 0\). If a given segment id is negative, the value will be ignored. num_segments must be equal to the number of different segment ids.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\).

  • segment_ids (Tensor) - Set the shape as \((x_1, x_2, ..., x_N)\), where 0 < N <= R. Type must be int.

  • num_segments (int) - Set \(z\) as num_segments.

Outputs:

Tensor, the shape is \((z, x_{N+1}, ..., x_R)\).

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 0.]
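
The same computation written out as a small NumPy sketch of the formula above (illustration only; np.add.at accumulates data[j] into out[segment_ids[j]]):

>>> import numpy as np
>>> data = np.array([1, 2, 3, 4], dtype=np.float32)
>>> segment_ids = np.array([0, 0, 1, 2])
>>> out = np.zeros(4, dtype=np.float32)
>>> np.add.at(out, segment_ids, data)
>>> print(out)
[3. 3. 4. 0.]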
class tinyms.primitives.Unstack(*args, **kwargs)[source]

Unstacks a tensor along the specified axis.

Unstacks a tensor of rank R along the axis dimension; each output tensor has rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\), unstacking along a non-negative axis removes that dimension, so each output tensor has shape \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of Stack.

Parameters

axis (int) – Dimension along which to unstack. Default: 0. Negative values wrap around. The range is [-R, R).

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

Outputs:

A tuple of tensors; every tensor in the tuple has the same shape.

Raises

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU

Examples

>>> unstack = ops.Unstack()
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = unstack(input_x)
>>> print(output)
(Tensor(shape=[4], dtype=Int32, value= [1, 1, 1, 1]),
 Tensor(shape=[4], dtype=Int32, value= [2, 2, 2, 2]))
class tinyms.primitives.Xdivy(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise. Returns zero when x is zero.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is float16, float32 or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is float16, float32 or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.float32)
>>> input_y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xdivy = ops.Xdivy()
>>> output = xdivy(input_x, input_y)
>>> print(output)
[ 1.   2.  -0.5]
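
The "returns zero when x is zero" rule means that a 0 / 0 entry yields 0 instead of NaN. A short sketch of that case, assuming the behavior described above:

>>> output = xdivy(Tensor(np.array([0.0, 2.0]), mindspore.float32),
...                Tensor(np.array([0.0, 4.0]), mindspore.float32))
>>> print(output)
[0.  0.5]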
class tinyms.primitives.Xlogy(*args, **kwargs)[source]

Computes the first input tensor multiplied by the logarithm of the second input tensor element-wise. Returns zero when x is zero.

Inputs of input_x and input_y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is float16, float32 or bool.

  • input_y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is float16, float32 or bool. The value must be positive.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([-5, 0, 4]), mindspore.float32)
>>> input_y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xlogy = ops.Xlogy()
>>> output = xlogy(input_x, input_y)
>>> print(output)
[-3.465736   0.        2.7725887]
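
For reference, the values above follow from multiplying input_x by the element-wise logarithm of input_y: the first entry is -5 * ln(2) ≈ -3.4657, the last is 4 * ln(2) ≈ 2.7726, and the middle entry is 0 because input_x is 0 there.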
class tinyms.primitives.Zeros(*args, **kwargs)[source]

Creates a tensor filled with value zeros.

Creates a tensor with the shape described by the first argument and fills it with zeros of the data type given by the second argument.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor. Only constant positive int is allowed.

  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

Outputs:

Tensor, whose shape is given by the shape input and whose data type is given by the type input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore.ops import operations as ops
>>> zeros = ops.Zeros()
>>> output = zeros((2, 2), mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
class tinyms.primitives.ZerosLike(*args, **kwargs)[source]

Creates a new tensor in which all element values are 0.

Returns a tensor of zeros with the same shape and data type as the input tensor.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and data type as input_x but filled with zeros.

Supported Platforms:

Ascend GPU CPU

Examples

>>> zeroslike = ops.ZerosLike()
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = zeroslike(x)
>>> print(output)
[[0. 0.]
 [0. 0.]]