tinyms.primitives

Primitives module. Operators can be used in the construct function of Layer.

Examples

>>> import tinyms as ts
>>> from tinyms.primitives import tensor_add
>>>
>>> x = ts.ones([2, 3])
>>> y = ts.ones([2, 3])
>>> print(tensor_add(x, y))
[[2. 2. 2.]
 [2. 2. 2.]]
tinyms.primitives.core(fn=None, **flags)[source]

A decorator that adds a flag to the function.

By default, the flag is set to True, so this decorator can be used to add a flag to a graph.

Parameters
  • fn (Function) – Function to add flag. Default: None.

  • flags (dict) – Flags to set on the function, e.g. core, which indicates that this is a core function, or any other flag. Default: None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> net = core(net, predit=True)
>>> print(hasattr(net, '_mindspore_flags'))
True
tinyms.primitives.add_flags(fn=None, **flags)[source]

A decorator that adds a flag to the function.

Note

Only supports bool value.

Parameters
  • fn (Function) – Function or cell to add flag. Default: None.

  • flags (dict) – Flags use kwargs. Default: None.

Returns

Function, the function with added flags.

Examples

>>> net = Net()
>>> net = add_flags(net, predit=True)
>>> print(hasattr(net, '_mindspore_flags'))
True
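As a sketch of reading the flag back, assuming the flags are stored in the _mindspore_flags dict on the decorated object (as the hasattr check above suggests; this is an illustration, not documented behavior):

>>> # assumes _mindspore_flags is a plain dict holding the flags set above
>>> print(net._mindspore_flags['predit'])
True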
class tinyms.primitives.MultitypeFuncGraph(name, read_value=False)[source]

Generates overloaded functions.

MultitypeFuncGraph is a class used to generate overloaded functions that accept different types of inputs. Initialize a MultitypeFuncGraph object with a name, and use register with input types as the decorator for the function to be registered. The object can then be called with different types of inputs, and works with HyperMap and Map.

Parameters
  • name (str) – Operator name.

  • read_value (bool) – If the registered function does not need to set values on Parameters and all inputs are passed by value, set read_value to True. Default: False.

Raises

ValueError – If a matching function cannot be found for the given arguments.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # `add` is a metagraph object which will add two objects according to
>>> # input type using ".register" decorator.
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import dtype as mstype
>>>
>>> tensor_add = ops.Add()
>>> add = MultitypeFuncGraph('add')
>>> @add.register("Number", "Number")
... def add_scala(x, y):
...     return x + y
>>> @add.register("Tensor", "Tensor")
... def add_tensor(x, y):
...     return tensor_add(x, y)
>>> output = add(1, 2)
>>> print(output)
3
>>> output = add(Tensor([0.1, 0.6, 1.2], dtype=mstype.float32), Tensor([0.1, 0.6, 1.2], dtype=mstype.float32))
>>> print(output)
[0.2 1.2 2.4]
register(*type_names)[source]

Register a function for the given type string.

Parameters

type_names (Union[str, mindspore.dtype]) – Input type names or a list of types.

Returns

Function, a decorator that registers the function to run when called with the types described in type_names.
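For illustration, a minimal sketch of registering an additional signature on the add object from the example above (the ("Tensor", "Number") overload and the printed values are assumptions for illustration, not part of the documented overload set):

>>> @add.register("Tensor", "Number")
... def add_tensor_number(x, y):
...     return tensor_add(x, y)
>>> print(add(Tensor([1.0, 2.0], dtype=mstype.float32), 3))
[4. 5.]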

class tinyms.primitives.GradOperation(get_all=False, get_by_list=False, sens_param=False)[source]

A higher-order function which is used to generate the gradient function for the input function.

The gradient function generated by GradOperation higher-order function can be customized by construction arguments.

Given an input function net = Net() that takes x and y as inputs, and has a parameter z, see Net in Examples.

To generate a gradient function that returns gradients with respect to the first input (see GradNetWrtX in Examples).

  1. Construct a GradOperation higher-order function with default arguments: grad_op = GradOperation().

  2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  3. Call the gradient function with the input function’s inputs to get the gradients with respect to the first input: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to all inputs (see GradNetWrtXY in Examples).

  1. Construct a GradOperation higher-order function with get_all=True which indicates getting gradients with respect to all inputs, they are x and y in example function Net(): grad_op = GradOperation(get_all=True).

  2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  3. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to given parameters (see GradNetWithWrtParams in Examples).

  1. Construct a GradOperation higher-order function with get_by_list=True: grad_op = GradOperation(get_by_list=True).

  2. Construct a ParameterTuple that will be passed to the input function when constructing the GradOperation higher-order function; it will be used as a parameter filter that determines which gradients to return: params = ParameterTuple(net.trainable_params()).

  3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

  4. Call the gradient function with input function’s inputs to get the gradients with respect to given parameters: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to all inputs and given parameters in the format of ((dx, dy), (dz)) (see GradNetWrtInputsAndParams in Examples).

  1. Construct a GradOperation higher-order function with get_all=True and get_by_list=True: grad_op = GradOperation(get_all=True, get_by_list=True).

  2. Construct a ParameterTuple that will be passed along with the input function when constructing the GradOperation higher-order function: params = ParameterTuple(net.trainable_params()).

  3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

  4. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs and given parameters: gradient_function(x, y).

We can configure the sensitivity (gradient with respect to output) by setting sens_param to True and passing an extra sensitivity input to the gradient function. The sensitivity input should have the same shape and type as the input function’s output (see GradNetWrtXYWithSensParam in Examples).

  1. Construct a GradOperation higher-order function with get_all=True and sens_param=True: grad_op = GradOperation(get_all=True, sens_param=True).

  2. Define grad_wrt_output as sens_param which works as the gradient with respect to output: grad_wrt_output = Tensor(np.ones([2, 2]).astype(np.float32)).

  3. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  4. Call the gradient function with input function’s inputs and sens_param to get the gradients with respect to all inputs: gradient_function(x, y, grad_wrt_output).

Parameters
  • get_all (bool) – If True, get all the gradients with respect to inputs. Default: False.

  • get_by_list (bool) – If True, get all the gradients with respect to Parameter variables. If get_all and get_by_list are both False, get the gradient with respect to first input. If get_all and get_by_list are both True, get the gradients with respect to inputs and Parameter variables at the same time in the form of ((gradients with respect to inputs), (gradients with respect to parameters)). Default: False.

  • sens_param (bool) – Whether to append sensitivity (gradient with respect to output) as an input. If sens_param is False, a ‘ones_like(outputs)’ sensitivity will be attached automatically. Default: False. If sens_param is True, a sensitivity (gradient with respect to output) needs to be passed through a positional argument or a keyword argument; if it is passed as a keyword argument, the key must be sens.

Returns

The higher-order function which takes a function as argument and returns gradient function for it.

Raises

TypeError – If get_all, get_by_list or sens_param is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ParameterTuple
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = P.MatMul()
...         self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
...     def construct(self, x, y):
...         x = x * self.z
...         out = self.matmul(x, y)
...         return out
...
>>> class GradNetWrtX(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtX, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation()
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> output = GradNetWrtX(Net())(x, y)
>>> print(output)
[[1.4100001 1.5999999 6.6      ]
 [1.4100001 1.5999999 6.6      ]]
>>>
>>> class GradNetWrtXY(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXY, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXY(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 4.50999975e+00,  2.70000005e+00,  3.60000014e+00],
 [ 4.50999975e+00,  2.70000005e+00,  3.60000014e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 2.59999990e+00,  2.59999990e+00,  2.59999990e+00],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]]))
>>>
>>> class GradNetWrtXYWithSensParam(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXYWithSensParam, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True, sens_param=True)
...         self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y, self.grad_wrt_output)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXYWithSensParam(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 2.21099997e+00,  5.09999990e-01,  1.49000001e+00],
 [ 5.58800030e+00,  2.68000007e+00,  4.07000017e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 1.51999998e+00,  2.81999993e+00,  2.14000010e+00],
 [ 1.09999990e+00,  2.04999995e+00,  1.54999995e+00],
 [ 9.00000036e-01,  1.54999995e+00,  1.25000000e+00]]))
>>>
>>> class GradNetWithWrtParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWithWrtParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = GradOperation(get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWithWrtParams(Net())(x, y)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
>>>
>>> class GradNetWrtInputsAndParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtInputsAndParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = GradOperation(get_all=True, get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.1, 0.6, 1.2], [0.5, 1.3, 0.1]], dtype=mstype.float32)
>>> y = Tensor([[0.12, 2.3, 1.1], [1.3, 0.2, 2.4], [0.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtInputsAndParams(Net())(x, y)
>>> print(output)
((Tensor(shape=[2, 3], dtype=Float32, value=
[[ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00],
 [ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 6.00000024e-01,  6.00000024e-01,  6.00000024e-01],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]])), (Tensor(shape=[1], dtype=Float32, value=
 [ 1.29020004e+01]),))
class tinyms.primitives.HyperMap(ops=None, reverse=False)[source]

HyperMap will apply the set operation to input sequences.

Applies the operation to every element of the sequence or nested sequence. Different from Map, HyperMap supports applying the operation to nested structures.

Parameters
  • ops (Union[MultitypeFuncGraph, None]) – ops is the operation to apply. If ops is None, the operations should be put in the first input of the instance. Default is None.

  • reverse (bool) – In some scenarios the optimizer needs to apply the operation in reverse order to improve parallel performance; general users can ignore this. reverse is the flag that decides whether to apply the operation in reverse order. Only supported in graph mode. Default is False.

Inputs:
  • args (Tuple[sequence]) - If ops is not None, all the inputs should be sequences with the same length. And each row of the sequences will be the inputs of the operation.

    If ops is None, the first input is the operation, and the others are inputs.

Outputs:

Sequence or nested sequence, the sequence of output after applying the function. e.g. operation(args[0][i], args[1][i]).

Raises
  • TypeError – If ops is neither MultitypeFuncGraph nor None.

  • TypeError – If args is not a Tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import dtype as mstype
>>> nest_tensor_list = ((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),
...                     (Tensor(3, mstype.float32), Tensor(4, mstype.float32)))
>>> # square all the tensor in the nested list
>>>
>>> square = MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
...     return ops.square(x)
>>>
>>> common_map = HyperMap()
>>> output = common_map(square, nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
>>> square_map = HyperMap(square, False)
>>> output = square_map(nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
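A minimal sketch of passing several input sequences at once, reusing the add MultitypeFuncGraph from the MultitypeFuncGraph example further above (the scalar tensor values are assumptions for illustration):

>>> # each row of the two sequences is passed to `add`, element by element
>>> tensors_a = (Tensor(1, mstype.float32), Tensor(2, mstype.float32))
>>> tensors_b = (Tensor(3, mstype.float32), Tensor(4, mstype.float32))
>>> output = common_map(add, tensors_a, tensors_b)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4), Tensor(shape=[], dtype=Float32, value= 6))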
tinyms.primitives.normal(shape, mean, stddev, seed=None)[source]

Generates random numbers according to the Normal (or Gaussian) random number distribution.

Parameters
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak, with data type in [int8, int16, int32, int64, float16, float32].

  • stddev (Tensor) – The deviation σ distribution parameter. It should be greater than 0, with data type in [int8, int16, int32, int64, float16, float32].

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of mean and stddev. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (3, 1, 2)
>>> mean = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[1, 2, 3], [3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 3, 3)
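A small sketch of the dtype claim in Returns, showing that the result is float32 (the shape, mean, stddev and seed below are assumptions for illustration):

>>> # the output dtype is float32 regardless of the broadcasted shape
>>> output = ops.normal((2, 2), Tensor(0.0, mindspore.float32), Tensor(1.0, mindspore.float32), seed=1)
>>> print(output.dtype)
Float32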
tinyms.primitives.laplace(shape, mean, lambda_param, seed=None)[source]

Generates random numbers according to the Laplace random number distribution. It is defined as:

\[\text{f}(x;μ,λ) = \frac{1}{2λ}\exp(-\frac{|x-μ|}{λ}),\]
Parameters
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak. With float32 data type.

  • lambda_param (Tensor) – The parameter used for controlling the variance of this random distribution. The variance of Laplace distribution is equal to twice the square of lambda_param. With float32 data type.

  • seed (int) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be the broadcasted shape of input shape and shapes of mean and lambda_param. The dtype is float32.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops as ops
>>> shape = (2, 3)
>>> mean = Tensor(1.0, mindspore.float32)
>>> lambda_param = Tensor(1.0, mindspore.float32)
>>> output = ops.laplace(shape, mean, lambda_param, seed=5)
>>> print(output.shape)
(2, 3)
tinyms.primitives.uniform(shape, minval, maxval, seed=None, dtype=mindspore.float32)[source]

Generates random numbers according to the Uniform random number distribution.

Note

The number in tensor minval should be strictly less than maxval at any position after broadcasting.

Parameters
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions and the length of \((N,*)\) should be less than 8 in broadcast operation.

  • minval (Tensor) – The distribution parameter a. It defines the minimum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • maxval (Tensor) – The distribution parameter b. It defines the maximum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – Type of the Uniform distribution. If it is int32, it generates numbers from discrete uniform distribution; if it is float32, it generates numbers from continuous uniform distribution. It only supports these two data types. Default: mindspore.float32.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of minval and maxval. The dtype is designated as the input dtype.

Raises
  • TypeError – If shape is not tuple.

  • TypeError – If ‘minval’ or ‘maxval’ is neither int32 nor float32 and dtype of ‘minval’ is not the same as ‘maxval’.

  • TypeError – If seed is not an int.

  • TypeError – If ‘dtype’ is neither int32 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> # For discrete uniform distribution, only one number is allowed for both minval and maxval:
>>> shape = (4, 2)
>>> minval = Tensor(1, mindspore.int32)
>>> maxval = Tensor(2, mindspore.int32)
>>> output = ops.uniform(shape, minval, maxval, seed=5, dtype=mindspore.int32)
>>>
>>> # For continuous uniform distribution, minval and maxval can be multi-dimensional:
>>> shape = (3, 1, 2)
>>> minval = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> maxval = Tensor([8.0, 10.0], mindspore.float32)
>>> output = ops.uniform(shape, minval, maxval, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
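A minimal sketch of the dtype behaviour described in Returns: the output dtype follows the requested dtype (int32 here; the values themselves are random and not shown, and the call below is an assumption for illustration):

>>> # discrete uniform sampling returns int32 when dtype=mindspore.int32
>>> output = ops.uniform((4, 2), Tensor(1, mindspore.int32), Tensor(2, mindspore.int32), seed=5, dtype=mindspore.int32)
>>> print(output.dtype)
Int32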
tinyms.primitives.gamma(shape, alpha, beta, seed=None)[source]

Generates random numbers according to the Gamma random number distribution.

Parameters
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • alpha (Tensor) – The alpha α distribution parameter. It should be greater than 0 with float32 data type.

  • beta (Tensor) – The beta β distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of alpha and beta. The dtype is float32.

Raises
  • TypeError – If shape is not a tuple.

  • TypeError – If neither alpha nor beta is a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha and beta is not float32.

Supported Platforms:

Ascend

Examples

>>> # case 1: alpha_shape is (2, 2)
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> # case 2: alpha_shape is (2, 3), so shape is (3, 1, 3)
>>> shape = (3, 1, 3)
>>> alpha = Tensor(np.array([[1, 3, 4], [2, 5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> # case 3: beta_shape is (1, 2), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0, 2]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> print(output)
[[[ 2.2132034  5.8855834]
  [ 3.3981476  7.5805717]]
 [[ 3.3981476  7.5805717]
  [ 3.7190282 19.941492 ]]
 [[ 2.9512358  2.5969937]
  [ 3.786061   5.160872 ]]]
>>> # case 4: beta_shape is (2, 1), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([[1.0], [2.0]]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> print(output)
[[[ 5.6085486  7.8280783]
  [15.97684   16.116285 ]]
 [[ 1.8347423  1.713663 ]
  [ 3.2434065 15.667398 ]]
 [[ 4.2922077  7.3365674]
  [ 5.3876944 13.159832 ]]]
tinyms.primitives.poisson(shape, mean, seed=None)[source]

Generates random numbers according to the Poisson random number distribution.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns

Tensor. The shape should be equal to the broadcasted shape between the input shape and the shape of mean. The dtype is float32.

Raises
  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

  • TypeError – If seed is not an int.

Supported Platforms:

Ascend

Examples

>>> # case 1: It can be broadcast.
>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(4, 2)
>>> # case 2: It can not be broadcast. It is recommended to use the same shape.
>>> shape = (2, 2)
>>> mean = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(2, 2)
tinyms.primitives.multinomial(inputs, num_sample, replacement=True, seed=None)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters
  • x (Tensor) – The input tensor containing probabilities, must be 1 or 2 dimensions, with float32 data type.

  • num_sample (int) – Number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not, default True.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Outputs:

Tensor, has the same number of rows as the input. The number of sampled indices in each row is num_sample. The dtype is float32.

Raises
  • TypeError – If x is not a Tensor or its dtype is not float32.

  • TypeError – If num_sample is not an int.

  • TypeError – If seed is neither an int nor None.

Supported Platforms:

GPU

Examples

>>> # case 1: The output is random, and the length of the output is the same as num_sample.
>>> x = Tensor([0, 9, 4, 0], mindspore.float32)
>>> output = ops.multinomial(x, 2)
>>> # print(output)
>>> # [1 2] or [2 1]
>>> # The result varies between runs; index 1 is drawn more often than index 2,
>>> # because the weight at index 1 (9) is larger than the weight at index 2 (4).
>>> print(len(output))
2
>>> # case 2: The output is random, and the length of the output is the same as num_sample.
>>> # replacement is True (the default), so the same index can be drawn more than once.
>>> # Indices whose weight is 0 are never drawn.
>>> x = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(x, 4)
>>> print(output)
[1 1 2 1]
>>> # case 3: num_sample == x_length == 4, and replacement is True, so the same elements can be drawn.
>>> x = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(x, 4, True)
>>> print(output)
[1 1 2 2]
tinyms.primitives.clip_by_value(x, clip_value_min, clip_value_max)[source]

Clips tensor values to a specified min and max.

Limits the value of \(x\) to a range, whose lower limit is ‘clip_value_min’ and upper limit is ‘clip_value_max’.

\[\begin{split}out_i= \left\{ \begin{array}{align} clip\_value_{max} & \text{ if } x_i\ge clip\_value_{max} \\ x_i & \text{ if } clip\_value_{min} \lt x_i \lt clip\_value_{max} \\ clip\_value_{min} & \text{ if } x_i \le clip\_value_{min} \\ \end{array}\right.\end{split}\]

Note

‘clip_value_min’ needs to be less than or equal to ‘clip_value_max’.

Parameters
  • x (Tensor) – Input data. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • clip_value_min (Tensor) – The minimum value.

  • clip_value_max (Tensor) – The maximum value.

Returns

Tensor, a clipped Tensor. It has the same shape and data type as x.

Supported Platforms:

Ascend GPU

Examples

>>> min_value = Tensor(5, mindspore.float32)
>>> max_value = Tensor(20, mindspore.float32)
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clip_by_value(x, min_value, max_value)
>>> print(output)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
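As a quick cross-check of the clipping formula above against NumPy (a sketch that continues from the example and assumes np.clip matches for these values):

>>> # the op's result equals NumPy's clip of the same data to [5, 20]
>>> print(np.array_equal(output.asnumpy(), np.clip(x.asnumpy(), 5, 20)))
True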
tinyms.primitives.clip_by_global_norm(x, clip_norm=1.0, use_norm=None)[source]

Clips tensor values by the ratio of the sum of their norms.

Note

Input ‘x’ should be a tuple or list of tensors. Otherwise, it will raise an error.

Parameters
  • x (Union(tuple[Tensor], list[Tensor])) – Input data to clip. The shape of each Tensor in tuple is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • clip_norm (Union(float, int)) – The clipping ratio, it should be greater than 0. Default: 1.0

  • use_norm (None) – The global norm. Default: None. Currently only None is supported.

Returns

tuple[Tensor], a clipped Tensor. It has the same data type as x and each Tensor in the output tuple is the same as the original input shape.

Supported Platforms:

Ascend GPU

Examples

>>> x1 = np.array([[2., 3.], [1., 2.]]).astype(np.float32)
>>> x2 = np.array([[1., 4.], [3., 1.]]).astype(np.float32)
>>> input_x = (Tensor(x1), Tensor(x2))
>>> out = ops.clip_by_global_norm(input_x, 1.0)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.98142403e-01,  4.47213590e-01],
 [ 1.49071202e-01,  2.98142403e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.49071202e-01,  5.96284807e-01],
 [ 4.47213590e-01,  1.49071202e-01]]))
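As a sketch of the arithmetic behind the example above, assuming the conventional global-norm formula \(out_i = x_i * clip\_norm / \max(global\_norm, clip\_norm)\) with \(global\_norm = \sqrt{\sum_i \|x_i\|^2}\):

>>> # global norm over both tensors is sqrt(45), about 6.7082
>>> g = np.sqrt(np.sum(x1 ** 2) + np.sum(x2 ** 2))
>>> # the first element of the first tensor: 2.0 scaled by 1.0 / global_norm
>>> print(round(float(2.0 / max(g, 1.0)), 6))
0.298142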
tinyms.primitives.count_nonzero(x, axis=(), keep_dims=False, dtype=mindspore.int32)[source]

Counts the number of nonzero elements across the given axis of the input tensor.

Parameters
  • x (Tensor) – Input data is used to count non-zero numbers. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Only constant value is allowed. Default: (), reduce all dimensions.

  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

  • dtype (Union[Number, mindspore.bool_]) – The data type of the output tensor. Only constant value is allowed. Default: mindspore.int32

Returns

Tensor, number of nonzero element. The data type is dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: each value specified.
>>> x = Tensor(np.array([[0, 1, 0], [1, 1, 0]]).astype(np.float32))
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0, 1], keep_dims=True, dtype=mindspore.int32)
>>> print(nonzero_num)
[[3]]
>>> # case 2: all value is default.
>>> nonzero_num = ops.count_nonzero(x=x)
>>> print(nonzero_num)
3
>>> # case 3: axis value was specified 0.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0,])
>>> print(nonzero_num)
[1 2 0]
>>> # case 4: axis value was specified 1.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[1,])
>>> print(nonzero_num)
[1 2]
>>> # case 5: keep_dims value was specified.
>>> nonzero_num = ops.count_nonzero(x=x,  keep_dims=True)
>>> print(nonzero_num)
[[3]]
>>> # case 6: keep_dims and axis value was specified.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0,], keep_dims=True)
>>> print(nonzero_num)
[[1 2 0]]
tinyms.primitives.tensor_dot(x1, x2, axes, prim_name='tensor_dot')[source]

Computation of Tensor contraction on arbitrary axes between tensors a and b.

Contraction allows for the summation of products of elements of a and b on specified axes. The same number of axes must be specified for both x1 and x2, and values must be within range of number of dims of both a and b.

Selected dims in both inputs must also match.

axes = 0 leads to an outer product. axes = 1 leads to normal matrix multiplication when both inputs are 2D. axes = 1 is the same as axes = ((1,),(0,)) where both a and b are 2D. axes = 2 is the same as axes = ((1,2),(0,1)) where both a and b are 3D.

Inputs:
  • x1 (Tensor) - First tensor in tensor_dot with datatype float16 or float32

  • x2 (Tensor) - Second tensor in tensor_dot with datatype float16 or float32

  • axes (Union[int, tuple(int), tuple(tuple(int)), list(list(int))]) - Single value or tuple/list of length 2 with the dimensions specified for a and b each. If a single value N is passed, the last N dims of a and the first N dims of b are automatically picked, in order, as the axes for each respectively.

Outputs:

Tensor, the shape of the output tensor is \((N + M)\), where \(N\) and \(M\) are the free axes not contracted in either input.

Raises
  • TypeError – If x1 or x2 is not a Tensor.

  • TypeError – If axes is not one of the following: int, tuple, list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 1, 2]), mindspore.float32)
>>> output = ops.tensor_dot(input_x1, input_x2, ((0,1),(1,2)))
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]]
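A minimal sketch of the axes = 1 case described above, which reduces to ordinary matrix multiplication for 2-D inputs (the shapes and values are assumptions for illustration):

>>> input_x1 = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> # axes=1 contracts the last dim of input_x1 with the first dim of input_x2
>>> output = ops.tensor_dot(input_x1, input_x2, 1)
>>> print(output)
[[3. 3. 3. 3.]
 [3. 3. 3. 3.]]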
tinyms.primitives.dot(x1, x2, prim_name=None)[source]

Computes the dot product between samples in two tensors.

Inputs:
  • x1 (Tensor) - First tensor in Dot op with datatype float16 or float32 The rank must be greater than or equal to 2.

  • x2 (Tensor) - Second tensor in Dot op with datatype float16 or float32 The rank must be greater than or equal to 2.

Outputs:

Tensor, dot product of x1 and x2.

Raises
  • TypeError – If type of x1 and x2 are not the same.

  • TypeError – If dtype of x1 or x2 is not float16 or float32.

  • ValueError – If rank of x1 or x2 less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input_x1, input_x2)
>>> print(output)
[[[3. 3.]]
 [[3. 3.]]]
>>> print(output.shape)
(2, 1, 2)
>>> input_x1 = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input_x1, input_x2)
>>> print(output)
[[[[3. 3.]]
  [[3. 3.]]]]
>>> print(output.shape)
(1, 2, 1, 2)
>>> input_x1 = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> output = ops.dot(input_x1, input_x2)
>>> print(output)
[[[[3. 3.]
   [3. 3.]]
  [[3. 3.]
   [3. 3.]]]]
>>> print(output.shape)
(1, 2, 2, 2)
>>> input_x1 = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[2, 1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input_x1, input_x2)
>>> print(output)
[[[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]]
>>> print(output.shape)
(3, 2, 2, 1, 2)
tinyms.primitives.batch_dot(x1, x2, axes=None, prim_name=None)[source]

Computation of batch dot product between samples in two tensors containing batch dims.

\[output = x1[batch, :] * x2[batch, :]\]
Inputs:
  • x1 (Tensor) - First tensor in Batch Dot op with datatype float32 and the rank of x1 must be greater than or equal to 2.

  • x2 (Tensor) - Second tensor in Batch Dot op with datatype float32. The datatype of x2 should be same as x1 and the rank of x2 must be greater than or equal to 2.

  • axes (Union[int, tuple(int), list(int)]) - Single value or tuple/list of length 2 with the dimensions specified for x1 and x2 each. If a single value N is passed, the last N dims of x1 and the last N dims of x2 are automatically picked, in order, as the axes for each respectively. Default: None.

Outputs:

Tensor, batch dot product of x1 and x2. For example: the shape of the output for input x1 with shape (batch, d1, axes, d2) and x2 with shape (batch, d3, axes, d4) is (batch, d1, d2, d3, d4), where d1 and d2 mean any number.

Raises
  • TypeError – If type of x1 and x2 are not the same.

  • TypeError – If dtype of x1 or x2 is not float32.

  • ValueError – If rank of x1 or x2 less than 2.

  • ValueError – If batch dim used in axes.

  • ValueError – If len(axes) less than 2.

  • ValueError – If axes is not one of those: None, int, (int, int).

  • ValueError – If axes reversed from negative int is too low for dimensions of input arrays.

  • ValueError – If axes value is too high for dimensions of input arrays.

  • ValueError – If batch size of x1 and x2 are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.ones(shape=[2, 2, 3]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> axes = (-1, -2)
>>> output = ops.batch_dot(x1, x2, axes)
>>> print(output)
[[[3. 3.]
  [3. 3.]]
 [[3. 3.]
  [3. 3.]]]
>>> x1 = Tensor(np.ones(shape=[2, 2]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> axes = (1, 2)
>>> output = ops.batch_dot(x1, x2, axes)
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]]
>>> print(output.shape)
(2, 3)
>>> x1 = Tensor(np.ones(shape=[6, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[6, 5, 4, 8]), mindspore.float32)
>>> output = ops.batch_dot(x1, x2)
>>> print(output.shape)
(6, 2, 3, 5, 8)
>>> x1 = Tensor(np.ones(shape=[2, 2, 4]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 5, 4, 5]), mindspore.float32)
>>> output = ops.batch_dot(x1, x2)
>>> print(output.shape)
(2, 2, 5, 5)
tinyms.primitives.repeat_elements(x, rep, axis=0)[source]

Repeat elements of a tensor along an axis, like np.repeat.

Parameters
  • x (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • rep (int) – The number of times to repeat, must be positive, required.

  • axis (int) – The axis along which to repeat, default 0.

Outputs:

One tensor with values repeated along the specified axis. If x has shape (s1, s2, …, sn) and axis is i, the output will have shape (s1, s2, …, si * rep, …, sn). The output type will be the same as the type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : repeat on axis 0
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
>>> # case 2 : repeat on axis 1
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 1)
>>> print(output)
[[0 0 1 1 2 2]
 [3 3 4 4 5 5]]
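As a quick cross-check of the “like np.repeat” claim above (a sketch; continues from case 2):

>>> # the op's result matches NumPy's repeat along the same axis
>>> print(np.array_equal(output.asnumpy(), np.repeat(np.array([[0, 1, 2], [3, 4, 5]]), 2, axis=1)))
True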
tinyms.primitives.sequence_mask(lengths, maxlen=None, prim_name='sequence_mask')[source]

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape [d_1, d_2, …, d_n], then the resulting tensor mask has type bool and shape [d_1, d_2, …, d_n, maxlen], with mask[i_1, i_2, …, i_n, j] = (j < lengths[i_1, i_2, …, i_n]).

Inputs:
  • lengths (Tensor) - Tensor to calculate the mask for. All values in this tensor should be less than or equal to maxlen. Values greater than maxlen will be treated as maxlen. Must be type int32 or int64.

  • maxlen (int) - size of the last dimension of returned tensor. Must be positive and same type as elements in lengths. Default is None.

Outputs:

One mask tensor of shape lengths.shape + (maxlen,).

Raises
  • TypeError – If lengths is not a Tensor.

  • TypeError – If maxlen is not an int.

  • TypeError – If dtype of lengths is neither int32 nor int64.

Supported Platforms:

GPU

Examples

>>> # case 1: When maxlen is assigned
>>> x = Tensor(np.array([1, 2, 3, 4]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[ True False False False False]
 [ True  True False False False]
 [ True  True  True False False]
 [ True  True  True  True False]]
>>> # case 2: When there is 0 in x
>>> x = Tensor(np.array([[1, 3], [2, 0]]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[[ True False False False False]
  [ True  True  True False False]]
 [[ True  True False False False]
  [False False False False False]]]
>>> # case 3: when the maxlen is not assigned
>>> x = Tensor(np.array([[1, 3], [2, 4]]))
>>> output = ops.sequence_mask(x)
>>> print(output)
[[[ True False False False]
  [ True  True  True False]]
 [[ True  True False False]
  [ True  True  True  True]]]
tinyms.primitives.matmul(x1, x2, dtype=None, prim_name=None)[source]

Returns the matrix product of two arrays.

Note

Numpy arguments out, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16 and np.float32. On CPU, the supported dtypes are np.float16 and np.float32.

Parameters
  • x1 (Tensor) – Input tensor, scalar not allowed. The last dimension of x1 must be the same size as the second last dimension of x2. And the shape of x1 and x2 could be broadcast.

  • x2 (Tensor) – Input tensor, scalar not allowed. The last dimension of x1 must be the same size as the second last dimension of x2. And the shape of x1 and x2 could be broadcast.

  • dtype (mindspore.dtype, optional) – defaults to None. Overrides the dtype of the output Tensor.

Returns

Tensor or scalar, the matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors.

Raises
  • ValueError – If the last dimension of x1 is not the same size as the second-to-last dimension of x2, or if a scalar value is passed in.

  • ValueError – If the shapes of x1 and x2 cannot be broadcast together.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : Reasonable application of broadcast mechanism
>>> x1 = Tensor(np.arange(2*3*4).reshape(2, 3, 4), mindspore.float32)
>>> x2 = Tensor(np.arange(4*5).reshape(4, 5), mindspore.float32)
>>> output = ops.matmul(x1, x2)
>>> print(output)
[[[  70.   76.   82.   88.   94.]
  [ 190.  212.  234.  256.  278.]
  [ 310.  348.  386.  424.  462.]]
 [[ 430.  484.  538.  592.  646.]
  [ 550.  620.  690.  760.  830.]
  [ 670.  756.  842.  928. 1014.]]]
>>> print(output.shape)
(2, 3, 5)
>>> # case 2 : the rank of `x2` is 1
>>> x1 = Tensor(np.ones([1, 2]), mindspore.float32)
>>> x2 = Tensor(np.ones([2,]), mindspore.float32)
>>> output = ops.matmul(x1, x2)
>>> print(output)
[2.]
>>> print(output.shape)
(1,)
class tinyms.primitives.ACos(*args, **kwargs)[source]

Computes arccosine of input tensors element-wise.

\[out_i = cos^{-1}(x_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is

    \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> acos = ops.ACos()
>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = acos(x)
>>> print(output)
[0.7377037 1.5307858 1.2661037 0.97641146]
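As a quick cross-check of the element-wise definition \(out_i = cos^{-1}(x_i)\) against NumPy (a sketch; continues from the example above):

>>> # the op's result agrees with NumPy's arccos within floating-point tolerance
>>> print(np.allclose(output.asnumpy(), np.arccos(np.array([0.74, 0.04, 0.30, 0.56], np.float32))))
True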
class tinyms.primitives.Abs(*args, **kwargs)[source]

Returns absolute value of a tensor element-wise.

\[out_i = |x_i|\]
Inputs:
  • x (Tensor) - The input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> abs = ops.Abs()
>>> output = abs(x)
>>> print(output)
[1. 1. 0.]
class tinyms.primitives.AccumulateNV2(*args, **kwargs)[source]

Computes accumulation of all input tensors element-wise.

AccumulateNV2 is similar to AddN, but there is a significant difference among them: AccumulateNV2 will not wait for all of its inputs to be ready before summing. That is to say, AccumulateNV2 is able to save memory when inputs are ready at different time since the minimum temporary storage is proportional to the output size rather than the input size.

Inputs:
  • x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors whose dtype is number to be added together.

Outputs:

Tensor, has the same shape and dtype as each entry of the x.

Raises

TypeError – If x is neither tuple nor list.

Supported Platforms:

Ascend

Examples

>>> class NetAccumulateNV2(nn.Cell):
...     def __init__(self):
...         super(NetAccumulateNV2, self).__init__()
...         self.accumulateNV2 = ops.AccumulateNV2()
...
...     def construct(self, *z):
...         return self.accumulateNV2(z)
...
>>> net = NetAccumulateNV2()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(x, y, x, y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.Acosh(*args, **kwargs)[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]

Warning

Given an input tensor x, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf].

Inputs:
  • x (Tensor) - The data type should be one of the following types: float16, float32. The shape is \((N,*)\) where \(*\) means any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape and type as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, dtype
>>> acosh = ops.Acosh()
>>> x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), dtype.float32)
>>> output = acosh(x)
>>> print(output)
[0. 0.9624236 1.7627472 5.298292]
class tinyms.primitives.Adam(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

For more details, please refer to nn.Adam.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t(\beta_1^{t})\) and \(beta_2^t(\beta_2^{t})\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type can be float16 or float32.

  • m (Tensor) - The 1st moment vector in the updating formula, the shape and data type value should be the same as var.

  • v (Tensor) - the 2nd moment vector in the updating formula, the shape and data type value should be the same as var. Mean square gradients with the same type as var.

  • beta1_power (float) - \(beta_1^t(\beta_1^{t})\) in the updating formula, the data type value should be the same as var.

  • beta2_power (float) - \(beta_2^t(\beta_2^{t})\) in the updating formula, the data type value should be the same as var.

  • lr (float) - \(l\) in the updating formula. The paper suggested value for the learning rate is \(0.001\), the data type value should be the same as var.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations, the data type value should be the same as var. The paper suggested value is \(0.9\)

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations, the data type value should be the same as var. The paper suggested value is \(0.999\)

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • gradient (Tensor) - Gradient, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as Inputs var.

  • m (Tensor) - The same shape and data type as Inputs m.

  • v (Tensor) - The same shape and data type as Inputs v.

Raises
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If var, m or v is not a Tensor.

  • TypeError – If beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adam = ops.Adam()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                               epsilon, grad)
...         return out
...
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, gradient)
>>> print(net.var.asnumpy())
[[0.9996838 0.9996838]
 [0.9996838 0.9996838]]
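As a plain-NumPy sketch of the update formula applied to the example above (all state initialized to 1.0 and gradient 1.0, as in the Net defined above; an illustration, not captured operator output):

>>> m = 0.9 * 1.0 + (1 - 0.9) * 1.0                       # new 1st moment, still 1.0
>>> v = 0.999 * 1.0 + (1 - 0.999) * 1.0                   # new 2nd moment, still 1.0
>>> l = 0.001 * np.sqrt(1 - 0.999) / (1 - 0.9)            # scaling factor from the formula
>>> print(np.float32(1.0 - l * m / (np.sqrt(v) + 1e-8)))  # matches the printed var above
0.9996838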
class tinyms.primitives.AdamNoUpdateParam(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. This operator does not update the parameter, but calculates the value that should be added to the parameter instead.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ \Delta{w} = - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t(\beta_1^{t})\) and \(beta_2^t(\beta_2^{t})\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents the parameter to be updated, \(\epsilon\) represents epsilon.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • m (Tensor) - The 1st moment vector in the updating formula. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type must be float32.

  • v (Tensor) - the 2nd moment vector in the updating formula. The shape must be the same as m. The data type must be float32.

  • beta1_power (Tensor) - \(beta_1^t(\beta_1^{t})\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • beta2_power (Tensor) - \(beta_2^t(\beta_2^{t})\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • lr (Tensor) - \(l\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations. The shape is \((1, )\) and the data type must be float32.

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations. The shape is \((1, )\) and the data type must be float32.

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability. The shape is \((1, )\) and the data type must be float32.

  • gradient (Tensor) - Gradient, the shape must be the same as m, the data type must be float32.

Outputs:

Tensor, whose shape and data type are the same as the input gradient; it is the value that should be added to the parameter to be updated.

Raises
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam = ops.AdamNoUpdateParam()
...         self.m = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="m")
...         self.v = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.adam(self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad)
...         return out
>>> net = Net()
>>> beta1_power = Tensor(0.9, ms.float32)
>>> beta2_power = Tensor(0.999, ms.float32)
>>> lr = Tensor(0.001, ms.float32)
>>> beta1 = Tensor(0.9, ms.float32)
>>> beta2 = Tensor(0.999, ms.float32)
>>> epsilon = Tensor(1e-8, ms.float32)
>>> gradient = Tensor(np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]).astype(np.float32))
>>> result = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient)
>>> print(result)
[[-0.00010004 -0.00010004 -0.00010004]
 [-0.00013441 -0.00013441 -0.00013441]]
class tinyms.primitives.AdamWeightDecay(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (AdamWeightDecay) algorithm with weight decay.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. The AdamWeightDecay variant was proposed in Decoupled Weight Decay Regularization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ update = \frac{m}{\sqrt{v} + eps} \\ update = \begin{cases} update + weight\_decay * w & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w = w - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(\beta_1, \beta_2\) represent beta1 and beta2, \(lr\) represents learning_rate, \(w\) represents var, \(decay\) represents weight_decay, \(\epsilon\) represents epsilon.

Parameters

use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type can be float16 or float32.

  • m (Tensor) - The 1st moment vector in the updating formula, the shape and data type value should be the same as var.

  • v (Tensor) - the 2nd moment vector in the updating formula, the shape and data type value should be the same as var. Mean square gradients with the same type as var.

  • lr (float) - \(lr\) in the updating formula. The paper suggested value for the learning rate is \(0.001\), the data type value should be the same as var.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations, the data type value should be the same as var. The paper suggested value is \(0.9\)

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations, the data type value should be the same as var. The paper suggested value is \(0.999\)

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • decay (float) - The weight decay value, must be a scalar tensor with float data type. Default: 0.0.

  • gradient (Tensor) - Gradient, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam_weight_decay = ops.AdamWeightDecay()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, lr, beta1, beta2, epsilon, decay, grad):
...         out = self.adam_weight_decay(self.var, self.m, self.v, lr, beta1, beta2,
...                               epsilon, decay, grad)
...         return out
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.001, 0.9, 0.999, 1e-8, 0.0, gradient)
>>> print(net.var.asnumpy())
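The operator's output is not reproduced here. As a plain-NumPy sketch of what the formula above predicts for these inputs (decay is 0.0, so there is no weight-decay term; an illustration, not captured operator output):

>>> update = 1.0 / (np.sqrt(1.0) + 1e-8)        # m / (sqrt(v) + eps) with m = v = 1
>>> print(np.float32(1.0 - 0.001 * update))     # expected value of each entry of var after one step
0.999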
class tinyms.primitives.AdaptiveAvgPool2D(*args, **kwargs)[source]

AdaptiveAvgPool2D operation.

This operator applies a 2D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes.

Parameters

output_size (Union[int, tuple]) – The target output size is H x W. output_size can be a tuple (H, W), or a single int H for an H x H output. H and W can be int or None; None means the output size will be the same as that of the input along that dimension.

Inputs:
  • input_x (Tensor) - The input of AdaptiveAvgPool2D, which is a 3D or 4D tensor, with float16, float32, float64 data type.

Outputs:

Tensor, with the same type as the input_x.

Shape of the output is input_x_shape[:len(input_x_shape) - len(out_shape)] + out_shape.

If output_size contains None:

  • out_shape = (input_x_shape[-2], w): If output_size is (None, w)

  • out_shape = (h, input_x_shape[-1]): If output_size is (h, None)

  • out_shape = input_x_shape[-2:]: If output_size is (None, None)

If output_size does not contain None:

  • out_shape = (h, h): If output_size is h

  • out_shape = (h, w): If output_size is (h, w)

Raises
  • ValueError – If output_size is a tuple and its length is not 2.

  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not float16, float32 or float64.

  • ValueError – If the dimension of input_x is less than or equal to the dimension of output_size.

Supported Platforms:

GPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D((None, 2))
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]]
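>>> # With output_size (None, 2), each of the 3 input rows is kept and the 3 columns are
>>> # averaged over 2 adaptive windows: (1+2)/2 = 1.5 and (2+3)/2 = 2.5, and so on.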
>>> # case 2: output_size=2
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D(2)
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D((1, 2))
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[4.5 5.5]]
 [[4.5 5.5]]
 [[4.5 5.5]]]
class tinyms.primitives.Add(*args, **kwargs)[source]

Adds two input tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

\[out_{i} = x_{i} + y_{i}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: x and y are both Tensor.
>>> add = ops.Add()
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(x, y)
>>> print(output)
[5. 7. 9.]
>>> # case 2: x is a scalar and y is a Tensor
>>> add = ops.Add()
>>> x = Tensor(1, mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(x, y)
>>> print(output)
[5. 6. 7.]
>>> # the data type of x is int32, the data type of y is float32,
>>> # and the output is the data type of higher precision, float32.
>>> print(output.dtype)
Float32
class tinyms.primitives.AddN(*args, **kwargs)[source]

Computes addition of all input tensors element-wise.

All input tensors must have the same shape.

Inputs:
  • x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors whose dtype is number or bool to be added together.

Outputs:

Tensor, has the same shape and dtype as each entry of x.

Raises

TypeError – If x is neither tuple nor list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class NetAddN(nn.Cell):
...     def __init__(self):
...         super(NetAddN, self).__init__()
...         self.addN = ops.AddN()
...
...     def construct(self, *z):
...         return self.addN(z)
...
>>> net = NetAddN()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(x, y, x, y)
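>>> # The four inputs are summed element-wise: 1+4+1+4 = 10, 2+5+2+5 = 14, 3+6+3+6 = 18.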
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.AllGather(*args, **kwargs)[source]

Gathers tensors from the specified communication group.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters

group (str) – The communication group to work on. Default: “hccl_world_group”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the number of devices in the group is N, the input tensors are concatenated along the first dimension, so the shape of the output is \((N*x_1, x_2, ..., x_R)\).

Raises
  • TypeError – If group is not a str.

  • ValueError – If the local rank id of the calling process in the group is larger than the group’s rank size.

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with two devices. Refer to the tutorial > Distributed Training on mindspore.cn
>>> import numpy as np
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import Tensor, context
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allgather = ops.AllGather()
...
...     def construct(self, x):
...         return self.allgather(x)
...
>>> input_x = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_x)
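>>> # With 2 devices, the two (2, 8) all-ones tensors are concatenated along axis 0 into a (4, 8) tensor.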
>>> print(output)
[[1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]]
class tinyms.primitives.AllReduce(*args, **kwargs)[source]

Reduces the tensor data across all devices in such a way that all devices will get the same final result.

Note

The operation of AllReduce does not support “prod” currently. The tensors must have the same shape and format in all processes of the collection.

Parameters
  • op (str) – Specifies an operation used for element-wise reductions, like sum, max, and min. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape of the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.

Raises
  • TypeError – If either op or group is not a str, fusion is not an integer, or the input’s dtype is bool.

  • ValueError – If the op is “prod”.

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with two devices. Refer to the tutorial > Distributed Training on mindspore.cn
>>> import numpy as np
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
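>>> # With 2 devices, the element-wise sum of the two all-ones (2, 8) tensors is all 2s.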
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.AllSwap(*args, **kwargs)[source]

AllSwap is a collective operation.

AllSwap sends data from all processes to all processes in the specified group. It has two phases:

  • The scatter phase: On each process, the operand is split into blocks along the 0-th axis according to the send size, and the blocks are scattered to all processes, e.g., the ith block is sent to the ith process.

  • The gather phase: Each process concatenates the received blocks along the 0-th axis.

Note

The tensors must have the same format in all processes of the collection.

Parameters

group (str) – The communication group name.

Inputs:

  • tensor_in (Tensor) - A 2-D tensor. On each process, it is divided into blocks according to the send size.

  • send_size (Tensor) - A 1-D int64 tensor. Each element is the send data size for the corresponding process.

  • recv_size (Tensor) - A 1-D int64 tensor. Each element is the receive data size for the corresponding process.

Returns

The result tensor.

Return type

tensor_out (tensor)

Raises

TypeError – If group is not a string.
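
Examples

The sketch below only illustrates the call order described above (tensor_in, send_size, recv_size) and borrows the distributed setup of the other collective examples; the default communication group and the concrete sizes are assumptions, and the sample is not a verified run.

>>> # This example should be run with two devices. Refer to the tutorial > Distributed Training on mindspore.cn
>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, context
>>> from mindspore.communication import init
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.all_swap = ops.AllSwap()
...
...     def construct(self, tensor_in, send_size, recv_size):
...         return self.all_swap(tensor_in, send_size, recv_size)
...
>>> # Each process holds a (4, 8) float32 tensor and exchanges half of it (2 rows, assumed to be
>>> # 16 elements) with the other process; send_size/recv_size list these sizes per process.
>>> tensor_in = Tensor(np.ones([4, 8]).astype(np.float32))
>>> send_size = Tensor(np.array([16, 16]).astype(np.int64))
>>> recv_size = Tensor(np.array([16, 16]).astype(np.int64))
>>> net = Net()
>>> output = net(tensor_in, send_size, recv_size)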

class tinyms.primitives.AngleAtomEnergy(*args, **kwargs)[source]

Add the potential energy caused by angle terms to the total potential energy of each atom. Assume the number of angles is m and the number of atoms is n.

The calculation formula is the same as operator AngleEnergy().

Because there are a large number of inputs and they are interrelated, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Parameters

angle_numbers (int32) – the number of angles m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd and the central atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • angle_k (Tensor) - The force constant for each angle. The data type is float32 and the shape is \((m,)\).

  • angle_theta0 (Tensor) - The equilibrium position value for each angle. The data type is float32 and the shape is \((m,)\).

Outputs:
  • ene (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.AngleEnergy(*args, **kwargs)[source]

Calculate the energy caused by 3-atoms angle term. Assume the number of angles is m and the number of atoms is n.

Because there are a large number of inputs and they are interrelated, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

\[dr_{ab} = (x_b-x_a, y_b-y_a, z_b-z_a)\]
\[dr_{cb} = (x_b-x_c, y_b-y_c, z_b-z_c)\]
\[theta = arccos(inner_product(dr_{ab}, dr_{cb})/|dr_{ab}|/|dr_{cb}|)\]
\[E = k*(theta - theta_0)^2\]
Parameters

angle_numbers (int32) – the number of angles m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd and the central atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • angle_k (Tensor) - The force constant for each angle. The data type is float32 and the shape is \((m,)\).

  • angle_theta0 (Tensor) - The equilibrium position value for each angle. The data type is float32 and the shape is \((m,)\).

Outputs:
  • ene (Tensor) - The potential energy for each angle term. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.AngleForce(*args, **kwargs)[source]

Calculate the force exerted by angles made of 3 atoms on the corresponding atoms. Assume the number of angles is m and the number of atoms is n.

Because there are a large number of inputs and they are interrelated, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

\[dr_{ab} = (x_b-x_a, y_b-y_a, z_b-z_a)\]
\[dr_{cb} = (x_b-x_c, y_b-y_c, z_b-z_c)\]
\[theta = arccos(inner_product(dr_{ab}, dr_{cb})/|dr_{ab}|/|dr_{cb}|)\]
\[F_a = -2*k*(theta-theta_0)/sin(theta)*[cos(theta)/|dr_{ab}|^2*dr_{ab} - 1/|dr_{ab}|/|dr_{cb}|*dr_{cb}]\]
\[F_c = -2*k*(theta-theta_0)/sin(theta)*[cos(theta)/|dr_{cb}|^2*dr_{cb} - 1/|dr_{cb}|/|dr_{ab}|*dr_{ab}]\]
\[F_b = -F_a - F_c\]
Parameters

angle_numbers (int32) – the number of angles m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd and the central atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • angle_k (Tensor) - The force constant for each angle. The data type is float32 and the shape is \((m,)\).

  • angle_theta0 (Tensor) - The equilibrium position value for each angle. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.AngleForceWithAtomEnergy(*args, **kwargs)[source]

Calculate angle force and potential energy together. Assume the number of angles is m and the number of atoms is n.

The calculation formula is the same as operator AngleForce() and AngleEnergy().

Because there are a large number of inputs and they are interrelated, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Parameters

angle_numbers (int32) – the number of angles m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd and the central atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each angle. The data type is int32 and the shape is \((m,)\).

  • angle_k (Tensor) - The force constant for each angle. The data type is float32 and the shape is \((m,)\).

  • angle_theta0 (Tensor) - The equilibrium position value for each angle. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - Same as operator AngleForce(). The data type is float32 and the shape is \((n, 3)\).

  • ene (Tensor) - Same as operator AngleAtomEnergy(). The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.ApplyAdaMax(*args, **kwargs)[source]

Updates relevant entries according to the adamax scheme.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \max(\beta_2 * v_{t}, \left| g \right|) \\ var = var - \frac{l}{1 - \beta_1^{t+1}} * \frac{m_{t+1}}{v_{t+1} + \epsilon} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(v\) represents the 2nd moment vector, \(v_{t}\) is the last moment of \(v_{t+1}\), \(l\) represents scaling factor lr, \(g\) represents grad, \(\beta_1, \beta_2\) represent beta1 and beta2, \(\beta_1^{t+1}\) represents beta1_power, \(var\) represents the variable to be updated, \(\epsilon\) represents epsilon.

Inputs of var, m, v and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and type as var. With float32 or float16 data type.

  • v (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients with the same shape and type as var. With float32 or float16 data type.

  • beta1_power (Union[Number, Tensor]) - \(\beta_1^{t+1}\) in the updating formula, must be scalar. With float32 or float16 data type.

  • lr (Union[Number, Tensor]) - Learning rate, \(l\) in the updating formula, must be scalar. With float32 or float16 data type.

  • beta1 (Union[Number, Tensor]) - The exponential decay rate for the 1st moment estimations, must be scalar. With float32 or float16 data type.

  • beta2 (Union[Number, Tensor]) - The exponential decay rate for the 2nd moment estimations, must be scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient, has the same shape and type as var. With float32 or float16 data type.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Raises
  • TypeError – If dtype of var, m, v, beta1_power, lr, beta1, beta2, epsilon or grad is neither float16 nor float32.

  • TypeError – If beta1_power, lr, beta1, beta2 or epsilon is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_ada_max = ops.ApplyAdaMax()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.array([[0.9, 0.1],
...                                             [0.7, 0.8]]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_ada_max(self.var, self.m, self.v, beta1_power, lr, beta1, beta2, epsilon, grad)
...         return out
...
>>> net = Net()
>>> beta1_power =Tensor(0.9, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.99, mindspore.float32)
>>> epsilon = Tensor(1e-10, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(beta1_power, lr, beta1, beta2, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.93602717e-01,  3.92571449e-01],
 [ 9.72582996e-02,  4.92249995e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.69999993e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000005e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 8.90999973e-01,  6.99999988e-01],
 [ 6.93000019e-01,  8.00000012e-01]]))
class tinyms.primitives.ApplyAdadelta(*args, **kwargs)[source]

Updates relevant entries according to the adadelta scheme.

\[\begin{split}\begin{array}{ll} \\ accum = \rho * accum + (1 - \rho) * grad^2 \\ \text{update} = \sqrt{\text{accum_update} + \epsilon} * \frac{grad}{\sqrt{accum + \epsilon}} \\ \text{accum_update} = \rho * \text{accum_update} + (1 - \rho) * update^2 \\ var -= lr * update \end{array}\end{split}\]

where \(\rho\) represents rho, \(\epsilon\) represents epsilon.

Inputs of var, accum, accum_update and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Weights to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated, has the same shape and data type as var.

  • accum_update (Parameter) - Accum_update to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - Learning rate, must be scalar. With float32 or float16 data type.

  • rho (Union[Number, Tensor]) - Decay rate, must be scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be scalar. With float32 or float16 data type.

  • grad (Tensor) - Gradients, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

  • accum_update (Tensor) - The same shape and data type as accum_update.

Raises
  • TypeError – If dtype of var, accum, accum_update, lr, rho, epsilon or grad is neither float16 nor float32.

  • TypeError – If accum_update, lr, rho or epsilon is neither a Number nor a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adadelta = ops.ApplyAdadelta()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.accum_update = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                        [0.7, 0.8]]).astype(np.float32)),
...                                                             name="accum_update")
...     def construct(self, lr, rho, epsilon, grad):
...         out = self.apply_adadelta(self.var, self.accum, self.accum_update, lr, rho, epsilon, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> rho = Tensor(0.0, mindspore.float32)
>>> epsilon = Tensor(1e-6, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, rho, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99051356e-01,  3.99683774e-01],
 [ 9.91633832e-02,  4.99105573e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.00000036e-02,  4.89999980e-01],
 [ 1.00000007e-02,  6.40000045e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 8.99990976e-01,  1.00000791e-01],
 [ 6.99930906e-01,  7.99999654e-01]]))
class tinyms.primitives.ApplyAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum}} \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient. The shape and data type must be the same as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises
  • TypeError – If dtype of var, accum, lr or grad is neither float16 nor float32.

  • TypeError – If lr is neither a Number nor a Tensor.

Supported Platforms:

Ascend CPU GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad = ops.ApplyAdagrad()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad(self.var, self.accum, lr, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99638879e-01,  3.99296492e-01],
 [ 9.97817814e-02,  4.99281585e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
class tinyms.primitives.ApplyAdagradDA(*args, **kwargs)[source]

Update var according to the proximal adagrad scheme.

\[\begin{split}\begin{array}{ll} \\ grad\_accum += grad \\ grad\_squared\_accum += grad * grad \\ tmp\_val = \begin{cases} sign(grad\_accum) * \max\left\{|grad\_accum| - l1 * global\_step, 0\right\} & \text{ if } l1 > 0 \\ grad\_accum & \text{ otherwise } \end{cases} \\ x\_value = -1 * lr * tmp\_val \\ y\_value = l2 * global\_step * lr + \sqrt{grad\_squared\_accum} \\ var = x\_value / y\_value \end{array}\end{split}\]

Inputs of var, gradient_accumulator, gradient_squared_accumulator and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – If True, updating of the var and accum tensors will be protected by a lock. Otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • gradient_accumulator (Parameter) - The gradient accumulator to be updated. Must have the same shape and dtype as var.

  • gradient_squared_accumulator (Parameter) - The accumulator of squared gradients to be updated. Must have the same shape and dtype as var.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • lr ([Number, Tensor]) - Scaling factor. Must be a scalar. With float32 or float16 data type.

  • l1 ([Number, Tensor]) - L1 regularization. Must be a scalar. With float32 or float16 data type.

  • l2 ([Number, Tensor]) - L2 regularization. Must be a scalar. With float32 or float16 data type.

  • global_step ([Number, Tensor]) - Training step number. Must be a scalar. With int32 or int64 data type.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • gradient_accumulator (Tensor) - The same shape and data type as gradient_accumulator.

  • gradient_squared_accumulator (Tensor) - The same shape and data type as gradient_squared_accumulator.

Raises
  • TypeError – If var, gradient_accumulator or gradient_squared_accumulator is not a Parameter.

  • TypeError – If grad is not a Tensor.

  • TypeError – If lr, l1, l2 or global_step is neither a Number nor a Tensor.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, gradient_accumulator, gradient_squared_accumulator, lr, l1 or l2 is neither float16 nor float32.

  • TypeError – If dtype of gradient_accumulator or gradient_squared_accumulator is not the same as var.

  • TypeError – If dtype of global_step is not int32 or int64.

  • ValueError – If the shape size of lr, l1, l2 and global_step is not 0.

Supported Platforms:

Ascend

Examples

>>> class ApplyAdagradDANet(nn.Cell):
...     def __init__(self, use_locking=False):
...         super(ApplyAdagradDANet, self).__init__()
...         self.apply_adagrad_d_a = ops.ApplyAdagradDA(use_locking)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4], [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.gradient_accumulator = Parameter(Tensor(np.array([[0.1, 0.3],
...                                                                [0.1, 0.5]]).astype(np.float32)),
...                                               name="gradient_accumulator")
...         self.gradient_squared_accumulator = Parameter(Tensor(np.array([[0.2, 0.1],
...                                                                        [0.1, 0.2]]).astype(np.float32)),
...                                                       name="gradient_squared_accumulator")
...     def construct(self, grad, lr, l1, l2, global_step):
...         out = self.apply_adagrad_d_a(self.var, self.gradient_accumulator,
...                                      self.gradient_squared_accumulator, grad, lr, l1, l2, global_step)
...         return out
...
>>> net = ApplyAdagradDANet()
>>> grad = Tensor(np.array([[0.3, 0.4], [0.1, 0.2]]).astype(np.float32))
>>> lr = Tensor(0.001, mindspore.float32)
>>> l1 = Tensor(0.001, mindspore.float32)
>>> l2 = Tensor(0.001, mindspore.float32)
>>> global_step = Tensor(2, mindspore.int32)
>>> output = net(grad, lr, l1, l2, global_step)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[-7.39064650e-04, -1.36888528e-03],
 [-5.96988888e-04, -1.42478070e-03]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.00000006e-01,  7.00000048e-01],
 [ 2.00000003e-01,  6.99999988e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.90000021e-01,  2.60000020e-01],
 [ 1.09999999e-01,  2.40000010e-01]]))
class tinyms.primitives.ApplyAdagradV2(*args, **kwargs)[source]

Updates relevant entries according to the adagradv2 scheme.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon} \end{array}\end{split}\]

where \(\epsilon\) represents epsilon.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Note

The difference from ApplyAdagrad is that ApplyAdagradV2 adds a small constant value epsilon to the denominator.

Parameters
  • epsilon (float) – A small value added for numerical stability.

  • update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float16 or float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. The shape and data type must be the same as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises
  • TypeError – If dtype of var, accum, lr or grad is neither float16 nor float32.

  • TypeError – If lr is neither a Number nor a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad_v2 = ops.ApplyAdagradV2(epsilon=1e-6)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad_v2(self.var, self.accum, lr, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99638879e-01,  3.99296492e-01],
 [ 9.97817814e-02,  4.99281585e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
class tinyms.primitives.ApplyAddSign(*args, **kwargs)[source]

Updates relevant entries according to the AddSign algorithm.

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ \text{update} = (\alpha + \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t+1} * \text{update} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(lr\) represents scaling factor lr, \(g\) represents grad, \(\alpha\) represents alpha, \(\beta\) represents beta.

Inputs of var, m and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float32 or float16 data type.

  • alpha (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • sign_decay (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor of the same shape and data type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Raises
  • TypeError – If dtype of var, lr, alpha, sign_decay or beta is neither float16 nor float32.

  • TypeError – If lr, alpha or sign_decay is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_add_sign = ops.ApplyAddSign()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.alpha = 1.0
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_add_sign(self.var, self.m, self.lr, self.alpha, self.sign_decay, self.beta, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99403024e-01,  3.98607016e-01],
 [ 9.98010039e-02,  4.98407990e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.70000052e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000064e-01]]))
class tinyms.primitives.ApplyCenteredRMSProp(*args, **kwargs)[source]

Optimizer that implements the centered RMSProp algorithm. Please refer to the usage in source code of nn.RMSProp.

The updating formulas of ApplyCenteredRMSProp algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ g_{t+1} = \rho g_{t} + (1 - \rho)\nabla Q_{i}(w) \\ s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2 \\ m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} - g_{t+1}^2 + \epsilon}} \nabla Q_{i}(w) \\ w = w - m_{t+1} \end{array}\end{split}\]

where \(w\) represents var, which will be updated. \(g_{t+1}\) represents mean_gradient, \(g_{t}\) is the last moment of \(g_{t+1}\). \(s_{t+1}\) represents mean_square, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) represents moment, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Note

The difference between ApplyCenteredRMSProp and ApplyRMSProp is that the former uses the centered RMSProp algorithm, and the centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Warning

In dense implementation of this algorithm, mean_gradient, mean_square, and moment will update even if the grad is zero. But in this sparse implementation, mean_gradient, mean_square, and moment will not update in iterations during which the grad is zero.

Parameters

use_locking (bool) – Whether to enable a lock to protect the variable and accumlation tensors from being updated. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • mean_gradient (Tensor) - Mean gradients, must have the same type as var.

  • mean_square (Tensor) - Mean square gradients, must have the same type as var.

  • moment (Tensor) - Delta of var, must have the same type as var.

  • grad (Tensor) - Gradient, must have the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • decay (float) - Decay rate.

  • momentum (float) - Momentum.

  • epsilon (float) - Ridge term.

Outputs:

Tensor, parameters to be updated.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_gradient, mean_square, moment or grad is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • TypeError – If decay, momentum or epsilon is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_centerd_rms_prop = ops.ApplyCenteredRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_grad, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_centerd_rms_prop(self.var, mean_grad, mean_square, moment, grad,
...                                           lr, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_grad, mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
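>>> # Here decay=0.0, momentum=1e-10, epsilon=0.001 and lr=0.01. By the formula above,
>>> # moment = 1e-10 + 0.01 / sqrt(1 - 1 + 0.001) ≈ 0.3162, so each entry of var ≈ 1 - 0.3162.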
>>> print(net.var.asnumpy())
[[0.68377227  0.68377227]
 [0.68377227  0.68377227]]
class tinyms.primitives.ApplyFtrl(*args, **kwargs)[source]

Updates relevant entries according to the FTRL scheme.

For more details, please refer to nn.FTRL.

Parameters

use_locking (bool) – Use locks for updating operation if true. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same shape and data type as var.

  • linear (Parameter) - The linear coefficient to be updated, must be same shape and data type as var.

  • grad (Tensor) - Gradient. The data type must be float16 or float32.

  • lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • lr_power (Union[Number, Tensor]) - Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero. Default: -0.5. It must be a float number or a scalar tensor with float16 or float32 data type.

Outputs:
  • var (Tensor) - Represents the updated var. As the input parameters have been updated in-place, this value is always zero when the platform is GPU.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, grad, lr, l1, l2 or lr_power is neither float16 nor float32.

  • TypeError – If lr, l1, l2 or lr_power is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                  [0.7, 0.8]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(input_x)
>>> print(net.var.asnumpy())
[[ 0.0390525  0.11492836]
 [ 0.00066425 0.15075898]]
class tinyms.primitives.ApplyGradientDescent(*args, **kwargs)[source]

Updates relevant entries according to the following.

\[var = var - \alpha * \delta\]

where \(\alpha\) represents alpha, \(\delta\) represents delta.

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change, has the same shape and data type as var.

Outputs:

Tensor, represents the updated var.

Raises
  • TypeError – If dtype of var or alpha is neither float16 nor float32.

  • TypeError – If delta is not a Tensor.

  • TypeError – If alpha is neither a Number nor a Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_gradient_descent = ops.ApplyGradientDescent()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.alpha = 0.001
...     def construct(self, delta):
...         out = self.apply_gradient_descent(self.var, self.alpha, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.array([[0.1, 0.1], [0.1, 0.1]]).astype(np.float32))
>>> output = net(delta)
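>>> # By the formula above, each entry becomes 1.0 - 0.001 * 0.1 = 0.9999.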
>>> print(output)
[[0.9999 0.9999]
 [0.9999 0.9999]]
class tinyms.primitives.ApplyMomentum(*args, **kwargs)[source]

Optimizer that implements the Momentum algorithm.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. Data type conversion of Parameter is not supported. RuntimeError exception will be thrown.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

  • gradient_scale (float) – The scale of the gradient. Default: 1.0.

Inputs:
  • variable (Parameter) - Weights to be updated. Data type must be float.

  • accumulation (Parameter) - Accumulated gradient value by moment weight. Has the same data type with variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float number or a scalar tensor with float data type.

Outputs:

Tensor, parameters to be updated.

Raises

TypeError – If the use_locking or use_nesterov is not a bool or gradient_scale is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

Please refer to the usage in mindspore.nn.Momentum.
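
A minimal sketch of calling the operator directly is shown below; it only illustrates the input order listed above (variable, accumulation, learning_rate, gradient, momentum) with assumed all-ones/zeros values and default construction arguments, and is not the recommended training usage (use mindspore.nn.Momentum for that).

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_momentum = ops.ApplyMomentum()
...         self.variable = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="variable")
...         self.accumulation = Parameter(Tensor(np.zeros([2, 2]).astype(np.float32)), name="accumulation")
...     def construct(self, lr, grad, momentum):
...         return self.apply_momentum(self.variable, self.accumulation, lr, grad, momentum)
...
>>> net = Net()
>>> lr = Tensor(0.01, mindspore.float32)
>>> momentum = Tensor(0.9, mindspore.float32)
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(lr, grad, momentum)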

class tinyms.primitives.ApplyPowerSign(*args, **kwargs)[source]

Updates relevant entries according to the PowerSign algorithm.

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ \text{update} = \exp(\text{logbase} * \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t+1} * \text{update} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(lr\) represents scaling factor lr, \(g\) represents grad, \(\beta\) represents beta.

All of inputs comply with the implicit type conversion rules to make the data types consistent. If lr, logbase, sign_decay or beta is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. If inputs are tensors and have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. If data type of var is float16, all inputs must have the same data type as var. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float32 or float16 data type.

  • logbase (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • sign_decay (Union[Number, Tensor]) - Must be a scalar. With float32 or float16 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor of the same shape and data type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Raises
  • TypeError – If dtype of var, lr, logbase, sign_decay, beta or grad is neither float16 nor float32.

  • TypeError – If lr, logbase, sign_decay or beta is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_power_sign = ops.ApplyPowerSign()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.logbase = np.e
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_power_sign(self.var, self.m, self.lr, self.logbase,
...                                        self.sign_decay, self.beta, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.95575690e-01,  3.89676481e-01],
 [ 9.85252112e-02,  4.88201708e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.70000052e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000064e-01]]))
class tinyms.primitives.ApplyProximalAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the proximal adagrad algorithm.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – If true, updates of the var and accum tensors will be protected by a lock. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. Must has the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be scalar. The data type must be float16 or float32.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be scalar. The data type must be float16 or float32.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be scalar. The data type must be float16 or float32.

  • grad (Tensor) - Gradient with the same shape and dtype as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, lr, l1 or l2 is neither float16 nor float32.

  • TypeError – If lr, l1 or l2 is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_adagrad = ops.ApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.lr = 0.01
...         self.l1 = 0.0
...         self.l2 = 0.0
...     def construct(self, grad):
...         out = self.apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1, self.l2, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.96388459e-01,  3.92964751e-01],
 [ 9.78178233e-02,  4.92815793e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
class tinyms.primitives.ApplyProximalGradientDescent(*args, **kwargs)[source]

Updates relevant entries according to the FOBOS (Forward Backward Splitting) algorithm.

\[\begin{split}\begin{array}{ll} \\ \text{prox_v} = var - \alpha * \delta \\ var = \frac{sign(\text{prox_v})}{1 + \alpha * l2} * \max(\left| \text{prox_v} \right| - \alpha * l1, 0) \end{array}\end{split}\]

where \(\alpha\) represents alpha, \(\delta\) represents delta.

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be scalar. With float32 or float16 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change, has the same shape and data type as var.

Outputs:

Tensor, represents the updated var.

Raises
  • TypeError – If dtype of var, alpha, l1 or l2 is neither float16 nor float32.

  • TypeError – If alpha, l1 or l2 is neither a Number nor a Tensor.

  • TypeError – If delta is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_gradient_descent = ops.ApplyProximalGradientDescent()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.alpha = 0.001
...         self.l1 = 0.1
...         self.l2 = 0.1
...     def construct(self, delta):
...         out = self.apply_proximal_gradient_descent(self.var, self.alpha, self.l1, self.l2, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.array([[0.1, 0.1], [0.1, 0.1]]).astype(np.float32))
>>> output = net(delta)
>>> print(output)
[[0.99969995 0.99969995]
 [0.99969995 0.99969995]]
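
As a sanity check, the FOBOS update above can be evaluated directly with NumPy for the same values (illustrative only; this does not call the operator):

>>> import numpy as np
>>> var = np.ones([2, 2], np.float32)
>>> delta = np.full([2, 2], 0.1, np.float32)
>>> alpha, l1, l2 = 0.001, 0.1, 0.1
>>> prox_v = var - alpha * delta
>>> var = np.sign(prox_v) / (1 + alpha * l2) * np.maximum(np.abs(prox_v) - alpha * l1, 0)
>>> # var -> approx 0.9997 everywhere, matching the output above
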
class tinyms.primitives.ApplyRMSProp(*args, **kwargs)[source]

Optimizer that implements the Root Mean Square prop(RMSProp) algorithm. Please refer to the usage in source code of nn.RMSProp.

The updating formulas of ApplyRMSProp algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2 \\ m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} + \epsilon}} \nabla Q_{i}(w) \\ w = w - m_{t+1} \end{array}\end{split}\]

where \(w\) represents var, which will be updated. \(s_{t+1}\) represents mean_square, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) represents moment, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Warning

Note that in dense implementation of this algorithm, “mean_square” and “moment” will update even if “grad” is 0, but in this sparse implementation, “mean_square” and “moment” will not update in iterations during which “grad” is 0.

Parameters

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • mean_square (Tensor) - Mean square gradients, must have the same type as var.

  • moment (Tensor) - Delta of var, must have the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - Gradient, must have the same type as var.

  • decay (float) - Decay rate. Only constant value is allowed.

  • momentum (float) - Momentum. Only constant value is allowed.

  • epsilon (float) - Ridge term. Only constant value is allowed.

Outputs:

Tensor, parameters to be updated.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_square, moment or decay is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of decay, momentum or epsilon is not float.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • ValueError – If decay, momentum or epsilon is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_rms_prop = ops.ApplyRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_rms_prop(self.var, mean_square, moment, lr, grad, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
>>> print(net.var.asnumpy())
[[0.990005  0.990005]
 [0.990005  0.990005]]
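
The value of var after one step can be reproduced from the formulas above with a small NumPy sketch (illustrative only; decay=0.0, momentum=1e-10, epsilon=0.001 and lr=0.01 as in the example call):

>>> import numpy as np
>>> w = np.ones([2, 2], np.float32)   # var
>>> s = np.ones([2, 2], np.float32)   # mean_square
>>> m = np.ones([2, 2], np.float32)   # moment
>>> g = np.ones([2, 2], np.float32)   # grad
>>> decay, momentum, epsilon, lr = 0.0, 1e-10, 0.001, 0.01
>>> s = decay * s + (1 - decay) * g ** 2
>>> m = momentum * m + lr / np.sqrt(s + epsilon) * g
>>> w = w - m
>>> # w -> approx 0.990005 everywhere, matching net.var above
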
class tinyms.primitives.ApproximateEqual(*args, **kwargs)[source]

Returns True if abs(x-y) is smaller than tolerance element-wise, otherwise False.

\[\begin{split}out_i = \begin{cases} & \text{ if } \left | x_{i} - y_{i} \right | < \text{tolerance},\ \ True \\ & \text{ if } \left | x_{i} - y_{i} \right | \ge \text{tolerance},\ \ False \end{cases}\end{split}\]

where \(\text{tolerance}\) indicates the acceptable maximum tolerance.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

tolerance (float) – The maximum deviation that two elements can be considered equal. Default: 1e-05.

Inputs:
  • x (Tensor) - A tensor. Must be one of the following types: float32, float16. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • y (Tensor) - A tensor of the same type and shape as ‘x’.

Outputs:

Tensor, the shape is the same as the shape of ‘x’, and the data type is bool.

Raises

TypeError – If tolerance is not a float.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([2, 4, 6]), mindspore.float32)
>>> approximate_equal = ops.ApproximateEqual(2.)
>>> output = approximate_equal(x, y)
>>> print(output)
[ True  True  False]
class tinyms.primitives.ArgMaxWithValue(*args, **kwargs)[source]

Calculates the maximum value with the corresponding index.

Calculates the maximum value along with the given axis for the input tensor. It returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple maximum values, the index of the first maximum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input_x”.

Parameters
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true, the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\). And the data type only supports mindspore.float16 or float32.

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input tensor.

  • index (Tensor) - The index for the maximum value of the input tensor. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

  • output_x (Tensor) - The maximum value of input tensor, with the same shape as index.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> index, output = ops.ArgMaxWithValue()(input_x)
>>> print(index, output)
3 0.7
>>> index, output = ops.ArgMaxWithValue(keep_dims=True)(input_x)
>>> print(index, output)
[3] [0.7]
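
For a 1-D input the result is equivalent to NumPy's argmax/max pair (illustrative comparison only, not the operator):

>>> import numpy as np
>>> a = np.array([0.0, 0.4, 0.6, 0.7, 0.1], np.float32)
>>> print(np.argmax(a), np.max(a))   # -> 3 0.7
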
class tinyms.primitives.ArgMinWithValue(*args, **kwargs)[source]

Calculates the minimum value with corresponding index, and returns indices and values.

Calculates the minimum value along with the given axis for the input tensor. It returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple minimum values, the index of the first minimum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input_x”.

Parameters
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input tensor.

  • index (Tensor) - The index for the minimum value of the input tensor. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

  • output_x (Tensor) - The minimum value of input tensor, with the same shape as index.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output = ops.ArgMinWithValue()(input_x)
>>> print(output)
(Tensor(shape=[], dtype=Int32, value= 0), Tensor(shape=[], dtype=Float32, value= 0))
>>> output = ops.ArgMinWithValue(keep_dims=True)(input_x)
>>> print(output)
(Tensor(shape=[1], dtype=Int32, value= [0]), Tensor(shape=[1], dtype=Float32, value= [ 0.00000000e+00]))
class tinyms.primitives.Argmax(*args, **kwargs)[source]

Returns the indices of the maximum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor will be \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters
  • axis (int) – Axis where the Argmax operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions. Support data type list as follows:

    • Ascend: Float16, Float32.

    • GPU: Float16, Float32.

    • CPU: Float16, Float32, Float64.

Outputs:

Tensor, indices of the max value of input tensor across the axis.

Raises
  • TypeError – If axis is not an int.

  • TypeError – If output_type is neither int32 nor int64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
>>> output = ops.Argmax(output_type=mindspore.int32)(input_x)
>>> print(output)
[1 0 0]
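
With the default axis=-1, the result corresponds to NumPy's argmax along the last axis (illustrative comparison only):

>>> import numpy as np
>>> a = np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]], np.float32)
>>> print(np.argmax(a, axis=-1))   # row-wise maxima -> [1 0 0]
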
class tinyms.primitives.Argmin(*args, **kwargs)[source]

Returns the indices of the minimum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters
  • axis (int) – Axis where the Argmin operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, indices of the min value of input tensor across the axis.

Raises
  • TypeError – If axis is not an int.

  • TypeError – If output_type is neither int32 nor int64.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = ops.Argmin()(input_x)
>>> print(index)
2
class tinyms.primitives.Asin(*args, **kwargs)[source]

Computes arcsine of input tensors element-wise.

\[out_i = \sin^{-1}(x_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> asin = ops.Asin()
>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = asin(x)
>>> print(output)
[0.8330927  0.04001068  0.30469266  0.59438497]
class tinyms.primitives.Asinh(*args, **kwargs)[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(input_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8. The data type should be one of the following types: float16, float32.

Outputs:

Tensor, has the same shape and type as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> asinh = ops.Asinh()
>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = asinh(x)
>>> print(output)
[-2.3124385  1.1947632  1.8184465  5.298342 ]
class tinyms.primitives.Assert(*args, **kwargs)[source]

Asserts that the given condition is True. If the input condition evaluates to false, prints the list of tensors in data.

Parameters

summarize (int) – Print this many entries of each tensor.

Inputs:
  • condition (Union[Tensor[bool], bool]) - The condition to evaluate.

  • input_data (Union(tuple[Tensor], list[Tensor])) - The tensors to print out when condition is false.

Raises
  • TypeError – If summarize is not an int.

  • TypeError – If condition is neither a Tensor nor a bool.

  • TypeError – If input_data is neither a tuple nor a list.

Examples

>>> class AssertDemo(nn.Cell):
...     def __init__(self):
...         super(AssertDemo, self).__init__()
...         self.assert1 = ops.Assert(summarize=10)
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         data = self.add(x, y)
...         self.assert1(True, [data])
...         return data
...
class tinyms.primitives.Assign(*args, **kwargs)[source]

Assigns Parameter with a value.

Inputs of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • variable (Parameter) - The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • value (Tensor) - The value to be assigned, has the same shape with variable.

Outputs:

Tensor, has the same data type and shape as original variable.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> value = Tensor([2.0], mindspore.float32)
>>> variable = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="variable")
>>> assign = ops.Assign()
>>> output = assign(variable, value)
>>> print(output)
[2.]
class tinyms.primitives.AssignAdd(*args, **kwargs)[source]

Updates a Parameter by adding a value to it.

Inputs of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Note

Since variable is a data type Parameter, the data type cannot be changed, so only the type of value is allowed to be promoted to the type of variable. And the conversion type supported by different devices will be different, it is recommended to use the same data type when using this operator.

Inputs:
  • variable (Parameter) - The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • value (Union[numbers.Number, Tensor]) - The value to be added to the variable. It must have the same shape as variable if it is a Tensor. It is recommended to use the same data type when using this operator.

Outputs:

Tensor, has the same data type and shape as original variable.

Raises

TypeError – If value is neither Number nor Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignAdd = ops.AssignAdd()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int64), name="global_step")
...
...     def construct(self, x):
...         self.AssignAdd(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int64)*100)
>>> output = net(value)
>>> print(output)
[101]
class tinyms.primitives.AssignSub(*args, **kwargs)[source]

Updates a Parameter by subtracting a value from it.

Inputs of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Note

Since variable is a data type Parameter, the data type cannot be changed, so only the type of value is allowed to be promoted to the type of variable. And the conversion type supported by different devices will be different, it is recommended to use the same data type when using this operator.

Inputs:
  • variable (Parameter) - The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • value (Union[numbers.Number, Tensor]) - The value to be subtracted from the variable. It must have the same shape as variable if it is a Tensor. It is recommended to use the same data type when using this operator.

Outputs:

Tensor, has the same data type and shape as original variable.

Raises

TypeError – If value is neither Number nor Tensor.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignSub = ops.AssignSub()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
...
...     def construct(self, x):
...         self.AssignSub(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int32)*100)
>>> output = net(value)
>>> print(output)
[-99]
class tinyms.primitives.Atan(*args, **kwargs)[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = \tan^{-1}(x_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32.

Outputs:

A Tensor, has the same type as the input.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> atan = ops.Atan()
>>> output = atan(x)
>>> print(output)
[0.7853982 0.       ]
class tinyms.primitives.Atan2(*args, **kwargs)[source]

Returns arctangent of x/y element-wise.

It returns \(\theta\ \in\ [-\pi, \pi]\) such that \(x = r*\sin(\theta), y = r*\cos(\theta)\), where \(r = \sqrt{x^2 + y^2}\).

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions. The data type will give priority to the high-precision data type.

  • y (Tensor) - The input tensor. It has the same shape with x. The data type will give priority to the high-precision data type.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the same as x.

Raises

TypeError – If x or y is not a Tensor.

Supported Platforms:

Ascend CPU GPU

Examples

>>> x = Tensor(np.array([0, 1]), mindspore.float32)
>>> y = Tensor(np.array([1, 1]), mindspore.float32)
>>> atan2 = ops.Atan2()
>>> output = atan2(x, y)
>>> print(output)
[0.        0.7853982]
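
Note the argument order: the first input x is the numerator. The same values can be reproduced with NumPy (illustrative comparison only):

>>> import numpy as np
>>> out = np.arctan2(np.array([0., 1.], np.float32), np.array([1., 1.], np.float32))
>>> # out -> [0, pi/4 approx 0.7853982], matching the output above
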
class tinyms.primitives.Atanh(*args, **kwargs)[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = \tanh^{-1}(x_{i})\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

A Tensor, has the same type as the input.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([0, -0.5]), mindspore.float32)
>>> atanh = ops.Atanh()
>>> output = atanh(x)
>>> print(output)
[0. -0.54930614]
class tinyms.primitives.AvgPool(*args, **kwargs)[source]

Average pooling operation.

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), AvgPool outputs regional average in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \frac{1}{h_{ker} * w_{ker}} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]

Warning

  • Only single input and single output are supported.

  • Global pooling is supported.

  • The height of “kernel_size” and the width of “kernel_size” are positive integers within the range [1, 255]. ksize_h * ksize_w < 256.

  • Due to instruction restrictions, the values of “strides_h” and “strides_w” are positive integers within the range [1, 63].

Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises
  • TypeError – If kernel_size or strides is neither int nor tuple.

  • ValueError – If pad_mode is neither ‘valid’ nor ‘same’ with not case sensitive.

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.avgpool_op = ops.AvgPool(pad_mode="VALID", kernel_size=2, strides=1)
...
...     def construct(self, x):
...         result = self.avgpool_op(x)
...         return result
...
>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> net = Net()
>>> output = net(x)
>>> print(output)
[[[[ 2.5   3.5   4.5]
   [ 6.5   7.5   8.5]]
  [[14.5  15.5  16.5]
   [18.5  19.5  20.5]]
  [[26.5  27.5  28.5]
   [30.5  31.5  32.5]]]]
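
The same values can be obtained by averaging every 2x2 window of x with plain NumPy slicing (an illustrative check of the formula above, not the operator):

>>> import numpy as np
>>> x = np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4).astype(np.float32)
>>> win_avg = (x[:, :, :-1, :-1] + x[:, :, :-1, 1:] + x[:, :, 1:, :-1] + x[:, :, 1:, 1:]) / 4
>>> # win_avg[0, 0] -> [[2.5, 3.5, 4.5], [6.5, 7.5, 8.5]], matching the first channel above
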
class tinyms.primitives.AvgPool3D(*args, **kwargs)[source]

3D Average pooling operation.

Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), AvgPool3D outputs regional average in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

Warning

“kernel_size” is in the range [1, 255]. “strides” is in the range [1, 63].

\[\text{output}(N_i, C_j, d, h, w) = \frac{1}{d_{ker} * h_{ker} * w_{ker}} \sum_{l=0}^{d_{ker}-1} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents depth, height and width are both kernel_size, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “SAME”, “VALID”, “PAD”, not case sensitive. Default: “VALID”.

    • same: Adopts the way of completion. The depth, height and width of the output will be the same as the input. The total number of padding will be calculated in depth, horizontal and vertical directions and evenly distributed to head and tail, top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the tail, bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height, width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • ceil_mode (bool) – If True, ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Currently support float16 and float32 data type.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the same data type with x.

Raises
  • TypeError – If kernel_size, strides or pad is neither an int nor a tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If pad_mode or data_format is not a string.

  • TypeError – If divisor_override is not an int.

  • ValueError – If numbers in kernel_size or strides are not positive.

  • ValueError – If kernel_size or strides is a tuple whose length is not equal to 3.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If element of pad is less than 0.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to 0 or (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float16)
>>> avg_pool3d = ops.AvgPool3D(kernel_size=2, strides=1, pad_mode="valid")
>>> output = avg_pool3d(x)
>>> print(output)
[[[[[ 5.  6.]]]
  [[[17. 18.]]]]]
class tinyms.primitives.BCEWithLogitsLoss(*args, **kwargs)[source]

Adds sigmoid activation function to input logits, and uses the given logits to compute binary cross entropy between the logits and the label.

Sets input logits as \(X\), input label as \(Y\), input weight as \(W\), output as \(L\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ L_{ij} = -[Y_{ij} * log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})] \end{array}\end{split}\]

\(i\) indicates the \(i^{th}\) sample, \(j\) indicates the category. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

\(\ell\) indicates the method of calculating the loss. There are three methods: the first method is to provide the loss value directly, the second method is to calculate the average value of all losses, and the third method is to calculate the sum of all losses.

This operator will multiply the output by the corresponding weight. The tensor weight assigns different weights to each piece of data in the batch, and the tensor pos_weight adds corresponding weights to the positive examples of each category.

In addition, it can trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:

\[\begin{split}\begin{array}{ll} \\ p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\ L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})] \end{array}\end{split}\]

where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification), n is the number of the sample in the batch and \(p_c\) is the weight of the positive answer for the class c. \(p_c>1\) increases the recall, \(p_c<1\) increases the precision.

Parameters

reduction (str) – Type of reduction to be applied to loss. The optional values are ‘mean’, ‘sum’, and ‘none’, not case sensitive. If ‘none’, do not perform reduction. Default: ‘mean’.

Inputs:
  • logits (Tensor) - Input logits. Data type must be float16 or float32. Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions.

  • label (Tensor) - Ground truth label, has the same shape as logits. Data type must be float16 or float32.

  • weight (Tensor) - A rescaling weight applied to the loss of each batch element. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • pos_weight (Tensor) - A weight of positive examples. Must be a vector with length equal to the number of classes. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

Outputs:

Tensor or Scalar, if reduction is ‘none’, it’s a tensor with the same shape and type as input logits. Otherwise, the output is a scalar.

Raises
  • TypeError – If data type of any input is neither float16 nor float32.

  • ValueError – If weight or pos_weight can not be broadcast to a tensor with shape of logits.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

Ascend GPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> loss = ops.BCEWithLogitsLoss()
>>> output = loss(logits, label, weight, pos_weight)
>>> print(output)
0.3463612
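
With weight and pos_weight set to ones, the printed value can be reproduced from the formulas above with NumPy (illustrative only; this does not call the operator):

>>> import numpy as np
>>> logits = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], np.float32)
>>> label = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]], np.float32)
>>> p = 1 / (1 + np.exp(-logits))                             # sigmoid
>>> loss = -(label * np.log(p) + (1 - label) * np.log(1 - p))
>>> print(loss.mean())                                        # -> approx 0.34636, as above
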
class tinyms.primitives.BNTrainingReduce(*args, **kwargs)[source]

For the BatchNorm operation this operator updates the moving averages for training and is used in conjunction with BNTrainingUpdate.

Inputs:
  • x (Tensor) - A 4-D Tensor with float16 or float32 data type. Tensor of shape \((N, C, A, B)\).

Outputs:
  • sum (Tensor) - A 1-D Tensor with float32 data type. Tensor of shape \((C,)\).

  • square_sum (Tensor) - A 1-D Tensor with float16 or float32 data type. Tensor of shape \((C,)\).

Raises
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.ones([128, 3, 32, 3]), mindspore.float32)
>>> bn_training_reduce = ops.BNTrainingReduce()
>>> output = bn_training_reduce(x)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value=
[ 1.22880000e+04, 1.22880000e+04, 1.22880000e+04]), Tensor(shape=[3], dtype=Float32, value=
[ 1.22880000e+04, 1.22880000e+04, 1.22880000e+04]))
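
The two outputs correspond to the per-channel sum and sum of squares of x over the N, H and W axes; for the all-ones input above, each channel sums to 128 * 32 * 3 = 12288. An illustrative NumPy check (not the operator):

>>> import numpy as np
>>> x = np.ones([128, 3, 32, 3], np.float32)
>>> print(x.sum(axis=(0, 2, 3)), (x ** 2).sum(axis=(0, 2, 3)))
>>> # -> [12288. 12288. 12288.] twice, matching the output above
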
class tinyms.primitives.BNTrainingUpdate(*args, **kwargs)[source]

For the BatchNorm operation, this operator updates the moving averages for training and is used in conjunction with BNTrainingReduce. The moving average is a method of analyzing data points by creating a series of averages of different subsets of the entire data set.

Warning

For Ascend 310, the result accuracy fails to reach 1‰ due to the square root instruction.

Parameters
  • isRef (bool) – Whether this is a ref. Default: True. Ref indicates whether to enable the output to multiplex the input address.

  • epsilon (float) – A small value added to variance to avoid dividing by zero. Default: 1e-5.

  • factor (float) – A weight for updating the mean and variance. Default: 0.1.

Inputs:
  • input_x (Tensor) - A 4-D Tensor with float16 or float32 data type. Tensor of shape \((N, C, A, B)\).

  • sum (Tensor) - A 1-D Tensor with float16 or float32 data type for the output of operator BNTrainingReduce. Tensor of shape \((C,)\).

  • square_sum (Tensor) - A 1-D Tensor with float16 or float32 data type for the output of operator BNTrainingReduce. Tensor of shape \((C,)\).

  • scale (Tensor) - A 1-D Tensor with float16 or float32, for the scaling factor. Tensor of shape \((C,)\).

  • offset (Tensor) - A 1-D Tensor with float16 or float32, for the scaling offset. Tensor of shape \((C,)\).

  • mean (Tensor) - A 1-D Tensor with float16 or float32, for the scaling mean. Tensor of shape \((C,)\).

  • variance (Tensor) - A 1-D Tensor with float16 or float32, for the update variance. Tensor of shape \((C,)\).

Outputs:
  • y (Tensor) - Tensor, has the same shape and data type as input_x.

  • mean (Tensor) - Tensor for the updated mean, with float32 data type. Has the same shape as variance.

  • variance (Tensor) - Tensor for the updated variance, with float32 data type. Has the same shape as variance.

  • batch_mean (Tensor) - Tensor for the mean of input_x, with float32 data type. Has the same shape as variance.

  • batch_variance (Tensor) - Tensor for the variance of input_x, with float32 data type. Has the same shape as variance.

Raises
  • TypeError – If isRef is not a bool.

  • TypeError – If dtype of epsilon or factor is not float.

  • TypeError – If input_x, sum, square_sum, scale, offset, mean or variance is not a Tensor.

  • TypeError – If dtype of input_x, sum, square_sum, scale, offset, mean or variance is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.ones([1, 2, 2, 2]), mindspore.float32)
>>> sum_val = Tensor(np.ones([2]), mindspore.float32)
>>> square_sum = Tensor(np.ones([2]), mindspore.float32)
>>> scale = Tensor(np.ones([2]), mindspore.float32)
>>> offset = Tensor(np.ones([2]), mindspore.float32)
>>> mean = Tensor(np.ones([2]), mindspore.float32)
>>> variance = Tensor(np.ones([2]), mindspore.float32)
>>> bn_training_update = ops.BNTrainingUpdate()
>>> output = bn_training_update(input_x, sum_val, square_sum, scale, offset, mean, variance)
>>> print(output)
(Tensor(shape=[1, 2, 2, 2], dtype=Float32, value=
[[[[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]],
  [[ 2.73200464e+00,  2.73200464e+00],
   [ 2.73200464e+00,  2.73200464e+00]]]]), Tensor(shape=[2], dtype=Float32, value= [9.24999952e-01,
9.24999952e-01]), Tensor(shape=[2], dtype=Float32, value= [ 9.24999952e-01, 9.24999952e-01]),
Tensor(shape=[2], dtype=Float32, value= [ 2.50000000e-01, 2.50000000e-01]), Tensor(shape=[2], dtype=Float32,
value= [ 1.87500000e-01, 1.87500000e-01]))
class tinyms.primitives.BasicLSTMCell(*args, **kwargs)[source]

It’s similar to operator DynamicRNN. BasicLSTMCell will be deprecated in the future. Please use DynamicRNN instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BatchMatMul(*args, **kwargs)[source]

Computes matrix multiplication between two tensors by batch.

\[\text{output}[..., :, :] = \text{matrix}(x[..., :, :]) * \text{matrix}(y[..., :, :])\]

The two input tensors must have the same rank and the rank must be not less than 3.

Parameters
  • transpose_x (bool) – If true, the last two dimensions of x is transposed before multiplication. Default: False.

  • transpose_y (bool) – If true, the last two dimensions of y is transposed before multiplication. Default: False.

Inputs:
  • x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions. If transpose_a is True, its shape must be \((*B, C, N)\).

  • y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\). If transpose_b is True, its shape must be \((*B, M, C)\).

Outputs:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Raises
  • TypeError – If transpose_x or transpose_y is not a bool.

  • ValueError – If length of shape of x is not equal to length of shape of y or length of shape of x is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones(shape=[2, 4, 1, 3]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul()
>>> output = batmatmul(x, y)
>>> print(output)
[[[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]]
 [[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]]]
>>> x = Tensor(np.ones(shape=[2, 4, 3, 1]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul(transpose_a=True)
>>> output = batmatmul(x, y)
>>> print(output)
[[[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]]
 [[[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]
  [[3. 3. 3. 3.]]]]
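
The first example corresponds to a batched matrix product, which can also be checked with NumPy's matmul (illustrative comparison only):

>>> import numpy as np
>>> x = np.ones([2, 4, 1, 3], np.float32)
>>> y = np.ones([2, 4, 3, 4], np.float32)
>>> print(np.matmul(x, y).shape)   # -> (2, 4, 1, 4); every element equals 3.0
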
class tinyms.primitives.BatchNorm(*args, **kwargs)[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over inputs to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon, \(mean\) is the mean of x, \(variance\) is the variance of x.

Warning

  • If the operation is used for inference, and outputs “reserve_space_1” and “reserve_space_2” are available, then “reserve_space_1” has the same value as “mean” and “reserve_space_2” has the same value as “variance”.

  • For Ascend 310, the result accuracy fails to reach 1‰ due to the square root instruction.

Parameters
  • is_training (bool) – If is_training is True, mean and variance are computed during training. If is_training is False, they’re loaded from checkpoint during inference. Default: False.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • momentum (float) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = (1 - momentum) * running\_mean + momentum * current\_mean\)). Momentum value must be [0, 1]. Default: 0.1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: “NCHW”.

Inputs:

If is_training is False, inputs are Tensors.

  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Tensor) - Tensor of shape \((C,)\), with float16 or float32 data type.

  • bias (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • mean (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • variance (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

If is_training is True, scale, bias, mean and variance are Parameters.

  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Parameter) - Parameter of shape \((C,)\), with float16 or float32 data type.

  • bias (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

  • mean (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

  • variance (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

Outputs:

Tuple of 5 Tensors, the normalized inputs and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

  • updated_scale (Tensor) - Tensor of shape \((C,)\).

  • updated_bias (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_1 (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_2 (Tensor) - Tensor of shape \((C,)\).

Raises
  • TypeError – If is_training is not a bool.

  • TypeError – If dtype of epsilon or momentum is not float.

  • TypeError – If data_format is not a str.

  • TypeError – If input_x, scale, bias, mean or variance is not a Tensor.

  • TypeError – If dtype of input_x, scale is neither float16 nor float32.

Supported Platforms:

Ascend CPU GPU

Examples

>>> input_x = Tensor(np.ones([2, 2]), mindspore.float32)
>>> scale = Tensor(np.ones([2]), mindspore.float32)
>>> bias = Tensor(np.ones([2]), mindspore.float32)
>>> mean = Tensor(np.ones([2]), mindspore.float32)
>>> variance = Tensor(np.ones([2]), mindspore.float32)
>>> batch_norm = ops.BatchNorm()
>>> output = batch_norm(input_x, scale, bias, mean, variance)
>>> print(output[0])
[[1. 1.]
 [1. 1.]]
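
For the inference case shown above the output follows directly from the formula: with x, mean, variance, scale and bias all equal to 1 and epsilon = 1e-5, y = (1 - 1) / sqrt(1 + 1e-5) * 1 + 1 = 1. A minimal NumPy sketch (illustrative only):

>>> import numpy as np
>>> x = np.ones([2, 2], np.float32)
>>> scale = bias = mean = variance = np.ones([2], np.float32)
>>> eps = 1e-5
>>> y = (x - mean) / np.sqrt(variance + eps) * scale + bias
>>> # y -> [[1., 1.], [1., 1.]], matching output[0] above
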
class tinyms.primitives.BatchToSpace(*args, **kwargs)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation will divide batch dimension N into blocks with block_size; the output tensor’s N dimension is the corresponding number of blocks after division. The output tensor’s H and W dimensions are the product of the original H and W dimensions and block_size, reduced by the given crop amounts, respectively.

Parameters
  • block_size (int) – The block size of division, has the value not less than 2.

  • crops (Union[list(int), tuple(int)]) – The crop value for H and W dimension, containing 2 subtraction lists. Each list contains 2 integers. All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which corresponds to the input dimension i+2. It is required that input_shape[i+2]*block_size >= crops[i][0]+crops[i][1].

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by block_size squared. The data type is float16 or float32.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_size and crops. The output shape will be (n’, c’, h’, w’), where

\(n' = n//(block\_size*block\_size)\)

\(c' = c\)

\(h' = h*block\_size-crops[0][0]-crops[0][1]\)

\(w' = w*block\_size-crops[1][0]-crops[1][1]\)

Raises
  • TypeError – If block_size or element of crops is not an int.

  • TypeError – If crops is neither list nor tuple.

  • ValueError – If block_size is less than 2.

Supported Platforms:

Ascend GPU

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space = ops.BatchToSpace(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BatchToSpaceND(*args, **kwargs)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation will divide batch dimension N into blocks with block_shape; the output tensor’s N dimension is the corresponding number of blocks after division. The output tensor’s H and W dimensions are the product of the original H and W dimensions and block_shape, reduced by the given crop amounts, respectively.

Parameters
  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing block with all values greater than 1. If block_shape is a tuple or list, the length of block_shape is M corresponding to the number of spatial dimensions. If block_shape is an int, the block sizes of the M dimensions are the same, equal to block_shape. M must be 2.

  • crops (Union[list(int), tuple(int)]) – The crop value for H and W dimension, containing 2 subtraction lists, each containing 2 int values. All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]*block_shape[i] > crops[i][0]+crops[i][1].

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by product of block_shape. The data type is float16 or float32.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_shape and crops. The output shape will be (n’, c’, h’, w’), where

\(n' = n//(block\_shape[0]*block\_shape[1])\)

\(c' = c\)

\(h' = h*block\_shape[0]-crops[0][0]-crops[0][1]\)

\(w' = w*block\_shape[1]-crops[1][0]-crops[1][1]\)

Raises
  • TypeError – If block_shape is not one of list, tuple, int.

  • TypeError – If crops is neither list nor tuple.

  • ValueError – If length of block_shape or crops is not equal to 2.

Supported Platforms:

Ascend

Examples

>>> block_shape = [2, 2]
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space_nd = ops.BatchToSpaceND(block_shape, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space_nd(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BesselI0e(*args, **kwargs)[source]

Computes BesselI0e of input element-wise.

Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Data type must be float16 or float32.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> bessel_i0e = ops.BesselI0e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i0e(x)
>>> print(output)
[0.7979961  0.5144438  0.75117415  0.9157829 ]
class tinyms.primitives.BesselI1e(*args, **kwargs)[source]

Computes BesselI1e of input element-wise.

Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Data type must be float16 or float32.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> bessel_i1e = ops.BesselI1e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i1e(x)
>>> print(output)
[0.09507662 0.19699717 0.11505538 0.04116856]
class tinyms.primitives.BiasAdd(*args, **kwargs)[source]

Returns sum of input and bias tensor.

Adds the 1-D bias tensor to the input tensor, and broadcasts the shape on all axes except for the channel axis.

Parameters

data_format (str) – The format of input and output data. It should be ‘NHWC’, ‘NCHW’ or ‘NCDHW’. Default is ‘NCHW’.

Inputs:
  • input_x (Tensor) - The input tensor. The shape can be 2-5 dimensions. The data type should be float16 or float32.

  • bias (Tensor) - The bias tensor, with shape \((C)\). The shape of bias must be the same as input_x’s channel dimension. The data type should be float16 or float32.

Outputs:

Tensor, with the same shape and data type as input_x.

Raises
  • TypeError – If data_format is not a str.

  • TypeError – If input_x or bias is not a Tensor.

  • TypeError – If dtype of input_x or bias is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3,)), mindspore.float32)
>>> bias_add = ops.BiasAdd()
>>> output = bias_add(input_x, bias)
>>> print(output.shape)
(2, 3)
class tinyms.primitives.BinaryCrossEntropy(*args, **kwargs)[source]

Computes the binary cross entropy between the logits and the labels.

Sets logits as \(x\), labels as \(y\), output as \(\ell(x, y)\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

In which, \(L\) indicates the loss of all batch_sizes, \(l\) indicates the loss of one batch_size, and n indicates one batch_size in the 1-N range. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Warning

  • The value of “x” must range from 0 to 1.

  • The value of “y” must be “0” or “1”.

Parameters

reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

Inputs:
  • logits (Tensor) - The input Tensor. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • labels (Tensor) - The label Tensor which has same shape and data type as logits.

  • weight (Tensor, optional) - A rescaling weight applied to the loss of each batch element. And it must have same shape and data type as logits. Default: None.

Outputs:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as logits. Otherwise, the output is a scalar.

Raises
  • TypeError – If dtype of logits, labels or weight (if given) is neither float16 nor float32.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • ValueError – If shape of labels is not the same as logits or weight (if given).

  • TypeError – If logits, labels or weight is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.binary_cross_entropy = ops.BinaryCrossEntropy()
...     def construct(self, logits, labels, weight):
...         result = self.binary_cross_entropy(logits, labels, weight)
...         return result
...
>>> net = Net()
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = net(logits, labels, weight)
>>> print(output)
0.38240486
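
The printed value can be reproduced from the weighted binary cross entropy formula above with NumPy (illustrative only; here the logits are already probabilities in [0, 1], as required):

>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1], np.float32)   # logits
>>> y = np.array([0., 1., 0.], np.float32)      # labels
>>> w = np.array([1., 2., 2.], np.float32)      # weight
>>> loss = -w * (y * np.log(x) + (1 - y) * np.log(1 - x))
>>> print(loss.mean())                          # -> approx 0.3824, as above
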
class tinyms.primitives.BitwiseAnd(*args, **kwargs)[source]

Returns bitwise and of two tensors element-wise.

\[out_i = x_{i} \wedge y_{i}\]

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • x (Tensor) - The input tensor with int16, int32 or uint16 data type. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) - The input tensor with same type as the x.

Outputs:

Tensor, has the same type as the x.

Raises

TypeError – If x or y is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_and = ops.BitwiseAnd()
>>> output = bitwise_and(x, y)
>>> print(output)
[ 0  0  1 -1  1  0  1]
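
For two's-complement integers the result matches NumPy's bitwise_and (illustrative comparison only; the same correspondence holds for BitwiseOr and BitwiseXor below with np.bitwise_or and np.bitwise_xor):

>>> import numpy as np
>>> x = np.array([0, 0, 1, -1, 1, 1, 1], np.int16)
>>> y = np.array([0, 1, 1, -1, -1, 2, 3], np.int16)
>>> print(np.bitwise_and(x, y))   # -> [ 0  0  1 -1  1  0  1]
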
class tinyms.primitives.BitwiseOr(*args, **kwargs)[source]

Returns bitwise or of two tensors element-wise.

\[out_i = x_{i} \mid y_{i}\]

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • x (Tensor) - The input tensor with int16, int32 or uint16 data type.

  • y (Tensor) - The input tensor with same type as the x.

Outputs:

Tensor, has the same type as the x.

Raises

TypeError – If x or y is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_or = ops.BitwiseOr()
>>> output = bitwise_or(x, y)
>>> print(output)
[ 0  1  1 -1 -1  3  3]
class tinyms.primitives.BitwiseXor(*args, **kwargs)[source]

Returns bitwise xor of two tensors element-wise.

\[out_i = x_{i} \oplus y_{i}\]

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • x (Tensor) - The input tensor with int16, int32 or uint16 data type.

  • y (Tensor) - The input tensor with same type as the x.

Outputs:

Tensor, has the same type as the x.

Raises

TypeError – If x or y is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_xor = ops.BitwiseXor()
>>> output = bitwise_xor(x, y)
>>> print(output)
[ 0  1  0  0 -2  3  2]
class tinyms.primitives.BondAtomEnergy(*args, **kwargs)[source]

Add the potential energy caused by simple harmonic bonds to the total potential energy of each atom.

The calculation formula is the same as operator BondEnergy().

Because there are a large number of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the SPONGE webpage in MindSpore.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • bond_numbers (int32) – the number of harmonic bonds m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor (x, y, z), between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The first atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The second atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • bond_k (Tensor) - The force constant of each bond. The data type is float32 and the shape is \((m,)\).

  • bond_r0 (Tensor) - The equilibrium length of each bond. The data type is float32 and the shape is \((m,)\).

Outputs:
  • atom_ene (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.BondEnergy(*args, **kwargs)[source]

Calculate the harmonic potential energy between each bonded atom pair. Assume our system has n atoms and m harmonic bonds.

Because there are a large number of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the SPONGE webpage in MindSpore.

\[dr = (x_1-x_2, y_1-y_2, z_1-z_2)\]
\[E = k*(|dr| - r_0)^2\]
Parameters
  • atom_numbers (int32) – the number of atoms n.

  • bond_numbers (int32) – the number of harmonic bonds m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor (x, y, z), between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The first atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The second atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • bond_k (Tensor) - The force constant of each bond. The data type is float32 and the shape is \((m,)\).

  • bond_r0 (Tensor) - The equilibrium length of each bond. The data type is float32 and the shape is \((m,)\).

Outputs:
  • bond_ene (Tensor) - The harmonic potential energy for each bond. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU
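
While the operator itself needs a complete SPONGE system, the harmonic formula above can be evaluated directly with NumPy. The following check uses hand-picked real-space coordinates (illustrative values only; it does not call the operator, which works on unsigned-int coordinates):

>>> import numpy as np
>>> crd = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 2.0, 0.0]], np.float32)   # 3 atoms
>>> atom_a, atom_b = np.array([0, 1]), np.array([1, 2])                               # 2 bonds: 0-1, 1-2
>>> bond_k = np.array([10.0, 10.0], np.float32)
>>> bond_r0 = np.array([1.0, 1.0], np.float32)
>>> dr = crd[atom_a] - crd[atom_b]
>>> bond_ene = bond_k * (np.linalg.norm(dr, axis=-1) - bond_r0) ** 2   # E = k*(|dr| - r0)^2
>>> print(bond_ene)
[ 2.5 10. ]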

class tinyms.primitives.BondForce(*args, **kwargs)[source]

Calculate the force exerted by the simple harmonic bond on the corresponding atoms. Assume the number of harmonic bonds is m and the number of atoms is n.

Because there are a large number of interrelated inputs, Examples cannot be constructed from random inputs; a schematic sketch is given after this entry. For details, refer to the SPONGE page in the MindSpore documentation.

\[dr = (x_1-x_2, y_1-y_2, z_1-z_2)\]
\[F = (F_x, F_y, F_z) = 2*k*(1 - r_0/|dr|)*dr\]
Parameters
  • atom_numbers (int32) – the number of atoms n.

  • bond_numbers (int32) – the number of harmonic bonds m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor (x, y, z), between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The first atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The second atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • bond_k (Tensor) - The force constant of each bond. The data type is float32 and the shape is \((m,)\).

  • bond_r0 (Tensor) - The equilibrium length of each bond. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU
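
No verified Example can be given here either; the sketch below only illustrates the call signature, assuming a GPU build with SPONGE support, documented parameter names usable as constructor keywords, and placeholder inputs of the documented shapes and dtypes:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms import primitives as P
>>> n, m = 3, 2
>>> bond_force = P.BondForce(atom_numbers=n, bond_numbers=m)
>>> uint_crd_f = Tensor(np.zeros((n, 3), np.uint32))
>>> scaler_f = Tensor(np.ones(3, np.float32))
>>> atom_a = Tensor(np.array([0, 1], np.int32))
>>> atom_b = Tensor(np.array([1, 2], np.int32))
>>> bond_k = Tensor(np.ones(m, np.float32))
>>> bond_r0 = Tensor(np.ones(m, np.float32))
>>> frc_f = bond_force(uint_crd_f, scaler_f, atom_a, atom_b, bond_k, bond_r0)   # shape (n, 3)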

class tinyms.primitives.BondForceWithAtomEnergy(*args, **kwargs)[source]

Calculate bond force and harmonic potential energy together.

The calculation formula is the same as operator BondForce() and BondEnergy().

Because there are a large number of interrelated inputs, Examples cannot be constructed from random inputs; a schematic sketch is given after this entry. For details, refer to the SPONGE page in the MindSpore documentation.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • bond_numbers (int32) – the number of harmonic bonds m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor (x, y, z), between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The first atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The second atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • bond_k (Tensor) - The force constant of each bond. The data type is float32 and the shape is \((m,)\).

  • bond_r0 (Tensor) - The equilibrium length of each bond. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • atom_e (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU
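
A schematic call sketch under the same assumptions as the BondForce sketch above (GPU build with SPONGE support, documented keyword names, placeholder inputs); the fused operator returns both the per-atom force and the accumulated per-atom energy:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms import primitives as P
>>> n, m = 3, 2
>>> op = P.BondForceWithAtomEnergy(atom_numbers=n, bond_numbers=m)
>>> uint_crd_f = Tensor(np.zeros((n, 3), np.uint32))
>>> scaler_f = Tensor(np.ones(3, np.float32))
>>> atom_a = Tensor(np.array([0, 1], np.int32))
>>> atom_b = Tensor(np.array([1, 2], np.int32))
>>> bond_k, bond_r0 = Tensor(np.ones(m, np.float32)), Tensor(np.ones(m, np.float32))
>>> frc_f, atom_e = op(uint_crd_f, scaler_f, atom_a, atom_b, bond_k, bond_r0)   # shapes (n, 3) and (n,)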

class tinyms.primitives.BondForceWithAtomEnergyAndVirial(*args, **kwargs)[source]

Calculate bond force, harmonic potential energy and atom virial together.

The calculation formula is the same as operator BondForce() and BondEnergy().

Because there are a large number of interrelated inputs, Examples cannot be constructed from random inputs; a schematic sketch is given after this entry. For details, refer to the SPONGE page in the MindSpore documentation.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • bond_numbers (int32) – the number of harmonic bonds m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor (x, y, z). The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The first atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The second atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • bond_k (Tensor) - The force constant of each bond. The data type is float32 and the shape is \((m,)\).

  • bond_r0 (Tensor) - The equilibrium length of each bond. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - The force of each atom. The data type is float32 and the shape is \((n, 3)\).

  • atom_e (Tensor) - The energy of each atom. The data type is float32 and the shape is \((n,)\).

  • atom_virial (Tensor) - The virial of each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU
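
Schematic call sketch only (same assumptions as above: experimental GPU operator, documented keyword names, placeholder inputs); three outputs are returned:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms import primitives as P
>>> n, m = 3, 2
>>> op = P.BondForceWithAtomEnergyAndVirial(atom_numbers=n, bond_numbers=m)
>>> uint_crd_f = Tensor(np.zeros((n, 3), np.uint32))
>>> scaler_f = Tensor(np.ones(3, np.float32))
>>> atom_a, atom_b = Tensor(np.array([0, 1], np.int32)), Tensor(np.array([1, 2], np.int32))
>>> bond_k, bond_r0 = Tensor(np.ones(m, np.float32)), Tensor(np.ones(m, np.float32))
>>> frc_f, atom_e, atom_virial = op(uint_crd_f, scaler_f, atom_a, atom_b, bond_k, bond_r0)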

class tinyms.primitives.BondForceWithAtomVirial(*args, **kwargs)[source]

Calculate bond force and the virial coefficient caused by simple harmonic bond for each atom together.

The calculation formula of the force part is the same as operator BondForce().

Because there are a large number of interrelated inputs, Examples cannot be constructed from random inputs; a schematic sketch is given after this entry. For details, refer to the SPONGE page in the MindSpore documentation.

The Virial part is as follows:

\[dr = (x_1-x_2, y_1-y_2, z_1-z_2)\]
\[virial = |dr|*(|dr| - r_0)*k\]
Parameters
  • atom_numbers (int32) – the number of atoms n.

  • bond_numbers (int32) – the number of harmonic bonds m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor (x, y, z), between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The first atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The second atom index of each bond. The data type is int32 and the shape is \((m,)\).

  • bond_k (Tensor) - The force constant of each bond. The data type is float32 and the shape is \((m,)\).

  • bond_r0 (Tensor) - The equilibrium length of each bond. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - Same as operator BondForce(). The data type is float32 and the shape is \((n, 3)\).

  • atom_v (Tensor) - The accumulated virial coefficient for each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU
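
The virial formula above can be checked directly with NumPy. The snippet below evaluates the per-bond virial for hand-picked real-space coordinates (illustrative only; the operator itself accumulates these contributions per atom and works on unsigned-int coordinates):

>>> import numpy as np
>>> crd = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 2.0, 0.0]], np.float32)   # 3 atoms
>>> atom_a, atom_b = np.array([0, 1]), np.array([1, 2])                               # 2 bonds
>>> bond_k = np.array([10.0, 10.0], np.float32)
>>> bond_r0 = np.array([1.0, 1.0], np.float32)
>>> dr_norm = np.linalg.norm(crd[atom_a] - crd[atom_b], axis=-1)
>>> virial = dr_norm * (dr_norm - bond_r0) * bond_k   # per-bond virial, |dr|*(|dr| - r0)*k
>>> print(virial)
[ 7.5 20. ]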

class tinyms.primitives.BoundingBoxDecode(*args, **kwargs)[source]

Decodes bounding boxes locations.

Parameters
  • means (tuple) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • max_shape (tuple) – The max size limit for decoding box calculation.

  • wh_ratio_clip (float) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be (n, 4).

  • deltas (Tensor) - Delta of boxes, which has the same shape as anchor_box.

Outputs:

Tensor, decoded boxes. It has the same data type and shape as anchor_box.

Raises
  • TypeError – If means, stds or max_shape is not a tuple.

  • TypeError – If wh_ratio_clip is not a float.

  • TypeError – If anchor_box or deltas is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[4, 1, 2, 1], [2, 2, 2, 3]], mindspore.float32)
>>> deltas = Tensor([[3, 1, 2, 2], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_decode = ops.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0),
...                                          max_shape=(768, 1280), wh_ratio_clip=0.016)
>>> output = boundingbox_decode(anchor_box, deltas)
>>> print(output)
[[ 4.1953125  0.         0.         5.1953125]
 [ 2.140625   0.         3.859375  60.59375  ]]
class tinyms.primitives.BoundingBoxEncode(*args, **kwargs)[source]

Encodes bounding boxes locations.

Parameters
  • means (tuple) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be (n, 4).

  • groundtruth_box (Tensor) - Ground truth boxes, which have the same shape as anchor_box.

Outputs:

Tensor, encoded bounding boxes. It has the same data type and shape as input anchor_box.

Raises
  • TypeError – If means or stds is not a tuple.

  • TypeError – If anchor_box or groundtruth_box is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]], mindspore.float32)
>>> groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_encode = ops.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
>>> output = boundingbox_encode(anchor_box, groundtruth_box)
>>> print(output)
[[ -1.  0.25  0.  0.40551758]
 [ -1.  0.25  0.  0.40551758]]
class tinyms.primitives.Broadcast(*args, **kwargs)[source]

Broadcasts the tensor to the whole group.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters
  • root_rank (int) – Source rank. Required in all processes except the one that is sending the data.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the data of the root_rank device.

Raises

TypeError – If root_rank is not an integer or group is not a string.

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with multiple processes.
>>> # Please refer to the tutorial > Distributed Training on mindspore.cn.
>>> from mindspore import Tensor
>>> from mindspore import context
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.broadcast = ops.Broadcast(1)
...
...     def construct(self, x):
...         return self.broadcast((x,))
...
>>> input_x = Tensor(np.ones([2, 4]).astype(np.int32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1],
 [1, 1, 1, 1]]),)
class tinyms.primitives.BroadcastTo(*args, **kwargs)[source]

Broadcasts input tensor to a given shape.

Input shape can be broadcast to the target shape if, for each dimension pair, they are equal, or the input dimension is 1, or the target dimension is -1. Where the target shape contains -1, it is replaced by the input shape’s value in that dimension.

When input shape is broadcast to target shape, it starts with the trailing dimensions. If there is a -1 in the target shape, the -1 cannot be in a leading, non-existing dimension.

Parameters

shape (tuple) – The target shape to broadcast. Can be fully specified, or have -1 in one position where it will be substituted by the input tensor’s shape in that position, see example.

Inputs:
  • input_x (Tensor) - The input tensor. The data type should be one of the following types: float16, float32, int32, int8, uint8. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor, with the given shape and the same data type as input_x.

Raises
  • TypeError – If shape is not a tuple.

  • ValueError – If the target and input shapes are incompatible, or if a -1 in the target shape is in an invalid location.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 3)
>>> input_x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> broadcast_to = ops.BroadcastTo(shape)
>>> output = broadcast_to(input_x)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>> shape = (-1, 2)
>>> input_x = Tensor(np.array([[1], [2]]).astype(np.float32))
>>> broadcast_to = ops.BroadcastTo(shape)
>>> output = broadcast_to(input_x)
>>> print(output)
[[1. 1.]
 [2. 2.]]
class tinyms.primitives.BufferAppend(*args, **kwargs)[source]

In reinforcement learning, the experience data is collected in each step. We use BufferAppend to push data to the bottom of buffer under the First-In-First-Out rule.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • buffer_shape (tuple(shape)) – The shape of a buffer.

  • buffer_dtype (tuple(type)) – The type of a buffer.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • exp (tuple(Parameter(Tensor))) - The tuple(Tensor) represents one list of experience data; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The real available size of the buffer. Data type: int32.

  • head (Parameter) - The position of the first data in the buffer. Data type: int32.

Outputs:

None.

Raises
  • ValueError – If count or head is not an integer.

  • ValueError – If capacity is not a positive integer.

  • ValueError – If the length of data is not equal to the length of exp.

  • ValueError – If the dim of data equals the dim of exp, but data[1:] is not equal to the shape in exp.

  • ValueError – If the shape of data[1:] is not equal to the shape in exp.

  • TypeError – If the type in exp is not the same as in data.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> exp = [Tensor(np.array([2, 2, 2, 2]), ms.float32), Tensor(np.array([0, 0]), ms.int32),
...        Tensor(np.array([0]), ms.int32), Tensor(np.array([3, 3, 3, 3]), ms.float32)]
>>> batch_exp = [Tensor(np.array([[2, 2, 2, 2], [2, 2, 2, 2]]), ms.float32),
...              Tensor(np.array([[0, 0], [0, 0]]), ms.int32),
...              Tensor(np.array([[0], [0]]), ms.int32),
...              Tensor(np.array([[3, 3, 3, 3], [3, 3, 3, 3]]), ms.float32)]
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer_append = ops.BufferAppend(capacity, shapes, types)
>>> buffer_append(buffer, exp, count, head)
>>> buffer_append(buffer, batch_exp, count, head)
class tinyms.primitives.BufferGetItem(*args, **kwargs)[source]

Get the data from buffer in the position of input index.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • buffer_shape (tuple(shape)) – The shape of a buffer.

  • buffer_dtype (tuple(type)) – The type of a buffer.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The real available size of the buffer. Data type: int32.

  • head (Parameter) - The position of the first data in the buffer. Data type: int32.

  • index (int64) - The position of the data in buffer.

Outputs:

tuple(Tensor). The shape is buffer_shape. The dtype is buffer_dtype.

Raises
  • ValueError – If count or head is not an integer.

  • ValueError – If capacity is not a positive integer.

  • TypeError – If buffer_shape is not a tuple.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> index = 3
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer_get = ops.BufferGetItem(capacity, shapes, types)
>>> output = buffer_get(buffer, count, head, index)
>>> print(output)
    (Tensor(shape=[4], dtype=Float32, value=
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01]),
     Tensor(shape=[2], dtype=Int32, value= [6, 7]),
     Tensor(shape=[1], dtype=Int32, value= [1]),
     Tensor(shape=[4], dtype=Float32, value=
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01]))
class tinyms.primitives.BufferSample(*args, **kwargs)[source]

In reinforcement learning, the data is sampled from the replaybuffer randomly.

Returns a tuple of tensors with the given shapes, decided by the given batch_size.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • batch_size (int64) – The size of the sampled data, less than or equal to capacity.

  • buffer_shape (tuple(shape)) – The shape of a buffer.

  • buffer_dtype (tuple(type)) – The type of a buffer.

  • seed (int64) – Random seed for sampling. Default: 0. If the default seed 0 is used, a random seed will be generated in the kernel; set a number other than 0 to use a specific seed.

  • unique (bool) – Whether the sampled data is strictly unique. Setting it to False gives better performance. Default: False.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The real available size of the buffer. Data type: int32.

  • head (Parameter) - The position of the first data in the buffer. Data type: int32.

Outputs:

tuple(Tensor). The shape is batch_size * buffer_shape. The dtype is buffer_dtype.

Raises
  • TypeError – If buffer_shape is not a tuple.

  • ValueError – If batch_size is larger than capacity.

  • ValueError – If capacity is not a positive integer.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> batch_size = 5
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> buffer_sample = ops.BufferSample(capacity, batch_size, shapes, types)
>>> output = buffer_sample(buffer, count, head)
>>> print(output)
    (Tensor(shape=[5, 4], dtype=Float32, value=
        [[ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
        [ 8.00000000e+00, 9.00000000e+00, 1.00000000e+01, 1.10000000e+01],
        [ 1.60000000e+01, 1.70000000e+01, 1.80000000e+01, 1.90000000e+01],
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01],
        [ 3.20000000e+01, 3.30000000e+01, 3.40000000e+01, 3.50000000e+01]]),
     Tensor(shape=[5, 2], dtype=Int32, value=
        [[ 0, 1],
        [ 4, 5],
        [ 8, 9],
        [ 6, 7],
        [16, 17]]),
     Tensor(shape=[5, 1], dtype=Int32, value=
        [[1],
        [1],
        [1],
        [1],
        [1]]),
     Tensor(shape=[5, 4], dtype=Float32, value=
        [[ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
        [ 8.00000000e+00, 9.00000000e+00, 1.00000000e+01, 1.10000000e+01],
        [ 1.60000000e+01, 1.70000000e+01, 1.80000000e+01, 1.90000000e+01],
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01],
        [ 3.20000000e+01, 3.30000000e+01, 3.40000000e+01, 3.50000000e+01]]))
class tinyms.primitives.CTCGreedyDecoder(*args, **kwargs)[source]

Performs greedy decoding on the logits given in inputs.

Parameters

merge_repeated (bool) – If true, merge repeated classes in output. Default: True.

Inputs:
  • inputs (Tensor) - The input Tensor must be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_classes)\). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float32 or float64.

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of \((batch\_size, )\). The type must be int32. Each value in the tensor must be equal to or less than max_time.

Outputs:
  • decoded_indices (Tensor) - A tensor with shape of \((total\_decoded\_outputs, 2)\). Data type is int64.

  • decoded_values (Tensor) - A tensor with shape of \((total\_decoded\_outputs, )\), it stores the decoded classes. Data type is int64.

  • decoded_shape (Tensor) - A tensor with shape of \((batch\_size, max\_decoded\_length)\). Data type is int64.

  • log_probability (Tensor) - A tensor with shape of \((batch\_size, 1)\), containing sequence log-probability, has the same type as inputs.

Raises
  • TypeError – If merge_repeated is not a bool.

  • ValueError – If length of shape of inputs is not equal to 3.

  • ValueError – If length of shape of sequence_length is not equal to 1.

Supported Platforms:

Ascend

Examples

>>> inputs = Tensor(np.array([[[0.6, 0.4, 0.2], [0.8, 0.6, 0.3]],
...                           [[0.0, 0.6, 0.0], [0.5, 0.4, 0.5]]]), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> ctc_greedyDecoder = ops.CTCGreedyDecoder()
>>> decoded_indices, decoded_values, decoded_shape, log_probability = ctc_greedyDecoder(inputs, sequence_length)
>>> print(decoded_indices)
[[0 0]
 [0 1]
 [1 0]]
>>> print(decoded_values)
[0 1 0]
>>> print(decoded_shape)
[2 2]
>>> print(log_probability)
[[-1.2]
 [-1.3]]
class tinyms.primitives.CTCLoss(*args, **kwargs)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

Parameters
  • preprocess_collapse_repeated (bool) – If true, repeated labels will be collapsed prior to the CTC calculation. Default: False.

  • ctc_merge_repeated (bool) – If false, during CTC calculation, repeated non-blank labels will not be merged and these labels will be interpreted as individual ones. This is a simplified version of CTC. Default: True.

  • ignore_longer_outputs_than_inputs (bool) – If true, sequences with longer outputs than inputs will be ignored. Default: False.

Inputs:
  • x (Tensor) - The input Tensor must be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_classes)\). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float16, float32 or float64.

  • labels_indices (Tensor) - The indices of labels. labels_indices[i, :] = [b, t] means labels_values[i] stores the id for (batch b, time t). The type must be int64 and rank must be 2.

  • labels_values (Tensor) - A 1-D input tensor. The values are associated with the given batch and time. The type must be int32. labels_values[i] must in the range of [0, num_classes).

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of \((batch\_size, )\). The type must be int32. Each value in the tensor must not be greater than max_time.

Outputs:
  • loss (Tensor) - A tensor containing log-probabilities, the shape is \((batch\_size, )\). The tensor has the same data type as x.

  • gradient (Tensor) - The gradient of loss, has the same shape and data type as x.

Raises
  • TypeError – If preprocess_collapse_repeated, ctc_merge_repeated or ignore_longer_outputs_than_inputs is not a bool.

  • TypeError – If x, labels_indices, labels_values or sequence_length is not a Tensor.

  • ValueError – If rank of labels_indices is not equal to 2.

  • TypeError – If dtype of x is not one of the following: float16, float32 or float64.

  • TypeError – If dtype of labels_indices is not int64.

  • TypeError – If dtype of labels_values or sequence_length is not int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[0.3, 0.6, 0.6],
...                       [0.4, 0.3, 0.9]],
...
...                      [[0.9, 0.4, 0.2],
...                       [0.9, 0.9, 0.1]]]).astype(np.float32))
>>> labels_indices = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int64)
>>> labels_values = Tensor(np.array([2, 2]), mindspore.int32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> ctc_loss = ops.CTCLoss()
>>> loss, gradient = ctc_loss(x, labels_indices, labels_values, sequence_length)
>>> print(loss)
[ 0.79628  0.5995158 ]
>>> print(gradient)
[[[ 0.27029088  0.36485454  -0.6351454  ]
  [ 0.28140804  0.25462854  -0.5360366 ]]
 [[ 0.47548494  0.2883962    0.04510255 ]
  [ 0.4082751   0.4082751    0.02843709 ]]]
class tinyms.primitives.CalculateNowrapCrd(*args, **kwargs)[source]

Calculate the inside-box periodic image of each atom.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters

atom_numbers (int32) – the number of atoms n.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • box (Tensor) - The 3-D size of system. The data type is float32 and the shape is \((3, )\).

  • box_map_times (Tensor) - The number of times each atom has crossed the box. The data type is int32 and the shape is \((n, 3)\).

Outputs:
  • nowrap_crd (Tensor) - The inside-box periodic image of each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU
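
A schematic call sketch only, assuming a GPU build with the SPONGE operators available and using placeholder values that match the documented shapes and dtypes:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms import primitives as P
>>> n = 2
>>> nowrap = P.CalculateNowrapCrd(atom_numbers=n)
>>> crd = Tensor(np.array([[0.5, 0.5, 0.5], [1.0, 2.0, 3.0]], np.float32))
>>> box = Tensor(np.array([10.0, 10.0, 10.0], np.float32))
>>> box_map_times = Tensor(np.array([[0, 0, 0], [1, -1, 0]], np.int32))
>>> nowrap_crd = nowrap(crd, box, box_map_times)   # shape (n, 3)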

class tinyms.primitives.Cast(*args, **kwargs)[source]

Returns a tensor with the new specified data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The tensor to be cast.

  • type (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is the same as input_x, \((x_1, x_2, ..., x_R)\).

Raises
  • TypeError – If input_x is neither Tensor nor Number.

  • TypeError – If type is not a Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input_x = Tensor(input_np)
>>> type_dst = mindspore.int32
>>> cast = ops.Cast()
>>> output = cast(input_x, type_dst)
>>> print(output.dtype)
Int32
>>> print(output.shape)
(2, 3, 4, 5)
class tinyms.primitives.Cdist(*args, **kwargs)[source]

Computes batched the p norm distance between each pair of the two collections of row vectors.

Parameters

p (float) – P value for the p norm distance to calculate between each vector pair ∈[0,∞].

Inputs:
  • input_x (Tensor) - Input tensor of shape \((B, P, M)\). Letter \(B\) represents 0 or positive int number. When \(B\) is equal to 0, it means this dimension can be ignored, i.e. shape of the tensor is \((P, M)\).

  • input_y (Tensor) - Input tensor of shape \((B, R, M)\).

Outputs:

Tensor, has the same dtype as input_x, which shape is \((B, P, R)\).

Raises
  • TypeError – If input_x or input_y is not a Tensor.

  • TypeError – If dtype of input_x or input_y is neither float16 nor float32.

  • TypeError – If p is not a float.

  • ValueError – If p is a negative float.

  • ValueError – If dimension of input_x is not the same as input_y.

  • ValueError – If dimension of input_x or input_y is neither 2 nor 3.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[[1.0, 1.0], [2.0, 2.0]]]).astype(np.float32))
>>> input_y = Tensor(np.array([[[3.0, 3.0], [3.0, 3.0]]]).astype(np.float32))
>>> op = ops.Cdist(p=2.0)
>>> output = op(input_x, input_y)
>>> print(output)
[[[2.8284273 2.8284273]
  [1.4142137 1.4142137]]]
class tinyms.primitives.Ceil(*args, **kwargs)[source]

Rounds a tensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil = \lfloor x_i \rfloor + 1\]
Inputs:
  • x (Tensor) - The input tensor. Its element data type must be float16 or float32. The shape is \((N,*)\) where \(*\) means any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> ceil_op = ops.Ceil()
>>> output = ceil_op(x)
>>> print(output)
[ 2.  3. -1.]
class tinyms.primitives.CheckBprop(*args, **kwargs)[source]

Checks whether the data type and the shape of corresponding elements from tuples x and y are the same.

Inputs:
  • input_x (tuple[Tensor]) - The input_x contains the outputs of bprop to be checked.

  • input_y (tuple[Tensor]) - The input_y contains the inputs of bprop to check against.

Outputs:

(tuple[Tensor]), the input_x, if data type and shape of corresponding elements from input_x and input_y are the same.

Raises

TypeError – If input_x or input_y is not a Tensor.

Examples

>>> input_x = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> input_y = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> out = ops.CheckBprop()(input_x, input_y)
class tinyms.primitives.CheckValid(*args, **kwargs)[source]

Checks bounding box.

Checks whether the bounding boxes are within the valid borders of the image described by img_metas.

Warning

The valid boundary is specified by img_metas as (height * ratio, width * ratio).

Inputs:
  • bboxes (Tensor) - Bounding boxes tensor with shape (N, 4). Data type must be float16 or float32.

  • img_metas (Tensor) - Raw image size information with the format of (height, width, ratio). Data type must be float16 or float32.

Outputs:

Tensor, with shape of (N,) and dtype of bool.

Raises
  • TypeError – If bboxes or img_metas is not a Tensor.

  • TypeError – If dtype of bboxes or img_metas is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.check_valid = ops.CheckValid()
...     def construct(self, x, y):
...         valid_result = self.check_valid(x, y)
...         return valid_result
...
>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> net = Net()
>>> output = net(bboxes, img_metas)
>>> print(output)
[ True False False]
class tinyms.primitives.ComputeAccidentalHits(*args, **kwargs)[source]

Compute accidental hits of sampled classes which match target classes.

When a target class matches a sampled class, we call it an “accidental hit”. The result of computing accidental hits contains three parts (index, id, weight), where index represents the row number in true_classes, id represents the position in sampled_candidates, and the weight is -FLOAT_MAX, where FLOAT_MAX indicates the maximum value of the float type.

Parameters

num_true (int) – The number of target classes per training example. Default: 1.

Inputs:
  • true_classes (Tensor) - The target classes. With data type of int32 or int64 and shape \((batch\_size, num\_true)\).

  • sampled_candidates (Tensor) - The Candidate sampling results of operators, types of training samples, with data type of int32 or int64 and shape \((num\_sampled, )\).

Outputs:

Tuple of 3 Tensors.

  • indices (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the same type as true_classes.

  • ids (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the same type as true_classes.

  • weights (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the type float32.

Raises
  • TypeError – If dtype of num_true is not int.

  • TypeError – If true_classes or sampled_candidates is not a Tensor.

  • TypeError – If dtype of true_classes or sampled_candidates is neither int32 nor int64.

Supported Platforms:

Ascend

Examples

>>> true_classes = np.array([[1, 2], [0, 4], [3, 3]])
>>> sampled_candidates = np.array([0, 1, 2, 3, 4])
>>> sampler = ops.ComputeAccidentalHits(2)
>>> indices, ids, weights = sampler(Tensor(true_classes), Tensor(sampled_candidates))
>>> print(indices, ids, weights)
[0 0 1 1 2 2]
[1 2 0 4 3 3]
[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
class tinyms.primitives.Concat(*args, **kwargs)[source]

Connects tensors along the specified axis.

Concatenates the input tensors along the given axis.

The input data is a tuple of tensors. These tensors have the same rank R. Set the given axis as m, and \(0 \le m < R\). Set the number of input tensors as N. For the \(i\)-th tensor \(t_i\), it has the shape of \((x_1, x_2, ..., x_{mi}, ..., x_R)\). \(x_{mi}\) is the \(m\)-th dimension of the \(i\)-th tensor. Then, the shape of the output tensor is

\[(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\]

Warning

The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input_x”.

Parameters

axis (int) – The specified axis. Default: 0.

Inputs:
  • input_x (tuple, list) - A tuple or a list of input tensors. Suppose there are two tensors in this tuple or list, namely x1 and x2. To perform Concat in the axis 0 direction, except for the 0th axis, all other axes should be equal, that is, \(x1.shape[1] == x2.shape[1], x1.shape[2] == x2.shape[2], ..., x1.shape[R] == x2.shape[R]\), where the \(R\) indicates the last axis.

Outputs:
  • Tensor, the shape is \((x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\). The data type is the same with input_x.

Raises

TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> input_x2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> op = ops.Concat()
>>> output = op((input_x1, input_x2))
>>> print(output)
[[0. 1.]
 [2. 1.]
 [0. 1.]
 [2. 1.]]
>>> op = ops.Concat(1)
>>> output = op((input_x1, input_x2))
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]
class tinyms.primitives.ConfusionMatrix(*args, **kwargs)[source]

Calculates the confusion matrix from labels and predictions.

Parameters
  • num_classes (int) – The number of classes.

  • dtype (str) – Data type of confusion matrix. Default: ‘int32’.

Inputs:
  • labels (Tensor) - Real labels, a 1-D tensor. The dtype must be a non-negative integer type.

  • predictions (Tensor) - The predicted labels, a 1-D tensor with the same shape as labels. The dtype must be a non-negative integer type.

  • weights (Tensor) - A 1-D tensor with the same shape as predictions.

Outputs:

Tensor, the confusion matrix, with shape (num_classes, num_classes).

Raises
  • TypeError – If num_classes is not an int.

  • TypeError – If dtype is not a str.

  • TypeError – If labels, predictions or weights is not a Tensor.

Examples

>>> confusion_matrix = ops.ConfusionMatrix(4)
>>> labels = Tensor([0, 1, 1, 3], mindspore.int32)
>>> predictions = Tensor([1, 2, 1, 3], mindspore.int32)
>>> output = confusion_matrix(labels, predictions)
>>> print(output)
[[0 1 0 0]
 [0 1 1 0]
 [0 0 0 0]
 [0 0 0 1]]
class tinyms.primitives.Conj(*args, **kwargs)[source]

Returns a Tensor that is the complex conjugate of the input, element-wise.

Inputs:
  • input (Tensor, complex) - The input tensor. types: complex64, complex128.

Outputs:

Tensor, has the same dtype as the input.

Raises

TypeError – If the dtype of input is not one of: complex64, complex128.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.asarray(1.3+0.4j), mindspore.complex64)
>>> conj = ops.Conj()
>>> output = conj(x)
>>> print(output)
1.3-0.4j
class tinyms.primitives.Constrain(*args, **kwargs)[source]

Calculate the constraint force and virial, depending on whether pressure calculation is required.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • constrain_pair_numbers (int32) – the number of constrain pairs m.

  • iteration_numbers (int32) – the number of iterations p.

  • half_exp_gamma_plus_half (float32) – half exp_gamma plus half q.

  • update_interval (int32) – the update interval. Default: 10.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • quarter_cof (Tensor) - The 3-D scale factor. The data type is float32 and the shape is \((3,)\).

  • mass_inverse (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The 3-D scale factor (x, y, z). The data type is float32 and the shape is \((3,)\).

  • pair_dr (Tensor) - The displacement vector of each constrained atom pair. The data type is float32 and the shape is \((m, 3)\).

  • atom_i_serials (Tensor) - The first atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • atom_j_serials (Tensor) - The second atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • constant_rs (Tensor) - The constrained distance of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • constrain_ks (Tensor) - The coefficient of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • need_pressure (Tensor) - Whether pressure is needed: 1 if yes, 0 otherwise. The data type is int32 and the shape is \((1,)\) or \(()\).

Outputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • frc (Tensor) - The constraint force on each atom. The data type is float32 and the shape is \((n, 3)\).

  • virial (Tensor) - The constraint virial on each atom. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.ConstrainForce(*args, **kwargs)[source]

Calculate the constraint force in a step with iteration numbers.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • constrain_pair_numbers (int32) – the number of constrain pairs m.

  • iteration_numbers (int32) – the number of iterations p.

  • half_exp_gamma_plus_half (float32) – half exp_gamma plus half q.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • quarter_cof (Tensor) - The 3-D scale factor. The data type is float32 and the shape is \((3,)\).

  • mass_inverse (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The 3-D scale factor (x, y, z). The data type is float32 and the shape is \((3,)\).

  • pair_dr (Tensor) - The displacement vector of each constrained atom pair. The data type is float32 and the shape is \((m, 3)\).

  • atom_i_serials (Tensor) - The first atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • atom_j_serials (Tensor) - The second atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • constant_rs (Tensor) - The constrained distance of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • constrain_ks (Tensor) - The coefficient of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

Outputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • frc (Tensor) - The constraint force on each atom. The data type is float32 and the shape is \((n, 3)\).

  • virial (Tensor) - The constraint virial on each atom and it is zero. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU
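
A schematic call sketch only (experimental GPU operator; placeholder inputs of the documented shapes and dtypes, constructor keywords assumed to follow the documented parameter names):

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms import primitives as P
>>> n, m = 4, 2
>>> op = P.ConstrainForce(atom_numbers=n, constrain_pair_numbers=m,
...                       iteration_numbers=25, half_exp_gamma_plus_half=1.0)
>>> crd = Tensor(np.zeros((n, 3), np.float32))
>>> quarter_cof = Tensor(np.ones(3, np.float32))
>>> mass_inverse = Tensor(np.ones(n, np.float32))
>>> scaler = Tensor(np.ones(3, np.float32))
>>> pair_dr = Tensor(np.zeros((m, 3), np.float32))
>>> atom_i_serials = Tensor(np.array([0, 2], np.int32))
>>> atom_j_serials = Tensor(np.array([1, 3], np.int32))
>>> constant_rs = Tensor(np.ones(m, np.float32))
>>> constrain_ks = Tensor(np.ones(m, np.float32))
>>> uint_crd, frc, virial = op(crd, quarter_cof, mass_inverse, scaler, pair_dr,
...                            atom_i_serials, atom_j_serials, constant_rs, constrain_ks)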

class tinyms.primitives.ConstrainForceCycle(*args, **kwargs)[source]

Calculate the constraint force in each iteration.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • constrain_pair_numbers (int32) – the number of constrain pairs m.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler (Tensor) - The 3-D scale factor (x, y, z). The data type is float32 and the shape is \((3,)\).

  • pair_dr (Tensor) - The displacement vector of each constrained atom pair. The data type is float32 and the shape is \((m, 3)\).

  • atom_i_serials (Tensor) - The first atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • atom_j_serials (Tensor) - The second atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • constant_rs (Tensor) - The constrained distance of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • constrain_ks (Tensor) - The coefficient of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

Outputs:
  • test_frc (Tensor) - The constraint force. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU
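
A schematic call sketch only, under the same assumptions as the ConstrainForce sketch above:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms import primitives as P
>>> n, m = 4, 2
>>> op = P.ConstrainForceCycle(atom_numbers=n, constrain_pair_numbers=m)
>>> uint_crd = Tensor(np.zeros((n, 3), np.uint32))
>>> scaler = Tensor(np.ones(3, np.float32))
>>> pair_dr = Tensor(np.zeros((m, 3), np.float32))
>>> atom_i_serials = Tensor(np.array([0, 2], np.int32))
>>> atom_j_serials = Tensor(np.array([1, 3], np.int32))
>>> constant_rs = Tensor(np.ones(m, np.float32))
>>> constrain_ks = Tensor(np.ones(m, np.float32))
>>> test_frc = op(uint_crd, scaler, pair_dr, atom_i_serials, atom_j_serials, constant_rs, constrain_ks)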

class tinyms.primitives.ConstrainForceCycleWithVirial(*args, **kwargs)[source]

Calculate the constraint force and virial in each iteration.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • constrain_pair_numbers (int32) – the number of constrain pairs m.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler (Tensor) - The 3-D scale factor (x, y, z). The data type is float32 and the shape is \((3,)\).

  • pair_dr (Tensor) - The displacement vector of each constrained atom pair. The data type is float32 and the shape is \((m, 3)\).

  • atom_i_serials (Tensor) - The first atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • atom_j_serials (Tensor) - The second atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • constant_rs (Tensor) - The constrained distance of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • constrain_ks (Tensor) - The coefficient of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

Outputs:
  • test_frc (Tensor) - The constraint force. The data type is float32 and the shape is \((n, 3)\).

  • atom_virial (Tensor) - The virial caused by constraint force of each atom. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.ConstrainForceVirial(*args, **kwargs)[source]

Calculate the constraint force and virial in a step with iteration numbers.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • constrain_pair_numbers (int32) – the number of constrain pairs m.

  • iteration_numbers (int32) – the number of iterations p.

  • half_exp_gamma_plus_half (float32) – half exp_gamma plus half q.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • quarter_cof (Tensor) - The 3-D scale factor. The data type is float32 and the shape is \((3,)\).

  • mass_inverse (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The 3-D scale factor (x, y, z). The data type is float32 and the shape is \((3,)\).

  • pair_dr (Tensor) - The displacement vector of each constrained atom pair. The data type is float32 and the shape is \((m, 3)\).

  • atom_i_serials (Tensor) - The first atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • atom_j_serials (Tensor) - The second atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • constant_rs (Tensor) - The constrained distance of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • constrain_ks (Tensor) - The coefficient of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

Outputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • virial (Tensor) - The constraint virial on each atom. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.Conv2D(*args, **kwargs)[source]

2D convolution layer.

Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is height, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel and it has shape \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} // \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where group is the group number to split the input in the channel dimension.

If the ‘pad_mode’ is set to be “valid”, the output height and width will be \(\left \lfloor{1 + \frac{H_{in} + \text{padding[0]} + \text{padding[1]} - \text{kernel_size[0]} - (\text{kernel_size[0]} - 1) \times (\text{dilation[0]} - 1) }{\text{stride[0]}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + \text{padding[2]} + \text{padding[3]} - \text{kernel_size[1]} - (\text{kernel_size[1]} - 1) \times (\text{dilation[1]} - 1) }{\text{stride[1]}}} \right \rfloor\) respectively, where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input.
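
As a quick sanity check of the “valid” formula, the arithmetic below reproduces the output height for the 32x32 input, 3x3 kernel, stride 1 and dilation 1 used in the Examples further down; the width follows identically, giving a 30x30 output:

>>> H_in, k, stride, dilation = 32, 3, 1, 1   # as in the Examples below; padding is 0 for "valid"
>>> H_out = 1 + (H_in + 0 + 0 - k - (k - 1) * (dilation - 1)) // stride
>>> print(H_out)
30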

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition. More detailed introduction can be found here: http://cs231n.github.io/convolutional-networks/.

Parameters
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means the value is for both the height and the width of the kernel. A tuple of 2 ints means the first value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. 0: math convolution, 1: cross-correlation convolution, 2: deconvolution, 3: depthwise convolution. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input x. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – Implicit paddings on both sides of the input x. If pad is one integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple with four integers, the paddings of top, bottom, left and right will be equal to pad[0], pad[1], pad[2], and pad[3] accordingly. Default: 0.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The optional value for data format is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set size of kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\), then the shape is \((C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]})\).

Outputs:

Tensor, the value that applied 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\).

Raises
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> conv2d = ops.Conv2D(out_channel=32, kernel_size=3)
>>> output = conv2d(x, weight)
>>> print(output.shape)
(10, 32, 30, 30)
class tinyms.primitives.Conv2DBackpropInput(*args, **kwargs)[source]

Computes the gradients of convolution with respect to the input.

Parameters
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means the value is for both the height and the width of the kernel. A tuple of 2 ints means the first value is for the height and the other is for the width of the kernel.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”.

  • pad (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

  • mode (int) – Modes for different convolutions. 0: math convolution, 1: cross-correlation convolution, 2: deconvolution, 3: depthwise convolution. Default: 1.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to be used for the dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • dout (Tensor) - The gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{out}, H_{out}, W_{out})\).

  • weight (Tensor) - Set size of kernel is \((\text{ks_w}, \text{ks_h})\), where \(\text{ks_w}\) and \(\text{ks_h}\) are the height and width of the convolution kernel, then the shape is \((C_{out}, C_{in}, \text{ks_w}, \text{ks_h})\).

  • input_size (Tensor) - A tuple describes the shape of the input which conforms to the format \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the gradients with respect to the input of convolution. It has the same shape as the input.

Raises
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> input_x = Tensor(np.ones([10, 32, 32, 32]))
>>> conv2d_backprop_input = ops.Conv2DBackpropInput(out_channel=32, kernel_size=3)
>>> output = conv2d_backprop_input(dout, weight, ops.shape(input_x))
>>> print(output.shape)
(10, 32, 32, 32)
class tinyms.primitives.Conv2DTranspose(*args, **kwargs)[source]

Compute a 2D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution).

Parameters
  • out_channel (int) – The dimensionality of the output space.

  • kernel_size (Union[int, tuple[int]]) – The size of the convolution window.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”.

  • pad (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

  • mode (int) – Modes for different convolutions. 0: math convolution, 1: cross-correlation convolution, 2: deconvolution, 3: depthwise convolution. Default: 1.

  • stride (Union[int, tuple[int]]) – The stride to be applied to the convolution filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to be used for the dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • dout (Tensor) - the gradients w.r.t the output of the convolution. The shape conforms to the default data_format \((N, C_{out}, H_{out}, W_{out})\).

  • weight (Tensor) - Set size of kernel is \((K_1, K_2)\), then the shape is \((C_{out}, C_{in}, K_1, K_2)\).

  • input_size (Tensor) - A tuple describes the shape of the input which conforms to the format \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the gradients w.r.t the input of convolution. It has the same shape as the input.

Raises
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> x = Tensor(np.ones([10, 32, 32, 32]))
>>> conv2d_transpose_input = ops.Conv2DTranspose(out_channel=32, kernel_size=3)
>>> output = conv2d_transpose_input(dout, weight, ops.shape(x))
>>> print(output.shape)
(10, 32, 32, 32)
class tinyms.primitives.Conv3D(*args, **kwargs)[source]

3D convolution layer.

Applies a 3D convolution over an input tensor which is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\) and output shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, \(H\) is height, \(W\) is width. The formula is defined as:

\[\operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right), \operatorname{input}\left(N_{i}, k\right))\]

where \(k\) is kernel, \(ccor\) is the cross-correlation operator.

If the ‘pad_mode’ is set to be “valid”, the output depth, height and width will be \(\left \lfloor{1 + \frac{D_{in} + 2 \times \text{padding} - \text{ks_d} - (\text{ks_d} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} - (\text{ks_h} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} - (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) respectively. Where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, \(padding\) is the zero-padding added to both sides of the input.

Parameters
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 3 integers. Specifies the depth, height and width of the 3D convolution window. Single int means the value is for the depth, height and the width of the kernel. A tuple of 3 ints means the first value is for the depth, the second value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. It is currently not used. Default: 1.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be the same as the input. The total number of padding will be calculated in depth, horizontal and vertical directions and evenly distributed to head and tail, top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the tail, bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height, width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • dilation (Union[int, tuple[int]]) – The data type is int or a tuple of 3 integers \((\text{dilation_d}, \text{dilation_h}, \text{dilation_w})\). Currently, dilation on depth only supports the case of 1. Specifies the dilation rate to use for dilated convolution. If set \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater or equal to 1 and bounded by the height and width of the input. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by the number of groups. Default: 1. Only 1 is currently supported.

  • data_format (str) – The optional value for data format. Currently only support “NCDHW”.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\). Currently the input data type only supports float16 and float32.

  • weight (Tensor) - If the size of the kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{out}, C_{in}//groups, K_d, K_h, K_w)\). Currently the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \(C_{in}\). Currently, only None is supported.

Outputs:

Tensor, the value that applied 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

Raises
  • TypeError – If out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • ValueError – If out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> conv3d = ops.Conv3D(out_channel=32, kernel_size=(4, 3, 3))
>>> output = conv3d(x, weight)
>>> print(output.shape)
(16, 32, 7, 30, 30)
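
The printed shape can be checked by hand against the ‘valid’ formula above. The helper below is added purely for illustration (out_dim is a throwaway function defined here, not part of the API), assuming the defaults stride=1, dilation=1 and padding=0 used in this example:

>>> def out_dim(x, ks, stride=1, dilation=1, padding=0):
...     # floor(1 + (x + 2*padding - ks - (ks - 1)*(dilation - 1)) / stride)
...     return 1 + (x + 2 * padding - ks - (ks - 1) * (dilation - 1)) // stride
>>> print(out_dim(10, 4), out_dim(32, 3), out_dim(32, 3))
7 30 30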
class tinyms.primitives.Conv3DTranspose(*args, **kwargs)[source]

Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution).

Input is typically of shape \((N, C, D, H, W)\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, \(H\) is height, \(W\) is width.

If the ‘pad_mode’ is set to be “pad”, the depth, height and width of output are defined as:

\[ \begin{align}\begin{aligned}D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times \text{pad}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1\\H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times \text{pad}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1\\W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times \text{pad}[2] + \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) + \text{output\_padding}[2] + 1\end{aligned}\end{align} \]
Parameters
  • in_channel (int) – The channel of the input x.

  • out_channel (int) – The channel of the output.

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 3 integers. Specifies the depth, height and width of the 3D convolution window. Single int means the value is for the depth, height and the width of the kernel. A tuple of 3 ints means the first value is for the depth, second value is for height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. Default is 1. It is currently not used.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be the same as the input. The total number of padding will be calculated in depth, horizontal and vertical directions and evenly distributed to head and tail, top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the tail, bottom and the right side. If this mode is set, pad and output_padding must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad and output_padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height, width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – Specifies the space to use between kernel elements. Default: 1.

  • group (int) – Splits input into groups. Default: 1. Only 1 is currently supported.

  • output_padding (Union(int, tuple[int])) – Add extra size to each dimension of the output. Default: 0.

  • data_format (str) – The optional value for data format. Currently only ‘NCDHW’ is supported.

Inputs:
  • dout (Tensor) - The gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{in}, D_{out}, H_{out}, W_{out})\). Currently the dout data type only supports float16 and float32.

  • weight (Tensor) - If the size of the kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{in}, C_{out}//group, K_d, K_h, K_w)\), where \(group\) is the Args parameter. Currently the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \(C_{out}\). Currently, only None is supported.

Outputs:

Tensor, the gradients with respect to the input of convolution 3D. Tensor of shape \((N, C_{out}//group, D_{out}, H_{out}, W_{out})\), where \(group\) is the Args parameter.

Supported Platforms:

Ascend GPU

Raises
  • TypeError – If in_channel, out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad, dilation or output_padding is neither an int nor a tuple.

  • ValueError – If in_channel, out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

  • TypeError – If the data type of dout and weight is not float16.

  • ValueError – If bias is not None, or if the rank of dout or weight is not equal to 5.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([16, 3, 4, 6, 2]), mindspore.float16)
>>> conv3d_transpose = ops.Conv3DTranspose(in_channel=16, out_channel=3, kernel_size=(4, 6, 2))
>>> output = conv3d_transpose(dout, weight)
>>> print(output.shape)
(32, 3, 13, 37, 33)
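
As a cross-check added here for illustration (not part of the original reference), the printed depth, height and width can be reproduced from the formula above, assuming the default stride=1, pad=0, dilation=1 and output_padding=0 behave like the ‘pad’ formula with zero padding:

>>> def out_dim(x, ks, stride=1, pad=0, dilation=1, output_padding=0):
...     # (x - 1)*stride - 2*pad + dilation*(ks - 1) + output_padding + 1
...     return (x - 1) * stride - 2 * pad + dilation * (ks - 1) + output_padding + 1
>>> print(out_dim(10, 4), out_dim(32, 6), out_dim(32, 2))
13 37 33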
class tinyms.primitives.Cos(*args, **kwargs)[source]

Computes cosine of input element-wise.

\[out_i = cos(x_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> cos = ops.Cos()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cos(x)
>>> print(output)
[0.971338 0.67487574 0.95233357 0.9959527 ]
class tinyms.primitives.Cosh(*args, **kwargs)[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = \cosh(input_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions, and its rank should be less than 8.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> cosh = ops.Cosh()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cosh(x)
>>> print(output)
[1.0289385 1.364684 1.048436 1.0040528]
class tinyms.primitives.CrdToUintCrd(*args, **kwargs)[source]

Convert FP32 coordinate to Uint32 coordinate.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters

atom_numbers (int32) – the number of atoms n.

Inputs:
  • crd_to_uint_crd_cof (Tensor) - The scale factor between the unsigned int value and the real space coordinates. The data type is float32 and the shape is \((3,)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

Outputs:
  • output (Scalar) - The data type is uint32.

Supported Platforms:

GPU

class tinyms.primitives.CrdToUintCrdQuarter(*args, **kwargs)[source]

Convert FP32 coordinate to Uint32 coordinate.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters

atom_numbers (int32) – the number of atoms n.

Inputs:
  • crd_to_uint_crd_cof (Tensor) - The crd_to_uint_crd coefficient. The data type is float32 and the shape is \((3,)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

Outputs:
  • output (Tensor) - The unsigned int coordinates. The data type is unsigned int32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.CropAndResize(*args, **kwargs)[source]

Extracts crops from the input image tensor and resizes them.

Note

Since the output shape depends on crop_size, crop_size must be a constant value.

Parameters
  • method (str) – An optional string that specifies the sampling method for resizing. It can be “bilinear”, “nearest” or “bilinear_v2”. The option “bilinear” stands for the standard bilinear interpolation algorithm, while “bilinear_v2” may produce better results in some cases. Default: “bilinear”.

  • extrapolation_value (float) – An optional float value used for extrapolation, if applicable. Default: 0.

Inputs:
  • x (Tensor) - The input image must be a 4-D tensor of shape [batch, image_height, image_width, depth]. Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • boxes (Tensor) - A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so that the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values. Types allowed: float32.

  • box_index (Tensor) - A 1-D tensor of shape [num_boxes] with int32 values in [0, batch). The value of box_ind[i] specifies the image that the i-th box refers to. Types allowed: int32.

  • crop_size (Tuple[int]) - A tuple of two int32 elements: (crop_height, crop_width). Only constant value is allowed. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both crop_height and crop_width need to be positive.

Outputs:

A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth] with type: float32.

Raises
  • TypeError – If method is not a str.

  • TypeError – If extrapolation_value is not a float.

  • ValueError – If method is not one of ‘bilinear’, ‘nearest’, ‘bilinear_v2’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class CropAndResizeNet(nn.Cell):
...     def __init__(self, crop_size):
...         super(CropAndResizeNet, self).__init__()
...         self.crop_and_resize = ops.CropAndResize()
...         self.crop_size = crop_size
...
...     def construct(self, x, boxes, box_index):
...         return self.crop_and_resize(x, boxes, box_index, self.crop_size)
...
>>> BATCH_SIZE = 1
>>> NUM_BOXES = 5
>>> IMAGE_HEIGHT = 256
>>> IMAGE_WIDTH = 256
>>> CHANNELS = 3
>>> image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
>>> boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
>>> box_index = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
>>> crop_size = (24, 24)
>>> crop_and_resize = CropAndResizeNet(crop_size=crop_size)
>>> output = crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_index))
>>> print(output.shape)
(5, 24, 24, 3)
class tinyms.primitives.CumProd(*args, **kwargs)[source]

Computes the cumulative product of the tensor x along axis.

Parameters
  • exclusive (bool) – If true, perform exclusive cumulative product. Default: False.

  • reverse (bool) – If true, reverse the result along axis. Default: False

Inputs:
  • x (Tensor[Number]) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions, and its rank should be less than 8.

  • axis (int) - The dimensions to compute the cumulative product. Only constant value is allowed.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises
Supported Platforms:

Ascend GPU

Examples

>>> a, b, c, = 1, 2, 3
>>> x = Tensor(np.array([a, b, c]).astype(np.float32))
>>> op0 = ops.CumProd()
>>> output0 = op0(x, 0) # output=[a, a * b, a * b * c]
>>> op1 = ops.CumProd(exclusive=True)
>>> output1 = op1(x, 0) # output=[1, a, a * b]
>>> op2 = ops.CumProd(reverse=True)
>>> output2 = op2(x, 0) # output=[a * b * c, b * c, c]
>>> op3 = ops.CumProd(exclusive=True, reverse=True)
>>> output3 = op3(x, 0) # output=[b * c, c, 1]
>>> print(output0)
[1. 2. 6.]
>>> print(output1)
[1. 1. 2.]
>>> print(output2)
[6. 6. 3.]
>>> print(output3)
[6. 3. 1.]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [5, 3, 5]]).astype(np.float32))
>>> output4 = op0(x, 0)
>>> output5 = op0(x, 1)
>>> print(output4)
[[ 1.  2.  3.]
 [ 4. 10. 18.]
 [20. 30. 90.]]
>>> print(output5)
[[1.  2.   6.]
 [4. 20. 120.]
 [5. 15.  75.]]
class tinyms.primitives.CumSum(*args, **kwargs)[source]

Computes the cumulative sum of input tensor along axis.

\[y_i = x_1 + x_2 + x_3 + ... + x_i\]
Parameters
  • exclusive (bool) – If true, perform exclusive mode. Default: False.

  • reverse (bool) – If true, perform inverse cumulative sum. Default: False.

Inputs:
  • input (Tensor) - The input tensor to accumulate.

  • axis (int) - The axis to accumulate the tensor’s value. Only constant value is allowed. Must be in the range [-rank(input), rank(input)).

Outputs:

Tensor, the shape of the output tensor is consistent with the input tensor’s.

Raises
  • TypeError – If exclusive or reverse is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> cumsum = ops.CumSum()
>>> # case 1: along the axis 0
>>> y = cumsum(x, 0)
>>> print(y)
[[ 3.  4.  6. 10.]
 [ 4. 10. 13. 19.]
 [ 8. 13. 21. 26.]
 [ 9. 16. 28. 35.]]
>>> # case 2: along the axis 1
>>> y = cumsum(x, 1)
>>> print(y)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
>>> # Next demonstrate exclusive and reverse, along axis 1
>>> # case 3: exclusive = True
>>> cumsum = ops.CumSum(exclusive=True)
>>> y = cumsum(x, 1)
>>> print(y)
[[ 0.  3.  7. 13.]
 [ 0.  1.  7. 14.]
 [ 0.  4.  7. 15.]
 [ 0.  1.  4. 11.]]
>>> # case 4: reverse = True
>>> cumsum = ops.CumSum(reverse=True)
>>> y = cumsum(x, 1)
>>> print(y)
[[23. 20. 16. 10.]
 [23. 22. 16.  9.]
 [22. 18. 15.  7.]
 [20. 19. 16.  9.]]
class tinyms.primitives.DType(*args, **kwargs)[source]

Returns the data type of the input tensor as mindspore.dtype.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

mindspore.dtype, the data type of a tensor.

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.DType()(input_tensor)
>>> print(output)
Float32
class tinyms.primitives.DataFormatDimMap(*args, **kwargs)[source]

Returns the dimension index in the destination data format given in the source data format.

Parameters
  • src_format (str) – An optional value for source data format. The format can be ‘NHWC’ and ‘NCHW’. Default: ‘NHWC’.

  • dst_format (str) – An optional value for destination data format. The format can be ‘NHWC’ and ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • input_x (Tensor) - A Tensor with each element as a dimension index in source data format. The suggested values are in the range [-4, 4). Only supports int32.

Outputs:

Tensor, Return the dimension index in the given target data format, has the same data type and shape as the input_x.

Raises
  • TypeError – If src_format or dst_format is not a str.

  • TypeError – If input_x is not a Tensor or its dtype is not int32.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor([0, 1, 2, 3], mindspore.int32)
>>> dfdm = ops.DataFormatDimMap()
>>> output = dfdm(input_x)
>>> print(output)
[0 3 1 2]
class tinyms.primitives.Depend(*args, **kwargs)[source]

Depend is used for processing dependency operations.

In most scenarios, if operators have IO side effects or memory side effects, they will be executed according to the user’s semantics. In some scenarios, if the two operators A and B have no order dependency, but A must be executed before B, we recommend using Depend to specify their execution order. The usage method is as follows:

a = A(x)                --->        a = A(x)
b = B(y)                --->        y = Depend(y, a)
                        --->        b = B(y)
Inputs:
  • value (Tensor) - the real value to return for depend operator.

  • expr (Expression) - the expression to execute with no outputs.

Outputs:

Tensor, the value passed by last operator.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.softmax = ops.Softmax()
...         self.depend = ops.Depend()
...
...     def construct(self, x, y):
...         mul = x * y
...         y = self.depend(y, mul)
...         ret = self.softmax(y)
...         return ret
...
>>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
[[0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]]
class tinyms.primitives.DepthToSpace(*args, **kwargs)[source]

Rearranges blocks of depth data into spatial dimensions.

This is the reverse operation of SpaceToDepth.

The depth of output tensor is \(input\_depth / (block\_size * block\_size)\).

The output tensor’s height dimension is \(height * block\_size\).

The output tensor’s width dimension is \(width * block\_size\).

The input tensor’s depth must be divisible by block_size * block_size. The data format is “NCHW”.

Parameters

block_size (int) – The block size used to divide depth data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. It must be a 4-D tensor with shape \((N, C_{in}, H_{in}, W_{in})\). The data type is Number.

Outputs:

Tensor of shape \((N, C_{in} / \text{block_size} ^ 2, H_{in} * \text{block_size}, W_{in} * \text{block_size})\).

Raises
  • TypeError – If block_size is not an int.

  • ValueError – If block_size is less than 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(1, 12, 1, 1), mindspore.float32)
>>> block_size = 2
>>> depth_to_space = ops.DepthToSpace(block_size)
>>> output = depth_to_space(x)
>>> print(output.shape)
(1, 3, 2, 2)
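
As a worked check added here for illustration, the printed shape follows from the relations above: output depth \(12 / (2 \times 2) = 3\), output height \(1 \times 2 = 2\) and output width \(1 \times 2 = 2\), giving (1, 3, 2, 2).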
class tinyms.primitives.DepthwiseConv2dNative(*args, **kwargs)[source]

Returns the depth-wise convolution value for the input.

Applies depthwise conv2d for the input, which will generate more channels with channel_multiplier. Given an input tensor of shape \((N, C_{in}, H_{in}, W_{in})\) where \(N\) is the batch size, \(C\) is the channels, \(H\) is height, \(W\) is width and a filter tensor with kernel size \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) indicates the kernel_size of height, \(\text{kernel_size[1]}\) indicates the kernel_size of width, containing \(C_{in} * \text{channel_multiplier}\) convolutional filters of depth 1; it applies different filters to each input channel (channel_multiplier channels for each input channel; the default value is 1), then concatenates the results together. The output has \(C_{in} * \text{channel_multiplier}\) channels.

Parameters
  • channel_multiplier (int) – The multiplier for the original output convolution. Its value must be greater than 0.

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means the value is for both the height and the width of the kernel. A tuple of 2 ints means the first value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. 0 Math convolution, 1 cross-correlation convolution , 2 deconvolution, 3 depthwise convolution. Default: 3.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input x. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union[int, tuple[int]]) – Implicit paddings on both sides of the input x. If pad is one integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple with four integers, the paddings of top, bottom, left and right will be equal to pad[0], pad[1], pad[2], and pad[3] accordingly. Default: 0.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set the size of kernel as \((\text{kernel_size[0]}, \text{kernel_size[1]})\), then the shape is \((K, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]})\), K must be 1.

Outputs:

Tensor of shape \((N, C_{in} * \text{channel_multiplier}, H_{out}, W_{out})\).

Raises
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If channel_multiplier or group is not an int.

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of the following: ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([1, 32, 3, 3]), mindspore.float32)
>>> depthwise_conv2d = ops.DepthwiseConv2dNative(channel_multiplier=3, kernel_size=(3, 3))
>>> output = depthwise_conv2d(x, weight)
>>> print(output.shape)
(10, 96, 30, 30)
class tinyms.primitives.Diag(*args, **kwargs)[source]

Constructs a diagonal tensor with the given diagonal values.

Assume input_x has dimensions \([D_1,... D_k]\), the output is a tensor of rank 2k with dimensions \([D_1,..., D_k, D_1,..., D_k]\) where: \(output[i_1,..., i_k, i_1,..., i_k] = input_x[i_1,..., i_k]\) and 0 everywhere else.

Inputs:
  • input_x (Tensor) - The input tensor. The rank of the input must be less than 5.

Outputs:

Tensor, has the same dtype as the input_x.

Raises
Supported Platforms:

Ascend

Examples

>>> input_x = Tensor([1, 2, 3, 4])
>>> diag = ops.Diag()
>>> output = diag(input_x)
>>> print(output)
[[1, 0, 0, 0],
 [0, 2, 0, 0],
 [0, 0, 3, 0],
 [0, 0, 0, 4]]
class tinyms.primitives.DiagPart(*args, **kwargs)[source]

Extracts the diagonal part from given tensor.

Assume input has dimensions \([D_1,..., D_k, D_1,..., D_k]\), the output is a tensor of rank k with dimensions \([D_1,..., D_k]\) where: \(output[i_1,..., i_k] = input[i_1,..., i_k, i_1,..., i_k]\).

Inputs:
  • input_x (Tensor) - The input tensor of rank 2k, k is not zero.

Outputs:

Tensor, the extracted diagonal has the same dtype as the input_x.

Raises
  • TypeError – If input_x is not a Tensor.

  • ValueError – If rank of input_x is not even or zero.

  • ValueError – If input_shape[i] is not equal to input_shape[i + len(input_shape)/2].

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor([[1, 0, 0, 0],
...                   [0, 2, 0, 0],
...                   [0, 0, 3, 0],
...                   [0, 0, 0, 4]])
>>> diag_part = ops.DiagPart()
>>> output = diag_part(input_x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.Dihedral14CFAtomEnergy(*args, **kwargs)[source]

Add the potential energy caused by Coulomb energy correction for each necessary dihedral 1,4 term to the total potential energy of each atom.

The calculation formula is the same as operator Dihedral14CFEnergy.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • cf_scale_factor (Tensor) - The scale factor for the Coulomb part of force correction for each dihedral 1,4 terms. The data type is float32 and the shape is \((m,)\).

Outputs:
  • ene (Tensor) - The accumulated potential energy of each atom. The data type is float32 and the shape is \((n,)\)

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14CFEnergy(*args, **kwargs)[source]

Calculate the Coulomb part of 1,4 dihedral energy correction for each necessary dihedral term on the corresponding atoms.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

\[dr = (x_a-x_b, y_a-y_b, z_a-z_b)\]
\[E = k*q_a*q_b/|dr|\]
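
For illustration only (this snippet is an addition, not an Example for the GPU operator), the energy formula above can be evaluated in plain Python for one hypothetical 1,4 pair; the constant k, the charges and the displacement dr below are made-up values:

>>> k, q_a, q_b = 1.0, 0.5, -1.0   # hypothetical scale factor and charges
>>> dr = (0.0, 2.0, 0.0)           # made-up displacement between the two atoms
>>> norm = sum(d * d for d in dr) ** 0.5
>>> print(k * q_a * q_b / norm)
-0.25
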
Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • cf_scale_factor (Tensor) - The scale factor for the Coulomb part of force correction for each dihedral 1,4 terms. The data type is float32 and the shape is \((m,)\).

Outputs:
  • ene (Tensor) - The accumulated potential energy of each atom. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14ForceWithAtomEnergyVirial(*args, **kwargs)[source]

Calculate the Lennard-Jones and Coulomb energy correction and force correction for each necessary dihedral 1,4 term together and add them to the total force and potential energy for each atom.

The calculation formula of force correction is the same as operator Dihedral14LJForceWithDirectCF, and the energy correction part is the same as operator Dihedral14LJEnergy and Dihedral14CFEnergy.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJtype (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • lj_scale_factor (Tensor) - The scale factor for the Lennard-Jones part of force correction of each dihedral 1,4 term.

  • cf_scale_factor (Tensor) - The scale factor for the Coulomb force. The data type is float32 and the shape is \((m,)\).

  • LJ_type_A (Tensor) - The A parameter in Lennard-Jones scheme of each atom pair type. The number of atom pair is q. The data type is float32 and the shape is \((q,)\).

  • LJ_type_B (Tensor) - The B parameter in Lennard-Jones scheme of each atom pair type. The number of atom pairs is q. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • atom_energy (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n, )\).

  • atom_virial (Tensor) - The accumulated potential virial for each atom. The data type is float32 and the shape is \((n, )\).

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14LJAtomEnergy(*args, **kwargs)[source]

Add the potential energy caused by Lennard-Jones energy correction for each necessary dihedral 1,4 term to the total potential energy of each atom.

The calculation formula is the same as operator Dihedral14LJEnergy().

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • lj_scale_factor (Tensor) - The scale factor for the Lennard-Jones part of force correction of each dihedral 1,4 term. The data type is float32 and the shape is \((m,)\).

  • cf_scale_factor (Tensor) - The scale factor for the Coulomb part of force correction for each dihedral 1,4 terms. The data type is float32 and the shape is \((m,)\).

  • LJ_type_A (Tensor) - The A parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • LJ_type_B (Tensor) - The B parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pairs. The data type is float32 and the shape is \((q,)\).

Outputs:
  • ene (Tensor) - The accumulated potential energy of each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14LJCFForceWithAtomEnergy(*args, **kwargs)[source]

Calculate the Lennard-Jones and Coulomb energy correction and force correction for each necessary dihedral 1,4 term together and add them to the total force and potential energy for each atom.

The calculation formula of force correction is the same as operator Dihedral14LJForceWithDirectCF, and the energy correction part is the same as operator Dihedral14LJEnergy and Dihedral14CFEnergy.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • lj_scale_factor (Tensor) - The scale factor for the Lennard-Jones part of force correction of each dihedral 1,4 term. The data type is float32 and the shape is \((m,)\).

  • cf_scale_factor (Tensor) - The scale factor for the Coulomb part of force correction for each dihedral 1,4 terms. The data type is float32 and the shape is \((m,)\).

  • LJ_type_A (Tensor) - The A parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • LJ_type_B (Tensor) - The B parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pairs. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • atom_energy (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14LJEnergy(*args, **kwargs)[source]

Calculate the Lennard-Jones part of 1,4 dihedral energy correction for each necessary dihedral term on the corresponding atoms.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

\[dr = (x_a-x_b, y_a-y_b, z_a-z_b)\]
\[E = k*(A/|dr|^{12} - B/|dr|^{6})\]
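
For illustration only (this snippet is an addition, not an Example for the GPU operator), the Lennard-Jones correction formula above can be evaluated in plain Python for one hypothetical 1,4 pair; the scale factor k, the parameters A and B and the displacement dr below are made-up values:

>>> k, A, B = 0.5, 8192.0, 64.0    # hypothetical scale factor and LJ parameters
>>> dr = (0.0, 0.0, 2.0)           # made-up displacement between the two atoms
>>> norm = sum(d * d for d in dr) ** 0.5
>>> print(k * (A / norm ** 12 - B / norm ** 6))
0.5
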
Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • lj_scale_factor (Tensor) - The scale factor for the Lennard-Jones part of force correction of each dihedral 1,4 term. The data type is float32 and the shape is \((m,)\).

  • LJ_type_A (Tensor) - The A parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • LJ_type_B (Tensor) - The B parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pairs. The data type is float32 and the shape is \((q,)\).

Outputs:
  • ene (Tensor) - The Lennard-Jones potential energy correction. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14LJForce(*args, **kwargs)[source]

Calculate the Lennard-Jones part of 1,4 dihedral force correction for each necessary dihedral term on the corresponding atoms.

Assume the number of necessary dihedral 1,4 terms is m, the number of atoms is n, and the number of Lennard-Jones types for all atoms is P, which means there will be q = P*(P+1)/2 types of possible Lennard-Jones interactions for all kinds of atom pairs.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

\[dr = (x_a-x_b, y_a-y_b, z_a-z_b)\]
\[F = k*(-12*A/|dr|^{14} + 6*B/|dr|^{8})*dr\]
Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • lj_scale_factor (Tensor) - The scale factor for the Lennard-Jones part of force correction of each dihedral 1,4 term. The data type is float32 and the shape is \((m,)\).

  • LJ_type_A (Tensor) - The A parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • LJ_type_B (Tensor) - The B parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pairs. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.Dihedral14LJForceWithDirectCF(*args, **kwargs)[source]

Calculate the Lennard-Jones part and the Coulomb part of force correction for each necessary dihedral 1,4 term.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

The calculation formula of the Lennard-Jones part is the same as operator Dihedral14LJForce(), and the Coulomb part is as follows:

\[dr = (x_a-x_b, y_a-y_b, z_a-z_b)\]
\[F = -k*q_a*q_b/|dr|^3*dr\]
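
For illustration only (this snippet is an addition, not an Example for the GPU operator), the Coulomb force formula above can be evaluated in plain Python for one hypothetical 1,4 pair; the constant k, the charges and the displacement dr below are made-up values:

>>> k, q_a, q_b = 1.0, 0.5, -1.0   # hypothetical scale factor and charges
>>> dr = (0.0, 2.0, 0.0)           # made-up displacement between the two atoms
>>> norm = sum(d * d for d in dr) ** 0.5
>>> print(tuple(-k * q_a * q_b / norm ** 3 * d for d in dr))
(0.0, 0.125, 0.0)
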
Parameters
  • nb14_numbers (int32) – the number of necessary dihedral 1,4 terms m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJ_type (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge of each atom. The data type is float32 and the shape is \((n,)\).

  • boxlength_f (Tensor) - The length of molecular simulation box in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • a_14 (Tensor) - The first atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • b_14 (Tensor) - The second atom index of each dihedral 1,4 term. The data type is int32 and the shape is \((m,)\).

  • lj_scale_factor (Tensor) - The scale factor for the Lennard-Jones part of force correction of each dihedral 1,4 term. The data type is float32 and the shape is \((m,)\).

  • cf_scale_factor (Tensor) - The scale factor for the Coulomb part of force correction for each dihedral 1,4 terms. The data type is float32 and the shape is \((m,)\).

  • LJ_type_A (Tensor) - The A parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • LJ_type_B (Tensor) - The B parameter in Lennard-Jones scheme of each atom pair type. q is the number of atom pairs. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\)

Supported Platforms:

GPU

class tinyms.primitives.DihedralAtomEnergy(*args, **kwargs)[source]

Add the potential energy caused by dihedral terms to the total potential energy of each atom.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

The calculation formula is the same as operator DihedralEnergy().

Parameters

dihedral_numbers (int32) – the number of dihedral terms m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_d (Tensor) - The 4th atom index of each dihedral. 4 atoms are connected in the form a-b-c-d. The data type is int32 and the shape is \((m,)\).

  • ipn (Tensor) - The period of dihedral angle of each dihedral. The data type is int32 and the shape is \((m,)\).

  • pk (Tensor) - The force constant of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gamc (Tensor) - k*cos(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gams (Tensor) - k*sin(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • pn (Tensor) - The floating point form of ipn. The data type is float32 and the shape is \((m,)\).

Outputs:
  • ene (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.DihedralEnergy(*args, **kwargs)[source]

Calculate the potential energy caused by dihedral terms for each 4-atom pair. Assume our system has n atoms and m dihedral terms.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters

dihedral_numbers (int32) – the number of dihedral terms m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_d (Tensor) - The 4th atom index of each dihedral. 4 atoms are connected in the form a-b-c-d. The data type is int32 and the shape is \((m,)\).

  • ipn (Tensor) - The period of dihedral angle of each dihedral. The data type is int32 and the shape is \((m,)\).

  • pk (Tensor) - The force constant of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gamc (Tensor) - k*cos(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gams (Tensor) - k*sin(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • pn (Tensor) - The floating point form of ipn. The data type is float32 and the shape is \((m,)\).

Outputs:
  • ene (Tensor) - The potential energy for each dihedral term. The data type is float32 and the shape is \((m,)\).

Supported Platforms:

GPU

class tinyms.primitives.DihedralForce(*args, **kwargs)[source]

Calculate the force exerted by the dihedral term which made of 4-atoms on the corresponding atoms. Assume the number of dihedral terms is m and the number of atoms is n.

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters

dihedral_numbers (int32) – the number of dihedral terms m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_d (Tensor) - The 4th atom index of each dihedral. 4 atoms are connected in the form a-b-c-d. The data type is int32 and the shape is \((m,)\).

  • ipn (Tensor) - The period of dihedral angle of each dihedral. The data type is int32 and the shape is \((m,)\).

  • pk (Tensor) - The force constant of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gamc (Tensor) - k*cos(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gams (Tensor) - k*sin(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • pn (Tensor) - The floating point form of ipn. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.DihedralForceWithAtomEnergy(*args, **kwargs)[source]

Calculate dihedral force and potential energy together.

The calculation formula is the same as operator DihedralForce() and DihedralEnergy().

Because there are a large number of inputs and they are all interrelated, there is no way to construct Examples using random methods. For details, refer to the SPONGE in MindSpore webpage.

Parameters

dihedral_numbers (int32) – the number of dihedral terms m.

Inputs:
  • uint_crd_f (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • scaler_f (Tensor) - The 3-D scale factor between the real space float coordinates and the unsigned int coordinates. The data type is float32 and the shape is \((3,)\).

  • atom_a (Tensor) - The 1st atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_b (Tensor) - The 2nd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_c (Tensor) - The 3rd atom index of each dihedral. The data type is int32 and the shape is \((m,)\).

  • atom_d (Tensor) - The 4th atom index of each dihedral. 4 atoms are connected in the form a-b-c-d. The data type is int32 and the shape is \((m,)\).

  • ipn (Tensor) - The period of dihedral angle of each dihedral. The data type is int32 and the shape is \((m,)\).

  • pk (Tensor) - The force constant of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gamc (Tensor) - k*cos(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • gams (Tensor) - k*sin(phi_0) of each dihedral. The data type is float32 and the shape is \((m,)\).

  • pn (Tensor) - The floating point form of ipn. The data type is float32 and the shape is \((m,)\).

Outputs:
  • frc_f (Tensor) - Same as operator DihedralForce(). The data type is float32 and the shape is \((n, 3)\).

  • ene (Tensor) - Same as operator DihedralAtomEnergy(). The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.Div(*args, **kwargs)[source]

Computes the quotient of dividing the first input tensor by the second input tensor element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = \frac{x_i}{y_i}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - When the first input is a tensor, The second input could be a number, a bool, or a tensor whose data type is number or bool. When the first input is a number or a bool, the second input must be a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 :has same data type and shape of the two inputs
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> div = ops.Div()
>>> output = div(x, y)
>>> print(output)
[-1.3333334  2.5        2.        ]
>>> # case 2 : different data type and shape of the two inputs
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.int32)
>>> y = Tensor(2, mindspore.float32)
>>> output = div(x, y)
>>> print(output)
[-2.  2.5  3.]
>>> print(output.dtype)
Float32
class tinyms.primitives.DivNoNan(*args, **kwargs)[source]

Computes a safe divide and returns 0 if y is zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([-1.0, 0., 1.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([0., 0., 0., 2.0, 3.0]), mindspore.float32)
>>> div_no_nan = ops.DivNoNan()
>>> output = div_no_nan(x, y)
>>> print(output)
[0.  0.  0.  2.5 2. ]
class tinyms.primitives.Dropout(*args, **kwargs)[source]

During training, randomly zeroes some of the elements of the input tensor with probability 1-keep_prob from a Bernoulli distribution.

Parameters
  • keep_prob (float) – The keep rate, between 0 and 1, e.g. keep_prob = 0.9, means dropping out 10% of input units. Default: 0.5.

  • Seed0 (int) – The Seed0 value for random number generation. Default: 0.

  • Seed1 (int) – The Seed1 value for random number generation. Default: 0.

Inputs:
  • x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x.

Raises
  • TypeError – If keep_prob is not a float.

  • TypeError – If Seed0 or Seed1 is not an int.

  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout(keep_prob=0.5)
>>> x = Tensor(((20, 16), (50, 50)), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 2)
class tinyms.primitives.Dropout2D(*args, **kwargs)[source]

During training, randomly zeroes some of the channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 4-dimensional tensor with a shape of NCHW, the channel feature map refers to a 2-dimensional feature map with the shape of HW).

For example, the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor input[i,j]. Each channel will be zeroed out independently on every forward call with probability 1-keep_prob using samples from a Bernoulli distribution.

Dropout2D can improve the independence between channel feature maps.

Parameters

keep_prob (float) – The keep probability of a channel, between 0 and 1, e.g. keep_prob = 0.8, means dropping out 20% of channels. Default: 0.5.

Inputs:
  • x (Tensor) - A 4-D tensor with shape \((N, C, H, W)\). The data type should be int8, int16, int32, int64, float16 or float32.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x and the data type is bool.

Raises
  • TypeError – If the data type of keep_prob is not float.

  • ValueError – If keep_prob is out of the range [0.0, 1.0]; or if the input is not a 4-D tensor.

Supported Platforms:

Ascend

Examples

>>> dropout = ops.Dropout2D(keep_prob=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 1, 2, 3)
class tinyms.primitives.Dropout3D(*args, **kwargs)[source]

During training, randomly zeroes some of the channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 5-dimensional tensor with a shape of NCDHW, the channel feature map refers to a 3-dimensional feature map with a shape of DHW).

For example, the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor input[i,j]. Each channel will be zeroed out independently on every forward call with probability 1-keep_prob using samples from a Bernoulli distribution.

Dropout3D can improve the independence between channel feature maps.

Parameters

keep_prob (float) – The keep probability of a channel, between 0 and 1, e.g. keep_prob = 0.8, means dropping out 20% of channels. Default: 0.5.

Inputs:
  • x (Tensor) - A 5-D tensor with shape \((N, C, D, H, W)\). The data type should be int8, int16, int32, int64, float16 or float32.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x and the data type is bool.

Raises
  • TypeError – If the data type of keep_prob is not float.

  • ValueError – If keep_prob is out of the range [0.0, 1.0]; or if the input is not a 5-D tensor.

Supported Platforms:

Ascend GPU

Examples

>>> dropout = ops.Dropout3D(keep_prob=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 1, 2, 1, 2)
class tinyms.primitives.DropoutDoMask(*args, **kwargs)[source]

Applies dropout mask on the input tensor.

Take the mask output of DropoutGenMask as input, and apply dropout on the input.

Dropout means that neural network units are temporarily dropped from the network according to a certain probability during deep learning network training. Generally, the effect of Dropout is achieved by DropoutGenMask and DropoutDoMask together: DropoutGenMask generates a mask for the specified shape, and DropoutDoMask applies that mask to the input tensor, randomly setting elements of the input tensor to zero according to the probability 1 - keep_prob.

Inputs:
  • input_x (Tensor) - The input tensor. Tensor of shape \((N, \ldots)\). The data type should be float32, float16 or int32

  • mask (Tensor) - The mask to be applied on input_x, which is the output of DropoutGenMask. The shape of input_x must match the shape value that was passed to DropoutGenMask. If a wrong mask is passed, the output of DropoutDoMask is unpredictable.

  • keep_prob (Union[Tensor, float]) - The keep rate, greater than 0 and less than or equal to 1, e.g. keep_prob = 0.9, means dropping out 10% of input units. The value of keep_prob must be the same as the keep_prob input of the operator DropoutGenMask.

Outputs:

Tensor, the value that applied dropout on, as the same data type and shape as input_x.

Raises
  • TypeError – If input_x, mask or keep_prob is not a Tensor.

  • TypeError – If keep_prob is not a float.

  • ValueError – If the value of keep_prob is not the same as that of DropoutGenMask.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> shape = (2, 2, 3)
>>> keep_prob = Tensor(0.5, mindspore.float32)
>>> dropout_gen_mask = ops.DropoutGenMask()
>>> dropout_do_mask = ops.DropoutDoMask()
>>> mask = dropout_gen_mask(shape, keep_prob)
>>> output = dropout_do_mask(input_x, mask, keep_prob)
>>> print(output.shape)
(2, 2, 3)
class tinyms.primitives.DropoutGenMask(*args, **kwargs)[source]

Generates the mask value for the input shape.

Dropout means that neural network units are temporarily dropped from the network according to a certain probability during deep learning network training. Generally, the effect of Dropout is achieved by DropoutGenMask and DropoutDoMask together: DropoutGenMask generates a mask for the specified shape, and DropoutDoMask applies that mask to the input tensor, randomly setting elements of the input tensor to zero according to the probability 1 - keep_prob.

Parameters
  • Seed0 (int) – The Seed0 value for random number generation. Default: 0.

  • Seed1 (int) – The Seed1 value for random number generation. Default: 0.

Inputs:
  • shape (tuple[int]) - The shape of target mask.

  • keep_prob (Tensor) - The keep rate, greater than 0 and less than or equal to 1, e.g. keep_prob = 0.9, means dropping out 10% of input units.

Outputs:

Tensor, the generated mask for the given input shape.

Raises
  • TypeError – If neither seed0 nor seed1 is an int.

  • TypeError – If shape is not a tuple.

  • TypeError – If keep_prob is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> dropout_gen_mask = ops.DropoutGenMask()
>>> shape = (2, 4, 5)
>>> keep_prob = Tensor(0.5, mindspore.float32)
>>> output = dropout_gen_mask(shape, keep_prob)
>>> print(output.shape)
(16,)
class tinyms.primitives.DynamicGRUV2(*args, **kwargs)[source]

Applies a single-layer gated recurrent unit (GRU) to an input sequence.

\[\begin{split}\begin{array}{ll} r_{t+1} = \sigma(W_{ir} x_{t+1} + b_{ir} + W_{hr} h_{(t)} + b_{hr}) \\ z_{t+1} = \sigma(W_{iz} x_{t+1} + b_{iz} + W_{hz} h_{(t)} + b_{hz}) \\ n_{t+1} = \tanh(W_{in} x_{t+1} + b_{in} + r_{t+1} * (W_{hn} h_{(t)}+ b_{hn})) \\ h_{t+1} = (1 - z_{t+1}) * n_{t+1} + z_{t+1} * h_{(t)} \end{array}\end{split}\]

where \(h_{t+1}\) is the hidden state at time t+1, \(x_{t+1}\) is the input at time t+1, \(h_{t}\) is the hidden state of the layer at time t or the initial hidden state at time 0, and \(r_{t+1}\), \(z_{t+1}\), \(n_{t+1}\) are the reset, update, and new gates, respectively. \(W\), \(b\) are the weight parameter and the deviation parameter respectively. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.

Parameters
  • direction (str) – A string identifying the direction in the op. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the op. Default: 1.

  • keep_prob (float) – A float identifying the keep prob in the op. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the op. Default: -1.0.

  • num_proj (int) – An integer identifying the num proj in the op. Default: 0.

  • time_major (bool) – A bool identifying the time major in the op. Default: True.

  • activation (str) – A string identifying the type of activation function in the op. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • gate_order (str) – A string identifying the gate order in weight and bias. Default: ‘rzh’. ‘zrh’ is another option.

  • reset_after (bool) – A bool identifying whether to apply reset gate after matrix multiplication. Default: True.

  • is_training (bool) – A bool identifying is training in the op. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((\text{num_step}, \text{batch_size}, \text{input_size})\). The data type must be float16.

  • weight_input (Tensor) - Input-hidden weight. Tensor of shape \((\text{input_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • weight_hidden (Tensor) - Hidden-hidden weight. Tensor of shape \((\text{hidden_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((\text{batch_size}, \text{hidden_size})\). The data type must be float16 or float32.

  • bias_input (Tensor) - Input-hidden bias. Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type with input init_h.

  • bias_hidden (Tensor) - Hidden-hidden bias. Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type with input init_h.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((\text{batch_size})\). Only None is currently supported.

Outputs:
  • y (Tensor) - A Tensor of shape:

    • y_shape = \((num\_step, batch\_size, min(hidden\_size, num\_proj))\): If num_proj > 0,

    • y_shape = \((num\_step, batch\_size, hidden\_size)\): If num_proj = 0.

    Has the same data type with input bias_type.

  • output_h (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • update (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • reset (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • hidden_new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

A note about the bias_type:

  • If bias_input and bias_hidden both are None, bias_type is the data type of init_h.

  • If bias_input is not None, bias_type is the data type of bias_input.

  • If bias_input is None and bias_hidden is not None, bias_type is the data type of bias_hidden.

Raises
  • TypeError – If direction, activation or gate_order is not a str.

  • TypeError – If cell_depth or num_proj is not an int.

  • TypeError – If keep_prob or cell_clip is not a float.

  • TypeError – If time_major, reset_after or is_training is not a bool.

  • TypeError – If x, weight_input, weight_hidden, bias_input, bias_hidden, seq_length or init_h is not a Tensor.

  • TypeError – If dtype of x, weight_input or weight_hidden is not float16.

  • TypeError – If dtype of init_h is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 8, 64).astype(np.float16))
>>> weight_i = Tensor(np.random.rand(64, 48).astype(np.float16))
>>> weight_h = Tensor(np.random.rand(16, 48).astype(np.float16))
>>> bias_i = Tensor(np.random.rand(48).astype(np.float16))
>>> bias_h = Tensor(np.random.rand(48).astype(np.float16))
>>> init_h = Tensor(np.random.rand(8, 16).astype(np.float16))
>>> dynamic_gru_v2 = ops.DynamicGRUV2()
>>> output = dynamic_gru_v2(x, weight_i, weight_h, bias_i, bias_h, None, init_h)
>>> print(output[0].shape)
(2, 8, 16)
class tinyms.primitives.DynamicRNN(*args, **kwargs)[source]

Applies a recurrent neural network to the input. Only long short-term memory (LSTM) currently supported.

\[\begin{split}\begin{array}{ll} \\ i_{t+1} = \sigma(W_{ix} x_{t+1} + b_{ix} + W_{ih} h_{(t)} + b_{ih}) \\ f_{t+1} = \sigma(W_{fx} x_{t+1} + b_{fx} + W_{fh} h_{(t)} + b_{fh}) \\ \tilde{c}_{t+1} = \tanh(W_{cx} x_{t+1} + b_{cx} + W_{ch} h_{(t)} + b_{ch}) \\ o_{t+1} = \sigma(W_{ox} x_{t+1} + b_{ox} + W_{oh} h_{(t)} + b_{oh}) \\ c_{t+1} = f_{t+1} * c_{(t)} + i_t * \tilde{c}_{t+1} \\ h_{t+1} = o_{t+1} * \tanh(c_{t+1}) \\ \end{array}\end{split}\]

where \(h_{t+1}\) is the hidden state at time t+1, \(x_{t+1}\) is the input at time t+1, \(h_{t}\) is the hidden state of the layer at time t or the initial hidden state at time 0, \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ix}, b_{ix}\) are the weight and bias used to transform from input \(x\) to \(i\).

Parameters
  • cell_type (str) – A string identifying the cell type in the op. Default: ‘LSTM’. Only ‘LSTM’ is currently supported.

  • direction (str) – A string identifying the direction in the op. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the op. Default: 1.

  • use_peephole (bool) – A bool identifying if use peephole in the op. Default: False.

  • keep_prob (float) – A float identifying the keep prob in the op. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the op. Default: -1.0.

  • num_proj (int) – An integer identifying the num proj in the op. Default: 0.

  • time_major (bool) – A bool identifying the time major in the op. Default: True. Only True is currently supported.

  • activation (str) – A string identifying the type of activation function in the op. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • forget_bias (float) – A float identifying the forget bias in the op. Default: 0.0.

  • is_training (bool) – A bool identifying is training in the op. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((num\_step, batch\_size, input\_size)\). The data type must be float16.

  • w (Tensor) - Weight. Tensor of shape \((input\_size + hidden\_size, 4 \times hidden\_size)\). The data type must be float16.

  • b (Tensor) - Bias. Tensor of shape \((4 \times hidden\_size)\). The data type must be float16 or float32.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((batch\_size, )\). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((1, batch\_size, hidden\_size)\). The data type must be float16.

  • init_c (Tensor) - Cell state of initial time. Tensor of shape \((1, batch\_size, hidden\_size)\). The data type must be float16.

Outputs:
  • y (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • output_h (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). With data type of float16.

  • output_c (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • i (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • j (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • f (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • o (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • tanhct (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

Raises
  • TypeError – If cell_type, direction or activation is not a str.

  • TypeError – If cell_depth or num_proj is not an int.

  • TypeError – If keep_prob, cell_clip or forget_bias is not a float.

  • TypeError – If use_peephole, time_major or is_training is not a bool.

  • TypeError – If x, w, b, seq_length, init_h or init_c is not a Tensor.

  • TypeError – If dtype of x, w, init_h or init_c is not float16.

  • TypeError – If dtype of b is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 16, 64).astype(np.float16))
>>> w = Tensor(np.random.rand(96, 128).astype(np.float16))
>>> b = Tensor(np.random.rand(128).astype(np.float16))
>>> init_h = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> init_c = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> dynamic_rnn = ops.DynamicRNN()
>>> output = dynamic_rnn(x, w, b, None, init_h, init_c)
>>> print(output[0].shape)
(2, 16, 32)
class tinyms.primitives.DynamicShape(*args, **kwargs)[source]

Returns the shape of the input tensor, and is used for dynamic shape scenarios.

Note

Dynamic shape: while the graph is running, as the tensor flows through the graph, the specific shape of the tensor at each node can be inferred according to the structure of the graph. This shape is called a dynamic shape. As the input shape of the graph changes, the dynamic shapes of the tensors in the graph change accordingly.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor[int], 1-dim Tensor of type int32

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.DynamicShape()
>>> output = shape(input_x)
>>> print(output)
[3 2 1]
class tinyms.primitives.EditDistance(*args, **kwargs)[source]

Computes the Levenshtein Edit Distance. It is used to measure the similarity of two sequences. The inputs are variable-length sequences provided by SparseTensors (hypothesis_indices, hypothesis_values, hypothesis_shape) and (truth_indices, truth_values, truth_shape).

Parameters

normalize (bool) – If true, edit distances are normalized by length of truth. Default: True.

Inputs:
  • hypothesis_indices (Tensor) - The indices of the hypothesis list SparseTensor. With int64 data type. The shape of tensor is \((N, R)\).

  • hypothesis_values (Tensor) - The values of the hypothesis list SparseTensor. With float32 data type. Must be 1-D vector with length of N.

  • hypothesis_shape (Tensor) - The shape of the hypothesis list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

  • truth_indices (Tensor) - The indices of the truth list SparseTensor. With int64 data type. The shape of tensor is \((M, R)\).

  • truth_values (Tensor) - The values of the truth list SparseTensor. Must be 1-D vector with length of M. With float32 data type.

  • truth_shape (Tensor) - The shape of the truth list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

Outputs:

Tensor, a dense tensor with rank R-1 and float32 data type.

Raises

TypeError – If normalize is not a bool.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> from mindspore import context
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> class EditDistance(nn.Cell):
...     def __init__(self, hypothesis_shape, truth_shape, normalize=True):
...         super(EditDistance, self).__init__()
...         self.edit_distance = ops.EditDistance(normalize)
...         self.hypothesis_shape = hypothesis_shape
...         self.truth_shape = truth_shape
...
...     def construct(self, hypothesis_indices, hypothesis_values, truth_indices, truth_values):
...         return self.edit_distance(hypothesis_indices, hypothesis_values, self.hypothesis_shape,
...                                   truth_indices, truth_values, self.truth_shape)
...
>>> hypothesis_indices = Tensor(np.array([[0, 0, 0], [1, 0, 1], [1, 1, 1]]).astype(np.int64))
>>> hypothesis_values = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> hypothesis_shape = Tensor(np.array([1, 1, 2]).astype(np.int64))
>>> truth_indices = Tensor(np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]).astype(np.int64))
>>> truth_values = Tensor(np.array([1, 3, 2, 1]).astype(np.float32))
>>> truth_shape = Tensor(np.array([2, 2, 2]).astype(np.int64))
>>> edit_distance = EditDistance(hypothesis_shape, truth_shape)
>>> output = edit_distance(hypothesis_indices, hypothesis_values, truth_indices, truth_values)
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.Elu(*args, **kwargs)[source]

Computes exponential linear:

\[\begin{split}\text{ELU}(x)= \left\{ \begin{array}{align} \alpha(e^{x} - 1) & \text{if } x \le 0\\ x & \text{if } x \gt 0\\ \end{array}\right.\end{split}\]

The data type of input tensor must be float.

Parameters

alpha (float) – The coefficient of the negative factor, whose type is float. Only 1.0 is currently supported. Default: 1.0.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, has the same shape and data type as input_x.

Raises
  • TypeError – If alpha is not a float.

  • TypeError – If dtype of input_x is neither float16 nor float32.

  • ValueError – If alpha is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> elu = ops.Elu()
>>> output = elu(input_x)
>>> print(output)
[[-0.63212055  4.         -0.99966455]
 [ 2.         -0.99326205  9.        ]]
class tinyms.primitives.EmbeddingLookup(*args, **kwargs)[source]

Returns a slice of input tensor based on the specified indices.

This primitive has similar functionality to GatherV2 operating on axis = 0, but has one more input: offset.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). This represents a Tensor slice, instead of the entire Tensor. Currently, the dimension is restricted to be 2.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Values can be out of range of input_params, and the exceeding part will be filled with 0 in the output. Negative values are not supported and the result is undefined if values are negative. The data type should be int32 or int64.

  • offset (int) - Specifies the offset value of this input_params slice. Thus the real indices are equal to input_indices minus offset.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\). The data type is the same with input_params.

Raises
  • TypeError – If dtype of input_indices is not int.

  • ValueError – If length of shape of input_params is greater than 2.

Supported Platforms:

Ascend CPU GPU

Examples

>>> input_params = Tensor(np.array([[8, 9], [10, 11], [12, 13], [14, 15]]), mindspore.float32)
>>> input_indices = Tensor(np.array([[5, 2], [8, 5]]), mindspore.int32)
>>> offset = 4
>>> output = ops.EmbeddingLookup()(input_params, input_indices, offset)
>>> print(output)
[[[10. 11.]
  [ 0.  0.]]
 [[ 0.  0.]
  [10. 11.]]]
class tinyms.primitives.Eps(*args, **kwargs)[source]

Creates a tensor filled with the minimum value of the data type of x.

Inputs:
  • x (Tensor) - Input tensor. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same type and shape as x, but filled with the minimum value of the data type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([4, 1, 2, 3], mindspore.float32)
>>> output = ops.Eps()(x)
>>> print(output)
[1.5258789e-05 1.5258789e-05 1.5258789e-05 1.5258789e-05]
class tinyms.primitives.Equal(*args, **kwargs)[source]

Computes the equivalence between two tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i} = y_{i} \\ & \text{False, if } x_{i} \ne y_{i} \end{cases}\end{split}\]
Inputs:
  • x (Union[Tensor, Number]) - The first input is a number or a tensor whose data type is number.

  • y (Union[Tensor, Number]) - The second input is a number when the first input is a tensor or a tensor whose data type is number. The data type is the same as the first input.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: The shape of two inputs are different
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> equal = ops.Equal()
>>> output = equal(x, 2.0)
>>> print(output)
[False True False]
>>> # case 2: The shape of two inputs are the same
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal = ops.Equal()
>>> output = equal(x, y)
>>> print(output)
[ True  True False]
class tinyms.primitives.EqualCount(*args, **kwargs)[source]

Computes the number of the same elements of two tensors.

The two input tensors must have the same data type and shape.

Inputs:
  • x (Tensor) - The first input tensor. If the data type and shape of y are determined, then x must be the same as y, and vice versa. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) - The second input tensor. If the data type and shape of x are determined, then y must be the same as x, and vice versa.

Outputs:

Tensor, with the type same as input tensor and size as (1,).

Raises
  • TypeError – If x or y is not a Tensor.

  • ValueError – If shape of x is not equal to shape of y.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal_count = ops.EqualCount()
>>> output = equal_count(x, y)
>>> print(output)
[2]
class tinyms.primitives.Erf(*args, **kwargs)[source]

Computes the Gauss error function of x element-wise.

\[erf(x)=\frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Inputs:
  • x (Tensor) - The input tensor. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erf = ops.Erf()
>>> output = erf(x)
>>> print(output)
[-0.8427168   0.          0.8427168   0.99530876  0.99997765]
class tinyms.primitives.Erfc(*args, **kwargs)[source]

Computes the complementary error function of x element-wise.

\[erfc(x) = 1 - \frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Inputs:
  • x (Tensor) - The input tensor. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erfc = ops.Erfc()
>>> output = erfc(x)
>>> print(output)
[1.8427168e+00 1.0000000e+00 1.5728319e-01 4.6912432e-03 2.2351742e-05]
class tinyms.primitives.Erfinv(*args, **kwargs)[source]

Computes the inverse error function of input. The inverse error function is defined in the range (-1, 1) as:

\[erfinv(erf(x)) = x\]
Inputs:
  • input_x (Tensor) - The input tensor, with data type float16 or float32.

Outputs:

Tensor, has the same shape and dtype as input_x.

Raises

TypeError – If dtype of input_x is not one of: float32, float16.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)
>>> erfinv = ops.Erfinv()
>>> output = erfinv(x)
>>> print(output)
[ 0.          0.47695306 -1.1630805 ]
class tinyms.primitives.Exp(*args, **kwargs)[source]

Returns exponential of a tensor element-wise.

\[out_i = e^{x_i}\]
Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> exp = ops.Exp()
>>> output = exp(x)
>>> print(output)
[ 2.718282  7.389056 54.598152]
class tinyms.primitives.ExpandDims(*args, **kwargs)[source]

Adds an additional dimension to input_x at the given axis.

Note

If the specified axis is a negative number, the index is counted backward from the end and starts at 1.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (int) - Specifies the dimension index at which to expand the shape of input_x. The value of axis must be in the range [-input_x.ndim-1, input_x.ndim]. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((1, x_1, x_2, ..., x_R)\) if the value of axis is 0. It has the same data type as input_x.

Raises

ValueError – If axis is not an int or not in the valid range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> expand_dims = ops.ExpandDims()
>>> output = expand_dims(input_tensor, 0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
class tinyms.primitives.Expm1(*args, **kwargs)[source]

Returns the exponential of a tensor minus 1, element-wise.

\[out_i = e^{x_i} - 1\]
Inputs:
  • x (Tensor) - The input tensor. With float16 or float32 data type. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape as the x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 1.0, 2.0, 4.0]), mindspore.float32)
>>> expm1 = ops.Expm1()
>>> output = expm1(x)
>>> print(output)
[ 0.        1.718282  6.389056 53.598152]
class tinyms.primitives.Eye(*args, **kwargs)[source]

Creates a tensor with ones on the diagonal and zeros elsewhere.

Inputs:
  • n (int) - The number of rows of the returned tensor. Only a constant value is allowed.

  • m (int) - The number of columns of the returned tensor. Only a constant value is allowed.

  • t (mindspore.dtype) - MindSpore’s dtype, the data type of the returned tensor. The data type can be Number.

Outputs:

Tensor, a tensor with ones on the diagonal and the rest of the elements zero. The shape of the output depends on the inputs n and m, and the data type depends on the input t.

Supported Platforms:

Ascend GPU CPU

Examples

>>> eye = ops.Eye()
>>> output = eye(2, 2, mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = eye(1, 2, mindspore.float64)
>>> print(output)
[[1. 0.]]
>>> print(output.dtype)
Float64
>>> # if an anti-diagonal matrix is wanted
>>> anti_diagonal_input = eye(2, 2, mindspore.int32)
>>> # Note that ReverseV2 only supports "Ascend" at this time
>>> reverse = ops.ReverseV2([1])
>>> anti_diagonal_output = reverse(anti_diagonal_input)
>>> print(anti_diagonal_output)
[[0 1]
 [1 0]]
class tinyms.primitives.FFT3D(*args, **kwargs)[source]

Forward FFT with Three-Dimensional Input.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Inputs:
  • input_tensor (Tensor) - Three dimensional tensor, supported data type is float32.

Outputs:
  • output_tensor (Tensor) - The tensor after undergoing fast Fourier transform, the data type is complex64.

Supported Platforms:

GPU
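
Examples

Note: no example is given above; the following is only an illustrative sketch, assuming a GPU device and that the operator takes the single three-dimensional float32 input described above. The complex64 output values are not shown.

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> input_tensor = Tensor(np.random.rand(4, 4, 4).astype(np.float32))
>>> fft3d = ops.FFT3D()
>>> output_tensor = fft3d(input_tensor)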

class tinyms.primitives.FastGeLU(*args, **kwargs)[source]

Fast Gaussian Error Linear Units activation function.

FastGeLU is defined as follows:

\[\text{output} = \frac {x} {1 + \exp(-1.702 * \left| x \right|)} * \exp(0.851 * (x - \left| x \right|)),\]

where \(x\) is the element of the input.

Inputs:
  • x (Tensor) - Input to compute the FastGeLU with data type of float16 or float32.

Outputs:

Tensor, with the same type and shape as x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> fast_gelu = ops.FastGeLU()
>>> output = fast_gelu(x)
>>> print(output)
[[-1.5418735e-01  3.9921875e+00 -9.7473649e-06]
 [ 1.9375000e+00 -1.0052517e-03  8.9824219e+00]]
class tinyms.primitives.FastGelu(**kwargs)[source]

Same as operator FastGeLU. FastGelu will be deprecated in the future. Please use FastGeLU instead.

class tinyms.primitives.Fill(*args, **kwargs)[source]

Creates a tensor filled with a scalar value.

Creates a tensor with the shape described by the shape input and fills it with the given value.

Inputs:
  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

  • shape (tuple) - The specified shape of output tensor. Only constant value is allowed.

  • value (scalar) - Value to fill the returned tensor. Only constant value is allowed.

Outputs:

Tensor, has the type specified by type and the shape specified by shape, filled with value.

Raises

TypeError – If shape is not a tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> fill = ops.Fill()
>>> output = fill(mindspore.float32, (2, 2), 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = fill(mindspore.float32, (3, 3), 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
class tinyms.primitives.Flatten(*args, **kwargs)[source]

Flattens a tensor without changing its batch size on the 0-th axis.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\) to be flattened, where \(N\) is batch size.

Outputs:

Tensor, the shape of the output tensor is \((N, X)\), where \(X\) is the product of the remaining dimension.

Raises
  • TypeError – If input_x is not a Tensor.

  • ValueError – If length of shape of input_x is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> flatten = ops.Flatten()
>>> output = flatten(input_x)
>>> print(output.shape)
(1, 24)
class tinyms.primitives.FloatStatus(*args, **kwargs)[source]

Determines if the elements contain Not a Number (NaN), positive infinity or negative infinity. 0 for normal, 1 for overflow.

Inputs:
  • x (Tensor) - The input tensor. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the shape of (1,), and the dtype is mindspore.dtype.float32.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

GPU

Examples

>>> float_status = ops.FloatStatus()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = float_status(x)
>>> print(result)
[1.]
class tinyms.primitives.Floor(*args, **kwargs)[source]

Rounds a tensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Inputs:
  • x (Tensor) - The input tensor. Its element data type must be float. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If dtype of x is not float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> floor = ops.Floor()
>>> output = floor(x)
>>> print(output)
[ 1.  2. -2.]
class tinyms.primitives.FloorDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise and round down to the closest integer.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = \text{floor}( \frac{x_i}{y_i})\]

where the \(floor\) indicates the Floor operator, for more details, please refer to the Floor operator.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_div = ops.FloorDiv()
>>> output = floor_div(x, y)
>>> print(output)
[ 0  1 -1]
class tinyms.primitives.FloorMod(*args, **kwargs)[source]

Computes the remainder of division element-wise. It’s a flooring divide. E.g. \(floor(x / y) * y + mod(x, y) = x\).

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool , and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} - \text{floor}(\frac{x_{i}}{y_{i}}) * y_{i}\]

where the \(floor\) indicates the Floor operator, for more details, please refer to the Floor operator.

Warning

  • The input data does not support 0.

  • When the number of elements of the input exceeds 2048, the accuracy of the operator cannot guarantee a relative error within two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as \((D1, D2, ..., Dn)\), then \(D1 * D2 * ... * Dn \le 1000000\) and \(n \le 8\).

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_mod = ops.FloorMod()
>>> output = floor_mod(x, y)
>>> print(output)
[2 1 2]
class tinyms.primitives.FusedCastAdamWeightDecay(*args, **kwargs)[source]

Updates gradients by the Adaptive Moment Estimation (AdamWeightDecay) algorithm with weight decay. This operator incorporates type conversion when parameters are initialized with dtype of float16.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. The AdamWeightDecay variant was proposed in Decoupled Weight Decay Regularization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ update = \frac{m}{\sqrt{v} + eps} \\ update = \begin{cases} update + weight\_decay * w & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w = w - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(\beta_1, \beta_2\) represent beta1 and beta2, \(lr\) represents learning_rate, \(w\) represents var, \(decay\) represents weight_decay, \(\epsilon\) represents epsilon.

Parameters

use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated with the type float16 or float32.

  • m (Tensor) - The 1st moment vector in the updating formula with the type float32.

  • v (Tensor) - the 2nd moment vector in the updating formula with the type float32.

  • lr (float) - \(lr\) in the updating formula.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations.

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • decay (float) - The weight decay value, must be a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the type float16.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.context as context
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.opt = ops.FusedCastAdamWeightDecay()
...         self.var = Parameter(Tensor(np.ones([2, 2]), mstype.float16), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]), mstype.float32), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]), mstype.float32), name="v")
...     def construct(self, lr, beta1, beta2, epsilon, decay, grad):
...         out = self.opt(self.var, self.m, self.v, lr, beta1, beta2, epsilon, decay, grad)
...         return out
>>> context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]), mstype.float16)
>>> output = net(0.001, 0.9, 0.999, 1e-8, 0.0, gradient)
>>> print(net.var.asnumpy())
class tinyms.primitives.FusedSparseAdam(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. This operator is used when the gradient is sparse.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and data type as var.

  • v (Parameter) - The 2nd moment vector in the updating formula, has the same shape and data type as var. Mean square gradients, has the same type as var with float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • lr (Tensor) - \(l\) in the updating formula. With float32 data type. The shape is \((1, )\).

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type. The shape is \((1, )\).

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type. The shape is \((1, )\).

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type. The shape is \((1, )\).

  • gradient (Tensor) - Gradient, has the same data type as var and gradient.shape[1:] = var.shape[1:] if var.shape > 1.

  • indices (Tensor) - Gradient indices with int32 data type and indices.shape[0] = gradient.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((1, )\).

  • m (Tensor) - A Tensor with shape \((1, )\).

  • v (Tensor) - A Tensor with shape \((1, )\).

Raises
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If dtype of var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient or indices is not float32.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adam = ops.FusedSparseAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                                      epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> beta2_power = Tensor(0.999, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.999, mindspore.float32)
>>> epsilon = Tensor(1e-8, mindspore.float32)
>>> gradient = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]), mindspore.float32)
>>> indices = Tensor([0, 1], mindspore.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(net.var.asnumpy())
[[[0.9997121  0.9997121 ]]
 [[0.9997121  0.9997121 ]]
 [[0.99971527 0.99971527]]]
class tinyms.primitives.FusedSparseFtrl(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the FTRL-proximal scheme.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – Use locks for updating operation if true . Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be same type and shape as var.

  • grad (Tensor) - A tensor of the same type as var and grad.shape[1:] = var.shape[1:] if var.shape > 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((1, )\).

  • accum (Tensor) - A Tensor with shape \((1, )\).

  • linear (Tensor) - A Tensor with shape \((1, )\).

Raises
  • TypeError – If lr, l1, l2 or lr_power is not a float.

  • ValueError – If lr_power is greater than zero.

  • TypeError – If dtype of var is not float32.

  • TypeError – If dtype of indices is not int32.

  • TypeError – If shape of accum, linear or grad is not same as var.

  • TypeError – If shape of indices is not same as shape of first dimension of grad.

Supported Platforms:

Ascend CPU

Examples

>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.FusedSparseFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(net.var.asnumpy())
[[[-0.00598256 -0.00598256]]
 [[-0.00598256 -0.00598256]]
 [[ 1.          1.        ]]]
class tinyms.primitives.FusedSparseLazyAdam(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (LazyAdam) algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the original Adam algorithm, as only the current indices parameters will be updated.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t\) and \(beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and data type as var.

  • v (Parameter) - The 2nd moment vector in the updating formula, has the same shape and data type as var. Mean square gradients, has the same type as var with float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • lr (Tensor) - \(l\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type. The shape is \((1, )\).

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type. The shape is \((1, )\).

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type. The shape is \((1, )\).

  • gradient (Tensor) - Gradient value with float32 data type and gradient.shape[1:] = var.shape[1:] if var.shape > 1.

  • indices (Tensor) - Gradient indices with int32 data type and indices.shape[0] = gradient.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((1, )\).

  • m (Tensor) - A Tensor with shape \((1, )\).

  • v (Tensor) - A Tensor with shape \((1, )\).

Raises
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If dtype of var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not float32.

  • TypeError – If dtype of indices is not int32.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_lazyadam = ops.FusedSparseLazyAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_lazyadam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1,
...                                          beta2, epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> beta2_power = Tensor(0.999, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.999, mindspore.float32)
>>> epsilon = Tensor(1e-8, mindspore.float32)
>>> gradient = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]), mindspore.float32)
>>> indices = Tensor([0, 1], mindspore.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(net.var.asnumpy())
[[[0.9997121  0.9997121 ]]
 [[0.9997121  0.9997121 ]]
 [[1.         1.        ]]]
class tinyms.primitives.FusedSparseProximalAdagrad(*args, **kwargs)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the proximal adagrad algorithm.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

All of the inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. A RuntimeError exception will be thrown when data type conversion of a Parameter is required.

Parameters

use_locking (bool) – If true, the variable and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Tensor) - The learning rate value. The data type must be float32. The shape is \((1, )\).

  • l1 (Tensor) - l1 regularization strength. The data type must be float32. The shape is \((1, )\).

  • l2 (Tensor) - l2 regularization strength. The data type must be float32. The shape is \((1, )\).

  • grad (Tensor) - A tensor of the same data type as var and grad.shape[1:] = var.shape[1:] if var.shape > 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 2 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((1, )\).

  • accum (Tensor) - A Tensor with shape \((1, )\).

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, lr, l1, l2 or grad is not float32.

  • TypeError – If dtype of indices is not int32.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.FusedSparseProximalAdagrad()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="accum")
...         self.lr = Tensor(0.01, mindspore.float32)
...         self.l1 = Tensor(0.0, mindspore.float32)
...         self.l2 = Tensor(0.0, mindspore.float32)
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(net.var.asnumpy())
[[[0.99900496 0.99900496]]
 [[0.99900496 0.99900496]]
 [[1.         1.        ]]]
class tinyms.primitives.FusedWeightScaleApplyMomentum(*args, **kwargs)[source]

Optimizer that implements the Momentum algorithm with weight decay and loss scale.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. Data type conversion of Parameter is not supported; a RuntimeError exception will be thrown.

Inputs:
  • weight_decay (Tensor) - The weight decay value, must be a scalar tensor with float data type. Default: 0.0.

  • loss_scale (Tensor) - The loss scale value, must be a scalar tensor with float data type. Default: 1.0.

  • variable (Parameter) - Weights to be updated. The data type must be float.

  • accumulation (Parameter) - Accumulated gradient value by moment weight. Has the same data type with variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float number or a scalar tensor with float data type.

Outputs:

Tensor, parameters to be updated.

Supported Platforms:

GPU

Examples

Please refer to the usage in mindspore.nn.Momentum, and add weight_decay and loss_scale as inputs.
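
A minimal sketch of wiring this operator into a cell is shown below. It is illustrative only, not taken from the MindSpore documentation; it assumes the inputs are passed in the order listed above, and the parameter shapes and hyper-parameter values are made up for the example.

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>>
>>> class MomentumNet(nn.Cell):
...     def __init__(self):
...         super(MomentumNet, self).__init__()
...         self.opt = ops.FusedWeightScaleApplyMomentum()
...         self.variable = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="variable")
...         self.accumulation = Parameter(Tensor(np.zeros([2, 2]).astype(np.float32)), name="accumulation")
...     def construct(self, weight_decay, loss_scale, lr, grad, momentum):
...         # Inputs follow the documented order: weight_decay, loss_scale, variable,
...         # accumulation, learning_rate, gradient, momentum.
...         return self.opt(weight_decay, loss_scale, self.variable, self.accumulation, lr, grad, momentum)
...
>>> net = MomentumNet()
>>> weight_decay = Tensor(0.0001, mindspore.float32)
>>> loss_scale = Tensor(1.0, mindspore.float32)
>>> lr = Tensor(0.1, mindspore.float32)
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> momentum = Tensor(0.9, mindspore.float32)
>>> output = net(weight_decay, loss_scale, lr, grad, momentum)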

class tinyms.primitives.Gamma(*args, **kwargs)[source]

Produces random positive floating-point values x, distributed according to probability density function:

\[\text{P}(x|\alpha,\beta) = \frac{\exp(-x/\beta)}{\beta^{\alpha}\cdot\Gamma(\alpha)}\cdot x^{\alpha-1}\]
Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • alpha (Tensor) - The α distribution parameter. It must be greater than 0. It is also known as the shape parameter with float32 data type.

  • beta (Tensor) - The β distribution parameter. It must be greater than 0. It is also known as the scale parameter with float32 data type.

Outputs:

Tensor. The shape must be the broadcasted shape of Input “shape” and shapes of alpha and beta. The dtype is float32.

Raises
  • TypeError – If neither seed nor seed2 is an int.

  • TypeError – If neither alpha nor beta is a Tensor.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend

Examples

>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> beta = Tensor(np.array([1.0]), mstype.float32)
>>> gamma = ops.Gamma(seed=3)
>>> output = gamma(shape, alpha, beta)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
class tinyms.primitives.Gather(*args, **kwargs)[source]

Returns a slice of the input tensor based on the specified indices and axis.

Slices the input tensor based on the indices at the specified axis. See the following examples for a clearer illustration.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The original Tensor.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Must be in the range [0, input_param.shape[axis]) which are only validated on CPU. The data type can be int32 or int64.

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \(input\_params.shape[:axis] + input\_indices.shape + input\_params.shape[axis + 1:]\).

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[ 2.  7.]
 [ 4. 54.]
 [ 2. 55.]]
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[3. 4. 54. 22.]
 [2. 2. 55.  3.]]
class tinyms.primitives.GatherD(*args, **kwargs)[source]

Gathers values along an axis specified by dim.

For a 3-D tensor, the output is:

output[i][j][k] = x[index[i][j][k]][j][k]  # if dim == 0

output[i][j][k] = x[i][index[i][j][k]][k]  # if dim == 1

output[i][j][k] = x[i][j][index[i][j][k]]  # if dim == 2

If x is an n-D tensor with shape \((z_0, z_1, ..., z_i, ..., z_{n-1})\) and dim = i, the index must be an n-D tensor with shape \((z_0, z_1, ..., y, ..., z_{n-1})\) where y>=1 and the output will have the same shape as index.

Inputs:
  • x (Tensor) - The source tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • dim (int) - The axis along which to index. It must be int32 or int64. Only constant value is allowed.

  • index (Tensor) - The indices of elements to gather. It can be one of the following data types: int32, int64. The value range of each index element is [-x_rank[dim], x_rank[dim]).

Outputs:

Tensor, has the same shape as index and the same data type as x.

Raises
  • TypeError – If dtype of dim or index is neither int32 nor int64.

  • ValueError – If length of shape of x is not equal to length of shape of index.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = ops.GatherD()(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
class tinyms.primitives.GatherNd(*args, **kwargs)[source]

Gathers slices from a tensor by indices.

Using given indices to gather slices from a tensor with a specified shape.

indices is a K-dimensional integer tensor. Suppose it is a (K-1)-dimensional tensor, each element of which defines a slice of input_x:

\[output[(i_0, ..., i_{K-2})] = input\_x[indices[(i_0, ..., i_{K-2})]]\]

The last dimension of indices cannot exceed the rank of input_x: \(indices.shape[-1] <= input\_x.rank\).

Inputs:
  • input_x (Tensor) - The target tensor to gather values. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • indices (Tensor) - The index tensor, with int32 or int64 data type. The dimension of indices should be <= the dimension of input_x.

Outputs:

Tensor, has the same type as input_x and the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Raises

ValueError – If length of shape of input_x is less than the last dimension of indices.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.GatherNd()
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> output = op(input_x, indices)
>>> print(output)
[-0.1  0.5]
class tinyms.primitives.GatherV2(**kwargs)[source]

Same as operator Gather. GatherV2 will be deprecated in the future. Please use Gather instead.

class tinyms.primitives.GeLU(*args, **kwargs)[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). See also BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

GeLU is defined as follows:

\[\text{output} = 0.5 * x * (1 + \operatorname{erf}(x / \sqrt{2})),\]

where \(\operatorname{erf}\) is the Gauss error function.

Inputs:
  • x (Tensor) - Input to compute the GeLU with data type of float16 or float32.

Outputs:

Tensor, with the same type and shape as x.

Raises
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> gelu = ops.GeLU()
>>> result = gelu(x)
>>> print(result)
[0.841192  1.9545976  2.9963627]
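
As a check against the formula above, the first element is \(0.5 \cdot 1.0 \cdot (1 + \operatorname{erf}(1/\sqrt{2})) \approx 0.8413\), matching the printed result.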
class tinyms.primitives.GeSwitch(*args, **kwargs)[source]

Adds control switch to data.

The data flows into the true or false branch depending on the condition. If the condition is true, the true branch is activated; otherwise, the false branch is activated.

Inputs:
  • data (Union[Tensor, Number]) - The data to be used for switch control.

  • pred (Tensor) - It must be a scalar whose type is bool and shape is (), It is used as condition for switch control.

Outputs:

tuple. The output is tuple(false_output, true_output). The elements in the tuple have the same shape as the input data. The false_output connects with the false branch and the true_output connects with the true branch.

Raises
  • TypeError – If data is neither a Tensor nor a Number.

  • TypeError – If pred is not a Tensor.

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.square = ops.Square()
...         self.add = ops.Add()
...         self.value = Tensor(np.full((1), 3), mindspore.float32)
...         self.switch = ops.GeSwitch()
...         self.merge = ops.Merge()
...         self.less = ops.Less()
...
...     def construct(self, x, y):
...         cond = self.less(x, y)
...         st1, sf1 = self.switch(x, cond)
...         st2, sf2 = self.switch(y, cond)
...         add_ret = self.add(st1, st2)
...         st3, sf3 = self.switch(self.value, cond)
...         sq_ret = self.square(sf3)
...         ret = self.merge((add_ret, sq_ret))
...         return ret[0]
...
>>> x = Tensor(10.0, dtype=mindspore.float32)
>>> y = Tensor(5.0, dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
class tinyms.primitives.Gelu(**kwargs)[source]

Same as operator GeLU. Gelu will be deprecated in the future. Please use GeLU instead.

class tinyms.primitives.GetCenterOfMass(*args, **kwargs)[source]

Gets the coordinate of the centroid of each residue. Assume the system has n atoms.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters

residue_numbers (int32) – the number of residues m.

Inputs:
  • start (Tensor) - The start atom index of each residue. The data type is int32 and the shape is \((m,)\).

  • end (Tensor) - The end atom index of each residue. The data type is int32 and the shape is \((m,)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • atom_mass (Tensor) - The mass of each atom and the atom number is n. The data type is float32 and the shape is \((n,)\).

  • residue_mass_inverse (Tensor) - The inverse of mass of each residue. The data type is float32 and the shape is \((m,)\).

Outputs:
  • center_of_mass (Tensor) - The coordinate of centroid of each residue. The data type is float32 and the shape is \((m, 3)\).

Supported Platforms:

GPU
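
A minimal usage sketch (illustrative only; it assumes the constructor takes residue_numbers as described in Parameters and that the inputs are passed in the documented order):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>>
>>> # Two residues (m = 2) spanning four atoms (n = 4); all values are made up.
>>> get_com = ops.GetCenterOfMass(2)
>>> start = Tensor(np.array([0, 2]), mindspore.int32)
>>> end = Tensor(np.array([2, 4]), mindspore.int32)
>>> crd = Tensor(np.random.random((4, 3)).astype(np.float32))
>>> atom_mass = Tensor(np.array([1.0, 1.0, 12.0, 16.0]), mindspore.float32)
>>> residue_mass_inverse = Tensor(np.array([1.0 / 2.0, 1.0 / 28.0]), mindspore.float32)
>>> center_of_mass = get_com(start, end, crd, atom_mass, residue_mass_inverse)
>>> print(center_of_mass.shape)  # (m, 3) per the Outputs description
(2, 3)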

class tinyms.primitives.GetNext(*args, **kwargs)[source]

Returns the next element in the dataset queue.

Note

The GetNext operation needs to be associated with a network and also depends on the init_dataset interface; it cannot be used directly as a single operation. For details, please refer to the connect_network_with_dataset source code.

Parameters
  • types (list[mindspore.dtype]) – The type of the outputs.

  • shapes (list[tuple[int]]) – The dimensionality of the outputs.

  • output_num (int) – The output number, length of types and shapes.

  • shared_name (str) – The queue name of init_dataset interface.

Inputs:

No inputs.

Outputs:

tuple[Tensor], the output of Dataset. The shape is described in shapes and the type is described in types.

Supported Platforms:

Ascend GPU

Examples

>>> train_dataset = create_custom_dataset()
>>> dataset_helper = mindspore.DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> dataset = dataset_helper.iter.dataset
>>> dataset_types, dataset_shapes = dataset_helper.types_shapes()
>>> queue_name = dataset.__transfer_dataset__.queue_name
>>> get_next = ops.GetNext(dataset_types, dataset_shapes, len(dataset_types), queue_name)
>>> data, label = get_next()
>>> relu = ops.ReLU()
>>> result = relu(data).asnumpy()
>>> print(result.shape)
(32, 1, 32, 32)
class tinyms.primitives.Greater(*args, **kwargs)[source]

Computes the boolean value of \(x > y\) element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}>y_{i} \\ & \text{False, if } x_{i}<=y_{i} \end{cases}\end{split}\]

Note

Broadcasting is supported.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater = ops.Greater()
>>> output = greater(x, y)
>>> print(output)
[False  True False]
class tinyms.primitives.GreaterEqual(*args, **kwargs)[source]

Computes the boolean value of \(x >= y\) element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}>=y_{i} \\ & \text{False, if } x_{i}<y_{i} \end{cases}\end{split}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater_equal = ops.GreaterEqual()
>>> output = greater_equal(x, y)
>>> print(output)
[True True False]
class tinyms.primitives.HShrink(*args, **kwargs)[source]

Applies the hard shrinkage function element-wise; each element complies with the following function:

\[\begin{split}\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters

lambd (float) – The value for the HardShrink formulation. Default: 0.5

Inputs:
  • input_x (Tensor) - The input of HardShrink with data type of float16 or float32.

Outputs:

Tensor, the same shape and data type as the input.

Supported Platforms:

Ascend

Raises
  • TypeError – If lambd is not a float.

  • TypeError – If dtype of input_x is neither float16 nor float32.

Examples

>>> input_x = Tensor(np.array([[0.5, 1, 2.0], [0.0533, 0.0776, -2.1233]]), mstype.float32)
>>> hshrink = ops.HShrink()
>>> output = hshrink(input_x)
>>> print(output)
[[ 0.      1.      2.    ]
[ 0.      0.     -2.1233]]
class tinyms.primitives.HSigmoid(*args, **kwargs)[source]

Hard sigmoid activation function.

Applies hard sigmoid activation element-wise. The input is a Tensor with any valid shape.

Hard sigmoid is defined as:

\[\text{hsigmoid}(x_{i}) = max(0, min(1, \frac{x_{i} + 3}{6})),\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> hsigmoid = ops.HSigmoid()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hsigmoid(input_x)
>>> print(result)
[0.3333 0.1666 0.5    0.8335 0.6665]
class tinyms.primitives.HSwish(*args, **kwargs)[source]

Hard swish activation function.

Applies hswish-type activation element-wise. The input is a Tensor with any valid shape.

Hard swish is defined as:

\[\text{hswish}(x_{i}) = x_{i} * \frac{ReLU6(x_{i} + 3)}{6},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

GPU CPU

Examples

>>> hswish = ops.HSwish()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hswish(input_x)
>>> print(result)
[-0.3333  -0.3333  0  1.666  0.6665]
class tinyms.primitives.HistogramFixedWidth(*args, **kwargs)[source]

Returns a rank 1 histogram counting the number of entries in values that fall into every bin. The bins are equal width and determined by the arguments range and nbins.

Parameters
  • dtype (str) – An optional attribute. The dtype must be “int32”. Default: “int32”.

  • nbins (int) – The number of histogram bins, the type is a positive integer.

Inputs:
  • x (Tensor) - Numeric Tensor. Must be one of the following types: int32, float32, float16.

  • range (Tensor) - Must has the same data type as x, and the shape is [2]. x <= range[0] will be mapped to hist[0], x >= range[1] will be mapped to hist[-1].

Outputs:

Tensor, the type is int32.

Raises
  • TypeError – If dtype is not a str or nbins is not an int.

  • ValueError – If nbins is less than 1.

  • ValueError – If dtype is neither ‘int32’ nor ‘int64’.

Supported Platforms:

Ascend

Examples

>>> x = Tensor([-1.0, 0.0, 1.5, 2.0, 5.0, 15], mindspore.float16)
>>> range_op = Tensor([0.0, 5.0], mindspore.float16)
>>> hist = ops.HistogramFixedWidth(5)
>>> output = hist(x, range_op)
>>> print(output)
[2 1 1 0 2]
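
With range [0.0, 5.0] and nbins=5, each bin covers a width of 1: -1.0 and 0.0 fall into hist[0], 1.5 into hist[1], 2.0 into hist[2], nothing into hist[3], and both 5.0 and 15 are mapped to hist[-1], giving [2 1 1 0 2].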
class tinyms.primitives.HistogramSummary(*args, **kwargs)[source]

Outputs the tensor to protocol buffer through histogram summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.HistogramSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
...
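
A hedged follow-up showing how the cell above might be driven; the summary data is only written out when the network runs under a summary recording context (for example mindspore.train.summary.SummaryRecord or the SummaryCollector callback), so this snippet is illustrative only:

>>> import numpy as np
>>> from mindspore import Tensor
>>>
>>> net = SummaryDemo()
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> y = Tensor(np.array([4.0, 5.0, 6.0]).astype(np.float32))
>>> out = net(x, y)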
class tinyms.primitives.HookBackward(hook_fn, cell_id='')[source]

This operation is used as a tag to hook gradient in intermediate variables. Note that this function is only supported in Pynative Mode.

Note

The hook function must be defined like hook_fn(grad) -> Tensor or None, where grad is the gradient passed to the primitive and gradient may be modified and passed to next primitive. The difference between a hook function and callback of InsertGradientOf is that a hook function is executed in the python environment while callback will be parsed and added to the graph.

Parameters

hook_fn (Function) – Python function. hook function.

Inputs:
  • inputs (Tensor) - The variable to hook.

Raises
  • TypeError – If inputs are not a Tensor.

  • TypeError – If hook_fn is not a function of python.

Examples

>>> def hook_fn(grad_out):
...     print(grad_out)
...
>>> grad_all = GradOperation(get_all=True)
>>> hook = ops.HookBackward(hook_fn)
>>> def hook_test(x, y):
...     z = x * y
...     z = hook(z)
...     z = z * y
...     return z
...
>>> def backward(x, y):
...     return grad_all(hook_test)(x, y)
...
>>> output = backward(1, 2)
>>> print(output)
class tinyms.primitives.IFFT3D(*args, **kwargs)[source]

Inverse FFT with Three-Dimensional Input.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Inputs:
  • input_tensor (Tensor) - Three dimensional input tensor, supported data type is complex64.

Outputs:
  • output_tensor (Tensor) - Returns the tensor after undergoing inverse Fourier transform, the data type is float32.

Supported Platforms:

GPU
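
A minimal usage sketch (illustrative only; the input values are made up and the operator is assumed to take no constructor arguments):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>>
>>> ifft3d = ops.IFFT3D()
>>> input_tensor = Tensor(np.ones((4, 4, 4)).astype(np.complex64))
>>> output_tensor = ifft3d(input_tensor)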

class tinyms.primitives.IOU(*args, **kwargs)[source]

Calculates intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

\[ \begin{align}\begin{aligned}\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\\\text{IOF} = \frac{\text{Area of Overlap}}{\text{Area of Ground Truth}}\end{aligned}\end{align} \]

Warning

In Ascend, only computation of float16 data is supported. To avoid overflow, the input length and width are scaled by 0.2 internally.

Parameters

mode (string) – The mode is used to specify the calculation method, now supporting ‘iou’ (intersection over union) or ‘iof’ (intersection over foreground) mode. Default: ‘iou’.

Inputs:
  • anchor_boxes (Tensor) - Anchor boxes, tensor of shape (N, 4). “N” indicates the number of anchor boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be float16 or float32.

  • gt_boxes (Tensor) - Ground truth boxes, tensor of shape (M, 4). “M” indicates the number of ground truth boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be float16 or float32.

Outputs:

Tensor, the ‘iou’ values, tensor of shape (M, N), with the same data type as anchor_boxes.

Raises

KeyError – When mode is not ‘iou’ or ‘iof’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> iou = ops.IOU()
>>> anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> output = iou(anchor_boxes, gt_boxes)
>>> print(output.shape)
(3, 3)
class tinyms.primitives.Identity(*args, **kwargs)[source]

Returns a Tensor with the same shape and contents as input.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Outputs:

Tensor, the shape of the tensor and the data type are the same as x, \((x_1, x_2, ..., x_R)\).

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU GPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> output = ops.Identity()(x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.Imag(*args, **kwargs)[source]

Returns a new tensor containing imaginary value of the input.

Inputs:
  • input (Tensor, complex) - The input tensor. types: complex64, complex128.

Outputs:

Tensor, has the float type.

Raises

TypeError – If the dtype of input is not one of: complex64, complex128.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.asarray(1.3 + 0.4j), mindspore.complex64)
>>> conj = ops.Imag()
>>> output = conj(x)
>>> print(output)
0.4
class tinyms.primitives.ImageSummary(*args, **kwargs)[source]

Outputs the image tensor to protocol buffer through image summary operator.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of image, the rank of tensor must be 4.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.summary = ops.ImageSummary()
...
...     def construct(self, x):
...         name = "image"
...         out = self.summary(name, x)
...         return out
...
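
As with HistogramSummary, the cell above only records data when run under a summary recording context; a hedged, illustrative driver could look like this:

>>> import numpy as np
>>> from mindspore import Tensor
>>>
>>> net = Net()
>>> img = Tensor(np.random.random((1, 3, 32, 32)).astype(np.float32))  # rank-4 image batch
>>> out = net(img)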
class tinyms.primitives.InTopK(*args, **kwargs)[source]

Determines whether the targets are in the top k predictions.

Parameters

k (int) – Specifies the number of top elements to be used for computing precision.

Inputs:
  • x1 (Tensor) - A 2D Tensor defines the predictions of a batch of samples with float16 or float32 data type.

  • x2 (Tensor) - A 1D Tensor defines the labels of a batch of samples with int32 data type. The size of x2 must be equal to x1’s first dimension. The values of x2 can not be negative and must be equal to or less than index of x1’s second dimension.

Outputs:

Tensor has 1 dimension of type bool and the same shape with x2. For labeling sample i in x2, if the label in the first k predictions for sample i is in x1, then the value is True, otherwise False.

Raises
  • TypeError – If k is not an int.

  • TypeError – If x1 or x2 is not a Tensor.

  • TypeError – If dtype of x1 is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
>>> x2 = Tensor(np.array([1, 3]), mindspore.int32)
>>> in_top_k = ops.InTopK(3)
>>> output = in_top_k(x1, x2)
>>> print(output)
[ True  False]
class tinyms.primitives.IndexAdd(*args, **kwargs)[source]

Adds tensor y to specified axis and indices of tensor x. The axis should be in the range from 0 to len(x.dim) - 1, and indices should be in the range from 0 to the size of x at the axis dimension.

Parameters

axis (int) – The dimension along which to index.

Inputs:
  • x (Parameter) - The input tensor to add to.

  • indices (Tensor) - The index of x on the axis th dimension to add to, with data type int32. The indices must be 1D with the same size as the size of the axis th dimension of y. The values of indices should be in the range of 0 to the size of the axis th dimension of x.

  • y (Tensor) - The input tensor with the value to add. Must have same data type as x. The shape must be the same as x except the axis th dimension.

Outputs:

Tensor, has the same shape and dtype as x.

Raises
  • TypeError – If x is not a Tensor.

  • TypeError – If neither indices nor y is a Tensor.

  • ValueError – If axis is out of x rank’s range.

  • ValueError – If x rank is not the same as y rank.

  • ValueError – If size of indices is not equal to dimension of y[axis].

  • ValueError – If y’s shape is not the same as x except the axis th dimension.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.index_add = ops.IndexAdd(axis=1)
...         self.x = Parameter(Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32))
...         self.indices = Tensor(np.array([0, 2]), mindspore.int32)
...
...     def construct(self, y):
...         return self.index_add(self.x, self.indices, y)
...
>>> y = Tensor(np.array([[0.5, 1.0], [1.0, 1.5], [2.0, 2.5]]), mindspore.float32)
>>> net = Net()
>>> output = net(y)
>>> print(output)
[[ 1.5  2.   4. ]
 [ 5.   5.   7.5]
 [ 9.   8.  11.5]]
class tinyms.primitives.InplaceAdd(*args, **kwargs)[source]

Adds v into specified rows of x. Computes y = x; y[i,] += v.

Parameters

indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to add with v. It is an integer or a tuple, whose value is in [0, the first dimension size of x).

Inputs:
  • x (Tensor) - The first input is a tensor whose data type is float16, float32 or int32. \((N,*)\) where \(*\) means, any number of additional dimensions; its rank should be less than 8.

  • input_v (Tensor) - The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as indices’s size. It has the same data type with x.

Outputs:

Tensor, has the same shape and dtype as x.

Raises
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple whose elements are not all int.

  • ValueError – If length of shape of x is not equal to length of shape of input_v.

Supported Platforms:

Ascend

Examples

>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceAdd = ops.InplaceAdd(indices)
>>> output = inplaceAdd(x, input_v)
>>> print(output)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
class tinyms.primitives.InplaceSub(*args, **kwargs)[source]

Subtracts v from specified rows of x. Computes y = x; y[i, :] -= v.

Parameters

indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to subtract with v. It is an int or a tuple, whose value is in [0, the first dimension size of x).

Inputs:
  • x (Tensor) - The first input is a tensor whose data type is float16, float32 or int32. \((N,*)\) where \(*\) means, any number of additional dimensions; its rank should be less than 8.

  • input_v (Tensor) - The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as indices's size. It has the same data type as x.

Outputs:

Tensor, has the same shape and dtype as x.

Raises
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple whose elements are not all int.

  • ValueError – If length of shape of x is not equal to length of shape of input_v.

Supported Platforms:

Ascend

Examples

>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceSub = ops.InplaceSub(indices)
>>> output = inplaceSub(x, input_v)
>>> print(output)
[[0.5 1. ]
 [2.  2.5]
 [5.  6. ]]
class tinyms.primitives.InplaceUpdate(*args, **kwargs)[source]

Updates specified rows with values in v.

Parameters

indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to update with v. It is an int or a tuple, whose value is in [0, the first dimension size of x).

Inputs:
  • x (Tensor) - A tensor which to be inplace updated. It can be one of the following data types: float32, float16 and int32.

  • v (Tensor) - A tensor with the same type as x and the same dimension size as x except the first dimension, which must be the same as the size of indices.

Outputs:

Tensor, with the same type and shape as the input x.

Raises
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple and its element is not an int.

Supported Platforms:

Ascend

Examples

>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplace_update = ops.InplaceUpdate(indices)
>>> output = inplace_update(x, v)
>>> print(output)
[[0.5 1. ]
 [1.  1.5]
 [5.  6. ]]
class tinyms.primitives.InsertGradientOf(*args, **kwargs)[source]

Attaches callback to the graph node that will be invoked on the node’s gradient.

Parameters

f (Function) – MindSpore’s Function. Callback function.

Inputs:
  • input_x (Any) - The graph node to attach to.

Outputs:

Tensor, returns input_x directly. InsertGradientOf does not affect the forward result.

Raises

TypeError – If f is not a function of mindspore.

Supported Platforms:

Ascend GPU CPU

Examples

>>> def clip_gradient(dx):
...     ret = dx
...     if ret > 1.0:
...         ret = 1.0
...
...     if ret < 0.2:
...         ret = 0.2
...
...     return ret
...
>>> clip = ops.InsertGradientOf(clip_gradient)
>>> grad_all = ops.GradOperation(get_all=True)
>>> def InsertGradientOfClipDemo():
...     def clip_test(x, y):
...         x = clip(x)
...         y = clip(y)
...         c = x * y
...         return c
...
...     @ms_function
...     def f(x, y):
...         return clip_test(x, y)
...
...     def fd(x, y):
...         return grad_all(clip_test)(x, y)
...
...     print("forward: ", f(1.1, 0.1))
...     print("clip_gradient:", fd(1.1, 0.1))
...
class tinyms.primitives.Inv(*args, **kwargs)[source]

Computes the reciprocal (Inv) of the input tensor element-wise.

\[out_i = \frac{1}{x_{i}}\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float16, float32, int32.

Outputs:

Tensor, has the same shape and data type as x.

Raises

TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend

Examples

>>> inv = ops.Inv()
>>> x = Tensor(np.array([0.25, 0.4, 0.31, 0.52]), mindspore.float32)
>>> output = inv(x)
>>> print(output)
[4.        2.5       3.2258065 1.923077 ]
class tinyms.primitives.Invert(*args, **kwargs)[source]

Flips all bits of input tensor element-wise.

\[out_i = \sim x_{i}\]
Inputs:
  • x (Tensor[int16], Tensor[uint16]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If dtype of x is neither int16 nor uint16.

Supported Platforms:

Ascend

Examples

>>> invert = ops.Invert()
>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> output = invert(x)
>>> print(output)
[-26 -5 -14 -10]
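
Since the inputs are two's-complement integers, flipping all bits gives \(\sim x = -x - 1\); for example, \(\sim 25 = -26\) and \(\sim 4 = -5\), matching the printed result.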
class tinyms.primitives.InvertPermutation(*args, **kwargs)[source]

Computes the inverse of an index permutation.

This operator is mainly used to calculate the inverse of index permutation. It requires a 1-dimensional integer tensor x, which represents the indices of a zero-based array, and exchanges each value with its index position. In other words, for the output tensor y and input tensor x, this operation calculates the following values:

\(y[x[i]] = i, \quad i \in [0, 1, \ldots, \text{len}(x)-1]\).

Note

These values must include 0. There must be no duplicate values and the values can not be negative.

Inputs:
  • input_x (Union[tuple[int], list[int]]) - The input is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\) representing the indices. The values must include 0. There can be no duplicate values or negative values. Only constant value is allowed. The maximum value must be equal to len(input_x) - 1.

Outputs:

tuple[int]. It has the same length as the input.

Raises
  • TypeError – If input_x is neither tuple nor list.

  • TypeError – If element of input_x is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> invert = ops.InvertPermutation()
>>> input_data = (3, 4, 0, 2, 1)
>>> output = invert(input_data)
>>> print(output)
(2, 4, 3, 0, 1)
class tinyms.primitives.IsFinite(*args, **kwargs)[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True\ \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape of input, and the dtype is bool.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_finite = ops.IsFinite()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_finite(x)
>>> print(output)
[False  True False]
class tinyms.primitives.IsInf(*args, **kwargs)[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Inf},\ \ True \\ & \text{ if } x_{i} \ne \text{Inf},\ \ False \end{cases}\end{split}\]

where \(Inf\) means infinity.

Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape of input, and the dtype is bool.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

GPU

Examples

>>> is_inf = ops.IsInf()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_inf(x)
>>> print(output)
[False False True]
class tinyms.primitives.IsInstance(*args, **kwargs)[source]

Checks whether an object is an instance of a target type.

Inputs:
  • inst (Any Object) - The instance to be checked. Only constant value is allowed.

  • type_ (mindspore.dtype) - The target type. Only constant value is allowed.

Outputs:

bool, the check result.

Raises

TypeError – If type_ is not a Type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inst = 1
>>> output = ops.IsInstance()(inst, mindspore.int32)
>>> print(output)
False
class tinyms.primitives.IsNan(*args, **kwargs)[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Nan},\ \ True \\ & \text{ if } x_{i} \ne \text{Nan},\ \ False \end{cases}\end{split}\]

where \(Nan\) means not a number.

Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape of input, and the dtype is bool.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

GPU CPU

Examples

>>> is_nan = ops.IsNan()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_nan(x)
>>> print(output)
[True False False]
class tinyms.primitives.IsSubClass(*args, **kwargs)[source]

Checks whether this type is a sub-class of another type.

Inputs:
  • sub_type (mindspore.dtype) - The type to be checked. Only constant value is allowed.

  • type_ (mindspore.dtype) - The target type. Only constant value is allowed.

Outputs:

bool, the check result.

Raises

TypeError – If sub_type or type_ is not a Type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.IsSubClass()(mindspore.int32,  mindspore.intc)
>>> print(output)
True
class tinyms.primitives.KLDivLoss(*args, **kwargs)[source]

Computes the Kullback-Leibler divergence between the logits and the labels.

The updating formulas of KLDivLoss algorithm are as follows,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = y_n \cdot (\log y_n - x_n)\]

Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(x\) represents logits. \(y\) represents labels. \(\ell(x, y)\) represents output.

Parameters

reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

Inputs:
  • logits (Tensor) - The input Tensor. The data type must be float32.

  • labels (Tensor) - The label Tensor which has the same shape and data type as logits.

Outputs:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as logits. Otherwise it is a scalar.

Raises
  • TypeError – If reduction is not a str.

  • TypeError – If neither logits nor labels is a Tensor.

  • TypeError – If dtype of logits or labels is not float32.

Supported Platforms:

GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.kldiv_loss = ops.KLDivLoss()
...     def construct(self, logits, labels):
...         result = self.kldiv_loss(logits, labels)
...         return result
...
>>> net = Net()
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> output = net(logits, labels)
>>> print(output)
-0.23333333
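
For this example only the middle element contributes, since terms with \(y_n = 0\) are taken as 0: \(l_2 = 1 \cdot (\log 1 - 0.7) = -0.7\), and the default 'mean' reduction gives \(-0.7 / 3 \approx -0.2333\).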
class tinyms.primitives.L2Loss(*args, **kwargs)[source]

Calculates half of the L2 norm of a tensor without using the sqrt.

Set input_x as x and output as loss.

\[loss = sum(x ** 2) / 2\]
Inputs:
  • input_x (Tensor) - A input Tensor. Data type must be float16 or float32.

Outputs:

Tensor, has the same dtype as input_x. The output tensor is the value of loss which is a scalar tensor.

Raises
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float16)
>>> l2_loss = ops.L2Loss()
>>> output = l2_loss(input_x)
>>> print(output)
7.0
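
This follows directly from the formula: \((1^2 + 2^2 + 3^2) / 2 = 14 / 2 = 7.0\).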
class tinyms.primitives.L2Normalize(*args, **kwargs)[source]

L2 Normalization Operator.

This operator will normalize the input using the given axis. The function is shown as follows:

\[\text{output} = \frac{x}{\sqrt{\text{max}(\text{sum} (\text{x}^2), \epsilon)}},\]

where \(\epsilon\) is epsilon.

Parameters
  • axis (Union[list(int), tuple(int), int]) – The starting axis for the input to apply the L2 Normalization. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-4.

Inputs:
  • x (Tensor) - Input to compute the normalization. Tensor of shape \((N, \ldots)\). Data type must be float16 or float32.

Outputs:

Tensor, with the same type and shape as the x.

Raises
  • TypeError – If axis is not one of the following: list, tuple or int.

  • TypeError – If epsilon is not a float.

  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> l2_normalize = ops.L2Normalize()
>>> x = Tensor(np.random.randint(-256, 256, (2, 3, 4)), mindspore.float32)
>>> output = l2_normalize(x)
>>> print(output.shape)
(2, 3, 4)
class tinyms.primitives.LARSUpdate(*args, **kwargs)[source]

Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient.

For more details, please refer to nn.LARS.

Parameters
  • epsilon (float) – Term added to the denominator to improve numerical stability. Default: 1e-05.

  • hyperpara (float) – Trust coefficient for calculating the local learning rate. Default: 0.001.

  • use_clip (bool) – Whether to use clip operation for calculating the local learning rate. Default: False.

Inputs:
  • weight (Tensor) - A tensor, representing the weight. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • gradient (Tensor) - The gradient of weight, which has the same shape and dtype with weight.

  • norm_weight (Tensor) - A scalar tensor, representing the sum of squares of weight.

  • norm_gradient (Tensor) - A scalar tensor, representing the sum of squares of gradient.

  • weight_decay (Union[Number, Tensor]) - Weight decay. It must be a scalar tensor or number.

  • learning_rate (Union[Number, Tensor]) - Learning rate. It must be a scalar tensor or number.

Outputs:

Tensor, represents the new gradient.

Raises
  • TypeError – If neither epsilon nor hyperpara is a float.

  • TypeError – If use_clip is not a bool.

  • TypeError – If weight, gradient, norm_weight or norm_gradient is not a Tensor.

  • TypeError – If weight_decay or learning_rate is neither a Number nor a Tensor.

  • TypeError – If the shape of gradient is not the same as that of weight.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.lars = ops.LARSUpdate()
...         self.reduce = ops.ReduceSum()
...         self.square = ops.Square()
...     def construct(self, weight, gradient):
...         w_square_sum = self.reduce(self.square(weight))
...         grad_square_sum = self.reduce(self.square(gradient))
...         grad_t = self.lars(weight, gradient, w_square_sum, grad_square_sum, 0.0, 1.0)
...         return grad_t
...
>>> weight = Tensor(np.array([[0.5, 0.8, 0.2], [0.6, 0.4, 0.2]]).astype(np.float32))
>>> gradient = Tensor(np.array([[0.4, 0.4, 0.5], [0.2, 0.4, 0.3]]).astype(np.float32))
>>> net = Net()
>>> output = net(Tensor(weight), Tensor(gradient))
>>> print(output)
[[0.0005265  0.0005265 0.00065813]
 [0.00026325 0.0005265 0.00039488]]
class tinyms.primitives.LJEnergy(*args, **kwargs)[source]

Calculate the Van der Waals interaction energy described by Lennard-Jones potential for each atom. Assume the number of atoms is n, and the number of Lennard-Jones types for all atoms is P, which means there will be q = P*(P+1)/2 types of possible Lennard-Jones interactions for all kinds of atom pairs.

Because there is a large number of inputs and they are interrelated, there is no way to construct Examples using random inputs. For details, refer to the SPONGE page in MindSpore.

\[dr = (x_a-x_b, y_a-y_b, z_a-z_b)\]
\[E = A/|dr|^{12} - B/|dr|^{6}\]
Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • cutoff_square (float32) – the square value of cutoff.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom.

    The data type is uint32 and the shape is \((n, 3)\)

  • LJtype (Tensor) - The Lennard-Jones type of each atom.

    The data type is int32 and the shape is \((n,)\)

  • charge (Tensor) - The charge carried by each atom.

    The data type is float32 and the shape is \((n,)\)

  • scaler (Tensor) - The scale factor between real space coordinate and its unsigned int value. The data type is float32 and the shape is \((3,)\)

  • nl_numbers (Tensor) - The number of neighbor atoms of each atom. The data type is int32 and the shape is \((n,)\)

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\).

  • d_LJ_A (Tensor) - The Lennard-Jones A coefficient of each kind of atom pair. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • d_LJ_B (Tensor) - The Lennard-Jones B coefficient of each kind of atom pair. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

Outputs:
  • d_LJ_energy_atom (Tensor) - The Lennard-Jones potential energy of each atom.

    The data type is float32 and the shape is \((n,)\).

  • d_LJ_energy_sum (Scalar), the sum of Lennard-Jones potential energy of each atom. The data type is float32.

Supported Platforms:

GPU

class tinyms.primitives.LJForce(*args, **kwargs)[source]

Calculate the Van der Waals interaction force described by Lennard-Jones potential energy for each atom.

Because there is a large number of inputs and they are interrelated, there is no way to construct Examples using random inputs. For details, refer to the SPONGE page in MindSpore.

\[dr = (x_a-x_b, y_a-y_b, z_a-z_b)\]
\[F = (-12*A/|dr|^{14} + 6*B/|dr|^{8}) * dr\]
Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • cutoff_square (float32) – the square value of cutoff.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\)

  • LJtype (Tensor) - The Lennard-Jones type of each atom.

    The data type is int32 and the shape is \((n,)\)

  • charge (Tensor) - The charge carried by each atom.

    The data type is float32 and the shape is \((n,)\)

  • scaler (Tensor) - The scale factor between real space coordinates and its unsigned int value. The data type is float32 and the shape is \((3,)\)

  • nl_numbers (Tensor) - The number of neighbor atoms of each atom. The data type is int32 and the shape is \((n,)\)

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\).

  • d_LJ_A (Tensor) - The Lennard-Jones A coefficient of each kind of atom pair. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • d_LJ_B (Tensor) - The Lennard-Jones B coefficient of each kind of atom pair. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.LJForceWithPMEDirectForce(*args, **kwargs)[source]

Calculate the Lennard-Jones force and PME direct force together.

The calculation formula of Lennard-Jones part is the same as operator LJForce(), and the PME direct part is within PME method.

Because there is a large number of inputs and they are interrelated, there is no way to construct Examples using random inputs. For details, refer to the SPONGE page in MindSpore.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • cutoff_square (float32) – the square value of cutoff.

  • pme_beta (float32) – PME beta parameter, same as operator PMEReciprocalForce().

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJtype (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The scale factor between real space coordinate and its unsigned int value. The data type is float32 and the shape is \((3,)\).

  • nl_numbers (Tensor) - The number of neighbor atoms of each atom. The data type is int32 and the shape is \((n,)\).

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\).

  • d_LJ_A (Tensor) - The Lennard-Jones A coefficient of each kind of atom pair. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

  • d_LJ_B (Tensor) - The Lennard-Jones B coefficient of each kind of atom pair. q is the number of atom pair. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc (Tensor), The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.LJForceWithPMEDirectForceUpdate(*args, **kwargs)[source]

Calculate the Lennard-Jones force and PME direct force together for pressure.

The calculation formula of Lennard-Jones part is the same as operator LJForce(), and the PME direct part is within PME method.

Because there is a large number of inputs and they are interrelated, there is no way to construct Examples using random inputs. For details, refer to the SPONGE page in MindSpore.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • cutoff (float32) – the square value of cutoff.

  • pme_beta (float32) – PME beta parameter, same as operator PMEReciprocalForce().

  • need_update (int32) – if need_update = 1, calculate the pressure, default 0.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJtype (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The scale factor between real space coordinate and its unsigned int value. The data type is float32 and the shape is \((3,)\).

  • nl_numbers (Tensor) - The number of neighbor atoms of each atom. The data type is int32 and the shape is \((n,)\).

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\).

  • d_LJ_A (Tensor) - The Lennard-Jones A coefficient of each kind of atom pair. The number of atom pair is q. The data type is float32 and the shape is \((q,)\).

  • d_LJ_B (Tensor) - The Lennard-Jones B coefficient of each kind of atom pair. The number of atom pair is q. The data type is float32 and the shape is \((q,)\).

  • beta (Tensor) - PME beta parameter. The data type is float32 and the shape is \((1,)\).

Outputs:
  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.LJForceWithVirialEnergy(*args, **kwargs)[source]

Calculate the Lennard-Jones force, virial and atom energy together.

The calculation formula of Lennard-Jones part is the same as operator LJForce(), and the PME direct part is within PME method.

Because there are a large number of interrelated inputs, Examples cannot be constructed from random data. For details, refer to the SPONGE page in the MindSpore documentation.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • cutoff (float32) – the square value of cutoff.

  • pme_beta (float32) – PME beta parameter, same as operator PMEReciprocalForce().

  • max_neighbor_numbers (int32) – the max neighbor numbers, default 800.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJtype (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The scale factor between real space coordinate and its unsigned int value. The data type is float32 and the shape is \((3,)\).

  • nl_numbers (Tensor) - The neighbor number of each atom. The data type is int32 and the shape is \((n,)\).

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\).

  • d_LJ_A (Tensor) - The Lennard-Jones A coefficient of each kind of atom pair. The number of atom pairs is q. The data type is float32 and the shape is \((q,)\).

  • d_LJ_B (Tensor) - The Lennard-Jones B coefficient of each kind of atom pair. The number of atom pairs is q. The data type is float32 and the shape is \((q,)\).

Outputs:
  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • virial (Tensor) - The virial felt by each atom. The data type is float32 and the shape is \((n,)\).

  • atom_energy (Tensor) - The atom energy felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.LJForceWithVirialEnergyUpdate(*args, **kwargs)[source]

Calculate the Lennard-Jones force and PME direct force together for pressure.

The calculation formula of Lennard-Jones part is the same as operator LJForce(), and the PME direct part is within PME method.

Because there are a large number of interrelated inputs, Examples cannot be constructed from random data. For details, refer to the SPONGE page in the MindSpore documentation.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • cutoff (float32) – the square value of cutoff.

  • pme_beta (float32) – PME beta parameter, same as operator PMEReciprocalForce().

  • max_neighbor_numbers (int32) – the max neighbor numbers, default 800.

  • need_update (int32) – if need_update = 1, calculate the pressure, default 0.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • LJtype (Tensor) - The Lennard-Jones type of each atom. The data type is int32 and the shape is \((n,)\).

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\).

  • scaler (Tensor) - The scale factor. The data type is float32 and the shape is \((3,)\).

  • nl_numbers (Tensor) - The neighbor number of each atom. The data type is int32 and the shape is \((n,)\).

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\).

  • d_LJ_A (Tensor) - The Lennard-Jones A coefficient of each kind of atom pair. The number of atom pairs is q. The data type is float32 and the shape is \((q,)\).

  • d_LJ_B (Tensor) - The Lennard-Jones B coefficient of each kind of atom pair. The number of atom pairs is q. The data type is float32 and the shape is \((q,)\).

  • beta (Tensor) - The PME beta parameter to be updated in pressure calculation. The data type is float32 and the shape is \((1,)\).

Outputs:
  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • virial (Tensor) - The accumulated potential virial for each atom. The data type is float32 and the shape is \((n, )\).

  • atom_energy (Tensor) - The accumulated potential energy for each atom. The data type is float32 and the shape is \((n, )\).

Supported Platforms:

GPU

class tinyms.primitives.LRN(*args, **kwargs)[source]

Local Response Normalization.

\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]

where \(a_{c}\) is the value of the pixel corresponding to \(c\) in the feature map, \(n/2\) corresponds to depth_radius, \(k\) to bias, \(\alpha\) to alpha, and \(\beta\) to beta.

Parameters
  • depth_radius (int) – Half-width of the 1-D normalization window with the shape of 0-D. Default: 5.

  • bias (float) – An offset (usually positive to avoid dividing by 0). Default: 1.0.

  • alpha (float) – A scale factor, usually positive. Default: 1.0.

  • beta (float) – An exponent. Default: 0.5.

  • norm_region (str) – Specifies normalization region. Options: “ACROSS_CHANNELS”. Default: “ACROSS_CHANNELS”.

Inputs:
  • x (Tensor) - A 4D Tensor with float16 or float32 data type.

Outputs:

Tensor, with the same shape and data type as x.

Raises
Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> lrn = ops.LRN()
>>> output = lrn(x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
class tinyms.primitives.LSTM(*args, **kwargs)[source]

Performs the Long Short-Term Memory (LSTM) on the input.

For detailed information, please refer to nn.LSTM.

Parameters
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • num_layers (int) – Number of layers of stacked LSTM.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh.

  • bidirectional (bool) – Specifies whether it is a bidirectional LSTM.

  • dropout (float) – If not 0, append Dropout layer on the outputs of each LSTM layer except the last layer. The range of dropout is [0.0, 1.0].

Inputs:
  • input (Tensor) - Tensor of shape (seq_len, batch_size, input_size) or (batch_size, seq_len, input_size).

  • h (tuple) - Tensor of shape (num_directions * num_layers, batch_size, hidden_size).

  • c (tuple) - Tensor of shape (num_directions * num_layers, batch_size, hidden_size).

Outputs:

Tuple, a tuple containing (output, h_n, c_n, reserve, state).

  • output (Tensor) - Tensor of shape (seq_len, batch_size, num_directions * hidden_size).

  • h_n (Tensor) - Tensor of shape (num_directions * num_layers, batch_size, hidden_size).

  • c_n (Tensor) - Tensor of shape (num_directions * num_layers, batch_size, hidden_size).

  • reserve (Tensor) - Tensor of shape (r, 1).

  • state (Tensor) - Random number generator state and its shape is (s, 1).

Raises
  • TypeError – If input_size, hidden_size or num_layers is not an int.

  • TypeError – If has_bias or bidirectional is not a bool.

  • TypeError – If dropout is not a float.

  • ValueError – If dropout is not in range [0.0, 1.0].

Supported Platforms:

GPU CPU

Examples

>>> input_size = 10
>>> hidden_size = 2
>>> num_layers = 1
>>> seq_len = 5
>>> batch_size = 2
>>>
>>> net = ops.LSTM(input_size, hidden_size, num_layers, True, False, 0.0)
>>> input_tensor = Tensor(np.ones([seq_len, batch_size, input_size]).astype(np.float32))
>>> h0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> c0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> w = Tensor(np.ones([112, 1, 1]).astype(np.float32))
>>> output, hn, cn, _, _ = net(input_tensor, h0, c0, w)
>>> print(output)
[[[0.9640267  0.9640267 ]
  [0.9640267  0.9640267 ]]
 [[0.9950539  0.9950539 ]
  [0.9950539  0.9950539 ]]
 [[0.99932843 0.99932843]
  [0.99932843 0.99932843]]
 [[0.9999084  0.9999084 ]
  [0.9999084  0.9999084 ]]
 [[0.9999869  0.9999869 ]
  [0.9999869  0.9999869 ]]]
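
Note that the example above packs all LSTM weights and biases into the single flattened tensor w. For one unidirectional layer with has_bias=True, its length is presumably 4 * hidden_size * (input_size + hidden_size) + 2 * 4 * hidden_size = 4*2*(10 + 2) + 2*4*2 = 96 + 16 = 112, which matches the shape (112, 1, 1) used for w.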
class tinyms.primitives.LastCrdToDr(*args, **kwargs)[source]

Calculate the displacement vector of each constrained atom pair.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • constrain_pair_numbers (int32) – the number of constrain pairs m.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • quarter_cof (Tensor) - The 3-D scale factor. The data type is float32 and the shape is \((3,)\).

  • uint_dr_to_dr (Tensor) - The 3-D scale factor (x, y, z). The data type is int32 and the shape is \((3,)\).

  • atom_i_serials (Tensor) - The first atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • atom_j_serials (Tensor) - The second atom index of each constrained atom pair. The data type is int32 and the shape is \((m,)\).

  • constant_rs (Tensor) - The constrained distance of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

  • constrain_ks (Tensor) - The coefficient of each constrained atom pair. The data type is float32 and the shape is \((m,)\).

Outputs:
  • pair_dr (Tensor) - The displacement vector of each constrained atom pair. The data type is float32 and the shape is \((m, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.LayerNorm(*args, **kwargs)[source]

Applies the Layer Normalization to the input tensor.

This operator will normalize the input tensor on given axis. LayerNorm is described in the paper Layer Normalization.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • begin_norm_axis (int) – The begin axis of the input_x to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • begin_params_axis (int) – The begin axis of the parameter input (gamma, beta) to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • epsilon (float) – A value added to the denominator for numerical stability. Default: 1e-7.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\). The input of LayerNorm.

  • gamma (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter gamma as the scale on norm.

  • beta (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter beta as the offset on norm.

Outputs:

tuple[Tensor], tuple of 3 tensors, the normalized input and the updated parameters.

  • output_x (Tensor) - The normalized input, has the same type and shape as the input_x. The shape is \((N, C)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Raises
  • TypeError – If begin_norm_axis or begin_params_axis is not an int.

  • TypeError – If epsilon is not a float.

  • TypeError – If input_x, gamma or beta is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> gamma = Tensor(np.ones([3]), mindspore.float32)
>>> beta = Tensor(np.ones([3]), mindspore.float32)
>>> layer_norm = ops.LayerNorm()
>>> output, mean, variance = layer_norm(input_x, gamma, beta)
>>> print(output)
[[-0.2247448  1.         2.2247448]
 [-0.2247448  1.         2.2247448]]
>>> print(mean)
[[2.]
 [2.]]
>>> print(variance)
[[0.6666667]
 [0.6666667]]
class tinyms.primitives.Lerp(*args, **kwargs)[source]

Does a linear interpolation of two tensors start and end based on a float or tensor weight.

If weight is a tensor, the shapes of the three inputs need to be broadcastable; if weight is a float, the shapes of start and end need to be broadcastable.

\[output_{i} = start_{i} + weight_{i} * (end_{i} - start_{i})\]
Inputs:
  • start (Tensor) - The tensor with the starting points. Data type must be float16 or float32.

  • end (Tensor) - The tensor with the ending points. Data type must be float16 or float32.

  • weight (Union[float, Tensor]) – The weight for the interpolation formula. Must be a float or a scalar tensor with float16 or float32 data type.

Outputs:

Tensor, has the same type and shape as input start.

Raises
  • TypeError – If start or end is not a tensor.

  • TypeError – If weight is neither float nor tensor.

  • TypeError – If dtype of start or end is neither float16 nor float32.

  • TypeError – If dtype of weight is neither float16 nor float32 when it is a tensor.

  • TypeError – If start and end have different data types.

  • TypeError – If start, end and weight have different data types when weight is a tensor.

  • ValueError – If end could not be broadcast to a tensor with shape of start.

  • ValueError – If weight could not be broadcast to tensors with shapes of start and end when it is a tensor.

Supported Platforms:

Ascend

Examples

>>> start = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> end = Tensor(np.array([10., 10., 10., 10.]), mindspore.float32)
>>> lerp = ops.Lerp()
>>> output = lerp(start, end, 0.5)
>>> print(output)
[5.5 6. 6.5 7. ]
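
The weight can also be passed as a scalar tensor rather than a Python float; a minimal continuation of the example above (same start and end):

>>> output = lerp(start, end, Tensor(0.5, mindspore.float32))
>>> print(output)
[5.5 6. 6.5 7. ]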
class tinyms.primitives.Less(*args, **kwargs)[source]

Computes the boolean value of \(x < y\) element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}<y_{i} \\ & \text{False, if } x_{i}>=y_{i} \end{cases}\end{split}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less = ops.Less()
>>> output = less(x, y)
>>> print(output)
[False False True]
class tinyms.primitives.LessEqual(*args, **kwargs)[source]

Computes the boolean value of \(x <= y\) element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}<=y_{i} \\ & \text{False, if } x_{i}>y_{i} \end{cases}\end{split}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less_equal = ops.LessEqual()
>>> output = less_equal(x, y)
>>> print(output)
[ True False  True]
class tinyms.primitives.LinSpace(*args, **kwargs)[source]

Returns a Tensor of num evenly spaced values in the interval [start, stop] (including start and stop); the length of the output Tensor is num.

\[\begin{split}\begin{aligned} &step = (stop - start)/(num - 1)\\ &output = [start, start+step, start+2*step, ... , stop] \end{aligned}\end{split}\]
Inputs:
  • start (Tensor[float32]) - Start value of interval, With shape of 0-D.

  • stop (Tensor[float32]) - Last value of interval, With shape of 0-D.

  • num (int) - Number of ticks in the interval, inclusive of start and stop.

Outputs:

Tensor, has the same dtype as start, and the shape is \((num,)\).

Supported Platforms:

Ascend GPU

Examples

>>> linspace = ops.LinSpace()
>>> start = Tensor(1, mindspore.float32)
>>> stop = Tensor(10, mindspore.float32)
>>> num = 5
>>> output = linspace(start, stop, num)
>>> print(output)
[ 1.    3.25  5.5   7.75 10.  ]
class tinyms.primitives.Log(*args, **kwargs)[source]

Returns the natural logarithm of a tensor element-wise.

\[y_i = log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy is subject to change.

Inputs:
  • x (Tensor) - The input tensor. The value must be greater than 0. The shape is \((N, *)\), where \(*\) means any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log = ops.Log()
>>> output = log(x)
>>> print(output)
[0.        0.6931472 1.3862944]
class tinyms.primitives.Log1p(*args, **kwargs)[source]

Returns the natural logarithm of one plus the input tensor element-wise.

Inputs:
  • x (Tensor) - The input tensor, with float16 or float32 data type. The value must be greater than -1. The shape is \((N, *)\), where \(*\) means any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape as the x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log1p = ops.Log1p()
>>> output = log1p(x)
>>> print(output)
[0.6931472 1.0986123 1.609438 ]
class tinyms.primitives.LogSoftmax(*args, **kwargs)[source]

Log Softmax activation function.

Applies the Log Softmax function to the input tensor on the specified axis. Given a slice \(x\) along the specified axis, for each element \(x_i\) the Log Softmax function is shown as follows:

\[\text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),\]

where \(N\) is the length of the Tensor.

Parameters

axis (int) – The axis to perform the Log softmax operation. Default: -1.

Inputs:
  • logits (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the logits.

Raises
  • TypeError – If axis is not an int.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • ValueError – If axis is not in range [-len(logits.shape), len(logits.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> log_softmax = ops.LogSoftmax()
>>> output = log_softmax(logits)
>>> print(output)
[-4.4519143 -3.4519143 -2.4519143 -1.4519144 -0.4519144]
class tinyms.primitives.LogUniformCandidateSampler(*args, **kwargs)[source]

Generates random labels with a log-uniform distribution for sampled_candidates.

Randomly samples a tensor of sampled classes from the range of integers [0, range_max).

Parameters
  • num_true (int) – The number of target classes per training example. Default: 1.

  • num_sampled (int) – The number of classes to randomly sample. Default: 5.

  • unique (bool) – Determines whether to sample with rejection. If unique is True, all sampled classes in a batch are unique. Default: True.

  • range_max (int) – The number of possible classes. When unique is True, range_max must be greater than or equal to num_sampled. Default: 5.

  • seed (int) – Random seed, must be non-negative. Default: 0.

Inputs:
  • true_classes (Tensor) - The target classes. With data type of int64 and shape [batch_size, num_true].

Outputs:

Tuple of 3 Tensors.

  • sampled_candidates (Tensor) - A Tensor with shape (num_sampled,) and the same type as true_classes.

  • true_expected_count (Tensor) - A Tensor with the same shape as true_classes and type float32.

  • sampled_expected_count (Tensor) - A Tensor with the same shape as sampled_candidates and type float32.

Raises
  • TypeError – If neither num_true nor num_sampled is an int.

  • TypeError – If unique is not a bool.

  • TypeError – If neither range_max nor seed is an int.

  • TypeError – If true_classes is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> sampler = ops.LogUniformCandidateSampler(2, 5, True, 5)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1, 7], [0, 4], [3, 3]])))
>>> print(output1, output2, output3)
[3 2 0 4 1]
[[0.92312991 0.49336370]
 [0.99248987 0.65806371]
 [0.73553443 0.73553443]]
[0.73553443 0.82625800 0.99248987 0.65806371 0.92312991]
class tinyms.primitives.LogicalAnd(*args, **kwargs)[source]

Computes the “logical AND” of two tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

\[out_{i} = x_{i} \wedge y_{i}\]

Note

LogicalAnd supports broadcasting.

Inputs:
  • x (Union[Tensor, bool]) - The first input is a bool or a tensor whose data type is bool.

  • y (Union[Tensor, bool]) - The second input is a bool when the first input is a tensor or a tensor whose data type is bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_and = ops.LogicalAnd()
>>> output = logical_and(x, y)
>>> print(output)
[ True False False]
class tinyms.primitives.LogicalNot(*args, **kwargs)[source]

Computes the “logical NOT” of a tensor element-wise.

\[out_{i} = \neg x_{i}\]
Inputs:
  • x (Tensor) - The input tensor whose dtype is bool. \((N,*)\) where \(*\) means,any number of additional dimensions.

Outputs:

Tensor, the shape is the same as the x, and the dtype is bool.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> logical_not = ops.LogicalNot()
>>> output = logical_not(x)
>>> print(output)
[False  True False]
class tinyms.primitives.LogicalOr(*args, **kwargs)[source]

Computes the “logical OR” of two tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

\[out_{i} = x_{i} \vee y_{i}\]

Note

LogicalOr supports broadcasting.

Inputs:
  • x (Union[Tensor, bool]) - The first input is a bool or a tensor whose data type is bool.

  • y (Union[Tensor, bool]) - The second input is a bool when the first input is a tensor or a tensor whose data type is bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_or = ops.LogicalOr()
>>> output = logical_or(x, y)
>>> print(output)
[ True  True  True]
class tinyms.primitives.MDIterationGradientDescent(*args, **kwargs)[source]

Update the coordinate of each atom in the direction of potential for energy minimization.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • learning_rate (float32) – the update step length.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU
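
As a rough conceptual sketch only (not a call of this GPU operator), the update applied here is presumably the steepest-descent step along the force; the array names below are purely illustrative:

>>> import numpy as np
>>> crd = np.zeros((4, 3), np.float32)       # coordinates of 4 atoms
>>> frc = np.ones((4, 3), np.float32)        # force on each atom
>>> learning_rate = 1e-3
>>> crd = crd + learning_rate * frc          # move each atom along the force to lower the energy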

class tinyms.primitives.MDIterationLeapFrog(*args, **kwargs)[source]

One step of the classical leap-frog algorithm to solve the finite-difference Hamiltonian equations of motion for a given system.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • dt (float32) – the simulation time step.

Inputs:
  • sqrt_mass_inverse (Tensor) - The square root of the inverse value of the mass of each atom. The data type is float32 and the shape is \((n,)\).

  • vel (Tensor) - The velocity of each atom. The data type is float32 and the shape is \((n, 3)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • acc (Tensor) - The acceleration of each atom. The data type is float32 and the shape is \((n, 3)\).

  • inverse_mass (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU
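
A conceptual NumPy sketch of one leap-frog step, assuming the standard kick-drift form with the acceleration computed from the force and the inverse mass (this illustrates the scheme only and does not call the GPU operator):

>>> import numpy as np
>>> n, dt = 4, 0.001
>>> vel = np.zeros((n, 3), np.float32)
>>> crd = np.random.rand(n, 3).astype(np.float32)
>>> frc = np.random.rand(n, 3).astype(np.float32)
>>> inverse_mass = np.ones(n, np.float32)
>>> acc = inverse_mass[:, None] * frc        # a = F / m
>>> vel = vel + dt * acc                     # velocity kick
>>> crd = crd + dt * vel                     # coordinate drift
>>> print(crd.shape, vel.shape)
(4, 3) (4, 3)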

class tinyms.primitives.MDIterationLeapFrogLiujian(*args, **kwargs)[source]

One step of the classical leap-frog algorithm to solve the finite-difference Hamiltonian equations of motion for a given system, using Langevin dynamics with Liu’s thermostat scheme. Assume the number of atoms is n and the target control temperature is T.

Detailed iteration formula can be found in this paper: A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics. DOI: 10.1063/1.4991621.

Because there is a large amount of inputs and each of them are related, there is no way to construct Examples using random methods. For details, refer the webpage SPONGE in MindSpore.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • dt (float32) – time step for finite difference.

  • half_dt (float32) – half of time step for finite difference.

  • exp_gamma (float32) – parameter in Liu’s dynamic.

Inputs:
  • inverse_mass (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n)\).

  • sqrt_mass_inverse (Tensor) - The inverse square root value of effect mass in Liu’s dynamics of each atom. The data type is float32 and the shape is \((n,)\).

  • vel (Tensor) - The velocity of each atom. The data type is float32 and the shape is \((n, 3)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • acc (Tensor) - The acceleration of each atom. The data type is float32 and the shape is \((n, 3)\).

  • rand_state (Tensor) - Random state to generate random force. The data type is float32 and the shape is \((math.ceil(n * 3.0 / 4.0) * 16, )\).

  • rand_frc (Tensor) - The random forces. The data type is float32 and the shape is \((n, 3)\).

Outputs:
  • output (Tensor) - The output coordinates. The data type is float32, and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.MDIterationLeapFrogLiujianWithMaxVel(*args, **kwargs)[source]

One step of the classical leap-frog algorithm to solve the finite-difference Hamiltonian equations of motion for a given system, using Langevin dynamics with Liu’s thermostat scheme, but with a maximum velocity limit. Assume the number of atoms is n and the target control temperature is T.

Detailed iteration formula can be found in this paper: A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics. DOI: 10.1063/1.4991621.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • dt (float32) – time step for finite difference.

  • half_dt (float32) – half of time step for finite difference.

  • exp_gamma (float32) – parameter in Liu’s dynamic, exp(-gamma_ln * dt).

  • max_vel (float32) – the maximum velocity limit.

Inputs:
  • inverse_mass (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

  • sqrt_mass_inverse (Tensor) - The inverse sqrt of the mass in Liu’s dynamics of each atom. The data type is float32 and the shape is \((n,)\).

  • vel (Tensor) - The velocity of each atom. The data type is float32 and the shape is \((n, 3)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • acc (Tensor) - The acceleration of each atom. The data type is float32 and the shape is \((n, 3)\).

  • rand_state (Tensor) - Random state to generate random force. The data type is float32 and the shape is \((math.ceil(n * 3.0 / 4.0) * 16, )\).

  • rand_frc (Tensor) - The random forces. The data type is float32 and the shape is \((n, 3)\).

Outputs:
  • output (Tensor) - The output coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.MDIterationLeapFrogWithMaxVel(*args, **kwargs)[source]

Leap frog algorithm to solve the Hamiltonian equations of motion with a maximum velocity limit.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • dt (float32) – the simulation time step.

  • max_velocity (float32) – the maximum velocity limit.

Inputs:
  • vel (Tensor) - The velocity of each atom. The data type is float32 and the shape is \((n, 3)\).

  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • frc (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\).

  • acc (Tensor) - The acceleration of each atom. The data type is float32 and the shape is \((n, 3)\).

  • inverse_mass (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.MDIterationSetupRandState(*args, **kwargs)[source]

Compute the random state of the iteration.

Because there are a large number of interrelated inputs, Examples cannot be constructed from random data. For details, refer to the SPONGE page in the MindSpore documentation.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • seed (int32) – random seed.

Outputs:
  • output (Tensor) - The random state. The data type is float32 and the shape is \((ceil(n * 3 / 4),)\).

Supported Platforms:

GPU

class tinyms.primitives.MDTemperature(*args, **kwargs)[source]

Compute the MD temperature.

Because there are a large number of interrelated inputs, Examples cannot be constructed from random data. For details, refer to the SPONGE page in the MindSpore documentation.

Parameters
  • residue_numbers (int32) – the number of residues m.

  • atom_numbers (int32) – the number of atoms n.

Inputs:
  • start (Tensor) - The start atom index of each residue. The data type is int32 and the shape is \((m,)\).

  • end (Tensor) - The end atom index of each residue. The data type is int32 and the shape is \((m,)\).

  • atom_vel_f (Tensor) - The velocity of each atom. The data type is float32 and the shape is \((n, 3)\).

  • atom_mass (Tensor) - The mass of each atom. The data type is float32 and the shape is \((n,)\).

Outputs:
  • ek (Tensor) - The temperature of each atom. The data type is float32 and the shape is \((n,)\).

Supported Platforms:

GPU

class tinyms.primitives.MakeRefKey(*args, **kwargs)[source]

Makes a RefKey instance from a string. RefKey stores the name of a Parameter, can be passed through functions, and is used as the target of Assign.

Parameters

tag (str) – Parameter name to make the RefKey.

Inputs:

No inputs.

Outputs:

RefKeyType, made from the Parameter name.

Raises

TypeError – If tag is not a str.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Parameter, Tensor, nn
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.y = Parameter(Tensor(np.ones([2, 3]), mstype.int32), name="y")
...         self.make_ref_key = ops.MakeRefKey("y")
...
...     def construct(self, x):
...         key = self.make_ref_key()
...         ref = ops.make_ref(key, x, self.y)
...         return ref * x
...
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.int32)
>>> net = Net()
>>> output = net(x)
>>> print(output)
[[ 1  4  9]
 [16 25 36]]
class tinyms.primitives.MapCenterOfMass(*args, **kwargs)[source]

Map all atoms in the same residue to the same periodic box, scaling if necessary (usually under pressure control). Assume the system has n atoms.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters

residue_numbers (int32) – the number of residues m.

Inputs:
  • start (Tensor) - The start atom index of each residue. The data type is int32 and the shape is \((m,)\).

  • end (Tensor) - The end atom index of each residue. The data type is int32 and the shape is \((m,)\).

  • center_of_mass (Tensor) - The coordinate of centroid of each residue. The data type is float32 and the shape is \((m, 3)\).

  • box_length (Tensor) - The box length of the simulation box. The data type is float32 and the shape is \((3,)\).

  • no_wrap_crd (Tensor) - The coordinate of each atom before wrap. The data type is float32 and the shape is \((n, 3)\).

  • crd (Tensor) - The coordinate of each atom after wrap. The data type is float32 and the shape is \((n, 3)\).

  • scaler (Tensor) - The scaler of system. The data type is float32 and the shape is \((1,)\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.MaskedFill(*args, **kwargs)[source]

Fills elements of the input tensor with value where mask is True.

The shapes of input and mask need to be the same or broadcastable.

Inputs:
  • input (Tensor) - The source tensor whose data type is one of float16, float32, int8, int32.

  • mask (Tensor[bool]) - The boolean mask.

  • value (Union[float, Tensor]) – The value to fill in with, which only supports a 0-dimensional tensor or a float number.

Outputs:

Tensor, has the same type and shape as input.

Raises
  • TypeError – If input or mask is not a tensor.

  • TypeError – If value is neither float number nor tensor.

  • TypeError – If dtype of input or value is not one of float16, float32, int8, int32.

  • TypeError – If dtype of value is different from that of input.

  • TypeError – If dtype of mask is not bool.

  • ValueError – If the shapes of input and mask could not be broadcast.

Supported Platforms:

Ascend

Examples

>>> input = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> mask = Tensor(np.array([True, True, False, True]), mindspore.bool_)
>>> output = ops.MaskedFill()(input, mask, 0.5)
>>> print(output)
[0.5 0.5 3.  0.5]
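
The value can equally be given as a 0-dimensional tensor with the same dtype as input; continuing the example above:

>>> output = ops.MaskedFill()(input, mask, Tensor(0.5, mindspore.float32))
>>> print(output)
[0.5 0.5 3.  0.5]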
class tinyms.primitives.MaskedSelect(*args, **kwargs)[source]

Returns a new 1-D Tensor which indexes the input tensor according to the boolean mask. The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • mask (Tensor[bool]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

A 1-D Tensor, with the same type as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> mask = Tensor(np.array([1, 0, 1, 0]), mindspore.bool_)
>>> output = ops.MaskedSelect()(x, mask)
>>> print(output)
[1 3]
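
Because mask only needs to be broadcastable to x, a lower-rank mask can select the same positions in every row; a small sketch, assuming the broadcasting behaviour described above:

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int64)
>>> mask = Tensor(np.array([True, False]), mindspore.bool_)
>>> output = ops.MaskedSelect()(x, mask)
>>> print(output)
[1 3]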
class tinyms.primitives.MatMul(*args, **kwargs)[source]

Multiplies matrix x and matrix y.

\[(Output)_{i j}=\sum_{k=1}^{p} a_{i k} b_{k j}=a_{i 1} b_{1 j}+a_{i 2} b_{2 j}+\cdots+a_{i p} b_{p j}, p\in N\]

where \(i, j\) index the element in the i-th row and j-th column of the output.

Parameters
  • transpose_x (bool) – If true, x is transposed before multiplication. Default: False.

  • transpose_y (bool) – If true, y is transposed before multiplication. Default: False.

Inputs:
  • x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((N, C)\). If transpose_x is True, its shape must be \((N, C)\) after transpose.

  • y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((C, M)\). If transpose_y is True, its shape must be \((C, M)\) after transpose.

Outputs:

Tensor, the shape of the output tensor is \((N, M)\).

Raises
  • TypeError – If transpose_x or transpose_y is not a bool.

  • ValueError – If the column of matrix dimensions of x is not equal to the row of matrix dimensions of y.

  • ValueError – If length of shape of x or y is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> matmul = ops.MatMul()
>>> output = matmul(x, y)
>>> print(output)
[[3. 3. 3. 3.]]
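
A short sketch of the transpose_y flag: y is supplied with shape \((M, C)\) and transposed to \((C, M)\) before the product (ones-filled inputs, purely for illustration):

>>> x = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[4, 3]), mindspore.float32)
>>> matmul_t = ops.MatMul(transpose_y=True)
>>> output = matmul_t(x, y)
>>> print(output)
[[3. 3. 3. 3.]]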
check_shape_size(x1, x2)[source]

Check the shape size of inputs for MatMul.

class tinyms.primitives.MatrixInverse(*args, **kwargs)[source]

Returns the inverse of the input matrix. If the matrix is not invertible, an error may be reported or an unknown result may be returned.

Note

The parameter adjoint only supports False for now, because complex numbers are not supported at present.

Parameters

adjoint (bool) – An optional bool. Default: False.

Inputs:
  • x (Tensor) - A matrix to be calculated. The matrix must be at least two dimensions, and the last two dimensions must be the same size. types: float32, float64.

Outputs:

Tensor, has the same type and shape as input x.

Raises
  • TypeError – If adjoint is not a bool.

  • TypeError – If dtype of x is neither float32 nor float64.

  • ValueError – If the last two dimensions of x are not the same size.

  • ValueError – If the dimension of x is less than 2.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([[[-0.710504  , -1.1207525],
...                       [-1.7651395 , -1.7576632]],
...                      [[ 0.52412605,  1.9070215],
...                       [ 1.3384849 ,  1.4274558]]]), mindspore.float32)
>>> matrix_inverse = ops.MatrixInverse(adjoint=False)
>>> output = matrix_inverse(x)
>>> print(output)
[[[ 2.4095483  -1.536419  ]
  [-2.4197974   0.97401696]]
 [[-0.79111797  1.0569006 ]
  [ 0.74180895 -0.2904787 ]]]
class tinyms.primitives.MaxPool(*args, **kwargs)[source]

Max pooling operation.

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises
  • TypeError – If kernel_size or strides is neither int nor tuple.

  • ValueError – If pad_mode is neither ‘valid’ nor ‘same’ with not case sensitive.

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If length of shape of input is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_op = ops.MaxPool(pad_mode="VALID", kernel_size=2, strides=1)
>>> output = maxpool_op(x)
>>> print(output)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
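
With pad_mode="same" and stride 1 the spatial size is preserved; a brief continuation of the example above, showing only the output shape:

>>> maxpool_same = ops.MaxPool(pad_mode="SAME", kernel_size=2, strides=1)
>>> output = maxpool_same(x)
>>> print(output.shape)
(1, 3, 3, 4)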
class tinyms.primitives.MaxPool3D(*args, **kwargs)[source]

3D max pooling operation.

Applies a 3D max pooling over an input Tensor which can be regarded as a composition of 3D planes.

Typically the input is of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

    • pad: Implicit paddings on both sides of the input in depth, height, width. The number of “pad” will be padded to the input Tensor borders. “pad” must be greater than or equal to 0.

  • pad_list (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Only effective in “pad” mode. When “pad_mode” is “pad” and “ceil_mode” is “None”, “ceil_mode” will be set as “False”. Default: None.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Data type must be float16 or float32.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

Raises
  • TypeError – If kernel_size or strides is neither an int nor a tuple.

  • TypeError – If pad_mode or data_format is not a string.

  • ValueError – If numbers in kernel_size or strides are not positive.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If pad_mode is ‘same’ or ‘valid’ and ceil_mode is not None.

  • ValueError – If kernel_size or strides is a tuple whose length is not equal to 3.

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float32)
>>> max_pool3d = ops.MaxPool3D(kernel_size=2, strides=1, pad_mode="valid")
>>> output = max_pool3d(x)
>>> print(output)
[[[[[10. 11.]]]
  [[[22. 23.]]]]]
class tinyms.primitives.MaxPoolWithArgmax(*args, **kwargs)[source]

Performs max pooling on the input Tensor and returns both max values and indices.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\). Data type must be float16 or float32.

Outputs:

Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N, C_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • mask (Tensor) - Max values’ index represented by the mask. Data type is int32.

Raises
  • TypeError – If the data type of x is neither float16 nor float32.

  • TypeError – If kernel_size or strides is neither an int nor a tuple.

  • TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_arg_op = ops.MaxPoolWithArgmax(pad_mode="VALID", kernel_size=2, strides=1)
>>> output_tensor, argmax = maxpool_arg_op(x)
>>> print(output_tensor)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
class tinyms.primitives.Maximum(*args, **kwargs)[source]

Computes the maximum of input tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> maximum = ops.Maximum()
>>> output = maximum(x, y)
>>> print(output)
[4. 5. 6.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = maximum(x, y)
>>> print(output.dtype)
Float32
class tinyms.primitives.Merge(*args, **kwargs)[source]

Merges all input data to one.

One and only one of the inputs must be selected as the output.

Inputs:
  • inputs (Union(Tuple, List)) - The data to be merged. All tuple elements must have the same data type.

Outputs:

tuple. Output is tuple(data, output_index). The data has the same shape as the elements of inputs.

Raises

TypeError – If inputs is neither a tuple nor a list.

Examples

>>> merge = ops.Merge()
>>> input_x = Tensor(np.linspace(0, 8, 8).reshape(2, 4), mindspore.float32)
>>> input_y = Tensor(np.random.randint(-4, 4, (2, 4)), mindspore.float32)
>>> result = merge((input_x, input_y))
class tinyms.primitives.Meshgrid(*args, **kwargs)[source]

Generates coordinate matrices from given coordinate tensors.

Given N one-dimensional coordinate tensors, returns a tuple outputs of N N-D coordinate tensors for evaluating expressions on an N-D grid.

Parameters

indexing (str) – Either ‘xy’ or ‘ij’. Default: ‘xy’. When the indexing argument is set to ‘xy’ (the default), the broadcasting instructions for the first two dimensions are swapped.

Inputs:
  • input (Union[tuple]) - A Tuple of N 1-D Tensor objects. The length of input should be greater than 1. The data type is Number.

Outputs:

Tensors, A Tuple of N N-D Tensor objects. The data type is the same with the Inputs.

Raises
  • TypeError – If indexing is not a str or input is not a tuple.

  • ValueError – If indexing is neither ‘xy’ nor ‘ij’.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
>>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
>>> inputs = (x, y, z)
>>> meshgrid = ops.Meshgrid(indexing="xy")
>>> output = meshgrid(inputs)
>>> print(output)
(Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5]],
  [[6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6]],
  [[7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]]]))
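
With indexing="ij" the first two dimensions are not swapped, so each output directly follows the input lengths; a short continuation of the example above, showing only the shape:

>>> meshgrid_ij = ops.Meshgrid(indexing="ij")
>>> output_ij = meshgrid_ij(inputs)
>>> print(output_ij[0].shape)
(4, 3, 5)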
class tinyms.primitives.Minimum(*args, **kwargs)[source]

Computes the minimum of input tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> minimum = ops.Minimum()
>>> output = minimum(x, y)
>>> print(output)
[1. 2. 3.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = minimum(x, y)
>>> print(output.dtype)
Float32
class tinyms.primitives.MirrorPad(*args, **kwargs)[source]

Pads the input tensor according to the paddings and mode.

Parameters

mode (str) – Specifies the padding mode. The optional values are “REFLECT” and “SYMMETRIC”. Default: “REFLECT”.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • paddings (Tensor) - The paddings tensor. The value of paddings is a matrix(list), and its shape is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in the D th dimension, paddings[D, 0] indicates how many sizes to be extended ahead of the input tensor in the D th dimension, and paddings[D, 1] indicates how many sizes to be extended behind the input tensor in the D th dimension.

Outputs:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it uses a way of symmetrical copying through the axis of symmetry to fill in. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]].

  • If mode is “SYMMETRIC”, the filling method is similar to the “REFLECT”. It is also copied according to the symmetry axis, except that it includes the symmetry axis. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]].

Raises
  • TypeError – If input_x or paddings is not a Tensor.

  • TypeError – If mode is not a str.

  • ValueError – If paddings.size is not equal to 2 * len(input_x).

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: mode="REFLECT"
>>> class Net(nn.Cell):
...    def __init__(self, mode):
...        super(Net, self).__init__()
...        self.pad = ops.MirrorPad(mode=mode)
...        self.paddings = Tensor([[1, 1], [2, 2]])
...    def construct(self, input_x):
...        return self.pad(input_x, self.paddings)
...
>>> input_x = Tensor([[1,2,3], [4,5,6], [7,8,9]])
>>> pad = Net("REFLECT")
>>> output = pad(input_x)
>>> print(output)
[[6 5 4 5 6 5 4]
 [3 2 1 2 3 2 1]
 [6 5 4 5 6 5 4]
 [9 8 7 8 9 8 7]
 [6 5 4 5 6 5 4]]
>>> # case2: mode="SYMMETRIC"
>>> pad = Net("SYMMETRIC")
>>> output = pad(input_x)
>>> print(output)
[[2 1 1 2 3 3 2]
 [2 1 1 2 3 3 2]
 [5 4 4 5 6 6 5]
 [8 7 7 8 9 9 8]
 [8 7 7 8 9 9 8]]
class tinyms.primitives.Mish(*args, **kwargs)[source]

Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.

The function is shown as follows:

\[\text{output} = x * \tanh(\log(1 + \exp(x)))\]

See more details in A Self Regularized Non-Monotonic Neural Activation Function.

Inputs:
  • x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the x.

Supported Platforms:

Ascend

Raises

TypeError – If dtype of x is neither float16 nor float32.

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> mish = ops.Mish()
>>> output = mish(x)
>>> print(output)
[[-0.30273438  3.9974136 -0.015625]
 [ 1.9439697  -0.02929688 8.999999]]
class tinyms.primitives.Mod(*args, **kwargs)[source]

Computes the remainder of dividing the first input tensor by the second input tensor element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} \text{ \% } y_{i}\]

Warning

  • The input data does not support 0 as the divisor.

  • When the number of elements in the input exceeds 2048, the accuracy of the operator cannot guarantee the requirement of double thousandths in the mini form.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as (D1, D2, ..., Dn), then D1*D2*...*Dn <= 1000000 and n <= 8.

Inputs:
  • x (Union[Tensor, Number]) - The first input is a number or a tensor whose data type is number.

  • y (Union[Tensor, Number]) - When the first input is a tensor, The second input could be a number or a tensor whose data type is number. When the first input is a number, the second input must be a tensor whose data type is number.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

ValueError – When x and y are not the same dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> mod = ops.Mod()
>>> output = mod(x, y)
>>> print(output)
[-1.  1.  0.]
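>>> # A hedged cross-check (assumption: Mod follows truncated, fmod-like semantics,
>>> # keeping the sign of the dividend, which matches the result above):
>>> print(np.fmod(np.array([-4.0, 5.0, 6.0]), np.array([3.0, 2.0, 3.0])))
[-1.  1.  0.]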
class tinyms.primitives.Mul(*args, **kwargs)[source]

Multiplies two tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} * y_{i}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> mul = ops.Mul()
>>> output = mul(x, y)
>>> print(output)
[ 4. 10. 18.]
class tinyms.primitives.MulNoNan(*args, **kwargs)[source]

Computes x * y element-wise. If y is zero, the result is 0 no matter what x is; likewise, if x is zero, the result is 0 no matter what y is.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcasted. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

The shapes of x and y should be the same or can be broadcasted.

Inputs:
  • x (Union[Tensor]) - The first input is a tensor whose data type is one of float16, float32, int32, int64 currently, or a scalar.

  • y (Union[Tensor]) - The second input is a tensor whose data type is one of float16, float32, int32, int64 currently, or a scalar.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the one with higher precision among the two inputs.

Supported Platforms:

Ascend

Raises

TypeError – If x or y is a bool Tensor.

Examples

>>> # case 1 : same data type and shape of two inputs, there are some 0 in y.
>>> x = Tensor(np.array([[-1.0, 6.0, np.inf], [np.nan, -7.0, 4.0]]), mindspore.float32)
>>> y = Tensor(np.array([[-1.0, 4.0, 0], [0, -3.0, 1.0]]), mindspore.float32)
>>> mul_no_nan = ops.MulNoNan()
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 1. 24. 0.]
[ 0. 21. 4.]]
>>> # case 2 : the shape of two inputs is same, there are some 0 in x, y.
>>> x = Tensor(np.array([[-1.0, 6.0, 0], [0, np.nan, 4.0]]), mindspore.int32)
>>> y = Tensor(np.array([[-1.0, 4.0, np.inf], [np.nan, 0, 1.0]]), mindspore.float32)
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 1. 24. 0.]
 [ 0.  0. 4.]]
>>> print(output.dtype)
Float32
>>> # case 3 : the y is a scalar.
>>> x = Tensor(np.array([[-1.0, 6.0, 0], [0, np.nan, 4.0]]), mindspore.float32)
>>> y = Tensor(0, mindspore.float32)
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 0. 0. 0.]
 [ 0. 0. 0.]]
class tinyms.primitives.Multinomial(*args, **kwargs)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of tensor input.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • x (Tensor[float32]) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions.

  • num_samples (int32) - number of samples to draw.

Outputs:

Tensor with the same rows as x, each row has num_samples sampled indices.

Raises
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If input is not a Tensor whose dtype is float32.

  • TypeError – If dtype of num_samples is not int32.

Supported Platforms:

GPU

Examples

>>> x = Tensor([0., 9., 4., 0.], mstype.float32)
>>> multinomial = ops.Multinomial(seed=10)
>>> output = multinomial(x, 2)
>>> print(output)
[2 1]
class tinyms.primitives.NLLLoss(*args, **kwargs)[source]

Gets the negative log likelihood loss between logits and labels.

The nll loss with reduction=none can be described as:

\[\ell(x, t)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=-w_{t_{n}} x_{n, t_{n}}, \quad w_{c}=\text { weight }[c] \cdot 1\]

where \(x\) is the logits, \(t\) is the labels, \(w\) is the weight, \(N\) is the batch size, \(c\) belonging to \([0, C-1]\) is the class index, and \(C\) is the number of classes.

If reduction is not ‘none’ (default ‘mean’), then

\[\begin{split}\ell(x, t)=\left\{\begin{array}{ll} \sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{t n}} l_{n}, & \text { if reduction }=\text { 'mean'; } \\ \sum_{n=1}^{N} l_{n}, & \text { if reduction }=\text { 'sum' } \end{array}\right.\end{split}\]
Parameters

reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’, Default: “mean”.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type only support float32 or float16.

  • labels (Tensor) - Ground truth labels, with shape \((N,)\). Data type only support int32.

  • weight (Tensor) - The rescaling weight to each class, with shape \((C,)\) and data type only support float32 or float16.

Outputs:

Tuple of 2 tensors composed with loss and total_weight.

  • loss (Tensor) - When reduction is ‘none’ and logits is a 2D tensor, the loss shape is \((N,)\). Otherwise, the loss is a scalar. The data type is the same as the input’s.

  • total_weight (Tensor) - The total_weight is a scalar. The data type is the same as weight’s.

Raises
  • TypeError – If dtype of logits or weight is neither float16 nor float32, or if dtype of labels is not int32.

  • ValueError – If logits is not a one- or two-dimensional tensor, or labels and weight are not one-dimensional tensors. When logits is a two-dimensional tensor, the first dimension of logits must be equal to the length of labels, and the second dimension of logits must be equal to the length of weight. When logits is a one-dimensional tensor, the dimensions of logits, labels and weight must be equal to each other.

Supported Platforms:

Ascend GPU

Examples

>>> logits = Tensor(np.array([[0.5488135, 0.71518934],
...                           [0.60276335, 0.5448832],
...                           [0.4236548, 0.6458941]]).astype(np.float32))
>>> labels = Tensor(np.array([0, 0, 0]).astype(np.int32))
>>> weight = Tensor(np.array([0.3834415, 0.79172504]).astype(np.float32))
>>> nll_loss = ops.NLLLoss(reduction="mean")
>>> loss, weight = nll_loss(logits, labels, weight)
>>> print(loss)
-0.52507716
>>> print(weight)
1.1503246
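
As a worked check of the ‘mean’ reduction above: all labels are class 0, so the loss is -(0.5488135 + 0.60276335 + 0.4236548) / 3 ≈ -0.5250772 and total_weight is 3 * 0.3834415 ≈ 1.1503245, matching (up to float32 rounding) the printed outputs.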
class tinyms.primitives.NMSWithMask(*args, **kwargs)[source]

When an object detection problem is performed in the computer vision field, the detection algorithm generates a plurality of bounding boxes. This operator selects some bounding boxes in descending order of score (descending order is not supported on the Ascend platform currently), uses the box with the highest score to calculate the overlap between the other boxes and the current box, and deletes boxes based on a certain threshold (IOU). The IOU is as follows,

\[\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\]

Warning

Only supports up to 2864 input boxes at one time.

Parameters

iou_threshold (float) – Specifies the threshold of overlap boxes with respect to IOU. Default: 0.5.

Inputs:
  • bboxes (Tensor) - The shape of tensor is \((N, 5)\). Input bounding boxes. N is the number of input bounding boxes. Every bounding box contains 5 values, the first 4 values are the coordinates(x0, y0, x1, y1) of bounding box which represents the point of top-left and bottom-right, and the last value is the score of this bounding box. The data type must be float16 or float32.

Outputs:

tuple[Tensor], tuple of three tensors, they are selected_boxes, selected_idx and selected_mask.

  • selected_boxes (Tensor) - The shape of tensor is \((N, 5)\). The list of bounding boxes after non-max suppression calculation.

  • selected_idx (Tensor) - The shape of tensor is \((N,)\). The indexes list of valid input bounding boxes.

  • selected_mask (Tensor) - The shape of tensor is \((N,)\). A mask list of valid output bounding boxes.

Raises

ValueError – If the iou_threshold is not a float number, or if the first dimension of input Tensor is less than or equal to 0, or if the data type of the input Tensor is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bbox = np.array([[100.0, 100.0, 50.0, 68.0, 0.63], [150.0, 75.0, 165.0, 115.0, 0.55],
...                  [12.0, 190.0, 288.0, 200.0, 0.9], [28.0, 130.0, 106.0, 172.0, 0.3]])
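>>> # Assumed layout of the raw boxes: (x0, y0, width, height, score); the next two
>>> # lines convert them to the (x0, y0, x1, y1, score) corner format described above.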
>>> bbox[:, 2] += bbox[:, 0]
>>> bbox[:, 3] += bbox[:, 1]
>>> inputs = Tensor(bbox, mindspore.float32)
>>> nms = ops.NMSWithMask(0.1)
>>> output_boxes, indices, mask = nms(inputs)
>>> indices_np = indices.asnumpy()
>>> print(indices_np[mask.asnumpy()])
[0 1 2]
class tinyms.primitives.NPUAllocFloatStatus(*args, **kwargs)[source]

Allocates a flag to store the overflow status.

The flag is a tensor whose shape is (8,) and data type is mindspore.dtype.float32.

Note

Examples: see NPUGetFloatStatus.

Outputs:

Tensor, has the shape of (8,).

Supported Platforms:

Ascend

Examples

>>> alloc_status = ops.NPUAllocFloatStatus()
>>> output = alloc_status()
>>> print(output)
[0. 0. 0. 0. 0. 0. 0. 0.]
class tinyms.primitives.NPUClearFloatStatus(*args, **kwargs)[source]

Clears the flag which stores the overflow status.

Note

The flag is in the register on the Ascend device. It will be reset and can not be reused again after the NPUClearFloatStatus is called. In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator, you need to ensure that NPUClearFloatStatus and your compute have been executed. We use Depend to ensure the execution order.

Examples: see NPUGetFloatStatus.

Inputs:
  • x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape as x. All the elements in the tensor will be zero.

Supported Platforms:

Ascend

Examples

>>> self.alloc_status = ops.NPUAllocFloatStatus()
>>> self.get_status = ops.NPUGetFloatStatus()
>>> self.clear_status = ops.NPUClearFloatStatus()
>>> init = self.alloc_status()
>>> init = F.Depend(init, input)  # Ensure clear_status after input
>>> clear_status = self.clear_status(init)
>>> input = F.Depend(input, clear_status)  # Ensure your compute after clear_status
>>> output = Compute(input)
>>> init = F.Depend(init, output)
>>> flag = self.get_status(init)  # Ensure get_status after your compute
>>> self.clear_status(init)
>>> print(init)
[0. 0. 0. 0. 0. 0. 0. 0.]
class tinyms.primitives.NPUGetFloatStatus(*args, **kwargs)[source]

Updates the flag which is the output tensor of NPUAllocFloatStatus with the latest overflow status.

The flag is a tensor whose shape is (8,) and data type is mindspore.dtype.float32. If the sum of the flag equals 0, no overflow has happened. If the sum of the flag is greater than 0, an overflow has happened. In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator, you need to ensure that NPUClearFloatStatus and your compute have been executed. We use Depend to ensure the execution order.

Inputs:
  • x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32. The shape is \((N, *)\), where \(*\) means any number of additional dimensions, and its rank should be less than 8.

Outputs:

Tensor, has the same shape as x. All the elements in the tensor will be zero.

Raises
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> self.alloc_status = ops.NPUAllocFloatStatus()
>>> self.get_status = ops.NPUGetFloatStatus()
>>> self.clear_status = ops.NPUClearFloatStatus()
>>> init = self.alloc_status()
>>> init = F.Depend(init, input)  # Ensure clear_status after input
>>> clear_status = self.clear_status(init)
>>> input = F.Depend(input, clear_status)  # Ensure your compute after clear_status
>>> output = Compute(input)
>>> init = F.Depend(init, output)
>>> flag = self.get_status(init)  # Ensure get_status after your compute
>>> self.clear_status(init)
>>> print(init)
[0. 0. 0. 0. 0. 0. 0. 0.]
class tinyms.primitives.Neg(*args, **kwargs)[source]

Returns a tensor with negative values of the input tensor element-wise.

\[out_{i} = - x_{i}\]
Inputs:
  • x (Tensor) - The input tensor whose dtype is number. The shape is \((N, *)\), where \(*\) means any number of additional dimensions, and its rank should be less than 8.

Outputs:

Tensor, has the same shape and dtype as input.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> neg = ops.Neg()
>>> x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = neg(x)
>>> print(output)
[-1.  -2.   1.  -2.   0.   3.5]
class tinyms.primitives.NeighborListRefresh(*args, **kwargs)[source]

Update (or construct if first time) the Verlet neighbor list for the calculation of short-ranged force.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • grid_numbers (int32) – the total number of grids divided G.

  • atom_numbers (int32) – the number of atoms n.

  • not_first_time (int32) – whether to construct the neighbor list first time or not.

  • nxy (int32) – the total number of grids divided in xy plane.

  • excluded_atom_numbers (int32) – the total atom numbers in the excluded list E.

  • cutoff_square (float32) – the cutoff square distance for short-range force calculation.

  • half_skin_square (float32) – the maximum square value of the distance atom allowed to move between two updates.

  • cutoff_with_skin (float32) – cutoff + skin, indicates the radius of the neighbor list for each atom.

  • half_cutoff_with_skin (float32) – cutoff_with_skin/2.

  • cutoff_with_skin_square (float32) – the square value of cutoff_with_skin.

  • refresh_interval (int32) – the number of iteration steps between two updates of neighbor list. Default: 20.

  • cutoff (float32) – the cutoff distance for short-range force calculation. Default: 10.0.

  • skin (float32) – the maximum value of the distance atom allowed to move. Default: 2.0.

  • max_atom_in_grid_numbers (int32) – the maximum number of atoms in one grid k. Default: 64.

  • max_neighbor_numbers (int32) – The maximum number of neighbors m. Default: 800.

  • forced_update (int32) – the flag that decides whether to force an update. Default: 0.

  • forced_check (int32) – the flag that decides whether to force an check. Default: 0.

Inputs:
  • atom_numbers_in_grid_bucket (Tensor) - The number of atoms in each grid bucket. The data type is int32 and the shape is \((G,)\).

  • bucket (Tensor) - The atom indices in each grid bucket. The data type is int32 and the shape is \((G, k)\).

  • crd (Tensor) - The coordinates of each atom. The data type is float32 and the shape is \((n, 3)\).

  • box_length (Tensor) - The box length of the simulation box. The data type is float32 and the shape is \((3,)\).

  • grid_n (Tensor) - The number of grids divided of 3 dimensions of the simulation box. The data type is int32 and the shape is \((3,)\).

  • grid_length_inverse (Tensor) - The inverse value of grid length. The data type is float32 and the shape is \((3,)\).

  • atom_in_grid_serial (Tensor) - The grid index for each atom. The data type is int32 and the shape is \((n,)\).

  • old_crd (Tensor) - The coordinates before update of each atom. The data type is float32 and the shape is \((n, 3)\).

  • crd_to_uint_crd_cof (Tensor) - The scale factor between the unsigned int coordinate and the real one. The data type is float32 and the shape is \((3,)\).

  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is unsigned int32 and the shape is \((n, 3)\).

  • gpointer (Tensor) - The nearest neighbor grids (including self) of each grid. The data type is int32 and the shape is \((G, 125)\).

  • nl_atom_numbers (Tensor) - The number of atoms in neighbor list of each atom. The data type is int32 and the shape is \((n,)\).

  • nl_atom_serial (Tensor) - The indices of atoms in neighbor list of each atom. The data type is int32 and the shape is \((n, m)\).

  • uint_dr_to_dr_cof (Tensor) - The scale factor. The data type is float32 and the shape is \((3,)\).

  • excluded_list_start (Tensor) - The start excluded index in excluded list for each atom. The data type is int32 and the shape is \((n,)\).

  • excluded_list (Tensor) - The contiguous join of excluded list of each atom. The data type is int32 and the shape is \((E,)\).

  • excluded_numbers (Tensor) - The number of atom excluded in excluded list for each atom. The data type is int32 and the shape is \((n,)\).

  • need_refresh_flag (Tensor) - Whether the neighbor list of each atom need update or not. The data type is int32 and the shape is \((1,)\).

  • refresh_count (Union[Tensor, Scalar]) - Count how many iteration steps have passed since last update. The data type is int32 and the shape is \((1,)\) or \(()\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.NeighborListUpdate(*args, **kwargs)[source]

Update (or construct if first time) the Verlet neighbor list for the calculation of short-ranged force. Assume the number of atoms is n, the number of grids divided is G, the maximum number of atoms in one grid is k, the maximum number of atoms in a single atom’s neighbor list is m, and the total number of atoms in the excluded list is E.

Parameters
  • grid_numbers (int32) – the total number of grids divided.

  • not_first_time (int32) – whether to construct the neighbor list first time or not.

  • nxy (int32) – the total number of grids divided in xy plane.

  • excluded_atom_numbers (int32) – the total atom numbers in the excluded list.

  • cutoff (float32) – the cutoff distance for short-range force calculation.

  • skin (float32) – the overflow value of cutoff to maintain a neighbor list.

  • cutoff_square (float32) – the suqare value of cutoff.

  • half_skin_square (float32) – skin*skin/4, indicates the maximum square value of the distance atom allowed to move between two updates.

  • cutoff_with_skin (float32) – cutoff + skin, indicates the radius of the neighbor list for each atom.

  • half_cutoff_with_skin (float32) – cutoff_with_skin/2.

  • cutoff_with_skin_square (float32) – the square value of cutoff_with_skin.

  • refresh_interval (int32) – the number of iteration steps between two updates of neighbor list.

  • max_atom_in_grid_numbers (int32) – the maximum number of atoms in one grid k.

Inputs:
  • atom_numbers_in_grid_bucket (Tensor) - The number of atoms in each grid bucket. The data type is int32 and the shape is \((G,)\).

  • bucket (Tensor) - The atom indices in each grid bucket. The data type is int32 and the shape is \((G, k)\).

  • crd (Tensor) - The coordinates of each atom. The data type is float32 and the shape is \((n, 3)\).

  • box_length (Tensor) - The box length of the simulation box. The data type is float32 and the shape is \((3,)\).

  • grid_N (Tensor) - The number of grids divided of 3 dimensions of the simulation box. The data type is int32 and the shape is \((3,)\).

  • grid_length_inverse (Tensor) - The inverse value of grid length. The data type is float32 and the shape is \((3,)\).

  • atom_in_grid_serial (Tensor) - The grid index for each atom. The data type is int32 and the shape is \((n,)\).

  • old_crd (Tensor) - The coordinates before update of each atom. The data type is float32 and the shape is \((n, 3)\).

  • crd_to_uint_crd_cof (Tensor) - The scale factor between the unsigned int coordinate and the real one. The data type is float32 and the shape is \((3,)\).

  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is unsigned int32 and the shape is \((n, 3)\).

  • gpointer (Tensor) - The nearest neighbor grids (including self) of each grid. The data type is int32 and the shape is \((G, 125)\).

  • nl_atom_numbers (Tensor) - The number of atoms in neighbor list of each atom. The data type is int32 and the shape is \((n,)\).

  • nl_atom_serial (Tensor) - The indices of atoms in neighbor list of each atom. The data type is int32 and the shape is \((n, m)\).

  • uint_dr_to_dr_cof (Tensor) - The scale factor. The data type is float32 and the shape is \((3,)\).

  • excluded_list_start (Tensor) - The start excluded index in excluded list for each atom. The data type is int32 and the shape is \((n,)\).

  • excluded_list (Tensor) - The contiguous join of excluded list of each atom. The data type is int32 and the shape is \((E,)\).

  • excluded_numbers (Tensor) - The number of atom excluded in excluded list for each atom. The data type is int32 and the shape is \((n,)\).

  • need_refresh_flag (Tensor) - Whether the neighbor list of each atom need update or not. The data type is int32 and the shape is \((1,)\).

  • refresh_count (Tensor) - Count how many iteration steps have passed since last update. The data type is int32 and the shape is \((1,)\) or \(()\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.NoRepeatNGram(*args, **kwargs)[source]

Updates log_probs with repeat n-grams.

During beam search, if consecutive ngram_size words already exist in the generated word sequence, repeating those consecutive ngram_size words is avoided during subsequent prediction. For example, when ngram_size is 3 and the generated word sequence is [1, 2, 3, 2, 3], the next predicted word will not be 2, and its value in log_probs will be replaced with -FLOAT_MAX, so that the 3 consecutive words [2, 3, 2] do not appear twice in the word sequence.

Parameters

ngram_size (int) – Size of n-grams, must be greater than 0. Default: 1.

Inputs:
  • state_seq (Tensor) - A 3-D tensor with shape: (batch_size, beam_width, m).

  • log_probs (Tensor) - A 3-D tensor with shape: (batch_size, beam_width, vocab_size). The value of log_probs will be replaced with -FLOAT_MAX when n-grams repeated.

Outputs:
  • log_probs (Tensor) - The output Tensor with same shape and type as original log_probs.

Raises
  • TypeError – If ngram_size is not an int.

  • TypeError – If state_seq or log_probs is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> no_repeat_ngram = ops.NoRepeatNGram(ngram_size=3)
>>> state_seq = Tensor([[[1, 2, 1, 2, 5, 1, 2],
...                      [9, 3, 9, 5, 4, 1, 5]],
...                     [[4, 8, 6, 4, 5, 6, 4],
...                      [4, 8, 8, 4, 3, 4, 8]]], dtype=mindspore.int32)
>>> log_probs = Tensor([[[0.7, 0.8, 0.6, 0.9, 0.2, 0.8, 0.4, 0.6, 0.2, 0.7],
...                      [0.4, 0.5, 0.6, 0.7, 0.8, 0.1, 0.9, 0.8, 0.7, 0.1]],
...                     [[0.9, 0.7, 0.6, 0.3, 0.5, 0.3, 0.5, 0.4, 0.8, 0.6],
...                      [0.5, 0.8, 0.8, 0.7, 0.7, 0.8, 0.2, 0.7, 0.9, 0.7]]], dtype=mindspore.float32)
>>> output = no_repeat_ngram(state_seq, log_probs)
>>> print(output)
[[[ 6.9999999e-01 -3.4028235e+38  6.0000002e-01  8.9999998e-01
    2.0000000e-01 -3.4028235e+38  4.0000001e-01  6.0000002e-01
    2.0000000e-01  6.9999999e-01]
  [ 4.0000001e-01  5.0000000e-01  6.0000002e-01  6.9999999e-01
    8.0000001e-01  1.0000000e-01  8.9999998e-01  8.0000001e-01
    6.9999999e-01  1.0000000e-01]]
 [[ 8.9999998e-01  6.9999999e-01  6.0000002e-01  3.0000001e-01
    5.0000000e-01 -3.4028235e+38  5.0000000e-01  4.0000001e-01
    8.0000001e-01  6.0000002e-01]
  [ 5.0000000e-01  8.0000001e-01  8.0000001e-01  6.9999999e-01
    6.9999999e-01  8.0000001e-01  2.0000000e-01  6.9999999e-01
   -3.4028235e+38  6.9999999e-01]]]
class tinyms.primitives.NotEqual(*args, **kwargs)[source]

Computes the non-equivalence of two tensors element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i} \ne y_{i} \\ & \text{False, if } x_{i} = y_{i} \end{cases}\end{split}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting,and the data type is bool.

Raises
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(x, 2.0)
>>> print(output)
[ True False  True]
>>>
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(x, y)
>>> print(output)
[False False  True]
class tinyms.primitives.OneHot(*args, **kwargs)[source]

Computes a one-hot tensor.

Makes a new tensor, whose locations represented by indices in indices take value on_value, while all other locations take value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters

axis (int) – Position to insert the value. e.g. If shape of indices is \((N, C)\), and axis is -1, the output shape will be \((N, C, D)\), If axis is 0, the output shape will be \((D, N, C)\). Default: -1.

Inputs:
  • indices (Tensor) - A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\). Data type must be int32 or int64.

  • depth (int) - A scalar defining the depth of the one hot dimension.

  • on_value (Tensor) - A value to fill in output when indices[j] = i. With data type of float16 or float32.

  • off_value (Tensor) - A value to fill in output when indices[j] != i. Has the same data type as on_value.

Outputs:

Tensor, one-hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)\).

Raises
  • TypeError – If axis or depth is not an int.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • TypeError – If indices, on_value or off_value is not a Tensor.

  • ValueError – If axis is not in range [-1, len(indices_shape)].

  • ValueError – If depth is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> onehot = ops.OneHot()
>>> output = onehot(indices, depth, on_value, off_value)
>>> print(output)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
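>>> # A hedged sketch of the axis parameter (not from the original docs): with
>>> # axis=0 and indices of shape (2, 2), the depth axis becomes the first dimension.
>>> indices = Tensor(np.array([[0, 1], [2, 0]]), mindspore.int32)
>>> onehot_axis0 = ops.OneHot(axis=0)
>>> output = onehot_axis0(indices, depth, on_value, off_value)
>>> print(output.shape)
(3, 2, 2)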
class tinyms.primitives.Ones(*args, **kwargs)[source]

Creates a tensor filled with value ones.

Creates a tensor with shape described by the first argument and fills it with value ones in type of the second argument.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor. Only constant positive int is allowed.

  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

Outputs:

Tensor, with the shape specified by shape and the data type specified by type.

Raises

TypeError – If shape is neither tuple nor int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> ones = ops.Ones()
>>> output = ones((2, 2), mindspore.float32)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ones((3, 3), mindspore.float32)
>>> print(output)
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
class tinyms.primitives.OnesLike(*args, **kwargs)[source]

Creates a new tensor. The values of all elements are 1.

Returns a tensor of ones with the same shape and type as the input.

Inputs:
  • input_x (Tensor) - Input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x but filled with ones.

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> oneslike = ops.OnesLike()
>>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = oneslike(input_x)
>>> print(output)
[[1 1]
 [1 1]]
class tinyms.primitives.PMEEnergy(*args, **kwargs)[source]

Calculates the Coulomb energy of the system using the PME method.

Because there is a large amount of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

\[E = \sum_{ij} \frac{q_i q_j}{r_{ij}}\]
Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • excluded_numbers (int32) – the length of excluded list, E.

  • beta (float32) – the PME beta parameter, determined by the non-bond cutoff value and simulation precision tolerance.

  • fftx (int32) – the number of points for Fourier transform in dimension X.

  • ffty (int32) – the number of points for Fourier transform in dimension Y.

  • fftz (int32) – the number of points for Fourier transform in dimension Z.

  • box_length_0 (float32) – the value of boxlength idx 0

  • box_length_1 (float32) – the value of boxlength idx 1

  • box_length_2 (float32) – the value of boxlength idx 2

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\)

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\)

  • nl_numbers (Tensor) - The neighbor list numbers of each atom. The data type is int32 and the shape is \((n, 3)\)

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, 800)\)

  • scaler (Tensor) - The scale factor between real space coordinates and its unsigned int value. The data type is float32 and the shape is \((3,)\)

  • excluded_list_start (Tensor) - The start excluded index in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

  • excluded_list (Tensor) - The contiguous join of excluded list of each atom. E is the number of excluded atoms. The data type is int32 and the shape is \((E,)\)

  • excluded_atom_numbers (Tensor) - The number of atom excluded in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

Outputs:
  • reciprocal_ene (Tensor) - The reciprocal term of PME energy. The data type is float32 and the shape is \((1,)\).

  • self_ene (Tensor) - The self term of PME energy. The data type is float32 and the shape is \((1,)\).

  • direct_ene (Tensor) - The direct term of PME energy. The data type is float32 and the shape is \((1,)\).

  • correction_ene (Tensor) - The correction term of PME energy. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.PMEEnergyUpdate(*args, **kwargs)[source]

Calculates the Coulomb energy of the system using the PME method for pressure.

Because there is a large amount of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • excluded_numbers (int32) – the length of excluded list, E.

  • beta (float32) – the PME beta parameter, determined by the non-bond cutoff value and simulation precision tolerance.

  • fftx (int32) – the number of points for Fourier transform in dimension X.

  • ffty (int32) – the number of points for Fourier transform in dimension Y.

  • fftz (int32) – the number of points for Fourier transform in dimension Z.

  • box_length_0 (float32) – the value of boxlength idx 0.

  • box_length_1 (float32) – the value of boxlength idx 1.

  • box_length_2 (float32) – the value of boxlength idx 2.

  • max_neighbor_numbers (int32) – the max neighbor numbers, m, default 800.

  • need_update (int32) – if need_update = 1, calculate the pressure, default 0.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\)

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\)

  • nl_numbers (Tensor) - The neighbor list numbers of each atom. The data type is int32 and the shape is \((n, 3)\)

  • nl_serial (Tensor) - The neighbor list of each atom, the max number is 800. The data type is int32 and the shape is \((n, m)\)

  • scaler (Tensor) - The scale factor between real space coordinates and its unsigned int value. The data type is float32 and the shape is \((3,)\)

  • excluded_list_start (Tensor) - The start excluded index in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

  • excluded_list (Tensor) - The contiguous join of excluded list of each atom. E is the number of excluded atoms. The data type is int32 and the shape is \((E,)\)

  • excluded_atom_numbers (Tensor) - The number of atom excluded in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

  • factor (Tensor) - The factor parameter to be updated in pressure calculation. The data type is float32 and the shape is \((1,)\)

  • beta (Tensor) - The PME beta parameter to be updated in pressure calculation. The data type is float32 and the shape is \((1,)\)

Outputs:
  • reciprocal_ene (Tensor) - The reciprocal term of PME energy. The data type is float32 and the shape is \((1,)\).

  • self_ene (Tensor) - The self term of PME energy. The data type is float32 and the shape is \((1,)\).

  • direct_ene (Tensor) - The direct term of PME energy. The data type is float32 and the shape is \((1,)\).

  • correction_ene (Tensor) - The correction term of PME energy. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.PMEExcludedForce(*args, **kwargs)[source]

Calculates the excluded part of the long-range Coulomb force using the PME (Particle Mesh Ewald) method. Assume the number of atoms is n, and the length of the excluded list is E.

Because there is a large amount of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • excluded_numbers (int32) – the length of excluded list, E.

  • beta (float32) – the PME beta parameter, determined by the non-bond cutoff value and simulation precision tolerance.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\)

  • scaler (Tensor) - The scale factor between real space coordinates and its unsigned int value. The data type is float32 and the shape is \((3,)\)

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\)

  • excluded_list_start (Tensor) - The start excluded index in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

  • excluded_list (Tensor) - The contiguous join of excluded list of each atom. E is the number of excluded atoms. The data type is int32 and the shape is \((E,)\)

  • excluded_atom_numbers (Tensor) - The number of atom excluded in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

Outputs:
  • force (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\)

Supported Platforms:

GPU

class tinyms.primitives.PMEExcludedForceUpdate(*args, **kwargs)[source]

Calculates the excluded part of the long-range Coulomb force using the PME (Particle Mesh Ewald) method for pressure. Assume the number of atoms is n, and the length of the excluded list is E.

Because there is a large amount of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • excluded_numbers (int32) – the length of excluded list, E.

  • beta (float32) – the PME beta parameter, determined by the non-bond cutoff value and simulation precision tolerance.

  • need_update (int32) – if need_update = 1, calculate the pressure, default 0.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\)

  • scaler (Tensor) - The scale factor between real space coordinates and its unsigned int value. The data type is float32 and the shape is \((3,)\)

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\)

  • excluded_list_start (Tensor) - The start excluded index in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

  • excluded_list (Tensor) - The contiguous join of excluded list of each atom. E is the number of excluded atoms. The data type is int32 and the shape is \((E,)\)

  • excluded_atom_numbers (Tensor) - The number of atom excluded in excluded list for each atom. The data type is int32 and the shape is \((n,)\)

  • beta (Tensor) - The PME beta parameter to be updated in pressure calculation. The data type is float32 and the shape is \((1,)\)

Outputs:
  • force (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\)

Supported Platforms:

GPU

class tinyms.primitives.PMEReciprocalForce(*args, **kwargs)[source]

Calculates the reciprocal part of the long-range Coulomb force using the PME (Particle Mesh Ewald) method. Assume the number of atoms is n.

The detailed calculation formula of the PME (Particle Mesh Ewald) method can be found in this paper: A Smooth Particle Mesh Ewald Method. DOI: 10.1063/1.470117.

Because there is a large amount of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • beta (float32) – the PME beta parameter, determined by the non-bond cutoff value and simulation precision tolerance.

  • fftx (int32) – the number of points for Fourier transform in dimension X.

  • ffty (int32) – the number of points for Fourier transform in dimension Y.

  • fftz (int32) – the number of points for Fourier transform in dimension Z.

  • box_length_0 (float32) – the value of boxlength idx 0

  • box_length_1 (float32) – the value of boxlength idx 1

  • box_length_2 (float32) – the value of boxlength idx 2

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinates value of each atom. The data type is uint32 and the shape is \((n, 3)\)

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\)

Outputs:
  • force (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\)

Supported Platforms:

GPU

class tinyms.primitives.PMEReciprocalForceUpdate(*args, **kwargs)[source]

Calculates the reciprocal part of the long-range Coulomb force using the PME (Particle Mesh Ewald) method for pressure. Assume the number of atoms is n.

The detailed calculation formula of the PME (Particle Mesh Ewald) method can be found in this paper: A Smooth Particle Mesh Ewald Method. DOI: 10.1063/1.470117.

Because there is a large amount of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the webpage SPONGE in MindSpore.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms, n.

  • beta (float32) – the PME beta parameter, determined by the non-bond cutoff value and simulation precision tolerance.

  • fftx (int32) – the number of points for Fourier transform in dimension X.

  • ffty (int32) – the number of points for Fourier transform in dimension Y.

  • fftz (int32) – the number of points for Fourier transform in dimension Z.

  • box_length_0 (float32) – the value of boxlength idx 0

  • box_length_1 (float32) – the value of boxlength idx 1

  • box_length_2 (float32) – the value of boxlength idx 2

  • need_update (int32) – if need_update = 1, calculate the pressure, default 0.

Inputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

  • charge (Tensor) - The charge carried by each atom. The data type is float32 and the shape is \((n,)\)

  • beta (Tensor) - The PME beta parameter to be updated in pressure calculation. The data type is float32 and the shape is \((1,)\)

Outputs:
  • force (Tensor) - The force felt by each atom. The data type is float32 and the shape is \((n, 3)\)

Supported Platforms:

GPU

class tinyms.primitives.PReLU(*args, **kwargs)[source]

Parametric Rectified Linear Unit activation function.

PReLU is described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Defined as follows:

\[prelu(x_i)= \max(0, x_i) + \min(0, w * x_i),\]

where \(x_i\) is an element of an channel of the input, w is the weight of the channel.

Note

0-D or 1-D input_x is not supported on Ascend.

Inputs:
  • x (Tensor) - The first input tensor, representing the output of the previous layer. With data type of float16 or float32. The shape is \((N, C, *)\) where \(*\) means, any number of additional dimensions.

  • weight (Tensor) - The second input tensor. The data type is float16 or float32. Only two shapes are legitimate: 1, or the number of channels of input_x. The channel dimension is the 2nd dimension of the input. When the input is a 0-D or 1-D tensor, the number of channels is 1.

Outputs:

Tensor, with the same type as x.

For detailed information, please refer to nn.PReLU.

Raises
  • TypeError – If dtype of x or weight is neither float16 nor float32.

  • TypeError – If the x or the weight is not a Tensor.

  • ValueError – If the x is a 0-D or 1-D Tensor on Ascend.

  • ValueError – If the weight is not a 1-D Tensor.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.prelu = ops.PReLU()
...     def construct(self, x, weight):
...         result = self.prelu(x, weight)
...         return result
...
>>> x = Tensor(np.arange(-6, 6).reshape((2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> net = Net()
>>> output = net(x, weight)
>>> print(output)
[[[-0.60 -0.50]
  [-2.40 -1.80]
  [ 0.60  0.30]]
 [[ 0.00  1.00]
  [ 2.00  3.00]
  [ 4.0   5.00]]]
class tinyms.primitives.Pack(**kwargs)[source]

Same as operator Stack. Pack will be deprecated in the future. Please use Stack instead.
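
A minimal migration sketch (assumed usage of the Stack operator, not an official example):

>>> x1 = Tensor(np.array([1, 2]).astype(np.float32))
>>> x2 = Tensor(np.array([3, 4]).astype(np.float32))
>>> stack = ops.Stack(axis=0)
>>> output = stack([x1, x2])
>>> print(output)
[[1. 2.]
 [3. 4.]]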

class tinyms.primitives.Pad(*args, **kwargs)[source]

Pads the input tensor according to the paddings. For example, to pad only the last dimension of the input tensor, the padding has the form (padding_left, padding_right); to pad the last 2 dimensions of the input tensor, use (padding_left, padding_right, padding_top, padding_bottom); to pad the last 3 dimensions, use (padding_left, padding_right, padding_top, padding_bottom, padding_front, padding_back).

\[\begin{split}\begin{aligned} &\text{ input_x_shape} = (N_{1},N_{2},...,N_{n}) \\ &\begin{aligned} \text{output_shape = }(&N_{1}+paddings[0,0]+paddings[0,1], \\ & N_{2}+paddings[1,0]+paddings[1,1], \\ &... , \\ & N_{n}+paddings[n-1,0]+paddings[n-1,1]) \end{aligned} \end{aligned}\end{split}\]
Parameters

paddings (tuple) – The shape of parameter paddings is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in D th dimension, paddings[D, 0] indicates how many sizes to be extended ahead of the input tensor in the D th dimension, and paddings[D, 1] indicates how many sizes to be extended behind the input tensor in the D th dimension.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, the tensor after padding.

Raises
  • TypeError – If paddings is not a tuple.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If shape of paddings is not \((N, 2)\).

  • ValueError – If paddings.size is not equal to 2 * len(input_x).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> pad_op = ops.Pad(((1, 2), (2, 1)))
>>> output = pad_op(input_x)
>>> print(output)
[[ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.  -0.1  0.3  3.6  0. ]
 [ 0.   0.   0.4  0.5 -3.2  0. ]
 [ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.   0.   0.   0.   0. ]]
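>>> # A hedged shape check against the formula above: (2 + 1 + 2, 3 + 2 + 1) = (5, 6).
>>> print(output.shape)
(5, 6)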
class tinyms.primitives.Padding(*args, **kwargs)[source]

Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0.

Parameters

pad_dim_size (int) – The value of the last dimension of x to be extended, which must be positive. Default: 8.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The rank of x must be at least 2. The last dimension of x must be 1. The data type is Number.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Raises
Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
>>> pad_dim_size = 4
>>> output = ops.Padding(pad_dim_size)(x)
>>> print(output)
[[ 8.  0.  0.  0.]
 [10.  0.  0.  0.]]
class tinyms.primitives.ParallelConcat(*args, **kwargs)[source]

Concats tensors along the first dimension.

Concats input tensors along the first dimension.

The difference between Concat and ParallelConcat is that Concat requires all of the inputs to be computed before the operation begins, but does not require the input shapes to be known during graph construction. ParallelConcat copies pieces of the input into the output as they become available; in some situations this can provide a performance benefit.

Note

The input tensors are all required to have size 1 in the first dimension.

Inputs:
  • values (tuple, list) - A tuple or a list of input tensors. The data type and shape of these tensors must be the same. The data type is Number except float64.

Outputs:

Tensor, data type is the same as values.

Raises
  • ValueError – If length of shape of values is less than 1.

  • ValueError – The data type and shape of these tensors are not the same.

Supported Platforms:

Ascend

Examples

>>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
>>> data2 = Tensor(np.array([[2, 1]]).astype(np.int32))
>>> op = ops.ParallelConcat()
>>> output = op((data1, data2))
>>> print(output)
[[0 1]
 [2 1]]
class tinyms.primitives.Partial(*args, **kwargs)[source]

Makes a partial function instance, used for pynative mode.

Inputs:
  • args (Union[FunctionType, Tensor]) - The function and bind arguments.

Outputs:

FunctionType, partial function bound with arguments.
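
A minimal usage sketch (assumed behavior in PyNative mode, not an official example): bind the leading argument of a function and supply the rest later.

>>> def add_fn(x, y):
...     return x + y
>>> partial = ops.Partial()
>>> add_one = partial(add_fn, Tensor(np.array([1.0]), mindspore.float32))
>>> output = add_one(Tensor(np.array([2.0]), mindspore.float32))
>>> print(output)
[3.]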

class tinyms.primitives.Poisson(*args, **kwargs)[source]

Produces random non-negative integer values i, distributed according to discrete probability function:

\[\text{P}(i|\mu) = \frac{\exp(-\mu)\mu^{i}}{i!},\]
Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • mean (Tensor) - μ parameter the distribution was constructed with. The parameter defines mean number of occurrences of the event. It must be greater than 0. With float32 data type.

Outputs:

Tensor. Its shape must be the broadcasted shape of shape and the shape of mean. The dtype is int32.

Raises
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

Supported Platforms:

Ascend

Examples

>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mstype.float32)
>>> poisson = ops.Poisson(seed=5)
>>> output = poisson(shape, mean)
>>> result = output.shape
>>> print(result)
(4, 2)
class tinyms.primitives.PopulationCount(*args, **kwargs)[source]

Calculates population count.

Inputs:
  • input (Tensor) - The data type must be int16 or uint16.

Outputs:

Tensor, with the same shape as the input.

Raises

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend

Examples

>>> population_count = ops.PopulationCount()
>>> x_input = Tensor([0, 1, 3], mindspore.int16)
>>> output = population_count(x_input)
>>> print(output)
[0 1 2]
class tinyms.primitives.Pow(*args, **kwargs)[source]

Computes a tensor to the power of the second input.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} ^{ y_{i}}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = 3.0
>>> pow = ops.Pow()
>>> output = pow(x, y)
>>> print(output)
[ 1.  8. 64.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> pow = ops.Pow()
>>> output = pow(x, y)
>>> print(output)
[ 1. 16. 64.]
class tinyms.primitives.Print(*args, **kwargs)[source]

Outputs the tensor or string to stdout. The outputs are printed to screen by default. It can also be saved in a file by setting the parameter print_file_path in context. Once set, the output will be saved in the file specified by print_file_path. parse_print can be employed to reload the data. For more information, please refer to mindspore.context.set_context() and mindspore.parse_print().

Note

In PyNative mode, please use the Python print function. In graph mode, bool, int and float are converted into Tensors to print; str remains unchanged.

Inputs:
  • input_x (Union[Tensor, bool, int, float, str]) - The graph node to attach to. Supports multiple inputs which are separated by ‘,’.

Outputs:

Tensor, has the same data type and shape as original input_x.

Raises

TypeError – If input_x is not one of the following: Tensor, bool, int, float, str.

Supported Platforms:

Ascend GPU

Examples

>>> class PrintDemo(nn.Cell):
...     def __init__(self):
...         super(PrintDemo, self).__init__()
...         self.print = ops.Print()
...
...     def construct(self, x, y):
...         self.print('Print Tensor x and Tensor y:', x, y)
...         return x
...
>>> x = Tensor(np.ones([2, 1]).astype(np.int32))
>>> y = Tensor(np.ones([2, 2]).astype(np.int32))
>>> net = PrintDemo()
>>> result = net(x, y)
Print Tensor x and Tensor y:
Tensor(shape=[2, 1], dtype=Int32, value=
[[1]
 [1]])
Tensor(shape=[2, 2], dtype=Int32, value=
[[1 1]
 [1 1]])
class tinyms.primitives.Pull(*args, **kwargs)[source]

Pulls weight from parameter server.

Inputs:
  • key (Tensor) - The key of the weight.

  • weight (Tensor) - The weight to be updated.

Outputs:

None.
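
Examples

A minimal construction sketch (not part of the original documentation): this primitive is normally inserted by the parameter-server training framework rather than called by user code, and executing it requires a configured parameter server.

>>> from tinyms.primitives import Pull
>>> pull = Pull()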

class tinyms.primitives.PullWeight(*args, **kwargs)[source]

Pulls a weight by its name from the server.

Inputs:
  • weight (Tensor) - The weight to be pulled.

  • name (String) - The full name of the weight.

  • index (Int) - The index of the weight.

Outputs:

None.

class tinyms.primitives.Push(*args, **kwargs)[source]

Pushes the inputs of the corresponding optimizer to parameter server.

Parameters
  • optim_type (string) – The optimizer type. Default: ‘ApplyMomentum’.

  • only_shape_indices (list) – The indices of input of which only shape will be pushed to parameter server. Default: None.

Inputs:
  • optim_inputs (tuple) - The inputs for this kind of optimizer.

  • optim_input_shapes (tuple) - The shapes of the inputs.

Outputs:

Tensor, the key of the weight which needs to be updated.
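
Examples

A minimal construction sketch (not part of the original documentation): like Pull, this primitive is normally wired in by the parameter-server optimizers, and executing it requires a configured parameter server.

>>> from tinyms.primitives import Push
>>> push = Push(optim_type='ApplyMomentum')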

class tinyms.primitives.PushWeight(*args, **kwargs)[source]

Uploads a weight by its name to the server.

Inputs:
  • weight (Tensor) - The weight to be uploaded.

  • name (String) - The full name of the weight.

  • index (Int) - The index of the weight.

Outputs:

None.

class tinyms.primitives.PyFunc(fn, in_types, in_shapes, out_types, out_shapes, stateful=True)[source]

Executes a Python function.

PyFunc encapsulates a Python function as an operator that can be compiled into the computation graph. Unlike normal operators, it cannot be exported to MindIR because it executes in the current Python context. Since only the weights of the network are stored in the checkpoint, a network that includes PyFunc can save a checkpoint and load it again, but any Python function state will be lost.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • fn (function) – Python function whose inputs and outputs should be Python built-in scalars or numpy ndarrays.

  • in_types (list[mindspore.dtype]) – The type of the inputs.

  • in_shapes (list[tuple[int]]) – The dimensionality of the inputs. An empty list represents a scalar; otherwise it represents a numpy array.

  • out_types (list[mindspore.dtype]) – The type of the outputs.

  • out_shapes (list[tuple[int]]) – The dimensionality of the outputs. An empty list represents a scalar; otherwise it represents a numpy array.

  • stateful (bool) – Whether the function is stateful or not. If True, the execution order is the same as in the model definition.

Inputs:
  • input_x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors.

Outputs:

tuple[Tensor], the execution results of the Python function.

Raises
  • TypeError – The Python function execution failed.

  • TypeError – The attributes(in_types/in_shapes/out_types/out_shapes) are inconsistent with Python function specifications.

Supported Platforms:

CPU

Examples

>>> def func(x1, x2):
...     return x1 + x2
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> x2 = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> op = ops.PyFunc(func, [x1.dtype, x2.dtype], [x1.shape, x2.shape], [x1.dtype], [x1.shape])
>>> output = op((x1, x2))
>>> print(output[0].asnumpy())
[2. 4. 6.]
class tinyms.primitives.RNNTLoss(*args, **kwargs)[source]

Computes the RNNTLoss and its gradient with respect to the softmax outputs.

Parameters

blank_label (int) – blank label. Default: 0.

Inputs:
  • acts (Tensor) - Tensor of shape \((B, T, U, V)\). Data type must be float16 or float32.

  • labels (Tensor) - Tensor of shape \((B, U-1)\). Data type is int32.

  • input_lengths (Tensor) - Tensor of shape \((B,)\). Data type is int32.

  • label_lengths (Tensor) - Tensor of shape \((B,)\). Data type is int32.

Outputs:
  • costs (Tensor) - Tensor of shape \((B,)\). Data type is int32.

  • grads (Tensor) - Has the same shape and dtype as acts.

Raises
  • TypeError – If acts, labels, input_lengths or label_lengths is not a Tensor.

  • TypeError – If dtype of acts is neither float16 nor float32.

  • TypeError – If dtype of labels, input_lengths or label_lengths is not int32.

Supported Platforms:

Ascend

Examples

>>> B, T, U, V = 1, 2, 3, 5
>>> blank = 0
>>> acts = np.random.random((B, T, U, V)).astype(np.float32)
>>> labels = np.array([[1, 2]]).astype(np.int32)
>>> input_length = np.array([T] * B).astype(np.int32)
>>> label_length = np.array([len(l) for l in labels]).astype(np.int32)
>>> rnnt_loss = ops.RNNTLoss(blank_label=0)
>>> costs, grads = rnnt_loss(Tensor(acts), Tensor(labels), Tensor(input_length), Tensor(label_length))
>>> print(costs.shape)
(1,)
>>> print(grads.shape)
(1, 2, 3, 5)
class tinyms.primitives.ROIAlign(*args, **kwargs)[source]

Computes the Region of Interest (RoI) Align operator.

The operator computes the value of each sampling point by bilinear interpolation from the nearby grid points on the feature map. No quantization is performed on any coordinates involved in the RoI, its bins, or the sampling points. The details of (RoI) Align operator are described in Mask R-CNN.

Parameters
  • pooled_height (int) – The output features height.

  • pooled_width (int) – The output features width.

  • spatial_scale (float) – A scaling factor that maps the raw image coordinates to the input feature map coordinates. Suppose the height of a RoI is ori_h in the raw image and fea_h in the input feature map, the spatial_scale must be fea_h / ori_h.

  • sample_num (int) – Number of sampling points. Default: 2.

  • roi_end_mode (int) – Number must be 0 or 1. Default: 1.

Inputs:
  • features (Tensor) - The input features, whose shape must be \((N, C, H, W)\).

  • rois (Tensor) - The shape is \((rois\_n, 5)\). With data type of float16 or float32. rois_n represents the number of RoI. The size of the second dimension must be 5 and the 5 columns are \((image\_index, top\_left\_x, top\_left\_y, bottom\_right\_x, bottom\_right\_y)\). image_index represents the index of image. top_left_x and top_left_y represent the x, y coordinates of the top left corner of corresponding RoI, respectively. bottom_right_x and bottom_right_y represent the x, y coordinates of the bottom right corner of corresponding RoI, respectively.

Outputs:

Tensor, the shape is \((rois\_n, C, pooled\_height, pooled\_width)\).

Raises
  • TypeError – If pooled_height, pooled_width, sample_num or roi_end_mode is not an int.

  • TypeError – If spatial_scale is not a float.

  • TypeError – If features or rois is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> features = Tensor(np.array([[[[1., 2.], [3., 4.]]]]), mindspore.float32)
>>> rois = Tensor(np.array([[0, 0.2, 0.3, 0.2, 0.3]]), mindspore.float32)
>>> roi_align = ops.ROIAlign(2, 2, 0.5, 2)
>>> output = roi_align(features, rois)
>>> print(output)
[[[[1.775 2.025]
   [2.275 2.525]]]]
class tinyms.primitives.RandomCategorical(*args, **kwargs)[source]

Generates random samples from a given categorical distribution tensor.

Parameters

dtype (mindspore.dtype) – The type of output. Its value must be one of mindspore.int16, mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Inputs:
  • logits (Tensor) - The input tensor. 2-D Tensor with shape [batch_size, num_classes].

  • num_sample (int) - Number of samples to be drawn. Only constant values are allowed.

  • seed (int) - Random seed. Default: 0. Only constant values are allowed.

Outputs:
  • output (Tensor) - The output Tensor with shape [batch_size, num_samples].

Raises
  • TypeError – If dtype is not one of the following: mindspore.int16, mindspore.int32, mindspore.int64.

  • TypeError – If logits is not a Tensor.

  • TypeError – If num_sample or seed is not an int.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...   def __init__(self, num_sample):
...     super(Net, self).__init__()
...     self.random_categorical = ops.RandomCategorical(mindspore.int64)
...     self.num_sample = num_sample
...   def construct(self, logits, seed=0):
...     return self.random_categorical(logits, self.num_sample, seed)
...
>>> x = np.random.random((10, 5)).astype(np.float32)
>>> net = Net(8)
>>> output = net(Tensor(x))
>>> result = output.shape
>>> print(result)
(10, 8)
class tinyms.primitives.RandomChoiceWithMask(*args, **kwargs)[source]

Generates a random sample as index tensor with a mask tensor from a given tensor.

The input must be a tensor of rank not less than 1. If its rank is greater than or equal to 2, the first dimension specifies the number of samples. The index tensor and the mask tensor have the fixed shapes. The index tensor denotes the index of the nonzero sample, while the mask tensor denotes which elements in the index tensor are valid.

Parameters
  • count (int) – Number of items expected to get and the number must be greater than 0. Default: 256.

  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • input_x (Tensor[bool]) - The input tensor. The input tensor rank must be greater than or equal to 1 and less than or equal to 5.

Outputs:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output shape is 2-D.

  • mask (Tensor) - The output shape is 1-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> rnd_choice_mask = ops.RandomChoiceWithMask()
>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool))
>>> output_y, output_mask = rnd_choice_mask(input_x)
>>> result = output_y.shape
>>> print(result)
(256, 2)
>>> result = output_mask.shape
>>> print(result)
(256,)
class tinyms.primitives.Randperm(*args, **kwargs)[source]

Generates n random samples from 0 to n-1 without repeating. If max_length > n, the last max_length-n elements will be filled with pad.

Parameters
  • max_length (int) – Number of items expected to get and the number must be greater than 0. Default: 1.

  • pad (int) – The pad value to be filled. Default: -1.

  • dtype (mindspore.dtype) – The type of output. Default: mindspore.int32.

Inputs:
  • n (Tensor[int32]) - The input tensor with shape: (1,) and the number must be in [0, max_length].

Outputs:
  • output (Tensor) - The output Tensor with shape: (max_length,) and type: dtype.

Supported Platforms:

Ascend GPU

Examples

>>> # The result of every execution is different because this operator will generate n random samples.
>>> randperm = ops.Randperm(max_length=30, pad=-1)
>>> n = Tensor([20], dtype=mindspore.int32)
>>> output = randperm(n)
>>> print(output)
[15 6 11 19 14 16 9 5 13 18 4 10 8 0 17 2 1 12 3 7
 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
class tinyms.primitives.Range(*args, **kwargs)[source]

Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.

The types of all 3 inputs must be the same. The type of the resulting tensor is the same as the type of the inputs.

Parameters

maxlen (int) – Memory that can fit maxlen many elements will be allocated for the output. Optional, must be positive, defaults to 1000000. If the output has more than maxlen elements, a runtime error will occur.

Inputs:
  • start (Tensor) - A scalar Tensor. The first number in the sequence. Must have type: int32 or float32

  • limit (Tensor) - A scalar Tensor. Upper limit of the sequence, exclusive. Must have type: int32 or float32

  • delta (Tensor) - A scalar Tensor. Number that increments start. Must have type: int32 or float32

Outputs:

A 1-D Tensor, with the same type as the inputs.

Supported Platforms:

GPU

Examples

>>> start = Tensor(0, mstype.int32)
>>> limit = Tensor(10, mstype.int32)
>>> delta = Tensor(4, mstype.int32)
>>> output = ops.Range()(start, limit, delta)
>>> print(output)
[0 4 8]
infer_value(start_value, limit_value, delta_value)[source]

Infer the value of input for Range.

class tinyms.primitives.Rank(*args, **kwargs)[source]

Returns the rank of a tensor.

Returns a 0-D int32 Tensor representing the rank of input; the rank of a tensor is the number of indices required to uniquely select each element of the tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Outputs:

Tensor. 0-D int32 Tensor representing the rank of input, i.e., \(R\). The data type is an int.

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> rank = ops.Rank()
>>> output = rank(input_tensor)
>>> print(output)
2
>>> print(type(output))
<class 'int'>
class tinyms.primitives.ReLU(*args, **kwargs)[source]

Computes ReLU (Rectified Linear Unit) of input tensors element-wise.

It returns \(\max(x,\ 0)\) element-wise.

Note

In general, this operator is more commonly used than ReLUV2. The difference is that ReLUV2 additionally outputs a mask.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with number data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If dtype of input_x is not number.

  • TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu = ops.ReLU()
>>> output = relu(input_x)
>>> print(output)
[[0. 4. 0.]
 [2. 0. 9.]]
class tinyms.primitives.ReLU6(*args, **kwargs)[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If dtype of input_x is neither float16 nor float32.

  • TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu6 = ops.ReLU6()
>>> result = relu6(input_x)
>>> print(result)
[[0. 4. 0.]
 [2. 0. 6.]]
class tinyms.primitives.ReLUV2(*args, **kwargs)[source]

Computes ReLU (Rectified Linear Unit) of input tensors element-wise.

It returns \(\max(x,\ 0)\) element-wise.

Note

The difference from ReLU is that this operator additionally outputs a mask, and its kernel differs from that of ReLU.

Inputs:
  • input_x (Tensor) - The input tensor must be a 4-D tensor.

Outputs:
  • output (Tensor) - Has the same type and shape as the input_x.

  • mask (Tensor) - A tensor whose data type must be uint8.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[[[1, -2], [-3, 4]], [[-5, 6], [7, -8]]]]), mindspore.float32)
>>> relu_v2 = ops.ReLUV2()
>>> output, mask= relu_v2(input_x)
>>> print(output)
[[[[1. 0.]
   [0. 4.]]
  [[0. 6.]
   [7. 0.]]]]
>>> print(mask)
[[[[[1 0]
    [2 0]]
   [[2 0]
    [1 0]]]]]
class tinyms.primitives.Real(*args, **kwargs)[source]

Returns a Tensor that is the real part of the input.

Inputs:
  • input (Tensor, complex) - The input tensor. types: complex64, complex128.

Outputs:

Tensor, has the float type.

Raises

TypeError – If the dtype of input is not one of: complex64, complex128.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.asarray(np.complex(1.3+0.4j)), mindspore.complex64)
>>> real = ops.Real()
>>> output = real(x)
>>> print(output)
1.3
class tinyms.primitives.RealDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor in floating-point type element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} / y_{i}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises
  • TypeError – If x and y is not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> realdiv = ops.RealDiv()
>>> output = realdiv(x, y)
>>> print(output)
[0.25 0.4  0.5 ]
class tinyms.primitives.Reciprocal(*args, **kwargs)[source]

Returns reciprocal of a tensor element-wise.

\[out_{i} = \frac{1}{x_{i}}\]
Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

Outputs:

Tensor, has the same shape as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> reciprocal = ops.Reciprocal()
>>> output = reciprocal(x)
>>> print(output)
[1.   0.5  0.25]
class tinyms.primitives.ReduceAll(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by the “logical AND” of all elements in the dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • x (Tensor[bool]) - The input tensor. The dtype of the tensor to be reduced is bool. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical and” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAll(keep_dims=True)
>>> # case 1: Reduces a dimension by the "logical AND" of all elements in the dimension.
>>> output = op(x)
>>> print(output)
[[False]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[ True False]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[False]
[ True]]
class tinyms.primitives.ReduceAny(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by the “logical OR” of all elements in the dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • x (Tensor[bool]) - The input tensor. The dtype of the tensor to be reduced is bool. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical or” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAny(keep_dims=True)
>>> # case 1: Reduces a dimension by the "logical OR" of all elements in the dimension.
>>> output = op(x)
>>> print(output)
[[ True]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[ True True]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[True]
[ True]]
class tinyms.primitives.ReduceMax(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by the maximum value in this dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMax(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the maximum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[9.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[7. 7. 7. 7. 7. 7.]
  [8. 8. 8. 8. 8. 8.]
  [9. 9. 9. 9. 9. 9.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[3. 3. 3. 3. 3. 3.]]
 [[6. 6. 6. 6. 6. 6.]]
 [[9. 9. 9. 9. 9. 9.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
class tinyms.primitives.ReduceMean(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by averaging all elements in the dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMean(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by averaging all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[5.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along the axis 0
>>> output = op(x, 0)
>>> print(output)
[[[4. 4. 4. 4. 4. 4.]
  [5. 5. 5. 5. 5. 5.]
  [6. 6. 6. 6. 6. 6.]]]
>>> # case 3: Reduces a dimension along the axis 1
>>> output = op(x, 1)
>>> print(output)
[[[2. 2. 2. 2. 2. 2.]]
 [[5. 5. 5. 5. 5. 5.]]
 [[8. 8. 8. 8. 8. 8.]]]
>>> # case 4: Reduces a dimension along the axis 2
>>> output = op(x, 2)
>>> print(output)
[[[1.       ]
  [2.       ]
  [3.       ]]
 [[4.       ]
  [5.       ]
  [6.       ]]
 [[7.0000005]
  [8.       ]
  [9.       ]]]
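>>> # An additional sketch (not from the original documentation): with keep_dims=False
>>> # and the default axis (), the reduced result is a 0-D tensor.
>>> op = ops.ReduceMean(keep_dims=False)
>>> output = op(x)
>>> print(output.shape)
()
>>> print(output)
5.0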
class tinyms.primitives.ReduceMin(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by the minimum value in the dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMin(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the minimum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[1.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]
  [2. 2. 2. 2. 2. 2.]
  [3. 3. 3. 3. 3. 3.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]]
 [[4. 4. 4. 4. 4. 4.]]
 [[7. 7. 7. 7. 7. 7.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
class tinyms.primitives.ReduceOp[source]

Operation options for reducing tensors. This is an enumerated type, not an operator. Mainly used in data parallel mode.

The main calling methods are as follows:

  • SUM: ReduceOp.SUM.

  • MAX: ReduceOp.MAX.

  • MIN: ReduceOp.MIN.

  • PROD: ReduceOp.PROD.

There are four kinds of operation options, “SUM”, “MAX”, “MIN”, and “PROD”.

  • SUM: Take the sum.

  • MAX: Take the maximum.

  • MIN: Take the minimum.

  • PROD: Take the product.

Note

For more details, refer to the example. This needs to run in an environment with multiple devices.

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore.communication import init
>>> from mindspore import Tensor, ops
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM, group="nccl_world_group")
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[4. 5. 6. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0.]]
class tinyms.primitives.ReduceProd(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by multiplying all elements in the dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceProd(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by multiplying all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[2.2833798e+33]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[ 28.  28.  28.  28.  28.  28.]
  [ 80.  80.  80.  80.  80.  80.]
  [162. 162. 162. 162. 162. 162.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[  6.   6.   6.   6.   6.   6.]]
 [[120. 120. 120. 120. 120. 120.]]
 [[504. 504. 504. 504. 504. 504.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.00000e+00]
  [6.40000e+01]
  [7.29000e+02]]
 [[4.09600e+03]
  [1.56250e+04]
  [4.66560e+04]]
 [[1.17649e+05]
  [2.62144e+05]
  [5.31441e+05]]]
class tinyms.primitives.ReduceScatter(*args, **kwargs)[source]

Reduces and scatters tensors from the specified communication group.

Note

The back propagation of the op is not supported yet. Stay tuned for more. The tensors must have the same shape and format in all processes of the collection.

Parameters
  • op (str) – Specifies an operation used for element-wise reductions, like SUM, MAX, AVG. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If any of operation and group is not a string.

  • ValueError – If the first dimension of the input cannot be divided by the rank size.

Supported Platforms:

Ascend GPU

Examples

>>> # This example should be run with two devices. Refer to the tutorial > Distributed Training on mindspore.cn
>>> from mindspore import Tensor, context
>>> from mindspore.communication import init
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.reducescatter = ops.ReduceScatter(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.reducescatter(x)
...
>>> input_ = Tensor(np.ones([8, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.ReduceSum(*args, **kwargs)[source]

By default, reduces a dimension of a tensor by summing all elements in the dimension. It can also reduce a dimension of x along the specified axis. Whether the output keeps the same dimensions as the input is controlled by keep_dims.

Parameters

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceSum(keep_dims=True)
>>> output = op(x, 1)
>>> output.shape
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by summing all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[270.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[12. 12. 12. 12. 12. 12.]
  [15. 15. 15. 15. 15. 15.]
  [18. 18. 18. 18. 18. 18.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[ 6.  6.  6.  6.  6.  6.]]
 [[15. 15. 15. 15. 15. 15.]]
 [[24. 24. 24. 24. 24. 24.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[ 6.]
  [12.]
  [18.]]
 [[24.]
  [30.]
  [36.]]
 [[42.]
  [48.]
  [54.]]]
class tinyms.primitives.RefreshBoxmapTimes(*args, **kwargs)[source]

Refresh the box-crossing times of each atom.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters

atom_numbers (int32) – the number of atoms n.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • old_crd (Tensor) - The coordinate of each atom at last update. The data type is float32 and the shape is \((n, 3)\).

  • box_length_inverse (Tensor) - The inverse value of box length in 3 dimensions. The data type is float32 and the shape is \((3,)\).

  • box_map_times (Tensor) - The number of times each atom has crossed the box. The data type is int32 and the shape is \((n, 3)\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU
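
Examples

A minimal sketch (not from the original documentation), assuming a GPU build in which this experimental operator is available; shapes and dtypes follow the description above, and the values are illustrative placeholders.

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms.primitives import RefreshBoxmapTimes
>>> n = 4
>>> refresh = RefreshBoxmapTimes(atom_numbers=n)
>>> crd = Tensor(np.random.rand(n, 3).astype(np.float32))
>>> old_crd = Tensor(np.random.rand(n, 3).astype(np.float32))
>>> box_length_inverse = Tensor(np.ones(3).astype(np.float32))
>>> box_map_times = Tensor(np.zeros((n, 3)).astype(np.int32))
>>> res = refresh(crd, old_crd, box_length_inverse, box_map_times)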

class tinyms.primitives.RefreshCrdVel(*args, **kwargs)[source]

Refresh the coordinate and velocity of each constrained atom after all iterations have ended.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • dt_inverse (float32) – the inverse value of simulation time step.

  • dt (float32) – the simulation time step.

  • exp_gamma (float32) – constant value exp(gamma * dt).

  • half_exp_gamma_plus_half (float32) – constant value (1 + exp_gamma)/2.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • vel (Tensor) - The velocity of each atom. The data type is float32 and the shape is \((n, 3)\).

  • test_frc (Tensor) - The constraint force calculated in the last iteration. The data type is float32 and the shape is \((n, 3)\).

  • mass_inverse (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

Outputs:
  • res (Tensor) - The return value after updating successfully. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU
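
Examples

A minimal sketch (not from the original documentation), assuming a GPU build in which this experimental operator is available; the constants below are illustrative placeholders computed from the parameter descriptions above.

>>> import math
>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms.primitives import RefreshCrdVel
>>> n, dt, gamma = 4, 0.001, 1.0
>>> exp_gamma = math.exp(gamma * dt)
>>> refresh = RefreshCrdVel(atom_numbers=n, dt_inverse=1.0 / dt, dt=dt,
...                         exp_gamma=exp_gamma,
...                         half_exp_gamma_plus_half=(1.0 + exp_gamma) / 2.0)
>>> crd = Tensor(np.random.rand(n, 3).astype(np.float32))
>>> vel = Tensor(np.zeros((n, 3)).astype(np.float32))
>>> test_frc = Tensor(np.zeros((n, 3)).astype(np.float32))
>>> mass_inverse = Tensor(np.ones(n).astype(np.float32))
>>> res = refresh(crd, vel, test_frc, mass_inverse)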

class tinyms.primitives.RefreshUintCrd(*args, **kwargs)[source]

Refresh the unsigned coordinate of each constrained atom in each constrain iteration.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters
  • atom_numbers (int32) – the number of atoms n.

  • half_exp_gamma_plus_half (float32) – constant value (1.0 + exp(gamma * dt)) if the Langevin-Liu thermostat is used, where gamma is the friction coefficient and dt is the simulation time step, 1.0 otherwise.

Inputs:
  • crd (Tensor) - The coordinate of each atom. The data type is float32 and the shape is \((n, 3)\).

  • quarter_cof (Tensor) - The 3-D scale factor. The data type is float32 and the shape is \((3,)\).

  • test_frc (Tensor) - The constraint force. The data type is float32 and the shape is \((n, 3)\).

  • mass_inverse (Tensor) - The inverse value of mass of each atom. The data type is float32 and the shape is \((n,)\).

Outputs:
  • uint_crd (Tensor) - The unsigned int coordinate value of each atom. The data type is uint32 and the shape is \((n, 3)\).

Supported Platforms:

GPU
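
Examples

A minimal sketch (not from the original documentation), assuming a GPU build in which this experimental operator is available; shapes and dtypes follow the description above, and the values are illustrative placeholders.

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms.primitives import RefreshUintCrd
>>> n = 4
>>> refresh = RefreshUintCrd(atom_numbers=n, half_exp_gamma_plus_half=1.0)
>>> crd = Tensor(np.random.rand(n, 3).astype(np.float32))
>>> quarter_cof = Tensor(np.ones(3).astype(np.float32))
>>> test_frc = Tensor(np.zeros((n, 3)).astype(np.float32))
>>> mass_inverse = Tensor(np.ones(n).astype(np.float32))
>>> uint_crd = refresh(crd, quarter_cof, test_frc, mass_inverse)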

class tinyms.primitives.Reshape(*args, **kwargs)[source]

Reshapes the input tensor with the same values based on a given shape tuple.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_shape (tuple[int]) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((y_1, y_2, ..., y_S)\).

Raises

ValueError – Given a shape tuple, if it contains more than one -1; or if the product of its elements is less than or equal to 0 or cannot divide the number of elements of the input tensor evenly; or if it does not match the input’s array size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> reshape = ops.Reshape()
>>> output = reshape(input_x, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
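>>> # An additional sketch (not from the original documentation): a single -1 in the
>>> # shape tuple lets that dimension be inferred from the remaining elements.
>>> output = reshape(input_x, (6, -1))
>>> print(output)
[[-0.1]
 [ 0.3]
 [ 3.6]
 [ 0.4]
 [ 0.5]
 [-3.2]]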
class tinyms.primitives.ResizeBilinear(*args, **kwargs)[source]

Resizes an image to a certain size using the bilinear interpolation.

The resizing only affects the lower two dimensions which represent the height and width. The input images can be represented by different data types, but the data types of output images are always float32.

Parameters
  • size (Union[tuple[int], list[int]]) – A tuple or list of 2 int elements \((new\_height, new\_width)\), the new size of the images.

  • align_corners (bool) – If true, rescale input by \((new\_height - 1) / (height - 1)\), which exactly aligns the 4 corners of images and resized images. If false, rescale by \(new\_height / height\). Default: False.

Inputs:
  • x (Tensor) - Image to be resized. Input images must be a 4-D tensor with shape \((batch, channels, height, width)\), with data type of float32 or float16.

Outputs:

Tensor, resized image. 4-D with shape \((batch, channels, new\_height, new\_width)\), with the same data type as input x.

Raises
  • TypeError – If size is neither a tuple nor list.

  • TypeError – If align_corners is not a bool.

  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend CPU GPU

Examples

>>> x = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.float32)
>>> resize_bilinear = ops.ResizeBilinear((5, 5))
>>> output = resize_bilinear(x)
>>> print(output)
[[[[1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]]]]
class tinyms.primitives.ResizeNearestNeighbor(*args, **kwargs)[source]

Resizes the input tensor by using the nearest neighbor algorithm.

Resizes the input tensor to a given size by using the nearest neighbor algorithm. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters
  • size (Union[tuple, list]) – The target size. The dimension of size must be 2.

  • align_corners (bool) – Whether the centers of the 4 corner pixels of the input and output tensors are aligned. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor. The shape of the tensor is \((N, C, H, W)\).

Outputs:
Tensor, the shape of the output tensor is \((N, C, NEW\_H, NEW\_W)\).

The data type is the same as the input_x.

Raises
  • TypeError – If size is neither tuple nor list.

  • TypeError – If align_corners is not a bool.

  • ValueError – If length of size is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[[[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]]]), mindspore.float32)
>>> resize = ops.ResizeNearestNeighbor((2, 2))
>>> output = resize(input_tensor)
>>> print(output)
[[[[-0.1  0.3]
   [ 0.4  0.5]]]]
class tinyms.primitives.ReverseSequence(*args, **kwargs)[source]

Reverses variable length slices.

Parameters
  • seq_dim (int) – The dimension where reversal is performed. Required.

  • batch_dim (int) – The input is sliced in this dimension. Default: 0.

Inputs:
  • x (Tensor) - The input to reverse, supporting all number types including bool.

  • seq_lengths (Tensor) - Must be a 1-D vector with int32 or int64 types.

Outputs:

Reversed tensor with the same shape and data type as input.

Raises

TypeError – If seq_dim or batch_dim is not an int.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 2. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=0, batch_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 5. 9.]
 [4. 2. 6.]
 [7. 8. 3.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([2, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[2. 1. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([3, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[3. 2. 1.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([4, 4]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[4. 3. 2. 1.]
 [8. 7. 6. 5.]]
class tinyms.primitives.ReverseV2(*args, **kwargs)[source]

Reverses specific dimensions of a tensor.

Warning

The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input_x”.

Parameters

axis (Union[tuple(int), list(int)]) – The indices of the dimensions to reverse.

Inputs:
  • input_x (Tensor) - The target tensor. The data type is Number except float64. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If axis is neither list nor tuple.

  • TypeError – If element of axis is not an int.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> op = ops.ReverseV2(axis=[1])
>>> output = op(input_x)
>>> print(output)
[[4 3 2 1]
 [8 7 6 5]]
>>> op = ops.ReverseV2(axis=[1, 0])
>>> output = op(input_x)
>>> print(output)
[[8 7 6 5]
 [4 3 2 1]]
class tinyms.primitives.Rint(*args, **kwargs)[source]

Returns an integer that is closest to x element-wise.

Inputs:
  • input_x (Tensor) - The target tensor, which must be one of the following types: float16, float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x.

Raises

TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([-1.6, -0.1, 1.5, 2.0]), mindspore.float32)
>>> op = ops.Rint()
>>> output = op(input_x)
>>> print(output)
[-2.  0.  2.  2.]
>>> input_x = Tensor(np.array([[-2.0, -1.9, -1.8, -1.7, -1.6],
...                            [-2.0, -1.9, -1.8, -1.7, -1.6]]), mindspore.float32)
>>> output = op(input_x)
>>> print(output)
[[-2. -2. -2. -2. -2.]
 [-2. -2. -2. -2. -2.]]
class tinyms.primitives.Round(*args, **kwargs)[source]

Rounds the elements of a tensor to the nearest integer element-wise, with ties rounded to the nearest even integer (round half to even).

\[out_i \approx x_i\]
Inputs:
  • x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and type as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> round = ops.Round()
>>> output = round(x)
>>> print(output)
[ 1.  2.  2.  2. -4.]
class tinyms.primitives.Rsqrt(*args, **kwargs)[source]

Computes reciprocal of square root of input tensor element-wise.

\[out_{i} = \frac{1}{\sqrt{x_{i}}}\]
Inputs:
  • x (Tensor) - The input of Rsqrt. Each element must be a non-negative number. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should less than 8.

Outputs:

Tensor, has the same type and shape as x.

Raises

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> input_tensor = Tensor([[4, 4], [9, 9]], mindspore.float32)
>>> rsqrt = ops.Rsqrt()
>>> output = rsqrt(input_tensor)
>>> print(output)
[[0.5        0.5       ]
 [0.33333334 0.33333334]]
class tinyms.primitives.SGD(*args, **kwargs)[source]

Computes the stochastic gradient descent. Momentum is optional.

Nesterov momentum is based on the formula from paper On the importance of initialization and momentum in deep learning.

Note

For more details, please refer to nn.SGD.

Parameters
  • dampening (float) – The dampening for momentum. Default: 0.0.

  • weight_decay (float) – Weight decay (L2 penalty). Default: 0.0.

  • nesterov (bool) – Enable Nesterov momentum. Default: False.

Inputs:
  • parameters (Tensor) - Parameters to be updated. With float16 or float32 data type.

  • gradient (Tensor) - Gradient, with float16 or float32 data type.

  • learning_rate (Tensor) - Learning rate, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32)

  • accum (Tensor) - Accum(velocity) to be updated. With float16 or float32 data type.

  • momentum (Tensor) - Momentum, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32).

  • stat (Tensor) - States to be updated with the same shape as gradient, with float16 or float32 data type.

Outputs:

Tensor, parameters to be updated.

Raises
  • TypeError – If dampening or weight_decay is not a float.

  • TypeError – If nesterov is not a bool.

  • TypeError – If parameters, gradient, learning_rate, accum, momentum or stat is not a Tensor.

  • TypeError – If dtype of parameters, gradient, learning_rate, accum, momentum or stat is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sgd = ops.SGD()
>>> parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
>>> gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
>>> learning_rate = Tensor(0.01, mindspore.float32)
>>> accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
>>> momentum = Tensor(0.1, mindspore.float32)
>>> stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
>>> output = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
>>> print(output)
(Tensor(shape=[4], dtype=Float32,
 value= [ 1.98989999e+00, -4.90300000e-01,  1.69520009e+00,  3.98009992e+00]),)
class tinyms.primitives.SameTypeShape(*args, **kwargs)[source]

Checks whether the data type and shape of two tensors are the same.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_y (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_S)\).

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_R)\), if data type and shape of input_x and input_y are the same.

Raises
  • TypeError – If the data types of input_x and input_y are not the same.

  • ValueError – If the shapes of input_x and input_y are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> input_y = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.SameTypeShape()(input_x, input_y)
>>> print(output)
[[2. 2.]
 [2. 2.]]
class tinyms.primitives.ScalarCast(*args, **kwargs)[source]

Casts the input scalar to another type.

Inputs:
  • input_x (scalar) - The input scalar. Only constant value is allowed.

  • input_y (mindspore.dtype) - The type to be cast. Only constant value is allowed.

Outputs:

Scalar. The type is the same as the python type corresponding to input_y.

Raises

TypeError – If neither input_x nor input_y is a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> scalar_cast = ops.ScalarCast()
>>> output = scalar_cast(255.0, mindspore.int32)
>>> print(output)
255
class tinyms.primitives.ScalarSummary(*args, **kwargs)[source]

Outputs a scalar to a protocol buffer through a scalar summary operator.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of scalar, and the shape of value must be [] or [1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.ScalarSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         name = "x"
...         self.summary(name, x)
...         x = self.add(x, y)
...         return x
...
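>>> # A possible way to exercise the cell above (sketch, not from the original
>>> # documentation); the recorded scalar is only persisted when a summary record
>>> # or summary collector is configured.
>>> x = Tensor(np.array([1.2]).astype(np.float32))
>>> y = Tensor(np.array([3.4]).astype(np.float32))
>>> summary_demo = SummaryDemo()
>>> out = summary_demo(x, y)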
class tinyms.primitives.ScalarToArray(*args, **kwargs)[source]

Converts a scalar to a Tensor.

Inputs:
  • input_x (Union[int, float]) - The input is a scalar. Only constant value is allowed.

Outputs:

Tensor. 0-D Tensor and the content is the input.

Raises

TypeError – If input_x is neither int nor float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScalarToArray()
>>> input_x = 1.0
>>> print(type(input_x))
<class 'float'>
>>> output = op(input_x)
>>> print(type(output))
<class 'mindspore.common.tensor.Tensor'>
>>> print(output)
1.0
class tinyms.primitives.ScalarToTensor(*args, **kwargs)[source]

Converts a scalar to a Tensor, and converts the data type to the specified type.

Inputs:
  • input_x (Union[int, float]) - The input is a scalar. Only constant value is allowed.

  • dtype (mindspore.dtype) - The target data type. Default: mindspore.float32. Only constant value is allowed.

Outputs:

Tensor. 0-D Tensor and the content is the input.

Raises

TypeError – If input_x is neither int nor float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScalarToTensor()
>>> data = 1
>>> output = op(data, mindspore.float32)
>>> print(output)
1.0
class tinyms.primitives.ScatterAdd(*args, **kwargs)[source]

Updates the value of the input tensor through the addition operation.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{+}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Note

This is an in-place update operator. Therefore, the input_x will be updated after the operation is completed.

Parameters

use_locking (bool) – Whether to protect the assignment with a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • indices (Tensor) - The index to do add operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor doing the add operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[1. 1. 1.]
 [3. 3. 3.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [3.0, 3.0, 3.0] + [7.0, 7.0, 7.0] = [10.0, 10.0, 10.0]
>>> # input_x[1] = [10.0, 10.0, 10.0] + [9.0, 9.0, 9.0] = [19.0, 19.0, 19.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 1.  1.  1.]
 [19. 19. 19.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
>>> # input_x[1] = [8.0, 8.0, 8.0] + [9.0, 9.0, 9.0] = [17.0, 17.0, 17.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 3.  3.  3.]
 [17. 17. 17.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
>>> # input_x[1] = [3.0, 3.0, 3.0] + [9.0, 9.0, 9.0] = [12.0, 12.0, 12.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 8.  8.  8.]
 [12. 12. 12.]]
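
For reference, the accumulation over duplicate indices can be reproduced with NumPy's np.add.at. The sketch below is illustrative only (it mirrors the second example above, not the operator implementation):

>>> import numpy as np
>>> ref = np.zeros((2, 3), dtype=np.float32)
>>> np_indices = np.array([[0, 1], [1, 1]])
>>> np_updates = np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                        [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]], dtype=np.float32)
>>> np.add.at(ref, np_indices, np_updates)  # duplicate index 1 accumulates 3 + 7 + 9
>>> print(ref)
[[ 1.  1.  1.]
 [19. 19. 19.]]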
class tinyms.primitives.ScatterDiv(*args, **kwargs)[source]

Updates the value of the input tensor through the divide operation.

Using given values to update tensor value through the div operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do the div operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor doing the div operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
 [  3.   3.   3.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0 63.0 63.0]
>>> # input_x[1] = [63.0 63.0 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
 [ 9.  9.  9.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [7.0, 7.0, 7.0] = [15.0, 15.0, 15.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[21. 21. 21.]
 [15. 15. 15.]]
class tinyms.primitives.ScatterMax(*args, **kwargs)[source]

Updates the value of the input tensor through the maximum operation.

Using given values to update tensor value through the max operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = max(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do max operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the maximum operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32),
...                     name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> scatter_max = ops.ScatterMax()
>>> output = scatter_max(input_x, indices, updates)
>>> print(output)
[[88. 88. 88.]
 [88. 88. 88.]]
class tinyms.primitives.ScatterMin(*args, **kwargs)[source]

Updates the value of the input tensor through the minimum operation.

Using given values to update tensor value through the min operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = min(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do min operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor doing the min operation with input_x, the data type is same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 0.0]]), mindspore.float32),
...                     name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_min = ops.ScatterMin()
>>> output = scatter_min(input_x, indices, update)
>>> print(output)
[[0. 1. 1.]
 [0. 0. 0.]]
class tinyms.primitives.ScatterMul(*args, **kwargs)[source]

Updates the value of the input tensor through the multiply operation.

Using given values to update tensor value through the mul operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{*}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do the mul operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor doing the mul operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
 [4. 4. 4.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [7.0, 7.0, 7.0] = [42.0, 42.0, 42.0]
>>> # input_x[1] = [42.0, 42.0, 42.0] * [9.0, 9.0, 9.0] = [378.0, 378.0, 378.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[  1.   1.   1.]
 [378. 378. 378.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [1.0, 1.0, 1.0] = [2.0, 2.0, 2.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [7.0, 7.0, 7.0] = [14.0, 14.0, 14.0]
>>> # input_x[1] = [14.0, 14.0, 14.0] * [9.0, 9.0, 9.0] = [126.0, 126.0, 126.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[  3.   3.   3.]
 [126. 126. 126.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [7.0, 7.0, 7.0] = [7.0, 7.0, 7.0]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [9.0, 9.0, 9.0] = [54.0, 54.0, 54.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 7.  7.  7.]
 [54. 54. 54.]]
class tinyms.primitives.ScatterNd(*args, **kwargs)[source]

Scatters a tensor into a new tensor depending on the specified indices.

Creates an empty tensor with the given shape, and set values by scattering the update tensor depending on indices.

The empty tensor has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of the empty tensor.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, shape_N, ..., shape_{P-1})\).

The following figure shows the calculation process of inserting two slices into the first dimension of a rank-3 output tensor with two matrices of new values:

(Figure: ScatterNd.png - two update slices scattered into the first dimension of the new tensor.)
Inputs:
  • indices (Tensor) - The index of scattering in the new tensor with int32 or int64 data type. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The source Tensor to be scattered. It has shape indices_shape[:-1] + shape[indices_shape[-1]:].

  • shape (tuple[int]) - Defines the shape of the output tensor, and has the same data type as indices. The shape of shape is \((x_1, x_2, ..., x_R)\), and its length is greater than or equal to 2. In other words, the shape of shape is at least \((x_1, x_2)\). The value of every element in shape must be greater than or equal to 1, i.e. \(x_1 \geq 1\), \(x_2 \geq 1\).

Outputs:

Tensor, the new tensor, has the same type as updates and the same shape as shape.

Raises
  • TypeError – If shape is not a tuple.

  • ValueError – If any element of shape is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScatterNd()
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]]]), mindspore.float32)
>>> shape = (4, 4, 4)
>>> output = op(indices, updates, shape)
>>> print(output)
[[[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]
 [[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = op(indices, updates, shape)
>>> # In order to facilitate understanding, explain the operator pseudo-operation process step by step:
>>> # Step 1: Generate an empty Tensor of the specified shape according to the shape
>>> # [
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> # ]
>>> # Step 2: Modify the data at the specified location according to the indicators
>>> # The 0th row of indices is [0, 1], and the 0th element of updates is 3.2,
>>> # which means the element of the empty tensor at row 0, column 1 is set to 3.2.
>>> # [
>>> #     [0. 3.2 0.]
>>> #     [0. 0.   0.]
>>> #     [0. 0.   0.]
>>> # ]
>>> # The 1st row of indices is [1, 1], and the 1st element of updates is 1.1,
>>> # which means the element of the empty tensor at row 1, column 1 is set to 1.1.
>>> # [
>>> #     [0. 3.2 0.]
>>> #     [0. 1.1  0.]
>>> #     [0. 0.   0.]
>>> # ]
>>> # The final result is as follows:
>>> print(output)
[[0. 3.2 0.]
 [0. 1.1 0.]
 [0. 0.  0.]]
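
For reference, when the indices are unique (as in the second example above) the scatter can be sketched in NumPy by assigning into a zero tensor. This illustrates the indexing relationship only, not the operator implementation:

>>> import numpy as np
>>> np_indices = np.array([[0, 1], [1, 1]])
>>> np_updates = np.array([3.2, 1.1])
>>> ref = np.zeros((3, 3))
>>> ref[tuple(np_indices.T)] = np_updates  # each row of indices addresses one element
>>> np.allclose(ref, [[0.0, 3.2, 0.0], [0.0, 1.1, 0.0], [0.0, 0.0, 0.0]])
True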
class tinyms.primitives.ScatterNdAdd(*args, **kwargs)[source]

Applies sparse addition to individual values or slices in a tensor.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do the add operation whose data type must be mindspore.int32. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The tensor doing the add operation with input_x, the data type is the same as input_x, the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape[:-1] + x_shape[indices_shape[-1]:].

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_add = ops.ScatterNdAdd()
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> scatter_nd_add = ops.ScatterNdAdd()
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]]
class tinyms.primitives.ScatterNdSub(*args, **kwargs)[source]

Applies sparse subtraction to individual values or slices in a tensor.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor, with int32 data type. The rank of indices must be at least 2 and indices_shape[-1] <= len(shape).

  • updates (Tensor) - The tensor to be updated to the input tensor, has the same type as input. The shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape[:-1] + x_shape[indices_shape[-1]:].

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_sub = ops.ScatterNdSub()
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[ 1. -6. -3.  4. -2.  6.  7. -1.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> scatter_nd_sub = ops.ScatterNdSub()
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[[[-1 -1 -1 -1]
  [-2 -2 -2 -2]
  [-3 -3 -3 -3]
  [-4 -4 -4 -4]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]
 [[-5 -5 -5 -5]
  [-6 -6 -6 -6]
  [-7 -7 -7 -7]
  [-8 -8 -8 -8]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]]
class tinyms.primitives.ScatterNdUpdate(*args, **kwargs)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor, with int32 data type.

  • updates (Tensor) - The tensor to be updated to the input tensor, has the same type as input. The shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.ScatterNdUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[1.   0.3   3.6]
 [0.4  2.2  -3.2]]
class tinyms.primitives.ScatterNonAliasingAdd(*args, **kwargs)[source]

Applies sparse addition to the input using individual values or slices.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Inputs:
  • input_x (Parameter) - The target parameter. The data type must be float16, float32 or int32.

  • indices (Tensor) - The index to perform the addition operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the addition operation with input_x, the data type is the same as input_x, the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Outputs:

Parameter, the updated input_x.

Raises
  • TypeError – If dtype of indices is not int32.

  • TypeError – If dtype of input_x is not one of float16, float32, int32.

  • ValueError – If the shape of updates is not equal to indices_shape[:-1] + x_shape[indices_shape[-1]:].

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_non_aliasing_add = ops.ScatterNonAliasingAdd()
>>> output = scatter_non_aliasing_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
class tinyms.primitives.ScatterSub(*args, **kwargs)[source]

Updates the value of the input tensor through the subtraction operation.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{-}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do the subtraction operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor doing the subtraction operation with input_x, the data type is the same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

Supported Platforms:

Ascend CPU GPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[-1. -1. -1.]
 [-1. -1. -1.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [-3.0, -3.0, -3.0] - [7.0, 7.0, 7.0] = [-10.0, -10.0, -10.0]
>>> # input_x[1] = [-10.0, -10.0, -10.0] - [9.0, 9.0, 9.0] = [-19.0, -19.0, -19.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -1.  -1.  -1.]
 [-19. -19. -19.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
>>> # input_x[1] = [-8.0, -8.0, -8.0] - [9.0, 9.0, 9.0] = [-17.0, -17.0, -17.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -3.  -3.  -3.]
 [-17. -17. -17.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
>>> # input_x[1] = [-3.0, -3.0, -3.0] - [9.0, 9.0, 9.0] = [-12.0, -12.0, -12.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -8.  -8.  -8.]
 [-12. -12. -12.]]
class tinyms.primitives.ScatterUpdate(*args, **kwargs)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor. With int32 data type. If there are duplicates in indices, the order for updating is undefined.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape = indices.shape + input_x.shape[1:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
>>> updates = Tensor(np_updates, mindspore.float32)
>>> op = ops.ScatterUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[2. 1.2  1.]
 [3. 1.2  1.]]
class tinyms.primitives.SeLU(*args, **kwargs)[source]

Computes SeLU (scaled exponential Linear Unit) of input tensors element-wise.

The activation function is defined as:

\[E_{i} = scale * \begin{cases} x_{i}, &\text{if } x_{i} \geq 0; \cr \text{alpha} * (\exp(x_i) - 1), &\text{otherwise.} \end{cases}\]

where \(\alpha\) and \(scale\) are pre-defined constants (\(\alpha = 1.67326324\) and \(scale = 1.05070098\)).

See more details in Self-Normalizing Neural Networks.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Supported Platforms:

Ascend

Raises

TypeError – If dtype of input_x is neither float16 nor float32.

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> selu = ops.SeLU()
>>> output = selu(input_x)
>>> print(output)
[[-1.1113307 4.202804 -1.7575096]
[ 2.101402 -1.7462534 9.456309 ]]
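
The example output can be checked against the formula with the stated constants. A NumPy sketch (illustrative only):

>>> import numpy as np
>>> alpha, scale = 1.67326324, 1.05070098
>>> x = np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]])
>>> ref = scale * np.where(x >= 0, x, alpha * (np.exp(x) - 1))
>>> np.allclose(ref, [[-1.1113307, 4.202804, -1.7575096],
...                   [2.101402, -1.7462534, 9.456309]], atol=1e-5)
True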
class tinyms.primitives.SearchSorted(*args, **kwargs)[source]

Finds the indices from the innermost dimension of sequence such that, if the corresponding values in values were inserted before those indices, the sorted order of the innermost dimension of sequence would be preserved.

Parameters
  • out_int32 (bool) – Output datatype. Optional. If True, the output datatype will be int32; if False, the output datatype will be int64. Default is False.

  • right (bool) – Search Strategy. Optional. If True, return the last suitable index found. If False, return the first such index. Default is False.

Inputs:
  • sequence (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_{R-1}, x_R)\) or \((x_1)\).

    It must contain a monotonically increasing sequence on the innermost dimension.

  • values (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_{R-1}, x_S)\).

Outputs:

Tensor containing the indices from the innermost dimension of the input sequence such that, if the corresponding values in values were inserted, the order of sequence would be preserved. The shape is \((x_1, x_2, ..., x_{R-1}, x_S)\), the same as the shape of values; the datatype is int32 if out_int32 is True, otherwise int64.

Raises

ValueError – If sequence and values do not have proper shapes.

Supported Platforms:

CPU

Examples

>>> sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
>>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
>>> output = ops.SearchSorted()(sequence, values)
>>> print(output)
[[2 4 5]
 [1 2 4]]
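
The right=True strategy returns the last suitable index instead of the first, which corresponds to numpy.searchsorted with side='right'. A NumPy sketch on the same data (illustrative only, not the operator itself):

>>> import numpy as np
>>> seq = np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]])
>>> vals = np.array([[3, 6, 9], [3, 6, 9]])
>>> right = np.stack([np.searchsorted(s, v, side='right') for s, v in zip(seq, vals)])
>>> print(right)  # insertion points after any equal entries
[[3 4 5]
 [1 3 4]]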
class tinyms.primitives.Select(*args, **kwargs)[source]

Returns the selected elements, either from input \(x\) or input \(y\), depending on the condition.

Both \(x\) and \(y\) must be provided; it is invalid for either of them to be None.

The condition tensor acts as a mask that determines, element by element, whether the corresponding element / row in the output is taken from \(x\) (if true) or from \(y\) (if false).

It can be defined as:

\[\begin{split}out_i = \begin{cases} x_i, & \text{if } condition_i \\ y_i, & \text{otherwise} \end{cases}\end{split}\]

If condition is a vector and \(x\) and \(y\) are higher-dimensional matrices, the condition chooses which rows (outermost dimension) are copied from \(x\) or \(y\). If condition has the same shape as \(x\) and \(y\), the choice is made element by element.

Inputs:
  • input_cond (Tensor[bool]) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The condition tensor, decides which element is chosen.

  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The first input tensor.

  • input_y (Tensor) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The second input tensor.

Outputs:

Tensor, has the same shape as input_x. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

Raises
  • TypeError – If input_x or input_y is not a Tensor.

  • ValueError – If shape of input_x is not equal to shape of input_y or shape of input_cond.

Supported Platforms:

Ascend GPU CPU

Examples

>>> select = ops.Select()
>>> input_cond = Tensor([True, False])
>>> input_x = Tensor([2,3], mindspore.float32)
>>> input_y = Tensor([1,2], mindspore.float32)
>>> output = select(input_cond, input_x, input_y)
>>> print(output)
[2. 2.]
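
The element-wise selection corresponds to numpy.where on the same data. A NumPy sketch (illustrative only):

>>> import numpy as np
>>> cond = np.array([True, False])
>>> x = np.array([2.0, 3.0], dtype=np.float32)
>>> y = np.array([1.0, 2.0], dtype=np.float32)
>>> print(np.where(cond, x, y))
[2. 2.]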
class tinyms.primitives.Shape(*args, **kwargs)[source]

Returns the shape of the input tensor, i.e. its static shape.

static shape: A shape that can be obtained without running the graph. It is an inherent property of the tensor and may be unknown. Missing static shape information can be filled in by setting it manually. The static shape is not affected by whatever inputs are fed to the graph.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[int], the output tuple is constructed by multiple integers, \((x_1, x_2, ..., x_R)\).

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.Shape()
>>> output = shape(input_x)
>>> print(output)
(3, 2, 1)
class tinyms.primitives.Sigmoid(*args, **kwargs)[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If dtype of input_x is neither float16 nor float32.

  • TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> sigmoid = ops.Sigmoid()
>>> output = sigmoid(input_x)
>>> print(output)
[0.7310586  0.880797   0.95257413 0.98201376 0.9933072 ]
class tinyms.primitives.SigmoidCrossEntropyWithLogits(*args, **kwargs)[source]

Uses the given logits to compute sigmoid cross entropy between the logits and the label.

Measures the distribution error in discrete classification tasks where each class is independent and not mutually exclusive using cross entropy loss.

Sets input logits as \(X\), input label as \(Y\), output as \(loss\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ loss_{ij} = -[Y_{ij} * ln(p_{ij}) + (1 - Y_{ij})ln(1 - p_{ij})] \end{array}\end{split}\]
Inputs:
  • logits (Tensor) - Input logits. Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions.

  • label (Tensor) - Ground truth label. With the same shape and type as logits.

Outputs:

Tensor, with the same shape and type as input logits.

Raises

TypeError – If logits or label is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> labels = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> sigmoid = ops.SigmoidCrossEntropyWithLogits()
>>> output = sigmoid(logits, labels)
>>> print(output)
[[ 0.6111007   0.5032824   0.26318604]
 [ 0.58439666  0.5530153  -0.4368139 ]]
class tinyms.primitives.Sign(*args, **kwargs)[source]

Performs sign on the tensor element-wise.

\[sign(x) = \begin{cases} -1, &if\ x < 0 \cr 0, &if\ x = 0 \cr 1, &if\ x > 0\end{cases}\]
Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU GPU

Examples

>>> x = Tensor(np.array([[2.0, 0.0, -1.0]]), mindspore.float32)
>>> sign = ops.Sign()
>>> output = sign(x)
>>> print(output)
[[ 1.  0. -1.]]
class tinyms.primitives.Sin(*args, **kwargs)[source]

Computes sine of the input element-wise.

\[out_i = sin(x_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sin = ops.Sin()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sin(x)
>>> print(output)
[0.5810352  0.27635565 0.41687083 0.5810352 ]
class tinyms.primitives.Sinh(*args, **kwargs)[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(input_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape as x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> sinh = ops.Sinh()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sinh(x)
>>> print(output)
[0.6604918  0.28367308 0.44337422 0.6604918 ]
class tinyms.primitives.Size(*args, **kwargs)[source]

Returns the size of a tensor.

Returns an int scalar representing the size of the input, that is, the total number of elements in the tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Outputs:

int, a scalar representing the number of elements in input_x, \(size = x_1 * x_2 * ... * x_R\).

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> size = ops.Size()
>>> output = size(input_x)
>>> print(output)
4
class tinyms.primitives.Slice(*args, **kwargs)[source]

Slices a tensor in the specified shape.

Slices the tensor input_x with the shape given by size, starting at the location specified by begin. The slice begin represents the offset in each dimension of input_x, and the slice size represents the size of the output tensor.

Note that begin is zero-based and size is one-based.

If size[i] is -1, all remaining elements in dimension i are included in the slice. This is equivalent to setting \(size[i] = input_x.shape(i) - begin[i]\).

Inputs:
  • input_x (Tensor): The target tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • begin (Union[tuple, list]): The beginning of the slice. Only constant value(>=0) is allowed.

  • size (Union[tuple, list]): The size of the slice. Only constant value is allowed.

Outputs:

Tensor, the shape is the same as size, and the data type is the same as that of input_x.

Raises

TypeError – If begin or size is neither tuple nor list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
...                         [[3, 3, 3], [4, 4, 4]],
...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> slice_op = ops.Slice()
>>> output = slice_op(data, (1, 0, 0), (1, 1, 3))
>>> print(output)
[[[3 3 3]]]
>>> output = slice_op(data, (1, 0, 0), (1, 1, 2))
>>> print(output)
[[[3 3]]]
>>> output = slice_op(data, (1, 0, 0), (1, 1, 1))
>>> print(output)
[[[3]]]
>>> output = slice_op(data, (1, 1, 0), (1, 1, 3))
>>> print(output)
[[[4 4 4]]]
>>> output = slice_op(data, (1, 0, 1), (1, 1, 2))
>>> print(output)
[[[3 3]]]
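
For reference, each call above corresponds to ordinary slicing with stop = begin + size in every dimension. A NumPy sketch (illustrative only):

>>> import numpy as np
>>> np_data = np.array([[[1, 1, 1], [2, 2, 2]],
...                     [[3, 3, 3], [4, 4, 4]],
...                     [[5, 5, 5], [6, 6, 6]]], dtype=np.int32)
>>> print(np_data[1:2, 0:1, 0:3])  # begin=(1, 0, 0), size=(1, 1, 3)
[[[3 3 3]]]
>>> print(np_data[1:2, 1:2, 0:3])  # begin=(1, 1, 0), size=(1, 1, 3)
[[[4 4 4]]]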
class tinyms.primitives.SmoothL1Loss(*args, **kwargs)[source]

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a Loss similar to MSELoss but less sensitive to outliers as described in the Fast R-CNN by Ross Girshick.

Given two input \(x,\ y\) of length \(N\), the unreduced SmoothL1Loss can be described as follows:

\[\begin{split}L_{i} = \begin{cases} \frac{0.5 (x_i - y_i)^{2}}{\text{beta}}, & \text{if } |x_i - y_i| < \text{beta} \\ |x_i - y_i| - 0.5 \text{beta}, & \text{otherwise. } \end{cases}\end{split}\]

Here \(\text{beta}\) controls the point where the loss function changes from quadratic to linear. Its default value is 1.0. \(N\) is the batch size. This function returns an unreduced loss Tensor.

Warning

This operator does not perform the “reduce” operation on the loss value. Call other reduce operators to perform “reduce” operation on the loss if required.

Parameters

beta (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.

Inputs:
  • logits (Tensor) - Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions. Data type must be float16 or float32.

  • labels (Tensor) - Ground truth data, tensor of shape \((N, *)\), same shape and dtype as the logits.

Outputs:

Tensor, loss float tensor, same shape and dtype as the logits.

Raises
  • TypeError – If beta is not a float.

  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • ValueError – If beta is less than or equal to 0.

  • ValueError – If shape of logits is not the same as labels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> loss = ops.SmoothL1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[0.  0.  0.5]
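
In the example, the first two differences are 0 (quadratic branch), while |3 - 2| = 1 is not less than beta = 1.0, so the linear branch gives 1 - 0.5 * 1 = 0.5. A NumPy check of the piecewise formula (illustrative only):

>>> import numpy as np
>>> beta = 1.0
>>> x = np.array([1.0, 2.0, 3.0])
>>> y = np.array([1.0, 2.0, 2.0])
>>> d = np.abs(x - y)
>>> ref = np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)
>>> print(ref)
[0.  0.  0.5]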
class tinyms.primitives.SoftMarginLoss(*args, **kwargs)[source]

SoftMarginLoss operation.

Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).

\[\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}\]
Parameters

reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: “mean”.

Inputs:
  • logits (Tensor) - Predict data. Data type must be float16 or float32.

  • labels (Tensor) - Ground truth data, with the same type and shape as logits.

Outputs:

Tensor or Scalar, if reduction is “none”, its shape is the same as logits. Otherwise, a scalar value will be returned.

Raises
  • TypeError – If logits or labels is not a Tensor.

  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • ValueError – If shape of logits is not the same as labels.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

Ascend

Examples

>>> loss = ops.SoftMarginLoss()
>>> logits = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> labels = Tensor(np.array([[-1, 1], [1, -1]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.6764238
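
With the default reduction "mean", the scalar above is the mean of \(\log(1 + \exp(-y_i x_i))\) over all four elements. A NumPy check (illustrative only):

>>> import numpy as np
>>> x = np.array([[0.3, 0.7], [0.5, 0.5]])
>>> y = np.array([[-1.0, 1.0], [1.0, -1.0]])
>>> ref = np.mean(np.log1p(np.exp(-y * x)))
>>> np.allclose(ref, 0.6764238, atol=1e-6)
True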
class tinyms.primitives.SoftShrink(*args, **kwargs)[source]

Applies the soft shrinkage function elementwise.

\[\begin{split}\text{SoftShrink}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters

lambd (float) – the \(\lambda\) value for the SoftShrink formulation; it must be no less than zero. Default: 0.5.

Inputs:
  • input_x (Tensor) - The input of SoftShrink with data type of float16 or float32. Any number of additional dimensions.

Outputs:

Tensor, has the same shape and data type as input_x.

Raises
  • TypeError – If lambd is not a float.

  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither float16 nor float32.

  • ValueError – If lambd is less than 0.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[ 0.5297,  0.7871,  1.1754], [ 0.7836,  0.6218, -1.1542]]), mindspore.float16)
>>> softshrink = ops.SoftShrink()
>>> output = softshrink(input_x)
>>> print(output)
[[ 0.02979  0.287    0.676  ]
 [ 0.2837   0.1216  -0.6543 ]]
class tinyms.primitives.Softmax(*args, **kwargs)[source]

Softmax operation.

Applies the Softmax operation to the input tensor on the specified axis. Suppose a slice along the given axis is \(x\); then for each element \(x_i\), the Softmax function is shown as follows:

\[\text{output}(x_i) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)},\]

where \(N\) is the length of the tensor.

Parameters

axis (Union[int, tuple]) – The axis to perform the Softmax operation. Default: -1.

Inputs:
  • logits (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the logits.

Raises
  • TypeError – If axis is neither an int nor a tuple.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in range [-len(logits.shape), len(logits.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softmax = ops.Softmax()
>>> output = softmax(logits)
>>> print(output)
[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
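
A NumPy check of the formula for the example above (illustrative only):

>>> import numpy as np
>>> z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> ref = np.exp(z) / np.sum(np.exp(z))
>>> np.allclose(ref, [0.01165623, 0.03168492, 0.08612854, 0.23412167, 0.6364086], atol=1e-6)
True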
class tinyms.primitives.SoftmaxCrossEntropyWithLogits(*args, **kwargs)[source]

Gets the softmax cross-entropy value between logits and labels with one-hot encoding.

The updating formulas of SoftmaxCrossEntropyWithLogits algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)} \\ loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})} \end{array}\end{split}\]

where \(X\) represents logits. \(Y\) represents label. \(loss\) represents output.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N, C)\), has the same data type with logits.

Outputs:

Tuple of 2 tensors(loss, dlogits), the loss shape is \((N,)\), and the dlogits with the same shape as logits.

Raises
  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • TypeError – If logits or labels is not a Tensor.

  • ValueError – If shape of logits is not the same as labels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
>>> softmax_cross = ops.SoftmaxCrossEntropyWithLogits()
>>> loss, dlogits = softmax_cross(logits, labels)
>>> print(loss)
[0.5899297  0.52374405]
>>> print(dlogits)
[[ 0.02760027  0.20393994  0.01015357  0.20393994 -0.44563377]
 [ 0.08015892  0.02948882  0.08015892 -0.4077012   0.21789455]]
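
In the formulas above, the per-sample loss is \(-\sum_j Y_{ij} \ln(p_{ij})\), and the dlogits printed in the example coincide with \(p - Y\). A NumPy check (illustrative only, not the operator implementation):

>>> import numpy as np
>>> X = np.array([[2., 4., 1., 4., 5.], [2., 1., 2., 4., 3.]])
>>> Y = np.array([[0., 0., 0., 0., 1.], [0., 0., 0., 1., 0.]])
>>> p = np.exp(X) / np.sum(np.exp(X), axis=1, keepdims=True)
>>> np.allclose(-np.sum(Y * np.log(p), axis=1), [0.5899297, 0.52374405], atol=1e-6)
True
>>> np.allclose(p - Y, [[0.02760027, 0.20393994, 0.01015357, 0.20393994, -0.44563377],
...                     [0.08015892, 0.02948882, 0.08015892, -0.4077012, 0.21789455]], atol=1e-6)
True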
class tinyms.primitives.Softplus(*args, **kwargs)[source]

Softplus activation function.

Softplus is a smooth approximation to the ReLU function. It can be used to constrain the output of a machine to always be positive. The function is shown as follows:

\[\text{output} = \log(1 + \exp(\text{x})),\]
Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If input_x is not a Tensor.

  • TypeError – If the dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softplus = ops.Softplus()
>>> output = softplus(input_x)
>>> print(output)
[1.3132615 2.126928  3.0485873 4.01815   5.0067153]
class tinyms.primitives.Softsign(*args, **kwargs)[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{ 1 + |x|}\]
Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> softsign = ops.Softsign()
>>> output = softsign(input_x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
class tinyms.primitives.Sort(*args, **kwargs)[source]

Sorts the elements of the input tensor along a given dimension in ascending order by value.

Parameters
  • axis (int) – The dimension to sort along. Default: -1.

  • descending (bool) – Controls the sorting order. If descending is True then the elements are sorted in descending order by value. Default: False.

Inputs:
  • x (Tensor) - The input to sort, with float16 or float32 data type. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:
  • y1 (Tensor) - A tensor whose values are the sorted values, with the same shape and data type as input.

  • y2 (Tensor) - The indices of the elements in the original input tensor. Data type is int32.

Raises
  • TypeError – If axis is not an int.

  • TypeError – If descending is not a bool.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> sort = ops.Sort()
>>> output = sort(x)
>>> print(output)
(Tensor(shape=[3, 3], dtype=Float16, value=
[[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
 [ 3.0000e+00,  5.0000e+00,  9.0000e+00],
 [ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
[[2, 1, 0],
 [2, 0, 1],
 [0, 1, 2]]))
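
A further sketch showing the descending parameter (assuming the usual numpy/mindspore imports used by these examples):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3]]), mindspore.float16)
>>> sort_desc = ops.Sort(descending=True)
>>> y, idx = sort_desc(x)
>>> print(y)
[[8. 2. 1.]
 [9. 5. 3.]]
>>> print(idx)
[[0 1 2]
 [1 0 2]]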
class tinyms.primitives.SpaceToBatch(*args, **kwargs)[source]

Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide spatial dimensions (H, W) into blocks with block_size, the output tensor’s H and W dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and the square of block_size. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters
  • block_size (int) – The block size for dividing blocks, with a value greater than or equal to 2.

  • paddings (Union[tuple, list]) – The padding values for the H and W dimensions, containing two sublists. Each sublist contains two integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_size.

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor. The data type is Number.

Outputs:

Tensor, the output tensor with the same data type as input. Assume input shape is \((n, c, h, w)\) with \(block\_size\) and \(paddings\). The shape of the output tensor will be \((n', c', h', w')\), where

\(n' = n*(block\_size*block\_size)\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_size\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_size\)

Raises
Supported Platforms:

Ascend GPU

Examples

>>> block_size = 2
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch = ops.SpaceToBatch(block_size, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
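
The output shape formulas above can also be checked with nonzero paddings; a minimal sketch (assuming the usual numpy/mindspore imports):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # n' = 1*(2*2) = 4, h' = (4+0+2)//2 = 3, w' = (4+0+2)//2 = 3
>>> space_to_batch = ops.SpaceToBatch(2, [[0, 2], [0, 2]])
>>> input_x = Tensor(np.ones((1, 1, 4, 4)), mindspore.float32)
>>> print(space_to_batch(input_x).shape)
(4, 1, 3, 3)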
class tinyms.primitives.SpaceToBatchND(*args, **kwargs)[source]

Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide spatial dimensions (H, W) into blocks with block_shape, the output tensor’s H and W dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and the product of block_shape. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters
  • block_shape (Union[list(int), tuple(int), int]) – The block shape for dividing blocks, with all values greater than 1. If block_shape is a tuple or list, its length is M, corresponding to the number of spatial dimensions. If block_shape is an int, the block size of all M dimensions is the same and equal to block_shape. M must be 2.

  • paddings (Union[tuple, list]) – The padding values for the H and W dimensions, containing two sublists. Each sublist contains two integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_shape[i].

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor.

Outputs:

Tensor, the output tensor with the same data type as the input. Assume the input shape is \((n, c, h, w)\) with \(block\_shape\) and \(paddings\). The shape of the output tensor will be \((n', c', h', w')\), where

\(n' = n*(block\_shape[0]*block\_shape[1])\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_shape[0]\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_shape[1]\)

Raises
  • TypeError – If block_shape is not one of list, tuple, int.

  • TypeError – If paddings is neither list nor tuple.

  • ValueError – If length of shape of block_shape is not equal to 1.

  • ValueError – If length of block_shape or paddings is not equal to 2.

Supported Platforms:

Ascend

Examples

>>> block_shape = [2, 2]
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch_nd = ops.SpaceToBatchND(block_shape, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch_nd(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
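
Similarly, a minimal sketch checking the shape formulas above with a list block_shape and nonzero padding (assuming the usual numpy/mindspore imports):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # n' = 2*(2*2) = 8, h' = (4+0+0)//2 = 2, w' = (2+2+0)//2 = 2
>>> space_to_batch_nd = ops.SpaceToBatchND([2, 2], [[0, 0], [2, 0]])
>>> input_x = Tensor(np.ones((2, 3, 4, 2)), mindspore.float32)
>>> print(space_to_batch_nd(input_x).shape)
(8, 3, 2, 2)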
class tinyms.primitives.SpaceToDepth(*args, **kwargs)[source]

Rearranges blocks of spatial data into depth.

The output tensor's height dimension is \(height / block\_size\).

The output tensor's width dimension is \(width / block\_size\).

The depth of output tensor is \(block\_size * block\_size * input\_depth\).

The input tensor’s height and width must be divisible by block_size. The data format is “NCHW”.

Parameters

block_size (int) – The block size used to divide spatial data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. The data type is Number. It must be a 4-D tensor.

Outputs:
Tensor, the same data type as x. It must be a 4-D tensor of shape

\((N, (C_{in} * \text{block_size} * \text{block_size}), H_{in} / \text{block_size}, W_{in} / \text{block_size})\).

Raises
  • TypeError – If block_size is not an int.

  • ValueError – If block_size is less than 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(1,3,2,2), mindspore.float32)
>>> block_size = 2
>>> space_to_depth = ops.SpaceToDepth(block_size)
>>> output = space_to_depth(x)
>>> print(output.shape)
(1, 12, 1, 1)
class tinyms.primitives.SparseApplyAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * (1 / sqrt(accum)) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • lr (float) – Learning rate.

  • update_slots (bool) – If True, accum will be updated. Default: True.

  • use_locking (bool) – If true, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • grad (Tensor) - Gradient, with the same data type as var, and grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises
  • TypeError – If lr is not a float.

  • TypeError – If neither update_slots nor use_locking is a bool.

  • TypeError – If dtype of var, accum or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is not int32.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adagrad = ops.SparseApplyAdagrad(lr=1e-8)
...         self.var = Parameter(Tensor(np.array([[[0.2]]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[[0.1]]]).astype(np.float32)), name="accum")
...     def construct(self, grad, indices):
...         out = self.sparse_apply_adagrad(self.var, self.accum, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[[0.7]]]).astype(np.float32))
>>> indices = Tensor([0], mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1, 1], dtype=Float32, value=
[[[1.99999988e-01]]]), Tensor(shape=[1, 1, 1], dtype=Float32, value=
[[[1.00000001e-01]]]))
class tinyms.primitives.SparseApplyAdagradV2(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme, one more epsilon attribute than SparseApplyAdagrad.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon} \end{array}\end{split}\]

where \(\epsilon\) represents epsilon.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • lr (float) – Learning rate.

  • epsilon (float) – A small value added for numerical stability.

  • use_locking (bool) – If True, the var and accum tensors will be protected from being updated. Default: False.

  • update_slots (bool) – If True, accum will also be updated; otherwise it is left unchanged. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • grad (Tensor) - Gradient, with the same data type as var, and grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises
  • TypeError – If neither lr nor epsilon is a float.

  • TypeError – If neither update_slots nor use_locking is a bool.

  • TypeError – If dtype of var, accum or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is not int32.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adagrad_v2 = ops.SparseApplyAdagradV2(lr=1e-8, epsilon=1e-6)
...         self.var = Parameter(Tensor(np.array([[0.2]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.1]]).astype(np.float32)), name="accum")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_adagrad_v2(self.var, self.accum, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.7]]).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[ 2.00000003e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[ 1.00000001e-01]]))
class tinyms.primitives.SparseApplyFtrl(*args, **kwargs)[source]

Updates relevant entries according to the FTRL-proximal scheme.

For more details, please refer to nn.FTRL.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – If True, use locks for the update operation. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same data type and shape as var.

  • linear (Parameter) - The linear coefficient to be updated, must be the same data type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, with grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. The type must be int32 or int64 and indices.shape[0] = grad.shape[0].

Outputs:
  • var (Tensor) - Tensor, has the same shape and data type as var.

  • accum (Tensor) - Tensor, has the same shape and data type as accum.

  • linear (Tensor) - Tensor, has the same shape and data type as linear.

Raises
  • TypeError – If lr, l1, l2 or lr_power is not a float.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, linear or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

Supported Platforms:

Ascend GPU

Examples

>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.SparseApplyFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.array([[0.2]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.1]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.6]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.array([[0.7]]).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[2.00000003e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[1.00000001e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[6.00000024e-01]]))
class tinyms.primitives.SparseApplyFtrlV2(*args, **kwargs)[source]

Updates relevant entries according to the FTRL-proximal scheme. This class has one more attribute, named l2_shrinkage, than class SparseApplyFtrl.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • l2_shrinkage (float) – L2 shrinkage regularization.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – If True, the var and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same data type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be same data type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, with grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices in the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - Tensor, has the same shape and data type as var.

  • accum (Tensor) - Tensor, has the same shape and data type as accum.

  • linear (Tensor) - Tensor, has the same shape and data type as linear.

Raises
  • TypeError – If lr, l1, l2, l2_shrinkage or lr_power is not a float.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, linear or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is not int32.

Supported Platforms:

Ascend

Examples

>>> class SparseApplyFtrlV2Net(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlV2Net, self).__init__()
...         self.sparse_apply_ftrl_v2 = ops.SparseApplyFtrlV2(lr=0.01, l1=0.0, l2=0.0,
...                                                         l2_shrinkage=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.array([[0.2, 0.3]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.5, 0.9]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.7, 0.5]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl_v2(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlV2Net()
>>> grad = Tensor(np.array([[0.8, 0.5]]).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 2], dtype=Float32, value=
[[ 2.00000003e-01,  3.00000012e-01]]), Tensor(shape=[1, 2], dtype=Float32, value=
[[ 5.00000000e-01,  8.99999976e-01]]), Tensor(shape=[1, 2], dtype=Float32, value=
[[ 6.99999988e-01,  5.00000000e-01]]))
class tinyms.primitives.SparseApplyProximalAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the proximal adagrad algorithm. Compared with ApplyProximalAdagrad, an additional index tensor is input.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters

use_locking (bool) – If true, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Variable tensor to be updated, has the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - A tensor of the same type as var, with grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, lr, l1, l2 or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.SparseApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.array([[4.1, 7.2], [1.1, 3.0]], np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0, 0], [0, 0]], np.float32)), name="accum")
...         self.lr = 1.0
...         self.l1 = 1.0
...         self.l2 = 0.0
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[1, 1], [1, 1]], np.float32))
>>> indices = Tensor(np.array([0, 1], np.int32))
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.09999990e+00,  5.19999981e+00],
 [ 0.00000000e+00,  1.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
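
A minimal NumPy cross-check of the update formula above for the example values (an illustrative sketch, not part of the operator API):

>>> import numpy as np
>>> var = np.array([[4.1, 7.2], [1.1, 3.0]], np.float32)
>>> accum = np.zeros_like(var)
>>> grad = np.ones_like(var)
>>> lr, l1, l2 = 1.0, 1.0, 0.0
>>> accum += grad * grad
>>> prox_v = var - lr * grad / np.sqrt(accum)
>>> var = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0)
>>> print(np.round(var, 1))
[[2.1 5.2]
 [0.  1. ]]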
class tinyms.primitives.SparseApplyRMSProp(*args, **kwargs)[source]

Update relevant entries according to the rmsprop algorithm.

\[\begin{split}\begin{array}{ll} \\ ms = rho * ms_{t-1} + (1 - rho) * grad * grad \\ mom = momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) \\ var = var - mom \end{array}\end{split}\]

Inputs of var, ms, mom and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters
  • rho (float) – Decay rate. The value should be between 0 and 1, otherwise the behavior is undefined.

  • momentum (float) – Momentum. The value should be greater or equal to 0, otherwise the behavior is undefined.

  • epsilon (float) – A small value added for numerical stability. The value should be greater than 0, otherwise the behavior is undefined.

  • use_locking (bool) – If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • ms (Parameter) - The tensor ms to be updated, must have the same shape and dtype as var.

  • mom (Parameter) - The tensor mom to be updated, must have the same shape and dtype as var.

  • lr ([Number, Tensor]) - Learning rate. Must be a scalar. With float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • indices (Tensor) - A tensor of indices in the first dimension of var, ms and mom. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64 and indices.shape[0] = var.shape[0].

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • ms (Tensor) - The same shape and data type as ms.

  • mom (Tensor) - The same shape and data type as mom.

Raises
  • TypeError – If var, ms or mom is not a Parameter.

  • TypeError – If grad or indices is not a Tensor.

  • TypeError – If dtype of var, ms, mom, lr, grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • TypeError – If lr is neither a Number nor a Tensor.

  • TypeError – If use_locking is not a bool.

  • TypeError – If epsilon, rho or momentum is not a float.

  • ValueError – If the shape of ms, mom or grad is not the same as that of var.

  • ValueError – If the shape size of lr is not 0.

  • ValueError – If the shape of indices is not the same as the shape of the first dimension of var.

  • ValueError – If epsilon is less than or equal to 0.

  • ValueError – If momentum is less than 0.

  • ValueError – If rho is less than 0 or greater than 1.

  • ValueError – If dimension of var is less than 1.

Supported Platforms:

Ascend

Examples

>>> class SparseApplyRMSPropNet(nn.Cell):
...     def __init__(self, rho, momentum, epsilon, use_locking=False):
...         super(SparseApplyRMSPropNet, self).__init__()
...         self.sparse_apply_r_m_s_prop = P.SparseApplyRMSProp(rho, momentum, epsilon, use_locking)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.3], [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.ms = Parameter(Tensor(np.array([[0.2, 0.4], [0.1, 0.3]]).astype(np.float32)), name="ms")
...         self.mom = Parameter(Tensor(np.array([[0.3, 0.1], [0.3, 0.6]]).astype(np.float32)), name="mom")
...     def construct(self, lr, grad, indices):
...         out = self.sparse_apply_r_m_s_prop(self.var, self.ms, self.mom, lr, grad, indices)
...         return out
...
>>> rho = 0.2
>>> momentum = 0.01
>>> epsilon = 1e-6
>>> net = SparseApplyRMSPropNet(rho, momentum, epsilon)
>>> lr = 0.01
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1], dtype=np.int32))
>>> out = net(lr, grad, indices)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.88035822e-01,  2.88811117e-01],
 [ 9.10239667e-02,  4.83422279e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.12000003e-01,  4.72000003e-01],
 [ 2.80000009e-02,  5.72000027e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.19641740e-02,  1.11888833e-02],
 [ 8.97603668e-03,  1.65777095e-02]]))
class tinyms.primitives.SparseGatherV2(*args, **kwargs)[source]

Returns a slice of input tensor based on the specified indices and axis.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor, must be in the range [0, input_params.shape[axis]).

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Raises

TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> out = ops.SparseGatherV2()(input_params, input_indices, axis)
>>> print(out)
[[2. 7.]
 [4. 54.]
 [2. 55.]]
class tinyms.primitives.SparseSoftmaxCrossEntropyWithLogits(*args, **kwargs)[source]

Computes the softmax cross-entropy value between logits and sparse encoding labels.

Sets input logits as X, input label as Y, output as loss. Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)} \\ loss_{ij} = \begin{cases} -ln(p_{ij}), &j = y_i \cr -ln(1 - p_{ij}), & j \neq y_i \end{cases} \\ loss = \sum_{ij} loss_{ij} \end{array}\end{split}\]
Parameters

is_grad (bool) – If true, this operation returns the computed gradient. Default: False.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N)\). Data type must be int32 or int64.

Outputs:

Tensor, if is_grad is False, the output tensor is the value of loss which is a scalar tensor; if is_grad is True, the output tensor is the gradient of input with the same shape as logits.

Raises
  • TypeError – If is_grad is not a bool.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • TypeError – If dtype of labels is neither int32 nor int64.

  • ValueError – If logits.shape[0] != labels.shape[0].

Supported Platforms:

GPU CPU

Examples

>>> logits = Tensor([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([0, 1], mindspore.int32)
>>> sparse_softmax_cross = ops.SparseSoftmaxCrossEntropyWithLogits()
>>> loss = sparse_softmax_cross(logits, labels)
>>> print(loss)
3.4878292
>>> sparse_softmax_cross_grad = ops.SparseSoftmaxCrossEntropyWithLogits(is_grad=True)
>>> loss_grad = sparse_softmax_cross_grad(logits, labels)
>>> print(loss_grad)
[[-0.48415753  0.04306427  0.00582811  0.11706084  0.3182043 ]
 [ 0.04007946 -0.4852556   0.04007946  0.2961494   0.10894729]]
class tinyms.primitives.SparseTensorDenseMatmul(*args, **kwargs)[source]

Multiplies sparse matrix A by dense matrix B. The rank of sparse matrix and dense matrix must be equal to 2.

Parameters
  • adjoint_st (bool) – If true, sparse tensor is transposed before multiplication. Default: False.

  • adjoint_dt (bool) – If true, dense tensor is transposed before multiplication. Default: False.

Inputs:
  • indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, 2)\).

  • values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. Support float16, float32, float64, int32, int64. The shape should be \((n,)\).

  • sparse_shape (tuple(int)) - A positive int tuple which specifies the shape of sparse tensor, should have 2 elements, represent sparse tensor shape is \((N, C)\).

  • dense (Tensor) - A 2-D Tensor, the dtype is same as values. If adjoint_st is False and adjoint_dt is False, the shape must be \((C, M)\). If adjoint_st is False and adjoint_dt is True, the shape must be \((M, C)\). If adjoint_st is True and adjoint_dt is False, the shape must be \((N, M)\). If adjoint_st is True and adjoint_dt is True, the shape must be \((M, N)\).

Outputs:

Tensor, the dtype is the same as values. If adjoint_st is False, the shape is \((N, M)\). If adjoint_st is True, the shape is \((C, M)\).

Raises
  • TypeError – If the type of adjoint_st or adjoint_dt is not bool, or the dtype of indices, dtype of values and dtype of dense don’t meet the parameter description.

  • ValueError – If sparse_shape, the shape of indices, the shape of values, or the shape of dense does not meet the parameter description.

Supported Platforms:

CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> sparse_shape = (3, 4)
>>> dense = Tensor([[1,1], [2,2], [3,3 ], [4, 4]], dtype=ms.float32)
>>> sparse_dense_matmul = ops.SparseTensorDenseMatmul()
>>> out = sparse_dense_matmul(indices, values, sparse_shape, dense)
>>> print(out)
[[2 2]
 [6 6]
 [0 0]]
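
A further sketch of the adjoint_st flag: with adjoint_st=True the sparse operand is transposed, so dense must have shape \((N, M)\) and the result has shape \((C, M)\); only the output shape is checked here.

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> sparse_shape = (3, 4)
>>> dense = Tensor([[1, 1], [2, 2], [3, 3]], dtype=ms.float32)
>>> out = ops.SparseTensorDenseMatmul(adjoint_st=True)(indices, values, sparse_shape, dense)
>>> print(out.shape)
(4, 2)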
class tinyms.primitives.SparseToDense(*args, **kwargs)[source]

Converts a sparse representation into a dense tensor.

Inputs:
  • indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, 2)\).

  • values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. The shape should be \((n,)\).

  • sparse_shape (tuple(int)) - A positive int tuple which specifies the shape of sparse tensor, should have 2 elements, represent sparse tensor shape is \((N, C)\).

Returns

Tensor, converted from sparse tensor. The dtype is same as values, and the shape is sparse_shape.

Raises
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If sparse_shape, the shape of indices, or the shape of values does not meet the parameter description.

Supported Platforms:

CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> sparse_shape = (3, 4)
>>> sparse_to_dense = ops.SparseToDense()
>>> out = sparse_to_dense(indices, values, sparse_shape)
>>> print(out)
[[0 1 0 0]
 [0 0 2 0]
 [0 0 0 0]]
class tinyms.primitives.Split(*args, **kwargs)[source]

Splits the input tensor into output_num of tensors along the given axis and output numbers.

The input_x tensor will be split into equally sized sub-tensors. This requires that input_x.shape[axis] is divisible by output_num.

Parameters
  • axis (int) – Index of the split position. Default: 0.

  • output_num (int) – The number of output tensors. Must be positive int. Default: 1.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], the shape of each output tensor is the same, which is \((y_1, y_2, ..., y_S)\). And the data type is the same with input_x.

Raises
  • TypeError – If axis or output_num is not an int.

  • ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)), or if the output_num is less than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> split = ops.Split(1, 2)
>>> x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)
>>> print(x)
[[1 1 1 1]
 [2 2 2 2]]
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]), Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]))
>>> split = ops.Split(1, 4)
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]))
class tinyms.primitives.SplitV(*args, **kwargs)[source]

Splits the input tensor into num_split tensors along the given dimension.

The input_x tensor will be split into sub-tensors with individual shapes given by size_splits along the split dimension. This requires that input_x.shape[split_dim] is equal to the sum of size_splits.

The shape of input_x is \((x_1, x_2, ..., x_M, ..., x_R)\). The rank of input_x is R. Set the given split_dim as M, and \(-R \le M < R\). Set the given num_split as N, the given size_splits as \((x_{m_1}, x_{m_2}, ..., x_{m_N})\), \(x_M=\sum_{i=1}^Nx_{m_i}\). The output is a list of tensor objects, for the \(i\)-th tensor, it has the shape of \((x_1, x_2, ..., x_{m_i}, ..., x_R)\). \(x_{m_i}\) is the \(M\)-th dimension of the \(i\)-th tensor. Then, the shape of the output tensor is

\[((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))\]
Parameters
  • size_splits (Union[tuple, list]) – The list containing the sizes of each output tensor along the split dimension. Must sum to the dimension of value along split_dim. Can contain one -1 indicating that dimension is to be inferred.

  • split_dim (int) – The dimension along which to split. Must be in the range [-len(input_x.shape), len(input_x.shape)).

  • num_split (int) – The number of output tensors. Must be positive int.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ...,x_M ..., x_R)\).

Outputs:

Tensor, a list of num_split Tensor objects with the shape \(((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))\), \(x_M=\sum_{i=1}^Nx_{m_i}\). The data type is the same with input_x.

Raises
  • TypeError – If input_x is not a Tensor.

  • TypeError – If size_splits is not a tuple or a list.

  • TypeError – If element of size_splits is not an int.

  • TypeError – If split_dim or num_split is not an int.

  • ValueError – If rank of the size_splits is not equal to num_split.

  • ValueError – If sum of the size_splits is not equal to the dimension of value along split_dim.

  • ValueError – If split_dim is out of the range [-len(input_x.shape), len(input_x.shape)).

  • ValueError – If the num_split is less than or equal to 0.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
>>> op = ops.SplitV(size_splits=[1, -1], split_dim=1, num_split=2)
>>> output = op(input_x)
>>> print(output)
(Tensor(shape=[3, 1], dtype=Int32, value=
[[1],
 [4],
 [7]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[2, 3],
 [5, 6],
 [8, 9]]))
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
>>> op = ops.SplitV(size_splits=[2, 1], split_dim=0, num_split=2)
>>> output = op(input_x)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Int32, value=
[[1, 2, 3],
 [4, 5, 6]]), Tensor(shape=[1, 3], dtype=Int32, value=
[[7, 8, 9]]))
class tinyms.primitives.Sqrt(*args, **kwargs)[source]

Returns square root of a tensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Inputs:
  • x (Tensor) - The input tensor whose dtype is number. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape and data type as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> sqrt = ops.Sqrt()
>>> output = sqrt(x)
>>> print(output)
[1. 2. 3.]
infer_value(x)[source]

Infer the value of input for Sqrt.

class tinyms.primitives.Square(*args, **kwargs)[source]

Returns square of a tensor element-wise.

\[out_{i} = (x_{i})^2\]
Inputs:
  • x (Tensor) - The input tensor whose dtype is number. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions; its rank should be less than 8.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> square = ops.Square()
>>> output = square(x)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.SquareSumAll(*args, **kwargs)[source]

Returns the sum of the squared elements of each input tensor.

\[\begin{split}\left\{\begin{matrix}out_{x} = {\textstyle \sum_{i=0}^{N-1}} (x_{i})^2 \\ out_{y} = {\textstyle \sum_{i=0}^{N-1}} (y_{i})^2 \end{matrix}\right.\end{split}\]
Inputs:
  • x (Tensor) - The input tensor. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) - The input tensor has the same type and shape as the x.

Note

SquareSumAll only supports float16 and float32 data type.

Outputs:
  • output_y1 (Tensor) - The same type as the x.

  • output_y2 (Tensor) - The same type as the x.

Raises
  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([0, 0, 2, 0]), mindspore.float32)
>>> y = Tensor(np.array([0, 0, 2, 4]), mindspore.float32)
>>> square_sum_all = ops.SquareSumAll()
>>> output = square_sum_all(x, y)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4),
 Tensor(shape=[], dtype=Float32, value= 20))
class tinyms.primitives.SquaredDifference(*args, **kwargs)[source]

Subtracts the second input tensor from the first input tensor element-wise and returns the square of the difference.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = (x_{i} - y_{i}) * (x_{i} - y_{i}) = (x_{i} - y_{i})^2\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is float16, float32, int32 or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor, or a tensor whose data type is float16, float32, int32 or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If x and y are not a Number or a bool or a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 6.0]), mindspore.float32)
>>> squared_difference = ops.SquaredDifference()
>>> output = squared_difference(x, y)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.Squeeze(*args, **kwargs)[source]

Returns a tensor with the same data type but dimensions of 1 are removed based on axis.

If axis is specified, it will remove the dimensions of size 1 in the given axis. If axis is None, it will remove all the dimensions of size 1.

Note

The dimension index starts at 0 and must be in the range [-input.ndim, input.ndim).

Parameters

axis (Union[int, tuple(int)]) – Specifies the dimension indexes of shape to be removed, which will remove all the dimensions that are equal to 1. If specified, it must be int32 or int64. Default: (), an empty tuple.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_S)\).

Raises
  • TypeError – If axis is neither an int nor tuple.

  • TypeError – If axis is a tuple whose elements are not all int.

  • ValueError – If the corresponding dimension of the specified axis does not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> squeeze = ops.Squeeze(2)
>>> output = squeeze(input_x)
>>> print(output)
[[1. 1.]
 [1. 1.]
 [1. 1.]]
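
A further sketch of the default behaviour: with axis left as the empty tuple, every dimension of size 1 is removed (assuming the usual numpy/mindspore imports):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.ones(shape=[3, 1, 2, 1]), mindspore.float32)
>>> squeeze_all = ops.Squeeze()
>>> print(squeeze_all(input_x).shape)
(3, 2)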
class tinyms.primitives.Stack(*args, **kwargs)[source]

Stacks a list of tensors in specified axis.

Stacks the list of input tensors with the same rank R, output is a tensor of rank (R+1).

Given input tensors of shape \((x_1, x_2, ..., x_R)\). Set the number of input tensors as N. If \(0 \le axis\), the shape of the output tensor is \((x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)\).

Parameters

axis (int) – Dimension to stack. Default: 0. Negative values wrap around. The range is [-(R+1), R+1).

Inputs:
  • input_x (Union[tuple, list]) - A Tuple or list of Tensor objects with the same shape and type.

Outputs:

Tensor. A stacked Tensor with the same type as input_x.

Raises
  • TypeError – If the data types of elements in input_x are not the same.

  • ValueError – If the length of input_x is not greater than 1; or if axis is out of the range [-(R+1), R+1); or if the shapes of elements in input_x are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> stack = ops.Stack()
>>> output = stack([data1, data2])
>>> print(output)
[[0. 1.]
 [2. 3.]]
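
A further sketch with a negative axis, which wraps around and stacks along a new trailing dimension (assuming the usual numpy/mindspore imports):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> stack_last = ops.Stack(axis=-1)
>>> print(stack_last([data1, data2]))
[[0. 2.]
 [1. 3.]]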
class tinyms.primitives.StandardLaplace(*args, **kwargs)[source]

Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). It is defined as:

\[\text{f}(x;0,1) = \frac{1}{2}\exp(-|x|),\]
Parameters
  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises
  • TypeError – If neither seed nor seed2 is an int.

  • TypeError – If shape is not a tuple.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend

Examples

>>> shape = (4, 16)
>>> stdlaplace = ops.StandardLaplace(seed=2)
>>> output = stdlaplace(shape)
>>> result = output.shape
>>> print(result)
(4, 16)
class tinyms.primitives.StandardNormal(*args, **kwargs)[source]

Generates random numbers according to the standard Normal (or Gaussian) random number distribution.

Returns the tensor with the given shape, the random numbers in it drawn from normal distributions whose mean is 0 and standard deviation is 1.

\[f(x)=\frac{1}{\sqrt{2 \pi}} e^{\left(-\frac{x^{2}}{2}\right)}\]
Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape is the same as the input shape. The dtype is float32.

Raises
  • TypeError – If neither seed nor seed2 is an int.

  • TypeError – If shape is not a tuple.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (3, 4)
>>> stdnormal = ops.StandardNormal(seed=2)
>>> output = stdnormal(shape)
>>> print(output)
[[-1.3031056   0.64198005 -0.65207404 -1.767485  ]
 [-0.91792876  0.6508565  -0.9098478  -0.14092612]
 [ 0.7806437   1.1585592   1.9676613  -0.00440959]]
class tinyms.primitives.StridedSlice(*args, **kwargs)[source]

Extracts a strided slice of a tensor.

This operation extracts a fragment of size (end-begin)/stride from the given input_tensor. Starting from the beginning position, the fragment continues adding stride to the index until all dimensions are not less than the ending position.

Given a input_x[m1, m2, …, mn], begin, end and strides will be vectors of length n.

In each mask field (begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask) the ith bit will correspond to the ith m.

If the ith bit of begin_mask is set, begin[i] is ignored and the fullest possible range in that dimension is used instead. end_mask is analogous, except with the end range.

As for a 5*6*7 tensor, x[2:,:3,:] is equivalent to x[2:5,0:3,0:7].

If the ith bit of ellipsis_mask is set, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in ellipsis_mask.

As for a 5*6*7*8 tensor, x[2:,…,:6] is equivalent to x[2:5,:,:,0:6]. x[2:,…] is equivalent to x[2:5,:,:,:].

If the ith bit of new_axis_mask is set, begin, end and strides are ignored and a new length 1 dimension is added at the specified position in the output tensor.

As for a 5*6*7 tensor, x[:2, newaxis, :6] will produce a tensor with shape (2, 1, 7).

If the ith bit of shrink_axis_mask is set, the ith dimension is removed from the output, taking only the value at index begin[i]; end[i] and strides[i] are ignored.

As for a 5*6*7 tensor, x[:, 5, :] will result in shrink_axis_mask equal to 2.

Note

The stride may be negative value, which causes reverse slicing. The shape of begin, end and strides must be the same. begin and end are zero-indexed. The element of strides must be non-zero.

Parameters
  • begin_mask (int) – An int mask for the starting position of the slice. Default: 0.

  • end_mask (int) – An int mask for the ending position of the slice. Default: 0.

  • ellipsis_mask (int) – An int mask. Default: 0.

  • new_axis_mask (int) – An int mask. Default: 0.

  • shrink_axis_mask (int) – An int mask. Default: 0.

Inputs:
  • input_x (Tensor) - The input Tensor.

  • begin (tuple[int]) - A tuple which represents the location where to start. Only constant value is allowed.

  • end (tuple[int]) - A tuple which represents the maximum location where to end. Only constant value is allowed.

  • strides (tuple[int]) - A tuple which represents the stride is continuously added before reaching the maximum location. Only constant value is allowed.

Outputs:

Tensor, The output is explained by following example.

In the 0th dimension, begin is 1, end is 2, and strides is 1, because \(1+1=2\geq2\), the interval is \([1,2)\). Thus, return the element with \(index = 1\) in 0th dimension, i.e., [[3, 3, 3], [4, 4, 4]].

In the 1st dimension, similarly, the interval is \([0,1)\). Based on the return value of the 0th dimension, return the element with \(index = 0\), i.e., [3, 3, 3].

In the 2nd dimension, similarly, the interval is \([0,3)\). Based on the return value of the 1st dimension, return the element with \(index = 0,1,2\), i.e., [3, 3, 3].

Finally, the output is [3, 3, 3].

Raises
  • TypeError – If begin_mask, end_mask, ellipsis_mask, new_axis_mask or shrink_axis_mask is not an int.

  • TypeError – If begin, end or strides is not a tuple.

  • ValueError – If begin_mask, end_mask, ellipsis_mask, new_axis_mask or shrink_axis_mask is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> #         [[[1. 1. 1.]
>>> #           [2. 2. 2.]]
>>> #
>>> #          [[3. 3. 3.]
>>> #           [4. 4. 4.]]
>>> #
>>> #          [[5. 5. 5.]
>>> #           [6. 6. 6.]]]
>>> # In order to visually view the multi-dimensional array, write the above as follows:
>>> #         [
>>> #             [
>>> #                 [1,1,1]
>>> #                 [2,2,2]
>>> #             ]
>>> #             [
>>> #                 [3,3,3]
>>> #                 [4,4,4]
>>> #             ]
>>> #             [
>>> #                 [5,5,5]
>>> #                 [6,6,6]
>>> #             ]
>>> #         ]
>>> strided_slice = ops.StridedSlice()
>>> output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
>>> # Take this " output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1)) " as an example,
>>> # start = [1, 0, 2] , end = [3, 1, 3], stride = [1, 1, 1], Find a segment of (start, end),
>>> # note that end is an open interval
>>> # To facilitate understanding, this operator can be divided into three steps:
>>> # Step 1: Calculation of the first dimension:
>>> # start = 1, end = 3, stride = 1, so the rows with index 1 and 2 are taken, giving the result below.
>>> # output_1st =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #         [4,4,4]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #         [6,6,6]
>>> #     ]
>>> # ]
>>> # Step 2: Calculation of the second dimension
>>> # 2nd dimension: start = 0, end = 1, stride = 1, so only the row with index 0 is taken, giving the result below.
>>> # output_2nd =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #     ]
>>> # ]
>>> # Step 3: Calculation of the third dimension
>>> # 3rd dimension: start = 2, end = 3, stride = 1, so only the column with index 2 is taken,
>>> # which gives the final output below.
>>> # output_3rd =
>>> # [
>>> #     [
>>> #         [3]
>>> #     ]
>>> #     [
>>> #         [5]
>>> #     ]
>>> # ]
>>> # The final output after finishing is:
>>> print(output)
[[[3.]]
 [[5.]]]
>>> # another example like :
>>> output = strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> print(output)
[[[3. 3. 3.]]]
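
A further sketch of the mask attributes described above, assuming the stated bit convention (the ith bit corresponds to the ith dimension) and the usual numpy/mindspore imports; only the output shapes are checked:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(5 * 6 * 7).reshape(5, 6, 7), mindspore.float32)
>>> # begin_mask=1: bit 0 is set, so begin[0] is ignored and dimension 0 starts from 0
>>> print(ops.StridedSlice(begin_mask=1)(x, (2, 0, 0), (5, 6, 7), (1, 1, 1)).shape)
(5, 6, 7)
>>> # shrink_axis_mask=2: bit 1 is set, so dimension 1 is reduced to the single index begin[1] = 5
>>> print(ops.StridedSlice(shrink_axis_mask=2)(x, (2, 5, 0), (5, 6, 7), (1, 1, 1)).shape)
(3, 7)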
class tinyms.primitives.Sub(*args, **kwargs)[source]

Subtracts the second input tensor from the first input tensor element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} - y_{i}\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If x and y are not a Number or a bool or a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> sub = ops.Sub()
>>> output = sub(x, y)
>>> print(output)
[-3 -3 -3]
class tinyms.primitives.Tan(*args, **kwargs)[source]

Computes tangent of x element-wise.

\[out_i = tan(x_i)\]
Inputs:
  • x (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Data type must be float16, float32 or int32.

Outputs:

Tensor, has the same shape as x.

Raises
  • TypeError – If dtype of x is not one of the following: float16, float32, int32.

  • TypeError – If x is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> tan = ops.Tan()
>>> x = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)
>>> output = tan(x)
>>> print(output)
[-1.5574081 0. 1.5574081]
class tinyms.primitives.Tanh(*args, **kwargs)[source]

Tanh activation function.

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises
  • TypeError – If dtype of input_x is neither float16 nor float32.

  • TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> tanh = ops.Tanh()
>>> output = tanh(input_x)
>>> print(output)
[0.7615941 0.9640276 0.9950547 0.9993293 0.9999092]
class tinyms.primitives.TensorAdd(**kwargs)[source]

Same as operator Add. TensorAdd will be deprecated in the future. Please use Add instead.

class tinyms.primitives.TensorScatterAdd(*args, **kwargs)[source]

Creates a new tensor by adding the values from the positions in input_x indicated by indices, with values from updates. When multiple values are given for the same index, the updated result will be the sum of all values. This operation is almost equivalent to using ScatterNdAdd, except that the updates are applied on a Tensor instead of a Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterAdd()
>>> # 5, Perform the addition operation for the first time:
>>> #      first_input_x = input_x[0][0] + updates[0] = [[0.9, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the addition operation for the second time:
>>> #      second_input_x = first_input_x[0][0] + updates[1] = [[3.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 3.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterMax(*args, **kwargs)[source]

Creates a new tensor by comparing each value in updates with the value at the position in input_x indicated by the corresponding index; the value at that position in the output is the larger of the two.

The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMax()
>>> # 5, Perform the max operation for the first time:
>>> #      first_input_x = Max(input_x[0][0], updates[0]) = [[1.0, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the max operation for the second time:
>>> #      second_input_x = Max(first_input_x[0][0], updates[1]) = [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 2.2  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterMin(*args, **kwargs)[source]

Creates a new tensor by comparing the values at the positions in input_x indicated by indices with the values in updates; the value at each indicated position in the output is the smallest of the compared values.

The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMin()
>>> # 5, Perform the min operation for the first time:
>>> #      first_input_x = Min(input_x[0][0], updates[0]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the min operation for the second time:
>>> #      second_input_x = Min(first_input_x[0][0], updates[1]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-0.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterSub(*args, **kwargs)[source]

Creates a new tensor by subtracting the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, each of these values is subtracted in turn. This operation is almost equivalent to using ScatterNdSub, except that the updates are applied on a Tensor instead of a Parameter.

The last axis of indices is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterSub()
>>> # 5, Perform the subtract operation for the first time:
>>> #      first_input_x = input_x[0][0] - updates[0] = [[-1.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the subtract operation for the second time:
>>> #      second_input_x = first_input_x[0][0] - updates[1] = [[-3.3, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-3.3000002  0.3        3.6      ]
 [ 0.4        0.5       -3.2      ]]
class tinyms.primitives.TensorScatterUpdate(*args, **kwargs)[source]

Creates a new tensor by updating the positions in input_x indicated by indices, with values from update. This operation is almost equivalent to using ScatterNd, except that the updates are applied on input_x instead of a zero tensor.

indices must have rank at least 2, and its last axis is the depth of each index vector. For each index vector, there must be a corresponding value in update. If the depth of each index vector matches the rank of input_x, then each index vector corresponds to a scalar in input_x and each update updates a scalar. If the depth of each index vector is less than the rank of input_x, then each index vector corresponds to a slice in input_x, and each update updates a slice.

The order in which updates are applied is nondeterministic, meaning that if there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1]. The shape is \((N,*)\) where \(*\) means any number of additional dimensions. The data type is Number.

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • update (Tensor) - The tensor to update the input tensor, has the same type as input, and update.shape = indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • ValueError – If the shapes of input_x, indices and update do not match each other as described above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.TensorScatterUpdate()
>>> output = op(input_x, indices, update)
>>> print(output)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]
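
The example above covers the scalar case, where the index depth equals the rank of input_x. The following sketch (illustrative values; the printed result is the analytically expected one rather than captured output) shows the slice case described earlier, where the index depth (1) is less than the rank of input_x (2), so each index vector selects a whole row and each row of update replaces that row:

>>> input_x = Tensor(np.zeros((2, 3)), mindspore.float32)
>>> indices = Tensor(np.array([[1]]), mindspore.int32)
>>> update = Tensor(np.array([[4.0, 5.0, 6.0]]), mindspore.float32)
>>> output = ops.TensorScatterUpdate()(input_x, indices, update)
>>> print(output)
[[0. 0. 0.]
 [4. 5. 6.]]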
class tinyms.primitives.TensorSummary(*args, **kwargs)[source]

Outputs a tensor to a protocol buffer through a tensor summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.TensorSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
...
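
A usage sketch follows (illustrative values; it assumes a summary collector or record is configured elsewhere, otherwise the recorded data is simply not persisted):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> summary_demo = SummaryDemo()
>>> x = Tensor(np.array([[1.0, 2.0], [3.0, 4.0]]), mindspore.float32)
>>> y = Tensor(np.array([[5.0, 6.0], [7.0, 8.0]]), mindspore.float32)
>>> out = summary_demo(x, y)  # records the tensor named "x" (the element-wise sum) and returns it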
class tinyms.primitives.Tile(*args, **kwargs)[source]

Replicates a tensor the given number of times along each dimension.

Creates a new tensor by replicating input_x multiples times. The i'th dimension of the output tensor has input_x.shape[i] * multiples[i] elements, and the values of input_x are replicated multiples[i] times along the i'th dimension.

Note

The length of multiples must be greater than or equal to the length of dimension in input_x.

Inputs:
  • input_x (Tensor) - 1-D or higher Tensor. Set the shape of input tensor as \((x_1, x_2, ..., x_S)\).

  • multiples (tuple[int]) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). The length of multiples cannot be smaller than the length of the shape of input_x. Only constant value is allowed.

Outputs:

Tensor, has the same data type as the input_x.

  • If the length of multiples is the same as the length of shape of input_x, then the shape of their corresponding positions can be multiplied, and the shape of Outputs is \((x_1*y_1, x_2*y_2, ..., x_S*y_S)\).

  • If the length of multiples is larger than the length of shape of input_x, the shape of input_x is extended with leading 1s until the two lengths are consistent. For example, set the extended shape of input_x as \((1, ..., x_1, x_2, ..., x_S)\); then the shape of their corresponding positions can be multiplied, and the shape of Outputs is \((1*y_1, ..., x_S*y_S)\).

Raises
  • TypeError – If multiples is not a tuple or its elements are not all int.

  • ValueError – If the elements of multiples are not all greater than 0.

  • ValueError – If the length of multiples is smaller than the length of dimension in input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tile = ops.Tile()
>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> output = tile(input_x, multiples)
>>> print(output)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
>>> multiples = (2, 3, 2)
>>> output = tile(input_x, multiples)
>>> print(output)
[[[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]
 [[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]]
class tinyms.primitives.TopK(*args, **kwargs)[source]

Finds values and indices of the k largest entries along the last dimension.

Warning

  • If sorted is set to False, the AiCPU operator will be used, and performance may be reduced.

If the input_x is a one-dimensional Tensor, finds the k largest entries in the Tensor, and outputs its value and index as a Tensor. Therefore, values[k] is the k largest item in input_x, and its index is indices[k].

For a multi-dimensional matrix, calculates the first k entries in each row (corresponding vector along the last dimension), therefore:

\[values.shape = indices.shape = input.shape[:-1] + [k].\]

If the two compared elements are the same, the one with the smaller index value is returned first.

Parameters

sorted (bool) – If true, the obtained elements will be sorted by the values in descending order. Default: True.

Inputs:
  • input_x (Tensor) - Input to be computed, data type must be float16, float32 or int32.

  • k (int) - The number of top elements to be computed along the last dimension, constant input is needed.

Outputs:

Tuple of 2 tensors, the values and the indices.

  • values (Tensor) - The k largest elements in each slice of the last dimension.

  • indices (Tensor) - The indices of values within the last dimension of input.

Raises
  • TypeError – If sorted is not a bool.

  • TypeError – If input_x is not a Tensor.

  • TypeError – If k is not an int.

  • TypeError – If dtype of input_x is not one of the following: float16, float32 or int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> topk = ops.TopK(sorted=True)
>>> input_x = Tensor([1, 2, 3, 4, 5], mindspore.float16)
>>> k = 3
>>> values, indices = topk(input_x, k)
>>> print((values, indices))
(Tensor(shape=[3], dtype=Float16, value= [ 5.0000e+00,  4.0000e+00,  3.0000e+00]), Tensor(shape=[3],
  dtype=Int32, value= [4, 3, 2]))
class tinyms.primitives.Totalc6get(*args, **kwargs)[source]

Get the average dispersion constant of short range Lennard-Jones interaction, for the subsequent long range correction energy and virial. Assume the system has m Lennard-Jones types of atoms.

Warning

This is an experimental prototype that is subject to change and/or deletion.

Parameters

atom_numbers (int32) – the number of atoms n.

Inputs:
  • atom_lj_type (Tensor) - The Lennard-Jones type of each atom. The data type is float32 and the shape is \((n,)\).

  • lj_b (Tensor) - The attraction coefficient of each type. The number of atom pairs is m. The data type is float32 and the shape is \((m,)\).

Outputs:
  • factor (Tensor) - The average dispersion constant of Lennard-Jones interaction. The data type is float32 and the shape is \((1,)\).

Supported Platforms:

GPU

class tinyms.primitives.TransferCrd(*args, **kwargs)[source]

Transfer the coordinates to angular and radial.

Because there are a large number of inputs and each of them is related, there is no way to construct Examples using random methods. For details, refer to the SPONGE webpage in MindSpore.

Parameters
  • start_serial (int32) – the index start position.

  • end_serial (int32) – the index end position.

  • number (int32) – the length of angular and radial.

Inputs:
  • crd (Tensor) - The coordinate of each atom. n is the number of atoms. The data type is float32 and the shape is \((n, 3)\).

  • old_crd (Tensor) - The last coordinate of each atom. n is the number of atoms. The data type is float32 and the shape is \((n, 3)\).

  • box (Tensor) - The length of 3 dimensions of the simulation box. The data type is float32 and the shape is \((3,)\).

Outputs:
  • radial (Tensor) - The array of radial transferred from coordinates. The data type is float32 and the shape is \((number,)\).

  • angular (Tensor) - The array of angular transferred from coordinates. The data type is float32 and the shape is \((number,)\).

  • nowarp_crd (Tensor) - The modified coordinate of each atom for computing radial and angular. The data type is float32 and the shape is \((n, 3)\).

  • box_map_times (Tensor) - The box map times for radial and angular. The data type is int32 and the shape is \((n, 3)\).

Supported Platforms:

GPU

class tinyms.primitives.Transpose(*args, **kwargs)[source]

Permutes the dimensions of the input tensor according to input permutation.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_perm (tuple[int]) - The permutation to be converted. The elements in input_perm are composed of the indexes of each dimension of input_x. The length of input_perm and the shape of input_x must be the same. Only constant value is allowed. Must be in the range [0, rank(input_x)).

Outputs:

Tensor, the type of output tensor is the same as input_x and the shape of output tensor is decided by the shape of input_x and the value of input_perm.

Raises
  • TypeError – If input_perm is not a tuple.

  • ValueError – If the length of the shape of input_x is not equal to the length of input_perm.

  • ValueError – If the same element exists in input_perm.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> transpose = ops.Transpose()
>>> output = transpose(input_x, input_perm)
>>> print(output)
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
class tinyms.primitives.TruncateDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise. For integer types, negative numbers will round fractional quantities towards zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

Broadcasting is supported.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_div = ops.TruncateDiv()
>>> output = truncate_div(x, y)
>>> print(output)
[0 1 0]
class tinyms.primitives.TruncateMod(*args, **kwargs)[source]

Returns the remainder of division element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • The input data does not support 0.

  • When the number of elements of the input exceeds 2048, the accuracy of the operator cannot guarantee the requirement of double thousandths in the mini form.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If shape is expressed as \((D1, D2, ..., Dn)\), then \(D1*D2*...*Dn \le 1000000\) and \(n \le 8\).

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If neither x nor y is one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_mod = ops.TruncateMod()
>>> output = truncate_mod(x, y)
>>> print(output)
[ 2  1 -1]
class tinyms.primitives.TruncatedNormal(*args, **kwargs)[source]

Returns a tensor of the specified shape filled with truncated normal values.

The generated values follow a normal distribution.

Parameters
  • seed (int) – An integer number used to create the random seed. Default: 0.

  • dtype (mindspore.dtype) – Data type. Default: mindspore.float32.

Inputs:
  • shape (tuple[int]) - The shape of the output tensor, is a tuple of positive integers.

Outputs:

Tensor, the data type of output tensor is the same as attribute dtype.

Examples

>>> shape = (1, 2, 3)
>>> truncated_normal = ops.TruncatedNormal()
>>> output = truncated_normal(shape)
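
The generated values are random, so they are not printed above; the output shape, however, is deterministic, and the following check (added here for illustration) mirrors the shape checks used by the other random operators on this page:

>>> result = output.shape
>>> print(result)
(1, 2, 3)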
class tinyms.primitives.TupleToArray(*args, **kwargs)[source]

Converts a tuple to a tensor.

If the type of the first number in the tuple is integer, the data type of the output tensor is int. Otherwise, the data type of the output tensor is float.

Inputs:
  • input_x (tuple) - A tuple of numbers. These numbers have the same type. Only constant value is allowed. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor, if the input tuple contains N numbers, then the shape of the output tensor is (N,).

Raises
  • TypeError – If input_x is not a tuple.

  • ValueError – If length of input_x is less than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = (1,2,3)
>>> print(type(input_x))
<class 'tuple'>
>>> output = ops.TupleToArray()(input_x)
>>> print(type(output))
<class 'mindspore.common.tensor.Tensor'>
>>> print(output)
[1 2 3]
class tinyms.primitives.UniformCandidateSampler(*args, **kwargs)[source]

Uniform candidate sampler.

This operator samples a set of classes (sampled_candidates) from [0, range_max-1] based on a uniform distribution. If unique=True, candidates are drawn without replacement; otherwise (unique=False), they are drawn with replacement.

Parameters
  • num_true (int) – The number of target classes in each training example.

  • num_sampled (int) – The number of classes to randomly sample. The sampled_candidates will have a shape of num_sampled. If unique=True, num_sampled must be less than or equal to range_max.

  • unique (bool) – Whether all sampled classes in a batch are unique.

  • range_max (int) – The number of possible classes, must be non-negative.

  • seed (int) – Used for random number generation, must be non-negative. If seed has a value of 0, seed will be replaced with a randomly generated value. Default: 0.

  • remove_accidental_hits (bool) – Whether accidental hit is removed. Default: False.

Inputs:
  • true_classes (Tensor) - A Tensor. The target classes with a Tensor shape of (batch_size, num_true).

Outputs:
  • sampled_candidates (Tensor) - The sampled_candidates is independent of the true classes. Shape: (num_sampled, ).

  • true_expected_count (Tensor) - The expected counts under the sampling distribution of each of true_classes. Shape: (batch_size, num_true).

  • sampled_expected_count (Tensor) - The expected counts under the sampling distribution of each of sampled_candidates. Shape: (num_sampled, ).

Raises
  • TypeError – If num_true or num_sampled is not an int.

  • TypeError – If unique or remove_accidental_hits is not a bool.

  • TypeError – If range_max or seed is not an int.

  • TypeError – If true_classes is not a Tensor.

Supported Platforms:

GPU

Examples

>>> sampler = ops.UniformCandidateSampler(1, 3, False, 4)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1], [3], [4], [6], [3]], dtype=np.int32)))
>>> print(output1, output2, output3)
[1, 1, 3], [[0.75], [0.75], [0.75], [0.75], [0.75]], [0.75, 0.75, 0.75]
class tinyms.primitives.UniformInt(*args, **kwargs)[source]

Produces random integer values i, uniformly distributed on the interval [minval, maxval), that is, distributed according to the discrete probability function:

\[\text{P}(i|a,b) = \frac{1}{b-a+1},\]

where the \(a\) indicates the min distribution parameter, the \(b\) indicates the max distribution parameter.

Note

The number in tensor minval must be strictly less than maxval at any position after broadcasting.

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • minval (Tensor) - The distribution parameter, a. It defines the minimum possibly generated value, with int32 data type. Only one number is supported.

  • maxval (Tensor) - The distribution parameter, b. It defines the maximum possibly generated value, with int32 data type. Only one number is supported.

Raises
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is not a tuple.

  • TypeError – If minval or maxval is not a Tensor.

  • ValueError – If shape is not a constant value.

Outputs:

Tensor. The shape is the same as the input ‘shape’, and the data type is int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 4)
>>> minval = Tensor(1, mstype.int32)
>>> maxval = Tensor(5, mstype.int32)
>>> uniform_int = ops.UniformInt(seed=10)
>>> output = uniform_int(shape, minval, maxval)
>>> result = output.shape
>>> print(result)
(2, 4)
class tinyms.primitives.UniformReal(*args, **kwargs)[source]

Produces random floating-point values i, uniformly distributed on the interval [0, 1).

Parameters
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is not a tuple.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 2)
>>> uniformreal = ops.UniformReal(seed=2)
>>> output = uniformreal(shape)
>>> result = output.shape
>>> print(result)
(2, 2)
class tinyms.primitives.Unique(*args, **kwargs)[source]

Returns the unique elements of the input tensor, and also returns a tensor containing the index of each value of the input tensor corresponding to the output unique tensor.

The output contains Tensor y and Tensor idx, in the format (y, idx). The shapes of Tensor y and Tensor idx differ in most cases, because Tensor y is deduplicated while the shape of Tensor idx is consistent with the input.

To get the same shape between idx and y, please refer to the 'UniqueWithPad' operator.

Inputs:
  • input_x (Tensor) - The input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tuple, containing Tensor objects (y, idx). y is a tensor with the same type as input_x, containing the unique elements of x, sorted in ascending order. idx is a tensor containing indices of elements in the input corresponding to the output tensor.

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> output = ops.Unique()(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
>>> y = output[0]
>>> print(y)
[1 2 5]
>>> idx = output[1]
>>> print(idx)
[0 1 2 1]
>>> # As can be seen from the above, the shapes of y and idx are different in most cases.
>>> # note that for GPU, this operator must be wrapped inside a model, and executed in graph mode.
>>> class UniqueNet(nn.Cell):
...     def __init__(self):
...         super(UniqueNet, self).__init__()
...         self.unique_op = ops.Unique()
...
...     def construct(self, x):
...         output, indices = self.unique_op(x)
...         return output, indices
...
>>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> net = UniqueNet()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
class tinyms.primitives.UniqueWithPad(*args, **kwargs)[source]

Returns unique elements and relative indexes in 1-D tensor, filled with padding num.

The basic function is the same as the Unique operator, but the UniqueWithPad operator adds a Pad function. After the input Tensor x is processed by the Unique operator, a tuple (y, idx) is returned, in which the shapes of y and idx are usually not equal. To resolve this, the UniqueWithPad operator fills the y Tensor with the pad_num specified by the user so that it has the same shape as the Tensor idx.

Inputs:
  • x (Tensor) - The tensor need to be unique. Must be 1-D vector with types: int32, int64.

  • pad_num (int) - Pad num. The data type is an int.

Outputs:

tuple(Tensor), tuple of 2 tensors, y and idx.

  • y (Tensor) - The unique elements filled with pad_num, the shape and data type same as x.

  • idx (Tensor) - The index of each value of x in the unique output y, the shape and data type same as x.

Raises
  • TypeError – If dtype of x is neither int32 nor int64.

  • ValueError – If length of shape of x is not equal to 1.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([1, 1, 5, 5, 4, 4, 3, 3, 2, 2,]), mindspore.int32)
>>> pad_num = 8
>>> output = ops.UniqueWithPad()(x, pad_num)
>>> print(output)
(Tensor(shape=[10], dtype=Int32, value= [1, 5, 4, 3, 2, 8, 8, 8, 8, 8]),
 Tensor(shape=[10], dtype=Int32, value= [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]))
class tinyms.primitives.Unpack(**kwargs)[source]

Same as operator Unstack. Unpack will be deprecated in the future. Please use Unstack instead.
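
For reference, a minimal replacement sketch follows (illustrative values; the full description of Unstack appears later on this page):

>>> unstack = ops.Unstack()
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = unstack(input_x)
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))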

class tinyms.primitives.UnsortedSegmentMax(*args, **kwargs)[source]

Computes the maximum along segments of a tensor.

The following figure shows the calculation process of UnsortedSegmentMax:

tinyms/api_img/UnsortedSegmentMax.png
\[\text { output }_i=\text{max}_{j \ldots} \text { data }[j \ldots]\]

where \(max\) over tuples \(j...\) such that \(segment\_ids[j...] == i\).

Note

If the segment_id i is absent in the segment_ids, then output[i] will be filled with the minimum value of the input_x’s type.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). The data type must be float16, float32 or int32.

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\), the value must be non-negative tensor. The data type must be int32.

  • num_segments (int) - The value specifies the number of distinct segment_ids.

Outputs:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU

Examples

>>> # case 1: Only have two num_segments, where is 0 and 1, and segment_ids=[0, 1, 1]
>>> # num_segments = 2 indicates that there are two types of segment_id,
>>> # the first number '0' in [0, 1, 1] indicates input_x[0],
>>> # the second number '1' in [0, 1, 1] indicates input_x[1],
>>> # the third number '1' in [0, 1, 1] indicates input_x[2],
>>> # input_x[0], which is [1, 2, 3] will not be compared to other segment_id.
>>> # Only the same segment_id will be compared.
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 5. 6.]]
>>>
>>> # case 2: The segment_ids=[0, 0, 1, 1].
>>> # [1, 2, 3] will compare with [4, 2, 0],
>>> # and [4, 5, 6] will compare with [4, 2, 1].
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(input_x.shape)
(4, 3)
>>> print(output)
[[4. 2. 3.]
 [4. 5. 6.]]
>>> # case 3: What happens if input_x has three or more dimensions?
>>> # The shape of input_x is (2, 4, 3),
>>> # and the length of segment_ids should be the same as the first dimension of input_x.
>>> # Because the segment_ids are different, input_x[0] will not be compared to input_x[1].
>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]],
...                            [[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(input_x.shape)
(2, 4, 3)
>>> print(output)
[[[1. 2. 3.]
  [4. 2. 0.]
  [4. 5. 6.]
  [4. 2. 1.]]
 [[1. 2. 3.]
  [4. 2. 0.]
  [4. 5. 6.]
  [4. 2. 1.]]]
>>> # case 4: It has the same input as the 3rd case.
>>> # Because num_segments is equal to 2, there are two segment_ids, but only segment_id 0 is used here.
>>> # If the segment_id i is absent in the segment_ids, then output[i] will be filled with
>>> # the smallest possible value of the input_x's type.
>>> segment_ids = Tensor(np.array([0, 0]).astype(np.int32))
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[[ 1.0000000e+00  2.0000000e+00  3.0000000e+00]
  [ 4.0000000e+00  2.0000000e+00  0.0000000e+00]
  [ 4.0000000e+00  5.0000000e+00  6.0000000e+00]
  [ 4.0000000e+00  2.0000000e+00  1.0000000e+00]]
 [[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]]]
class tinyms.primitives.UnsortedSegmentMin(*args, **kwargs)[source]

Computes the minimum of a tensor along segments.

The following figure shows the calculation process of UnsortedSegmentMin:

tinyms/api_img/UnsortedSegmentMin.png
\[\text { output }_i=\text{min}_{j \ldots} \text { data }[j \ldots]\]

where \(min\) over tuples \(j...\) such that \(segment\_ids[j...] == i\).

Note

If the segment_id i is absent in the segment_ids, then output[i] will be filled with the maximum value of the input_x’s type. The segment_ids must be non-negative tensor.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). The data type must be float16, float32 or int32.

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\), the value must be non-negative tensor. The data type must be int32.

  • num_segments (int) - The value specifies the number of distinct segment_ids.

Outputs:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_min = ops.UnsortedSegmentMin()
>>> output = unsorted_segment_min(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 2. 1.]]
class tinyms.primitives.UnsortedSegmentProd(*args, **kwargs)[source]

Computes the product of a tensor along segments.

The following figure shows the calculation process of UnsortedSegmentProd:

tinyms/api_img/UnsortedSegmentProd.png
Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\), the value must be non-negative tensor. Data type must be int32.

  • num_segments (int) - The value specifies the number of distinct segment_ids, must be greater than 0.

Outputs:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_prod = ops.UnsortedSegmentProd()
>>> output = unsorted_segment_prod(input_x, segment_ids, num_segments)
>>> print(output)
[[4. 4. 3.]
 [4. 5. 6.]]
class tinyms.primitives.UnsortedSegmentSum(*args, **kwargs)[source]

Computes the sum of a tensor along segments.

Calculates a tensor such that \(\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]\), where \(j\) is a tuple describing the index of element in data. segment_ids selects which elements in data to sum up. Segment_ids does not need to be sorted, and it does not need to cover all values in the entire valid value range.

The following figure shows the calculation process of UnsortedSegmentSum:

tinyms/api_img/UnsortedSegmentSum.png

Note

If the segment_id i is absent in the segment_ids, then output[i] will be filled with 0.

If the sum of the given segment_ids \(i\) is empty, then \(\text{output}[i] = 0\). If the given segment_ids is negative, the value will be ignored. ‘num_segments’ must be equal to the number of different segment_ids.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\).

  • segment_ids (Tensor) - Set the shape as \((x_1, x_2, ..., x_N)\), where 0 < N <= R.

  • num_segments (int) - Set \(z\) as num_segments.

Outputs:

Tensor, the shape is \((z, x_{N+1}, ..., x_R)\).

Raises
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 0.]
>>> input_x = Tensor([1, 2, 3, 4, 2, 5], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2, 3, 4], mindspore.int32)
>>> num_segments = 6
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 2. 5. 0.]
class tinyms.primitives.Unstack(*args, **kwargs)[source]

Unstacks a tensor along the specified axis.

Unstacks a tensor of rank R along axis dimension, output tensors will have rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\). If \(0 \le axis\), the shape of tensor in output is \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of pack.

Parameters

axis (int) – Dimension along which to unstack. Default: 0. Negative values wrap around. The range is [-R, R).

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

Outputs:

A tuple of tensors, the shape of each object is the same.

Raises

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> unstack = ops.Unstack()
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = unstack(input_x)
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))
class tinyms.primitives.UpdateState(*args, **kwargs)[source]

UpdateState is used to update the side-effect state.

Inputs:
  • value (State) - the state value to be updated.

  • expr (Expression) - the expression to evaluate before state changes.

Outputs:

State, the updated state value.

class tinyms.primitives.Xdivy(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise. Returns zero when x is zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is float16, float32 or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is float16, float32 or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xdivy = ops.Xdivy()
>>> output = xdivy(x, y)
>>> print(output)
[ 1.   2.  -0.5]
class tinyms.primitives.Xlogy(*args, **kwargs)[source]

Computes the first input tensor multiplied by the logarithm of the second input tensor element-wise. Returns zero when x is zero.

\[out_i = x_{i}\ln{y_{i}}\]

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is float16, float32 or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number or a bool when the first input is a tensor or a tensor whose data type is float16, float32 or bool. The value must be positive.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([-5, 0, 4]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xlogy = ops.Xlogy()
>>> output = xlogy(x, y)
>>> print(output)
[-3.465736   0.        2.7725887]
class tinyms.primitives.Zeros(*args, **kwargs)[source]

Creates a tensor filled with value zeros.

Creates a tensor with the shape described by the first argument and fills it with zeros of the type given by the second argument.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor. Only constant positive int is allowed.

  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

Outputs:

Tensor, whose shape is given by the input shape and whose data type is given by the input type.

Raises
  • TypeError – If shape is neither int nor tuple.

  • TypeError – If shape is a tuple whose elements are not all int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> zeros = ops.Zeros()
>>> output = zeros((2, 2), mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
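
As noted in the Inputs description, shape may also be a single int; a sketch of that form (illustrative values; the printed line is the expected result rather than captured output) is:

>>> output = zeros(2, mindspore.int32)
>>> print(output)
[0 0]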
class tinyms.primitives.ZerosLike(*args, **kwargs)[source]

Creates a new tensor. All element values are 0.

Returns a tensor of zeros with the same shape and data type as the input tensor.

Inputs:
  • input_x (Tensor) - Input tensor. The data type is int32, int64, float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and data type as input_x but filled with zeros.

Raises

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> zeroslike = ops.ZerosLike()
>>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = zeroslike(input_x)
>>> print(output)
[[0. 0.]
 [0. 0.]]
class tinyms.primitives.GradOperation(get_all=False, get_by_list=False, sens_param=False)[source]

A higher-order function which is used to generate the gradient function for the input function.

The gradient function generated by GradOperation higher-order function can be customized by construction arguments.

Given an input function net = Net() that takes x and y as inputs, and has a parameter z, see Net in Examples.

To generate a gradient function that returns gradients with respect to the first input (see GradNetWrtX in Examples).

  1. Construct a GradOperation higher-order function with default arguments: grad_op = GradOperation().

  2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  3. Call the gradient function with input function’s inputs to get the gradients with respect to the first input: grad_op(net)(x, y).

To generate a gradient function that returns gradients with respect to all inputs (see GradNetWrtXY in Examples).

  1. Construct a GradOperation higher-order function with get_all=True which indicates getting gradients with respect to all inputs, they are x and y in example function Net(): grad_op = GradOperation(get_all=True).

  2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  3. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to given parameters (see GradNetWithWrtParams in Examples).

  1. Construct a GradOperation higher-order function with get_by_list=True: grad_op = GradOperation(get_by_list=True).

  2. Construct a ParameterTuple that will be passed to the input function when constructing GradOperation higher-order function, it will be used as a parameter filter that determine which gradient to return: params = ParameterTuple(net.trainable_params()).

  3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

  4. Call the gradient function with input function’s inputs to get the gradients with respect to given parameters: gradient_function(x, y).

To generate a gradient function that returns gradients with respect to all inputs and given parameters in the format of ((dx, dy), (dz))(see GradNetWrtInputsAndParams in Examples).

  1. Construct a GradOperation higher-order function with get_all=True and get_by_list=True: grad_op = GradOperation(get_all=True, get_by_list=True).

  2. Construct a ParameterTuple that will be passed along input function when constructing GradOperation higher-order function: params = ParameterTuple(net.trainable_params()).

  3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

  4. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs and given parameters: gradient_function(x, y).

We can configure the sensitivity (gradient with respect to output) by setting sens_param as True and passing an extra sensitivity input to the gradient function. The sensitivity input should have the same shape and type as the input function's output (see GradNetWrtXYWithSensParam in Examples).

  1. Construct a GradOperation higher-order function with get_all=True and sens_param=True: grad_op = GradOperation(get_all=True, sens_param=True).

  2. Define grad_wrt_output as sens_param which works as the gradient with respect to output: grad_wrt_output = Tensor(np.ones([2, 2]).astype(np.float32)).

  3. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

  4. Call the gradient function with input function’s inputs and sens_param to get the gradients with respect to all inputs: gradient_function(x, y, grad_wrt_output).

Parameters
  • get_all (bool) – If True, get all the gradients with respect to inputs. Default: False.

  • get_by_list (bool) – If True, get all the gradients with respect to Parameter variables. If get_all and get_by_list are both False, get the gradient with respect to first input. If get_all and get_by_list are both True, get the gradients with respect to inputs and Parameter variables at the same time in the form of ((gradients with respect to inputs), (gradients with respect to parameters)). Default: False.

  • sens_param (bool) – Whether to append sensitivity (gradient with respect to output) as input. If sens_param is False, a 'ones_like(outputs)' sensitivity will be attached automatically. Default: False. If sens_param is True, a sensitivity (gradient with respect to output) needs to be transferred through the positional parameter or key-value pair parameter. If the value is transferred through the key-value pair parameter, the key must be sens.

Returns

The higher-order function which takes a function as argument and returns gradient function for it.

Raises

TypeError – If get_all, get_by_list or sens_param is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ParameterTuple
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = P.MatMul()
...         self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
...     def construct(self, x, y):
...         x = x * self.z
...         out = self.matmul(x, y)
...         return out
...
>>> class GradNetWrtX(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtX, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation()
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> output = GradNetWrtX(Net())(x, y)
>>> print(output)
[[1.4100001 1.5999999 6.6      ]
 [1.4100001 1.5999999 6.6      ]]
>>>
>>> class GradNetWrtXY(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXY, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXY(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 4.50999975e+00,  2.70000005e+00,  3.60000014e+00],
 [ 4.50999975e+00,  2.70000005e+00,  3.60000014e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 2.59999990e+00,  2.59999990e+00,  2.59999990e+00],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]]))
>>>
>>> class GradNetWrtXYWithSensParam(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXYWithSensParam, self).__init__()
...         self.net = net
...         self.grad_op = GradOperation(get_all=True, sens_param=True)
...         self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y, self.grad_wrt_output)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXYWithSensParam(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 2.21099997e+00,  5.09999990e-01,  1.49000001e+00],
 [ 5.58800030e+00,  2.68000007e+00,  4.07000017e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 1.51999998e+00,  2.81999993e+00,  2.14000010e+00],
 [ 1.09999990e+00,  2.04999995e+00,  1.54999995e+00],
 [ 9.00000036e-01,  1.54999995e+00,  1.25000000e+00]]))
>>>
>>> class GradNetWithWrtParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWithWrtParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = GradOperation(get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWithWrtParams(Net())(x, y)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
>>>
>>> class GradNetWrtInputsAndParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtInputsAndParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = GradOperation(get_all=True, get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.1, 0.6, 1.2], [0.5, 1.3, 0.1]], dtype=mstype.float32)
>>> y = Tensor([[0.12, 2.3, 1.1], [1.3, 0.2, 2.4], [0.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtInputsAndParams(Net())(x, y)
>>> print(output)
((Tensor(shape=[2, 3], dtype=Float32, value=
[[ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00],
 [ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 6.00000024e-01,  6.00000024e-01,  6.00000024e-01],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]])), (Tensor(shape=[1], dtype=Float32, value=
 [ 1.29020004e+01]),))
tinyms.primitives.grad(fn, grad_first_param=False)[source]

A wrapper function to generate the gradient function for the input function.

Parameters
  • fn (Function) – Function to do GradOperation.

  • grad_first_param (bool) – If True, get the gradient with respect to first input. If False, get all the gradients with respect to inputs. Default: False.
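
A minimal sketch of how this wrapper might be used is shown below (the function mul and the inputs are illustrative and were not executed here; the expected gradients follow from d(x*y)/dx = y and d(x*y)/dy = x):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from tinyms.primitives import grad
>>> def mul(x, y):
...     return x * y
>>> x = Tensor(np.array([1.0, 2.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 4.0]), mindspore.float32)
>>> grad_fn = grad(mul)          # grad_first_param=False: gradients with respect to all inputs
>>> dx, dy = grad_fn(x, y)       # expected: dx == y == [3. 4.], dy == x == [1. 2.]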

tinyms.primitives.pack(x)[source]

Calls stack within this pack function.
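
A minimal sketch (illustrative values; it assumes pack forwards its single argument, a tuple or list of tensors, to stack with the default axis of 0, and the output was not captured here):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from tinyms.primitives import pack
>>> t1 = Tensor(np.array([1, 2]), mindspore.int32)
>>> t2 = Tensor(np.array([3, 4]), mindspore.int32)
>>> out = pack((t1, t2))    # stacks along a new leading axis, giving shape (2, 2)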