tinyms.primitives

Primitives module. Operators can be used in the construct function of Layer.

Examples

>>> import tinyms as ts
>>> from tinyms.primitives import tensor_add
>>>
>>> x = ts.ones([2, 3])
>>> y = ts.ones([2, 3])
>>> print(tensor_add(x, y))
[[2. 2. 2.]
[2. 2. 2.]]
tinyms.primitives.add_flags(fn=None, **flags)[source]

A decorator that adds a flag to the function.

Note

Only supports bool value.

Parameters:
  • fn (Function) – Function or cell to add flags to. Default: None.

  • flags (dict) – Flags to add, passed as keyword arguments. Default: None.

Returns:

Function, the function with added flags.

Examples

>>> net = Net()
>>> net = add_flags(net, predict=True)
>>> print(hasattr(net, '_func_graph_flags'))
True
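
The example above assumes Net is a user-defined Cell. A self-contained sketch (the flag name predict is arbitrary, and the trivial Net below is illustrative):

>>> import mindspore.nn as nn
>>> from tinyms.primitives import add_flags
>>> class Net(nn.Cell):
...     def construct(self, x):
...         return x
...
>>> net = add_flags(Net(), predict=True)
>>> print(hasattr(net, '_func_graph_flags'))
True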
class tinyms.primitives.Map(ops=None, reverse=False)[source]

Map applies the given operation to input sequences.

The operation is applied to every element of the sequence.

Parameters:
  • ops (Union[MultitypeFuncGraph, None]) – ops is the operation to apply. If ops is None, the operations should be put in the first input of the instance. Default: None

  • reverse (bool) – The flag that decides whether to apply the operation in reverse order. In some scenarios the optimizer needs to iterate in reverse to improve parallel performance; general users can ignore this. Only supported in graph mode. Default: False.

Inputs:
  • args (Tuple[sequence]) - If ops is not None, all inputs should be sequences of the same length, and the i-th elements of the sequences form one input to the operation. e.g. if the length of args is 2, then (args[0][i], args[1][i]) is the input of the operation for each i.

    If ops is None, the first input is the operation, and the other is inputs.

Outputs:

Sequence, the sequence of outputs after applying the operation to each element, e.g. operation(args[0][i], args[1][i]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import dtype as mstype
>>> from mindspore import Tensor, ops
>>> from mindspore.ops import MultitypeFuncGraph, Map
>>> tensor_list = (Tensor(1, mstype.float32), Tensor(2, mstype.float32), Tensor(3, mstype.float32))
>>> # square all the tensor in the list
>>>
>>> square = MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
...     return ops.square(x)
>>>
>>> common_map = Map()
>>> output = common_map(square, tensor_list)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4),
Tensor(shape=[], dtype=Float32, value= 9))
>>> square_map = Map(square, False)
>>> output = square_map(tensor_list)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4),
Tensor(shape=[], dtype=Float32, value= 9))
class tinyms.primitives.MultitypeFuncGraph(name, read_value=False, doc_url='')[source]

MultitypeFuncGraph is a class used to generate overloaded functions that accept different types of inputs. Initialize a MultitypeFuncGraph object with a name, and use register with the input types as the decorator for the function to be registered. The object can then be called with different types of inputs, and works with HyperMap and Map.

Parameters:
  • name (str) – Operator name.

  • read_value (bool, optional) – If the registered function does not need to set values on Parameters and all inputs are passed by value, set read_value to True. Default: False.

  • doc_url (str, optional) – The official document link corresponding to the registered function. Default: "".

Raises:

ValueError – If failed to find a matching function for the given arguments.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # `add` is a metagraph object which will add two objects according to
>>> # input type using ".register" decorator.
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import dtype as mstype
>>>
>>> tensor_add = ops.Add()
>>> add = ops.MultitypeFuncGraph('add')
>>> @add.register("Number", "Number")
... def add_scalar(x, y):
...     return x + y
>>> @add.register("Tensor", "Tensor")
... def add_tensor(x, y):
...     return tensor_add(x, y)
>>> output = add(1, 2)
>>> print(output)
3
>>> output = add(Tensor([0.1, 0.6, 1.2], dtype=mstype.float32), Tensor([0.1, 0.6, 1.2], dtype=mstype.float32))
>>> print(output)
[0.2 1.2 2.4]
register(*type_names)[source]

Register a function for the given type string.

Parameters:

type_names (Union[str, mindspore.dtype]) – The type names or dtypes of the inputs.

Returns:

decorator, a decorator that registers the function to run when called with the types described in type_names.
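
Examples

>>> # A minimal sketch: register a scalar handler on a fresh MultitypeFuncGraph
>>> # (the names mul and mul_number are illustrative), then call it with
>>> # arguments of the registered types.
>>> from mindspore.ops import MultitypeFuncGraph
>>> mul = MultitypeFuncGraph('mul')
>>> @mul.register("Number", "Number")
... def mul_number(x, y):
...     return x * y
>>> print(mul(3, 4))
12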

class tinyms.primitives.GradOperation(get_all=False, get_by_list=False, sens_param=False)[source]

A higher-order function which is used to generate the gradient function for the input function.

The gradient function generated by GradOperation higher-order function can be customized by construction arguments.

For example, given an input function net = Net() that takes x and y as inputs, and has a parameter z, see Net in Examples.

  • Used to get the derivative of the input:

    1. Returns gradients with respect to the first input (see GradNetWrtX in Examples).

      1. Construct a GradOperation higher-order function with default arguments: grad_op = GradOperation().

      2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

      3. Call the gradient function with the input function's inputs to get the gradients with respect to the first input: gradient_function(x, y).

    2. Returns gradients with respect to all inputs (see GradNetWrtXY in Examples).

      1. Construct a GradOperation higher-order function with get_all=True which indicates getting gradients with respect to all inputs, they are x and y in example function Net(): grad_op = GradOperation(get_all=True).

      2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

      3. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs: gradient_function(x, y).

  • Used to get the derivative of the parameters:

    Returns gradients with respect to given parameters (see GradNetWithWrtParams in Examples).

    1. Construct a GradOperation higher-order function with get_by_list=True: grad_op = GradOperation(get_by_list=True).

    2. Construct a ParameterTuple that will be passed to the input function when constructing the GradOperation higher-order function; it will be used as a parameter filter that determines which gradients to return: params = ParameterTuple(net.trainable_params()).

    3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

    4. Call the gradient function with input function’s inputs to get the gradients with respect to given parameters: gradient_function(x, y).

  • Used to get the derivative of the inputs and parameters at the same time: Returns gradients with respect to all inputs and given parameters in the format of ((dx, dy), (dz)) (see GradNetWrtInputsAndParams in Examples).

    1. Construct a GradOperation higher-order function with get_all=True and get_by_list=True: grad_op = GradOperation(get_all=True, get_by_list=True).

    2. Construct a ParameterTuple that will be passed along input function when constructing GradOperation higher-order function: params = ParameterTuple(net.trainable_params()).

    3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

    4. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs and given parameters: gradient_function(x, y).

  • We can configure the sensitivity (gradient with respect to output) by setting sens_param to True and passing an extra sensitivity input to the gradient function. The sensitivity input should have the same shape and dtype as the input function's output (see GradNetWrtXYWithSensParam in Examples).

    1. Construct a GradOperation higher-order function with get_all=True and sens_param=True: grad_op = GradOperation(get_all=True, sens_param=True).

    2. Define grad_wrt_output as sens_param which works as the gradient with respect to output: grad_wrt_output = Tensor(np.ones([2, 2]).astype(np.float32)).

    3. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

    4. Call the gradient function with input function’s inputs and sens_param to get the gradients with respect to all inputs: gradient_function(x, y, grad_wrt_output).

Note

For the above gradient functions, the form of the returned gradient depends on the number of results:

  • Return a single value if only one result.

  • Return a tuple for multiple results.

  • Return an empty tuple for no result.

Parameters:
  • get_all (bool) – If True, get all the gradients with respect to inputs. Default: False.

  • get_by_list (bool) – If True, get all the gradients with respect to Parameter free variables. If get_all and get_by_list are both False, get the gradient with respect to first input. If get_all and get_by_list are both True, get the gradients with respect to inputs and Parameter free variables at the same time in the form of (“gradients with respect to inputs”, “gradients with respect to parameter free variables”). Default: False.

  • sens_param (bool) – Whether to append sensitivity (gradient with respect to output) as input. If sens_param is False, a ones_like(outputs) sensitivity will be attached automatically. If sens_param is True, the sensitivity (gradient with respect to output) needs to be passed as a positional or key-value pair argument; if it is passed as a key-value pair, the key must be sens. Default: False.

Returns:

The higher-order function which takes a function as argument and returns gradient function for it.

Raises:

TypeError – If get_all, get_by_list or sens_param is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ParameterTuple, ops
>>> from mindspore import dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...         self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
...     def construct(self, x, y):
...         x = x * self.z
...         out = self.matmul(x, y)
...         return out
...
>>> class GradNetWrtX(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtX, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation()
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> output = GradNetWrtX(Net())(x, y)
>>> print(output)
[[1.4100001 1.5999999 6.6      ]
 [1.4100001 1.5999999 6.6      ]]
>>>
>>> class GradNetWrtXY(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXY, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation(get_all=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.1, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXY(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 4.50000000e+00,  2.70000005e+00,  3.60000014e+00],
 [ 4.50000000e+00,  2.70000005e+00,  3.60000014e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 2.59999990e+00,  2.59999990e+00,  2.59999990e+00],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]]))
>>>
>>> class GradNetWrtXYWithSensParam(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXYWithSensParam, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation(get_all=True, sens_param=True)
...         self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y, self.grad_wrt_output)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXYWithSensParam(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 2.21099997e+00,  5.09999990e-01,  1.49000001e+00],
 [ 5.58800030e+00,  2.68000007e+00,  4.07000017e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 1.51999998e+00,  2.81999993e+00,  2.14000010e+00],
 [ 1.09999990e+00,  2.04999995e+00,  1.54999995e+00],
 [ 9.00000036e-01,  1.54999995e+00,  1.25000000e+00]]))
>>>
>>> class GradNetWithWrtParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWithWrtParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = ops.GradOperation(get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWithWrtParams(Net())(x, y)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
>>>
>>> class GradNetWrtInputsAndParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtInputsAndParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = ops.GradOperation(get_all=True, get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.1, 0.6, 1.2], [0.5, 1.3, 0.1]], dtype=mstype.float32)
>>> y = Tensor([[0.12, 2.3, 1.1], [1.3, 0.2, 2.4], [0.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtInputsAndParams(Net())(x, y)
>>> print(output)
((Tensor(shape=[2, 3], dtype=Float32, value=
[[ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00],
 [ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 6.00000024e-01,  6.00000024e-01,  6.00000024e-01],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]])), (Tensor(shape=[1], dtype=Float32, value=
 [ 1.29020004e+01]),))
class tinyms.primitives.HyperMap(ops=None, reverse=False)[source]

HyperMap applies the given operation to input sequences.

Apply the operation to every element of the sequence or nested sequence. Different from mindspore.ops.Map, HyperMap supports applying on nested structures.

Parameters:
  • ops (Union[MultitypeFuncGraph, None]) – ops is the operation to apply. If ops is None, the operations should be put in the first input of the instance. Default: None.

  • reverse (bool) – The flag that decides whether to apply the operation in reverse order. In some scenarios the optimizer needs to iterate in reverse to improve parallel performance; general users can ignore this. Only supported in graph mode. Default: False.

Inputs:
  • args (Tuple[sequence]) -

    • If ops is not None, all the inputs should be sequences with the same length. And each row of the sequences will be the inputs of the operation.

    • If ops is None, the first input is the operation, and the others are inputs.

Note

Except for the operation input, the number of inputs should be equal to the number of inputs to ops.

Outputs:

Sequence or nested sequence, the sequence of outputs after applying the operation to each element, e.g. operation(args[0][i], args[1][i]).

Raises:
  • TypeError – If ops is neither MultitypeFuncGraph nor None.

  • TypeError – If args is not a Tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> nest_tensor_list = ((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),
...                     (Tensor(3, mstype.float32), Tensor(4, mstype.float32)))
>>> # square all the tensor in the nested list
>>>
>>> square = ops.MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
...     return ops.square(x)
>>>
>>> common_map = ops.HyperMap()
>>> output = common_map(square, nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
>>> square_map = ops.HyperMap(square, False)
>>> output = square_map(nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
tinyms.primitives.zeros_like(input, *, dtype=None)[source]

Creates a tensor filled with 0, with the same shape as input, and the given dtype.

If dtype is None, the tensor will have the same dtype as input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, filled with 0.

Raises:

TypeError – If dtype is not a MindSpore dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(4).reshape(2, 2))
>>> output = ops.zeros_like(x, dtype=mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
tinyms.primitives.ones_like(input, *, dtype=None)[source]

Returns a Tensor filled with 1, with the same shape as input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, has the same shape as input but filled with ones.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = ops.ones_like(x)
>>> print(output)
[[1 1]
 [1 1]]
tinyms.primitives.normal(shape, mean, stddev, seed=None)[source]

Generates random numbers according to the Normal (or Gaussian) random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Union[Tensor, int, float]) – The mean μ distribution parameter, which specifies the location of the peak, with data type in [int8, int16, int32, int64, float16, float32].

  • stddev (Union[Tensor, int, float]) – The standard deviation σ distribution parameter. It should be greater than 0, with data type in [int8, int16, int32, int64, float16, float32].

  • seed (int) – Seed is used as entropy source for the Random number engines to generate pseudo-random numbers. The value must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of mean and stddev. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> shape = (3, 1, 2)
>>> mean = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[1, 2, 3], [3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 3, 3)
tinyms.primitives.laplace(shape, mean, lambda_param, seed=None)[source]

Generates random numbers according to the Laplace random number distribution. It is defined as:

\[\text{f}(x;μ,λ) = \frac{1}{2λ}\exp(-\frac{|x-μ|}{λ}),\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak. With float32 data type.

  • lambda_param (Tensor) – The parameter used for controlling the variance of this random distribution. The variance of Laplace distribution is equal to twice the square of lambda_param. With float32 data type.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be the broadcasted shape of input shape and shapes of mean and lambda_param. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> shape = (2, 3)
>>> mean = Tensor(1.0, mindspore.float32)
>>> lambda_param = Tensor(1.0, mindspore.float32)
>>> output = ops.laplace(shape, mean, lambda_param, seed=5)
>>> print(output.shape)
(2, 3)
tinyms.primitives.uniform(shape, minval, maxval, seed=None, dtype=mindspore.float32)[source]

Generates random numbers according to the Uniform random number distribution.

Note

The number in tensor minval should be strictly less than maxval at any position after broadcasting.

Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated.

  • minval (Tensor) – The distribution parameter a. It defines the minimum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • maxval (Tensor) – The distribution parameter b. It defines the maximum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – Type of the Uniform distribution. If it is int32, it generates numbers from discrete uniform distribution; if it is float32, it generates numbers from continuous uniform distribution. It only supports these two data types. Default: mindspore.float32.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of minval and maxval. The dtype is designated as the input dtype.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • TypeError – If the dtype of minval or maxval is neither int32 nor float32, or if minval and maxval have different dtypes.

  • TypeError – If seed is not an int.

  • TypeError – If ‘dtype’ is neither int32 nor float32.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> # For discrete uniform distribution, only one number is allowed for both minval and maxval:
>>> shape = (4, 2)
>>> minval = Tensor(1, mindspore.int32)
>>> maxval = Tensor(2, mindspore.int32)
>>> output = ops.uniform(shape, minval, maxval, seed=5, dtype=mindspore.int32)
>>>
>>> # For continuous uniform distribution, minval and maxval can be multi-dimensional:
>>> shape = (3, 1, 2)
>>> minval = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> maxval = Tensor([8.0, 10.0], mindspore.float32)
>>> output = ops.uniform(shape, minval, maxval, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.gamma(shape, alpha, beta, seed=None)[source]

Generates random numbers according to the Gamma random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated.

  • alpha (Tensor) – The \(\alpha\) distribution parameter. It should be greater than 0 with float32 data type.

  • beta (Tensor) – The \(\beta\) distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of alpha and beta. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If alpha or beta is not a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha and beta is not float32.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: alpha_shape is (2, 2)
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> # case 2: alpha_shape is (2, 3), so shape is (3, 1, 3)
>>> shape = (3, 1, 3)
>>> alpha = Tensor(np.array([[1, 3, 4], [2, 5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> # case 3: beta_shape is (1, 2), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0, 2]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> print(output)
[[[ 2.2132034  5.8855834]
  [ 3.3981476  7.5805717]]
 [[ 3.3981476  7.5805717]
  [ 3.7190282 19.941492 ]]
 [[ 2.9512358  2.5969937]
  [ 3.786061   5.160872 ]]]
>>> # case 4: beta_shape is (2, 1), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([[1.0], [2.0]]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> print(output)
[[[ 5.6085486  7.8280783]
  [15.97684   16.116285 ]]
 [[ 1.8347423  1.713663 ]
  [ 3.2434065 15.667398 ]]
 [[ 4.2922077  7.3365674]
  [ 5.3876944 13.159832 ]]]
tinyms.primitives.poisson(shape, mean, seed=None)[source]

ops.poisson is deprecated; please use mindspore.ops.random_poisson instead. Generates random numbers according to the Poisson random number distribution.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and the shape of mean. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

  • TypeError – If seed is not an int.

Supported Platforms:

deprecated

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> import mindspore
>>> # case 1: It can be broadcast.
>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(4, 2)
>>> # case 2: It can not be broadcast. It is recommended to use the same shape.
>>> shape = (2, 2)
>>> mean = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(2, 2)
tinyms.primitives.multinomial(input, num_samples, replacement=True, seed=None)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • input (Tensor) – The input tensor containing probabilities, must be 1 or 2 dimensions, with float32 data type.

  • num_samples (int) – Number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not. Default: True.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None.

Returns:

Tensor, has the same number of rows as input. The number of sampled indices per row is num_samples. The dtype is int32.

Raises:
  • TypeError – If input is not a Tensor or its dtype is not float32.

  • TypeError – If num_samples is not an int.

  • TypeError – If seed is neither an int nor None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> # case 1: The output is random, and the length of the output is the same as num_samples.
>>> input = Tensor([0, 9, 4, 0], mindspore.float32)
>>> output = ops.multinomial(input, 2)
>>> # print(output)
>>> # [1 2] or [2 1]
>>> # [2 1] appears more often than [1 2] because the weight at index 1
>>> # is larger than the weight at index 2.
>>> print(len(output))
2
>>> # case 2: The output is random, and the length of the output is the same as num_samples.
>>> # replacement is True (the default), so the same index can be drawn repeatedly.
>>> # Indices with zero weight are never drawn.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4)
>>> print(output)
[1 1 2 1]
>>> # case 3: The output is random, num_samples == input length == 4, and replacement is True,
>>> # so the same elements can be drawn more than once.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4, True)
>>> print(output)
[1 1 2 2]
tinyms.primitives.count_nonzero(x, axis=(), keep_dims=False, dtype=mindspore.int32)[source]

Count number of nonzero elements across axis of input tensor.

Parameters:
  • x (Tensor) – Input data is used to count non-zero numbers. With shape \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Default: (), reduce all dimensions.

  • keep_dims (bool, optional) – Whether to maintain dimensions specified by axis. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

  • dtype (Union[Number, mindspore.bool_], optional) – The data type of the output tensor. Default: mindspore.int32.

Returns:

Tensor, number of nonzero element across axis specified by axis. The data type is specified by dtype.

Raises:
  • TypeError – If axis is not int, tuple or list.

  • ValueError – If any value in axis is not in range [-x.ndim, x.ndim).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> # case 1: each value specified.
>>> x = Tensor(np.array([[0, 1, 0], [1, 1, 0]]).astype(np.float32))
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0, 1], keep_dims=True, dtype=mindspore.int32)
>>> print(nonzero_num)
[[3]]
>>> # case 2: all value is default.
>>> nonzero_num = ops.count_nonzero(x=x)
>>> print(nonzero_num)
3
>>> # case 3: axis value was specified 0.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0,])
>>> print(nonzero_num)
[1 2 0]
>>> # case 4: axis value was specified 1.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[1,])
>>> print(nonzero_num)
[1 2]
>>> # case 5: keep_dims value was specified.
>>> nonzero_num = ops.count_nonzero(x=x,  keep_dims=True)
>>> print(nonzero_num)
[[3]]
>>> # case 6: keep_dims and axis value was specified.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0,], keep_dims=True)
>>> print(nonzero_num)
[[1 2 0]]
tinyms.primitives.cummin(input, axis)[source]

Returns a tuple (values, indices) where values is the cumulative minimum of the input Tensor input along the dimension axis, and indices is the index location of each minimum value.

\[\begin{split}\begin{array}{ll} \\ y_{i} = min(x_{1}, x_{2}, ... , x_{i}) \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input Tensor, rank of input > 0.

  • axis (int) – The dimension to do the operation over. The value of axis must be in the range [-input.ndim, input.ndim - 1].

Returns:

tuple[Tensor], a tuple of 2 Tensors containing the cumulative minimum elements and their indices. The shape of each output tensor is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If axis is out the range of [-input.ndim, input.ndim - 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> a = Tensor([-0.2284, -0.6628,  0.0975,  0.2680, -1.3298, -0.4220], mindspore.float32)
>>> output = ops.cummin(a, axis=0)
>>> print(output[0])
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]
>>> print(output[1])
[0 1 1 1 4 4]
tinyms.primitives.tensor_dot(x1, x2, axes)[source]

Computes tensor contraction over arbitrary axes of tensors x1 and x2.

Contraction allows for the summation of products of elements of x1 and x2 on the specified axes. The same number of axes must be specified for both x1 and x2, and each value must be within the range of the number of dimensions of its tensor.

The selected dimensions in both inputs must also match.

axes = 0 leads to an outer product. axes = 1 leads to ordinary matrix multiplication when both inputs are 2D; it is the same as axes = ((1,), (0,)) in that case. axes = 2 is the same as axes = ((1, 2), (0, 1)) when both inputs are 3D.

Parameters:
  • x1 (Tensor) – First tensor in tensor_dot with datatype float16 or float32

  • x2 (Tensor) – Second tensor in tensor_dot with datatype float16 or float32

  • axes (Union[int, tuple(int), tuple(tuple(int)), list(list(int))]) – Single value or tuple/list of length 2 with the dimensions specified for x1 and x2 respectively. If a single value N is passed, the last N dimensions of x1 and the first N dimensions of x2 are used, in order, as the axes for each input.

Returns:

Tensor, the shape of the output tensor is \((N + M)\), where \(N\) and \(M\) are the free axes not contracted in the two inputs.

Raises:
  • TypeError – If x1 or x2 is not a Tensor.

  • TypeError – If axes is not one of the following: int, tuple, list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> input_x1 = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 1, 2]), mindspore.float32)
>>> output = ops.tensor_dot(input_x1, input_x2, ((0,1),(1,2)))
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]]
tinyms.primitives.dot(input, other)[source]

Computes the dot product between samples in two tensors.

Parameters:
  • input (Tensor) – First tensor in Dot op with datatype float16 or float32, The rank must be greater than or equal to 2.

  • other (Tensor) – Second tensor in Dot op with datatype float16 or float32, The rank must be greater than or equal to 2.

Returns:

Tensor, dot product of input and other.

Raises:
  • TypeError – If the types of input and other are not the same.

  • TypeError – If the dtype of input or other is not float16 or float32.

  • ValueError – If the rank of input or other is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[3. 3.]]
 [[3. 3.]]]
>>> print(output.shape)
(2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]]
  [[3. 3.]]]]
>>> print(output.shape)
(1, 2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]
   [3. 3.]]
  [[3. 3.]
   [3. 3.]]]]
>>> print(output.shape)
(1, 2, 2, 2)
>>> input = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]]
>>> print(output.shape)
(3, 2, 2, 1, 2)
tinyms.primitives.batch_dot(x1, x2, axes=None)[source]

Computation of batch dot product between samples in two tensors containing batch dims.

\[output = x1[batch, :] * x2[batch, :]\]
Parameters:
  • x1 (Tensor) – First tensor in Batch Dot op with datatype float32 and the rank of x1 must be greater than or equal to 2.

  • x2 (Tensor) – Second tensor in Batch Dot op with datatype float32. The datatype of x2 should be same as x1 and the rank of x2 must be greater than or equal to 2.

  • axes (Union[int, tuple(int), list(int)]) – Single value or tuple/list of length 2 with the dimensions specified for x1 and x2 respectively. If a single value N is passed, the last N dimensions of x1 and the last N dimensions of x2 are used, in order, as the axes for each input. Default: None.

Returns:

Tensor, batch dot product of x1 and x2. For example, the shape of the output for input x1 of shape (batch, d1, axes, d2) and x2 of shape (batch, d3, axes, d4) is (batch, d1, d2, d3, d4), where d1, d2, d3 and d4 are arbitrary sizes.

Raises:
  • TypeError – If the types of x1 and x2 are not the same.

  • TypeError – If the dtype of x1 or x2 is not float32.

  • ValueError – If the rank of x1 or x2 is less than 2.

  • ValueError – If a batch dimension is used in axes.

  • ValueError – If len(axes) is less than 2.

  • ValueError – If axes is not one of the following: None, int, (int, int).

  • ValueError – If the axes value resolved from a negative int is too low for the dimensions of the input arrays.

  • ValueError – If the axes value is too high for the dimensions of the input arrays.

  • ValueError – If the batch sizes of x1 and x2 are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.ones(shape=[2, 2, 3]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> axes = (-1, -2)
>>> output = ops.batch_dot(x1, x2, axes)
>>> print(output)
[[[3. 3.]
  [3. 3.]]
 [[3. 3.]
  [3. 3.]]]
>>> x1 = Tensor(np.ones(shape=[2, 2]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> axes = (1, 2)
>>> output = ops.batch_dot(x1, x2, axes)
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]]
>>> print(output.shape)
(2, 3)
>>> x1 = Tensor(np.ones(shape=[6, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[6, 5, 4, 8]), mindspore.float32)
>>> output = ops.batch_dot(x1, x2)
>>> print(output.shape)
(6, 2, 3, 5, 8)
>>> x1 = Tensor(np.ones(shape=[2, 2, 4]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 5, 4, 5]), mindspore.float32)
>>> output = ops.batch_dot(x1, x2)
>>> print(output.shape)
(2, 2, 5, 5)
tinyms.primitives.repeat_elements(x, rep, axis=0)[source]

Repeat elements of a tensor along an axis, like np.repeat.

Parameters:
  • x (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • rep (int) – The number of times to repeat, must be positive.

  • axis (int) – The axis along which to repeat, default 0.

Returns:

One tensor with values repeated along the specified axis. If x has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * rep, ..., sn)\). The output type will be the same as the type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1 : repeat on axis 0
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
>>> # case 2 : repeat on axis 1
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 1)
>>> print(output)
[[0 0 1 1 2 2]
 [3 3 4 4 5 5]]
tinyms.primitives.repeat_interleave(input, repeats, axis=None)[source]

Repeat elements of a tensor along an axis, like numpy.repeat.

Parameters:
  • input (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • repeats (int) – The number of times to repeat, must be positive.

  • axis (int, optional) – The axis along which to repeat. Default: None. If axis is None, the input Tensor will be flattened and the output will also be flattened.

Returns:

One tensor with values repeated along the specified axis. If input has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * repeats, ..., sn)\). The output type will be the same as the type of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_interleave(input, repeats=2, axis=0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
tinyms.primitives.sequence_mask(lengths, maxlen=None)[source]

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape \((d_1, d_2, ..., d_n)\), then the resulting tensor mask has type and shape \((d_1, d_2, ..., d_n, maxlen)\), with mask \([i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])\).

Parameters:
  • lengths (Tensor) – Tensor to calculate the mask for. All values in this tensor should be less than or equal to maxlen. Values greater than maxlen will be treated as maxlen.

  • maxlen (int) – The size of the last dimension of the returned tensor. Must be positive and of the same type as the elements in lengths. Default: None, in which case the maximum value in lengths is used.

Returns:

One mask tensor of shape lengths.shape + (maxlen,) .

Raises:
  • TypeError – If lengths is not a Tensor.

  • TypeError – If maxlen is not an int.

  • TypeError – If dtype of lengths is neither int32 nor int64.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: When maxlen is assigned
>>> x = Tensor(np.array([1, 2, 3, 4]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[ True False False False False]
 [ True  True False False False]
 [ True  True  True False False]
 [ True  True  True  True False]]
>>> # case 2: When there is 0 in x
>>> x = Tensor(np.array([[1, 3], [2, 0]]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[[ True False False False False]
  [ True  True  True False False]]
 [[ True  True False False False]
  [False False False False False]]]
>>> # case 3: when the maxlen is not assigned
>>> x = Tensor(np.array([[1, 3], [2, 4]]))
>>> output = ops.sequence_mask(x)
>>> print(output)
[[[ True False False False]
  [ True  True  True False]]
 [[ True  True False False]
  [ True  True  True  True]]]
tinyms.primitives.matmul(input, other)[source]

Returns the matrix product of two tensors.

Note

Numpy arguments out, casting, order, subok, signature, and extobj are not supported. On GPU and CPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • input (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

  • other (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

Returns:

Tensor or scalar, the matrix product of the inputs. This is a scalar only when both input and other are 1-D vectors.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of other, or if a scalar value is passed in.

  • ValueError – If the shape of input and other could not broadcast together.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1 : Reasonable application of broadcast mechanism
>>> input = Tensor(np.arange(2*3*4).reshape(2, 3, 4), mindspore.float32)
>>> other = Tensor(np.arange(4*5).reshape(4, 5), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[[[  70.   76.   82.   88.   94.]
  [ 190.  212.  234.  256.  278.]
  [ 310.  348.  386.  424.  462.]]
 [[ 430.  484.  538.  592.  646.]
  [ 550.  620.  690.  760.  830.]
  [ 670.  756.  842.  928. 1014.]]]
>>> print(output.shape)
(2, 3, 5)
>>> # case 2 : the rank of `other` is 1
>>> input = Tensor(np.ones([1, 2]), mindspore.float32)
>>> other = Tensor(np.ones([2,]), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[2.]
>>> print(output.shape)
(1,)
tinyms.primitives.mm(input, mat2)[source]

Returns the matrix product of two arrays. If input is a \((n \times m)\) Tensor, mat2 is a \((m \times p)\) Tensor, out will be a \((n \times p)\) Tensor.

Note

This function cannot support broadcasting. Refer to mindspore.ops.matmul() instead if you need a broadcastable function.

Parameters:
  • input (Tensor) – The first matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

  • mat2 (Tensor) – The second matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

Returns:

Tensor, the matrix product of the inputs.

Raises:
  • ValueError – If the last dimension of input is not the same size as the first dimension of mat2.

  • ValueError – If input or mat2 is not a matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> x1 = ms.Tensor(np.random.rand(2, 3))
>>> x2 = ms.Tensor(np.random.rand(3, 4))
>>> out = ops.mm(x1, x2)
>>> print(out.shape)
(2, 4)
class tinyms.primitives.ACos[source]

Computes arccosine of input tensors element-wise.

Refer to mindspore.ops.acos() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> acos = ops.ACos()
>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = acos(x)
>>> print(output)
[0.737726  1.5307857 1.2661036 0.9764105]
class tinyms.primitives.Abs[source]

Returns absolute value of a tensor element-wise.

Refer to mindspore.ops.abs() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> abs = ops.Abs()
>>> output = abs(x)
>>> print(output)
[1. 1. 0.]
class tinyms.primitives.AccumulateNV2[source]

Computes accumulation of all input tensors element-wise.

Refer to mindspore.ops.accumulate_n() for more details.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> class NetAccumulateNV2(nn.Cell):
...     def __init__(self):
...         super(NetAccumulateNV2, self).__init__()
...         self.accumulateNV2 = ops.AccumulateNV2()
...
...     def construct(self, *z):
...         return self.accumulateNV2(z)
...
>>> net = NetAccumulateNV2()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(x, y, x, y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.Acosh[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

Refer to mindspore.ops.acosh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, dtype
>>> acosh = ops.Acosh()
>>> x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), dtype.float32)
>>> output = acosh(x)
>>> print(output)
[0.        0.9624237 1.7627472 5.298292 ]
class tinyms.primitives.Adam(use_locking=False, use_nesterov=False)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

For more details, please refer to mindspore.nn.Adam.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t(\beta_1^{t})\) and \(beta_2^t(\beta_2^{t})\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.
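
To make the formulas concrete, the following is a minimal NumPy sketch of a single update step that mirrors the formulas above; all names are illustrative and this is not the operator's implementation:

>>> import numpy as np
>>> beta1, beta2, alpha, eps, t = 0.9, 0.999, 0.001, 1e-8, 1
>>> w = np.ones(2, np.float32)   # var
>>> m = np.zeros(2, np.float32)  # 1st moment vector
>>> v = np.zeros(2, np.float32)  # 2nd moment vector
>>> g = np.ones(2, np.float32)   # gradient
>>> m = beta1 * m + (1 - beta1) * g
>>> v = beta2 * v + (1 - beta2) * g * g
>>> l = alpha * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
>>> w = w - l * m / (np.sqrt(v) + eps)
>>> print(np.round(w, 4))
[0.999 0.999]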

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type can be float16 or float32.

  • m (Parameter) - The 1st moment vector in the updating formula, the shape should be the same as var.

  • v (Parameter) - the 2nd moment vector in the updating formula, the shape should be the same as var.

  • beta1_power (float) - \(beta_1^t(\beta_1^{t})\) in the updating formula.

  • beta2_power (float) - \(beta_2^t(\beta_2^{t})\) in the updating formula.

  • lr (float) - \(l\) in the updating formula. The paper suggested value is \(10^{-3}\).

  • beta1 (float) - The exponential decay rate for the 1st moment estimations. The paper suggested value is \(0.9\).

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations. The paper suggested value is \(0.999\).

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • gradient (Tensor) - Gradient, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as Inputs var.

  • m (Tensor) - The same shape and data type as Inputs m.

  • v (Tensor) - The same shape and data type as Inputs v.

Raises:
  • TypeError – If use_locking or use_nesterov is not a bool.

  • TypeError – If var, m or v is not a Parameter.

  • TypeError – If beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adam = ops.Adam()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                               epsilon, grad)
...         return out
...
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, gradient)
>>> print(net.var.asnumpy())
[[0.9996838 0.9996838]
 [0.9996838 0.9996838]]
class tinyms.primitives.AdamNoUpdateParam(use_locking=False, use_nesterov=False)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. This operator does not update the parameter; instead, it calculates the value that should be added to the parameter.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ \Delta{w} = - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t(\beta_1^{t})\) and \(beta_2^t(\beta_2^{t})\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents the parameter to be updated, \(\epsilon\) represents epsilon.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • m (Tensor) - The 1st moment vector in the updating formula. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type must be float32.

  • v (Tensor) - the 2nd moment vector in the updating formula. The shape must be the same as m. The data type must be float32.

  • beta1_power (Tensor) - \(beta_1^t(\beta_1^{t})\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • beta2_power (Tensor) - \(beta_2^t(\beta_2^{t})\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • lr (Tensor) - \(l\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations. The shape is \((1, )\) and the data type must be float32.

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations. The shape is \((1, )\) and the data type must be float32.

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability. The shape is \((1, )\) and the data type must be float32.

  • gradient (Tensor) - Gradient, the shape must be the same as m, the data type must be float32.

Outputs:

Tensor, with the same shape and data type as the input gradient; it is the value that should be added to the parameter to be updated.

Raises:
  • TypeError – If use_locking or use_nesterov is not a bool.

  • TypeError – If m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam = ops.AdamNoUpdateParam()
...         self.m = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="m")
...         self.v = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.adam(self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad)
...         return out
>>> net = Net()
>>> beta1_power = Tensor(0.9, ms.float32)
>>> beta2_power = Tensor(0.999, ms.float32)
>>> lr = Tensor(0.001, ms.float32)
>>> beta1 = Tensor(0.9, ms.float32)
>>> beta2 = Tensor(0.999, ms.float32)
>>> epsilon = Tensor(1e-8, ms.float32)
>>> gradient = Tensor(np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]).astype(np.float32))
>>> result = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient)
>>> print(result)
[[-0.00010004 -0.00010004 -0.00010004]
 [-0.00013441 -0.00013441 -0.00013441]]
class tinyms.primitives.AdamWeightDecay(use_locking=False)[source]

Updates gradients by the Adaptive Moment Estimation algorithm with weight decay (AdamWeightDecay).

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. The AdamWeightDecay variant was proposed in Decoupled Weight Decay Regularization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ update = \frac{m}{\sqrt{v} + \epsilon} \\ update = \begin{cases} update + weight\_decay * w & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w = w - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(\beta_1, \beta_2\) represent beta1 and beta2, \(lr\) represents learning_rate, \(w\) represents var, \(decay\) represents weight_decay, \(\epsilon\) represents epsilon.

Parameters:

use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type can be float16 or float32.

  • m (Parameter) - The 1st moment vector in the updating formula, it should have the same shape as var. The data type can be float16 or float32.

  • v (Parameter) - The 2nd moment vector in the updating formula, it should have the same shape and dtype as m.

  • lr (float) - \(lr\) in the updating formula. The paper suggests \(0.001\) as a good default. The data type should be float32.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations, the data type should be float32. The paper suggested value is \(0.9\).

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations, the data type should be float32. The paper suggested value is \(0.999\).

  • epsilon (float) - Term added to the denominator to improve numerical stability, the data type should be float32. The paper suggested value is \(10^{-8}\).

  • decay (float) - The weight decay value, must be a float number or a scalar tensor with float32 data type. Default: 0.0.

  • gradient (Tensor) - Gradient, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If lr, beta1, beta2, epsilon or decay is not a float32.

  • TypeError – If var, m or v is not a Parameter with dtype float16 or float32.

  • TypeError – If gradient is not a Tensor.

  • ValueError – If epsilon <= 0.

  • ValueError – If beta1 or beta2 is not in range (0.0, 1.0).

  • ValueError – If decay < 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam_weight_decay = ops.AdamWeightDecay()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, lr, beta1, beta2, epsilon, decay, grad):
...         out = self.adam_weight_decay(self.var, self.m, self.v, lr, beta1, beta2,
...                               epsilon, decay, grad)
...         return out
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.001, 0.9, 0.999, 1e-8, 0.0, gradient)
>>> print(net.var.asnumpy())
[[0.999 0.999]
 [0.999 0.999]]
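As a quick sanity check, the var value above can be reproduced with plain NumPy by following the updating formulas (an illustrative sketch only, not the operator's implementation):

>>> import numpy as np
>>> var = np.ones([2, 2], np.float32); m = np.ones([2, 2], np.float32)
>>> v = np.ones([2, 2], np.float32); g = np.ones([2, 2], np.float32)
>>> lr, b1, b2, eps, decay = 0.001, 0.9, 0.999, 1e-8, 0.0
>>> m = b1 * m + (1 - b1) * g
>>> v = b2 * v + (1 - b2) * g * g
>>> update = m / (np.sqrt(v) + eps)
>>> if decay > 0.0: update = update + decay * var
>>> var = var - lr * update
>>> print(np.round(var, 4))
[[0.999 0.999]
 [0.999 0.999]]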
class tinyms.primitives.AdaptiveAvgPool2D(output_size)[source]

AdaptiveAvgPool2D operation.

Refer to mindspore.ops.adaptive_avg_pool2d() for more details.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D((None, 2))
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]]
>>> # case 2: output_size=2
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D(2)
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D((1, 2))
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[4.5 5.5]]
 [[4.5 5.5]]
 [[4.5 5.5]]]
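Under the usual adaptive-pooling window convention (an assumption here, not stated by this API), output cell \((i, j)\) averages the input window whose bounds are \(\lfloor i \cdot H_{in}/H_{out} \rfloor\) to \(\lceil (i+1) \cdot H_{in}/H_{out} \rceil\) in height, and analogously in width. A small NumPy sketch of that rule reproduces case 1 for one channel:

>>> import numpy as np
>>> x = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])   # one channel
>>> h_in = w_in = 3
>>> h_out, w_out = 3, 2          # output_size=(None, 2) keeps H unchanged
>>> out = np.zeros((h_out, w_out))
>>> for i in range(h_out):
...     for j in range(w_out):
...         h0, h1 = i * h_in // h_out, -(-(i + 1) * h_in // h_out)   # floor/ceil bounds
...         w0, w1 = j * w_in // w_out, -(-(j + 1) * w_in // w_out)
...         out[i, j] = x[h0:h1, w0:w1].mean()
>>> print(out)
[[1.5 2.5]
 [4.5 5.5]
 [7.5 8.5]]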
class tinyms.primitives.AdaptiveAvgPool3D(output_size)[source]

AdaptiveAvgPool3D operation.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.adaptive_avg_pool3d() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import nn, Tensor
>>> from mindspore.ops import AdaptiveAvgPool3D
>>> class AdaptiveAvgPool3DNet(nn.Cell):
...     def __init__(self, output_size):
...         super(AdaptiveAvgPool3DNet, self).__init__()
...         self.output_size_ = output_size
...         self.adaptive_avg_pool_3d = AdaptiveAvgPool3D(self.output_size_)
...     def construct(self, x_):
...         return self.adaptive_avg_pool_3d(x_)
...
>>> output_size=(1,1,1)
>>> input_x_val = np.zeros((1,1,2,2,2))
>>> input_x_val[:,:,0,:,:]  += 1
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> adaptive_avg_pool_3d = AdaptiveAvgPool3DNet(output_size)
>>> output = adaptive_avg_pool_3d(input_x)
>>> print(output)
[[[[[0.5]]]]]
class tinyms.primitives.AdaptiveMaxPool2D(output_size)[source]

Performs 2D adaptive max pooling on a multi-plane input signal.

Refer to mindspore.ops.adaptive_max_pool2d() for more details.

Parameters:

output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If a dimension is None, the output size of that dimension is the same as the input size.

Inputs:
  • input_x (Tensor) - The input of AdaptiveMaxPool2D, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

Outputs:

Tensor, with the same type as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((None, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D(2)
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((1, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
class tinyms.primitives.AdaptiveMaxPool3D[source]

Performs 3D adaptive max pooling on a multi-plane input signal.

Refer to mindspore.ops.adaptive_max_pool3d() for more details.

Inputs:
  • x (Tensor) - Tensor, with shape \((C, D, H, W)\) or \((N, C, D, H, W)\).

  • output_size (Union[int, tuple]) - The specified output size, which is an integer that represents depth, height and width, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer. If it is None, the output size and input size of the corresponding dimension are the same.

Outputs:
  • y (Tensor) - Tensor, with the same number of dims and data type as the input.

  • argmax (Tensor) - Tensor, the indices of the max values, which has the same shape as y and whose data type is int32.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> class AdaptiveMaxPool3DNet(nn.Cell):
...     def __init__(self):
...         super(AdaptiveMaxPool3DNet, self).__init__()
...         self.adaptive_max_pool_3d = ops.AdaptiveMaxPool3D()
...     def construct(self, x_, output_size_):
...         return self.adaptive_max_pool_3d(x_, output_size_)
>>> x = np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32)
>>> output_size = np.array([1, 1, 2], dtype=np.int32)
>>> net = AdaptiveMaxPool3DNet()
>>> output = net(Tensor(x), Tensor(output_size))
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
[[[[33 35]]]]
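In this example the pooled windows cover the full depth and height while the width is split into two halves, and since each element of x equals its own flattened (D, H, W) index, the argmax values coincide with the pooled maxima. A NumPy sketch of the window maxima (illustration only):

>>> import numpy as np
>>> x = np.arange(0, 36).reshape((1, 3, 3, 4)).astype(np.float32)
>>> left, right = x[0, :, :, 0:2], x[0, :, :, 2:4]   # width split into two windows
>>> print(left.max(), right.max())
33.0 35.0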
class tinyms.primitives.Add[source]

Adds two input tensors element-wise.

Refer to mindspore.ops.add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1: x and y are both Tensor.
>>> add = ops.Add()
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(x, y)
>>> print(output)
[5. 7. 9.]
>>> # case 2: x is a scalar and y is a Tensor
>>> add = ops.Add()
>>> x = Tensor(1, mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(x, y)
>>> print(output)
[5. 6. 7.]
>>> # the data type of x is int32, the data type of y is float32,
>>> # and the output is the data format of higher precision float32.
>>> print(output.dtype)
Float32
class tinyms.primitives.AddN[source]

Computes addition of all input tensors element-wise.

Refer to mindspore.ops.addn() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> class NetAddN(nn.Cell):
...     def __init__(self):
...         super(NetAddN, self).__init__()
...         self.addN = ops.AddN()
...
...     def construct(self, *z):
...         return self.addN(z)
...
>>> net = NetAddN()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(x, y, x, y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.Addcdiv[source]

Performs the element-wise division of tensor x1 by tensor x2, multiplies the result by value, and adds it to input_data.

\[y[i] = input\_data[i] + value[i] * (x1[i] / x2[i])\]
Inputs:
  • input_data (Tensor) - The tensor to be added.

  • x1 (Tensor) - The numerator tensor.

  • x2 (Tensor) - The denominator tensor.

  • value (Tensor) - The multiplier for tensor x1/x2.

Outputs:

Tensor, has the same shape and dtype as x1/x2.

Raises:
  • TypeError – If x1, x2, value or input_data is not a Tensor.

  • TypeError – If the dtypes of x1, x2, value and input_data are not the same.

  • ValueError – If x1 could not be broadcast to x2.

  • ValueError – If value could not be broadcast to x1/x2.

  • ValueError – If input_data could not be broadcast to value*(x1/x2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_data = Tensor(np.array([1, 1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([1, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.array([4, 3, 2, 1]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> addcdiv = ops.Addcdiv()
>>> y = addcdiv(input_data, x1, x2, value)
>>> print(y)
[1.25      1.6666667 2.5       5.       ]
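The same result follows directly from the formula with plain NumPy (an illustrative sketch; displayed precision may differ from the float32 output above):

>>> import numpy as np
>>> input_data = np.array([1., 1., 1., 1.])
>>> x1 = np.array([1., 2., 3., 4.])
>>> x2 = np.array([4., 3., 2., 1.])
>>> value = 1.0
>>> print(input_data + value * (x1 / x2))
[1.25       1.66666667 2.5        5.        ]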
class tinyms.primitives.Addcmul[source]

Performs the element-wise product of tensor x1 and tensor x2, multiplies the result by value, and adds it to input_data.

\[output[i] = input\_data[i] + value[i] * (x1[i] * x2[i])\]
Inputs:
  • input_data (Tensor) - The tensor to be added.

  • x1 (Tensor) - The tensor to be multiplied.

  • x2 (Tensor) - The tensor to be multiplied.

  • value (Tensor) - The multiplier for tensor x1*x2.

Outputs:

Tensor, has the same shape and dtype as x1*x2.

Raises:
  • TypeError – If x1, x2, value or input_data is not a Tensor.

  • TypeError – If the dtypes of x1, x2, value and input_data are not the same.

  • ValueError – If x1 could not be broadcast to x2.

  • ValueError – If value could not be broadcast to x1 * x2.

  • ValueError – If input_data could not be broadcast to value*(x1*x2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_data = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> x2 = Tensor(np.array([[1, 2, 3]]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> addcmul = ops.Addcmul()
>>> y = addcmul(input_data, x1, x2, value)
>>> print(y)
[[ 2.  3.  4.]
 [ 3.  5.  7.]
 [ 4.  7. 10.]]
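Because x1 has shape (3, 1) and x2 has shape (1, 3), the product broadcasts to (3, 3); plain NumPy reproduces the result (illustrative sketch):

>>> import numpy as np
>>> input_data = np.array([1., 1., 1.])
>>> x1 = np.array([[1.], [2.], [3.]])
>>> x2 = np.array([[1., 2., 3.]])
>>> print(input_data + 1.0 * (x1 * x2))   # (3,1) * (1,3) broadcasts to (3,3)
[[ 2.  3.  4.]
 [ 3.  5.  7.]
 [ 4.  7. 10.]]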
class tinyms.primitives.AdjustHue[source]

Adjust hue of RGB images.

Note

This is a convenience method that converts the RGB image to float representation, converts it to HSV, shifts the intensities in the hue channel, and then converts back to the original data type. It is recommended to minimize the number of redundant conversions when several adjustments are chained.

Inputs:
  • image (Tensor): RGB image or images. A Tensor with at least 3 dimensions; the last dimension is interpreted as channels and its size must be three. The dtype is float16 or float32.

  • delta (Tensor): How much to add to the hue channel, the dtype is float32. Must be 0-D.

Outputs:

Adjusted image(s), same shape and dtype as image.

Raises:
  • TypeError – If image or delta is not a Tensor.

  • TypeError – If the dtype of image is neither float16 nor float32.

  • TypeError – If the dtype of delta is not float32.

  • ValueError – If the dimension of image is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> class AdjustHue(nn.Cell):
...   def __init__(self):
...     super(AdjustHue, self).__init__()
...     self.adjustHue = ops.AdjustHue()
...   def construct(self, image, delta):
...     return self.adjustHue(image, delta)
...
>>> image = np.array([[[1, 2, 3], [4, 5, 6]],
...                   [[7, 8, 9], [10, 11, 12]],
...                   [[13, 14, 15], [16, 17, 18]]]).astype(np.float32)
>>> delta = 0.2
>>> adjust_hue = AdjustHue()
>>> output = adjust_hue(Tensor(image), Tensor(delta))
>>> print("output", output)
output [[[ 2.3999996  1.         3.       ]
         [ 5.3999996  4.         6.       ]]
        [[ 8.4        7.         9.       ]
         [11.4       10.        12.       ]]
        [[14.4       13.        15.       ]
         [17.4       16.        18.       ]]]
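The result can be cross-checked with matplotlib's HSV conversion routines, if matplotlib is available. Note the assumptions: matplotlib expects RGB values in [0, 1], so the image is scaled down and back up, and the hue channel wraps modulo 1.0 (a sketch of the transformation, not the operator's implementation):

>>> import numpy as np
>>> from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
>>> hsv = rgb_to_hsv(image / 18.0)              # scale into [0, 1] first
>>> hsv[..., 0] = (hsv[..., 0] + 0.2) % 1.0     # shift the hue channel by delta
>>> print(np.allclose(hsv_to_rgb(hsv) * 18.0, output.asnumpy(), atol=1e-4))
True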
class tinyms.primitives.AdjustSaturation[source]

Adjust saturation of RGB images.

Note

This is a convenience method that converts RGB images to float representation, converts them to HSV, adds an offset to the saturation channel, converts back to RGB and then back to the original data type. If several adjustments are chained it is advisable to minimize the number of redundant conversions.

Inputs:
  • image (Tensor) - Images to adjust. Must be one of the following types: float16, float32. At least 3-D. The last dimension is interpreted as channels, and must be three.

  • scale (Tensor) - A scale factor determines the amount of saturation adjustment to apply to the image. A value greater than 1.0 increases the saturation, while a value less than 1.0 decreases the saturation. A value of 1.0 leaves the saturation unchanged. Must be 0-D Tensor of type float32.

Outputs:

Adjusted image(s), same shape and dtype as image.

Raises:
  • TypeError – If image or scale is not a Tensor.

  • TypeError – If the dtype of image is not one of: float16, float32.

  • TypeError – If the dtype of scale is not float32.

  • ValueError – If the dimension of image is less than 3.

  • ValueError – If the last dimension of image is not 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> x = Tensor([[[1.0, 2.0, 3.0],
...              [4.0, 5.0, 6.0]],
...             [[7.0, 8.0, 9.0],
...              [10.0, 11.0, 12.0]]])
>>> scale = Tensor(float(0.5))
>>> adjustsaturation = ops.AdjustSaturation()
>>> output = adjustsaturation(x, scale)
>>> print(output)
[[[ 2.         2.4999998  3.       ]
  [ 5.         5.5        6.       ]]
 [[ 8.         8.5        9.       ]
  [11.        11.5       12.       ]]]
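A similar matplotlib cross-check works here: scale the saturation channel instead of shifting the hue (same [0, 1] scaling assumption as for AdjustHue above; a sketch only):

>>> import numpy as np
>>> from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
>>> hsv = rgb_to_hsv(x.asnumpy() / 12.0)                  # scale into [0, 1] first
>>> hsv[..., 1] = np.clip(hsv[..., 1] * 0.5, 0.0, 1.0)    # scale the saturation channel
>>> print(np.allclose(hsv_to_rgb(hsv) * 12.0, output.asnumpy(), atol=1e-4))
True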
class tinyms.primitives.AffineGrid(align_corners=False)[source]

Creates a 2D or 3D flow field (sampling grid) based on a batch of affine matrices theta.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.affine_grid() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> affinegrid = ops.AffineGrid(align_corners=False)
>>> theta = Tensor([[[0.8, 0.5, 0],[-0.5, 0.8, 0]]], mindspore.float32)
>>> out_size = (1, 3, 2, 3)
>>> output = affinegrid(theta, out_size)
>>> print(output)
[[[[-0.78333336 -0.06666666]
   [-0.25       -0.4       ]
   [ 0.28333336 -0.73333335]]
  [[-0.28333336  0.73333335]
   [ 0.25        0.4       ]
   [ 0.78333336  0.06666666]]]]
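For align_corners=False the grid points are commonly taken at normalized pixel centres, \(x_j = (2j + 1)/W - 1\) and \(y_i = (2i + 1)/H - 1\), and each output entry is \((x, y, 1)\) multiplied by the transpose of theta. A NumPy sketch under that assumption reproduces the output above:

>>> import numpy as np
>>> theta_np = np.array([[0.8, 0.5, 0.0], [-0.5, 0.8, 0.0]], np.float32)
>>> H, W = 2, 3                                    # spatial part of out_size
>>> xs = (2.0 * np.arange(W) + 1.0) / W - 1.0      # pixel-centre x coordinates
>>> ys = (2.0 * np.arange(H) + 1.0) / H - 1.0      # pixel-centre y coordinates
>>> X, Y = np.meshgrid(xs, ys)
>>> grid = np.stack([X, Y, np.ones((H, W))], axis=-1)   # (H, W, 3) homogeneous coords
>>> print(np.allclose(grid @ theta_np.T, output.asnumpy()[0], atol=1e-6))
True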
class tinyms.primitives.AllGather(group='hccl_world_group')[source]

Gathers tensors from the specified communication group.

Note

  • The tensors must have the same shape and format in all processes of the collection.

  • Currently only supports GRAPH_MODE and it should be called in Cell.

Parameters:

group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the number of devices in the group is N, then the shape of output is \((N, x_1, x_2, ..., x_R)\).

Raises:
  • TypeError – If group is not a str.

  • ValueError – If the local rank id of the calling process in the group is larger than the group’s rank size.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allgather = ops.AllGather()
...
...     def construct(self, x):
...         return self.allgather(x)
...
>>> input_x = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
[[1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]]
class tinyms.primitives.AllReduce(op='sum', group='hccl_world_group')[source]

Reduces the tensor data across all devices in such a way that all devices will get the same final result.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters:
  • op (str) – Specifies an operation used for element-wise reductions, like sum, prod, max, and min. On the CPU, only ‘sum’ is supported. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape of the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.

Raises:

TypeError – If op or group is not a str, if fusion is not an integer, or if the dtype of the input is bool.

Supported Platforms:

Ascend GPU CPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with multiple devices.

>>> import numpy as np
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.AlltoAll(split_count, split_dim, concat_dim, group='hccl_world_group')[source]

AlltoAll is a collective operation.

AlltoAll sends data from the all processes to the all processes in the specified group. It has two phases:

  • The scatter phase: On each process, the operand is split into split_count blocks along split_dim, and the blocks are scattered to all processes, e.g., the i-th block is sent to the i-th process.

  • The gather phase: Each process concatenates the received blocks along concat_dim.

Note

This operator requires a full-mesh network topology: each device has the same vlan id, and the ip & mask are in the same subnet. Please check the details.

Parameters:
  • split_count (int) – On each process, divide blocks into split_count number.

  • split_dim (int) – On each process, split blocks along the split_dim.

  • concat_dim (int) – On each process, concatenate the received blocks along the concat_dim.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the shape of input tensor is \((x_1, x_2, ..., x_R)\), then the shape of output tensor is \((y_1, y_2, ..., y_R)\), where:

  • \(y_{split\_dim} = x_{split\_dim} / split\_count\)

  • \(y_{concat\_dim} = x_{concat\_dim} * split\_count\)

  • \(y_{other} = x_{other}\).

Raises:

TypeError – If group is not a string.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 8 devices.

>>> import os
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.alltoall = ops.AlltoAll(split_count = 8, split_dim = -2, concat_dim = -1)
...
...     def construct(self, x):
...         out = self.alltoall(x)
...         return out
...
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend')
>>> init()
>>> net = Net()
>>> rank_id = int(os.getenv("RANK_ID"))
>>> input_x = Tensor(np.ones([1, 1, 8, 1]) * rank_id, dtype = ms.float32)
>>> output = net(input_x)
>>> print(output)
[[[[0. 1. 2. 3. 4. 5. 6. 7.]]]]
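The scatter/gather semantics can be simulated in a single process with NumPy. The sketch below mimics what rank 0 ends up with when every rank holds a (1, 1, 8, 1) tensor filled with its own rank id; it is an illustration of the semantics only, not a replacement for the collective:

>>> import numpy as np
>>> ranks = [np.ones([1, 1, 8, 1]) * r for r in range(8)]       # data held by 8 devices
>>> blocks = [np.split(x, 8, axis=-2) for x in ranks]           # scatter phase: split along split_dim
>>> out0 = np.concatenate([blocks[src][0] for src in range(8)], axis=-1)  # gather phase on rank 0
>>> print(out0.shape)
(1, 1, 1, 8)
>>> print(out0.ravel())
[0. 1. 2. 3. 4. 5. 6. 7.]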
class tinyms.primitives.Angle[source]

Returns the element-wise argument of a complex tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.angle() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor([-1.5 + 7.8j, 3 + 5.75j], mindspore.complex64)
>>> angle = ops.Angle()
>>> output = angle(input)
>>> print(output)
[1.7607845 1.0899091]
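The values agree with NumPy's np.angle, which computes the same element-wise complex argument (a quick cross-check):

>>> import numpy as np
>>> computed = np.angle(np.array([-1.5 + 7.8j, 3 + 5.75j], np.complex64))
>>> print(np.allclose(computed, [1.7607845, 1.0899091]))
True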
class tinyms.primitives.ApplyAdaMax[source]

Updates relevant entries according to the adamax scheme.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \max(\beta_2 * v_{t}, \left| g \right|) \\ var = var - \frac{l}{1 - \beta_1^{t+1}} * \frac{m_{t+1}}{v_{t+1} + \epsilon} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(v\) represents the 2nd moment vector, \(v_{t}\) is the last moment of \(v_{t+1}\), \(l\) represents scaling factor lr, \(g\) represents grad, \(\beta_1, \beta_2\) represent beta1 and beta2, \(\beta_1^{t+1}\) represents beta1_power, \(var\) represents the variable to be updated, \(\epsilon\) represents epsilon.

Inputs of var, m, v and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Variable to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and type as var. With float32 or float16 data type.

  • v (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients with the same shape and type as var. With float32 or float16 data type.

  • beta1_power (Union[Number, Tensor]) - \(\beta_1^{t+1}\) in the updating formula, must be a scalar. With float32 or float16 data type.

  • lr (Union[Number, Tensor]) - Learning rate, \(l\) in the updating formula, must be a scalar. With float32 or float16 data type.

  • beta1 (Union[Number, Tensor]) - The exponential decay rate for the 1st moment estimations, must be a scalar. With float32 or float16 data type.

  • beta2 (Union[Number, Tensor]) - The exponential decay rate for the 2nd moment estimations, must be a scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient, has the same shape and type as var. With float32 or float16 data type.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Raises:
  • TypeError – If dtype of var, m, v, beta1_power, lr, beta1, beta2, epsilon or grad is neither float16 nor float32.

  • TypeError – If beta1_power, lr, beta1, beta2 or epsilon is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of var, m, v and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_ada_max = ops.ApplyAdaMax()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.array([[0.9, 0.1],
...                                             [0.7, 0.8]]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_ada_max(self.var, self.m, self.v, beta1_power, lr, beta1, beta2, epsilon, grad)
...         return out
...
>>> net = Net()
>>> beta1_power =Tensor(0.9, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.99, mindspore.float32)
>>> epsilon = Tensor(1e-10, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(beta1_power, lr, beta1, beta2, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.93602717e-01,  3.92571449e-01],
 [ 9.72582996e-02,  4.92249995e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.69999993e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000005e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 8.90999973e-01,  6.99999988e-01],
 [ 6.93000019e-01,  8.00000012e-01]]))
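The returned tuple can be verified with plain NumPy by following the updating formulas (an illustrative sketch only; shown here for var):

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> m = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> v = np.array([[0.9, 0.1], [0.7, 0.8]], np.float32)
>>> g = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> b1_power, lr, b1, b2, eps = 0.9, 0.001, 0.9, 0.99, 1e-10
>>> m = b1 * m + (1 - b1) * g                      # 1st moment
>>> v = np.maximum(b2 * v, np.abs(g))              # 2nd moment (max rule)
>>> var = var - lr / (1 - b1_power) * m / (v + eps)
>>> expected = np.array([[0.59360272, 0.39257145], [0.0972583, 0.4922500]], np.float32)
>>> print(np.allclose(var, expected, atol=1e-6))
True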
class tinyms.primitives.ApplyAdadelta[source]

Updates relevant entries according to the adadelta scheme.

The Adadelta algorithm is proposed in ADADELTA: AN ADAPTIVE LEARNING RATE METHOD.

\[\begin{split}\begin{array}{ll} \\ \text{accum} = \rho * \text{accum} + (1 - \rho) * \text{grad}^2 \\ \text{update} = \sqrt{\text{accum_update} + \epsilon} * \frac{\text{grad}}{\sqrt{\text{accum} + \epsilon}} \\ \text{accum_update} = \rho * \text{accum_update} + (1 - \rho) * \text{update}^2 \\ \text{var} = \text{var} - \text{lr} * \text{update} \end{array}\end{split}\]

where \(\rho\) represents rho, \(\epsilon\) represents epsilon.

Inputs of var, accum, accum_update and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Weights to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated, has the same shape and data type as var.

  • accum_update (Parameter) - Accum_update to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - Learning rate, must be a scalar. With float32 or float16 data type.

  • rho (Union[Number, Tensor]) - Decay rate, must be a scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - Gradients, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

  • accum_update (Tensor) - The same shape and data type as accum_update.

Raises:
  • TypeError – If dtype of var, accum, accum_update, lr, rho, epsilon or grad is neither float16 nor float32.

  • TypeError – If accum_update, lr, rho or epsilon is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var, accum, accum_update and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adadelta = ops.ApplyAdadelta()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.accum_update = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                        [0.7, 0.8]]).astype(np.float32)),
...                                                             name="accum_update")
...     def construct(self, lr, rho, epsilon, grad):
...         out = self.apply_adadelta(self.var, self.accum, self.accum_update, lr, rho, epsilon, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> rho = Tensor(0.0, mindspore.float32)
>>> epsilon = Tensor(1e-6, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, rho, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99051356e-01,  3.99683774e-01],
 [ 9.91633832e-02,  4.99105573e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.00000036e-02,  4.89999980e-01],
 [ 1.00000007e-02,  6.40000045e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 8.99990857e-01,  1.00000791e-01],
 [ 6.99930906e-01,  7.99999774e-01]]))
class tinyms.primitives.ApplyAdagrad(update_slots=True)[source]

Updates relevant entries according to the adagrad scheme. The Adagrad algorithm was proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. This module can adaptively assign different learning rates for each parameter in view of the uneven number of samples for different parameters.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum}} \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float or complex data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float or complex data type.

  • grad (Tensor) - A tensor for gradient. The shape and data type must be the same as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If dtype of var, accum, lr or grad is neither float nor complex.

  • TypeError – If lr is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad = ops.ApplyAdagrad()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad(self.var, self.accum, lr, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99638879e-01,  3.99296492e-01],
 [ 9.97817814e-02,  4.99281585e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
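Both returned tensors follow directly from the two formulas; a plain-NumPy sketch (illustration only):

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> accum = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> g = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> lr = 0.001
>>> accum = accum + g * g
>>> var = var - lr * g / np.sqrt(accum)
>>> print(np.round(accum, 2))
[[0.69 0.99]
 [0.21 1.24]]
>>> print(np.allclose(var, [[0.5996389, 0.3992965], [0.0997818, 0.4992816]], atol=1e-6))
True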
class tinyms.primitives.ApplyAdagradDA(use_locking=False)[source]

Update var according to the proximal adagrad scheme. The Adagrad algorithm was proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.

\[\begin{split}\begin{array}{ll} \\ grad\_accum += grad \\ grad\_squared\_accum += grad * grad \\ tmp\_val= \begin{cases} sign(grad\_accum) * max\left \{|grad\_accum|-l1*global\_step, 0\right \} & \text{ if } l1>0 \\ grad\_accum & \text{ otherwise } \\ \end{cases} \\ x\_value = -1 * lr * tmp\_val \\ y\_value = l2 * global\_step * lr + \sqrt{grad\_squared\_accum} \\ var = \frac{ x\_value }{ y\_value } \end{array}\end{split}\]

Inputs of var, gradient_accumulator, gradient_squared_accumulator and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If True, updating of the var and accum tensors will be protected by a lock. Otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • gradient_accumulator (Parameter) - The gradient accumulator \(grad\_accum\) to be updated. Must have the same shape and dtype as var.

  • gradient_squared_accumulator (Parameter) - The squared-gradient accumulator \(grad\_squared\_accum\) to be updated. Must have the same shape and dtype as var.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • lr ([Number, Tensor]) - Scaling factor. Must be a scalar. With float32 or float16 data type.

  • l1 ([Number, Tensor]) - L1 regularization. Must be a scalar. With float32 or float16 data type.

  • l2 ([Number, Tensor]) - L2 regularization. Must be a scalar. With float32 or float16 data type.

  • global_step ([Number, Tensor]) - Training step number. Must be a scalar. With int32 or int64 data type.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • gradient_accumulator (Tensor) - The same shape and data type as gradient_accumulator.

  • gradient_squared_accumulator (Tensor) - The same shape and data type as gradient_squared_accumulator.

Raises:
  • TypeError – If var, gradient_accumulator or gradient_squared_accumulator is not a Parameter.

  • TypeError – If grad is not a Tensor.

  • TypeError – If lr, l1, l2 or global_step is neither a Number nor a Tensor.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, gradient_accumulator, gradient_squared_accumulator, grad, lr, l1 or l2 is neither float16 nor float32.

  • TypeError – If dtype of gradient_accumulator, gradient_squared_accumulator or grad is not same as var.

  • TypeError – If dtype of global_step is neither int32 nor int64.

  • ValueError – If lr, l1, l2 or global_step is not a scalar (i.e., its shape size is not 0).

  • RuntimeError – If the data type of var, gradient_accumulator, gradient_squared_accumulator and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> class ApplyAdagradDANet(nn.Cell):
...     def __init__(self, use_locking=False):
...         super(ApplyAdagradDANet, self).__init__()
...         self.apply_adagrad_d_a = ops.ApplyAdagradDA(use_locking)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4], [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.gradient_accumulator = Parameter(Tensor(np.array([[0.1, 0.3],
...                                                                [0.1, 0.5]]).astype(np.float32)),
...                                               name="gradient_accumulator")
...         self.gradient_squared_accumulator = Parameter(Tensor(np.array([[0.2, 0.1],
...                                                                        [0.1, 0.2]]).astype(np.float32)),
...                                                       name="gradient_squared_accumulator")
...     def construct(self, grad, lr, l1, l2, global_step):
...         out = self.apply_adagrad_d_a(self.var, self.gradient_accumulator,
...                                      self.gradient_squared_accumulator, grad, lr, l1, l2, global_step)
...         return out
...
>>> net = ApplyAdagradDANet()
>>> grad = Tensor(np.array([[0.3, 0.4], [0.1, 0.2]]).astype(np.float32))
>>> lr = Tensor(0.001, mstype.float32)
>>> l1 = Tensor(0.001, mstype.float32)
>>> l2 = Tensor(0.001, mstype.float32)
>>> global_step = Tensor(2, mstype.int32)
>>> output = net(grad, lr, l1, l2, global_step)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[-7.39064650e-04, -1.36888528e-03],
 [-5.96988888e-04, -1.42478070e-03]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.00000006e-01,  7.00000048e-01],
 [ 2.00000003e-01,  6.99999988e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.90000021e-01,  2.60000020e-01],
 [ 1.09999999e-01,  2.40000010e-01]]))
class tinyms.primitives.ApplyAdagradV2(epsilon, update_slots=True)[source]

Updates relevant entries according to the adagradv2 scheme.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon} \end{array}\end{split}\]

where \(\epsilon\) represents epsilon.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Note

The only difference between ApplyAdagradV2 and ApplyAdagrad is that ApplyAdagradV2 adds the small constant value \(\epsilon\) to the denominator.

Parameters:
  • epsilon (float) – A small value added for numerical stability.

  • update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float16 or float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. The shape and data type must be the same as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If dtype of var, accum, lr or grad is neither float16 nor float32.

  • TypeError – If lr is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad_v2 = ops.ApplyAdagradV2(epsilon=1e-6)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad_v2(self.var, self.accum, lr, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99638879e-01,  3.99296492e-01],
 [ 9.97817814e-02,  4.99281585e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
class tinyms.primitives.ApplyAdamWithAmsgrad(beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False)[source]

Update var according to the Adam algorithm.

\[\begin{split}\begin{array}{ll} \\ lr_t:=learning\_rate*\sqrt{1-\beta_2^t}/(1-\beta_1^t) \\ m_t:=\beta_1*m_{t-1}+(1-\beta_1)*g \\ v_t:=\beta_2*v_{t-1}+(1-\beta_2)*g*g \\ \hat v_t:=max(\hat v_{t-1}, v_t) \\ var:=var-lr_t*m_t/(\sqrt{\hat v_t}+\epsilon) \\ \end{array}\end{split}\]
Parameters:
  • beta1 (float) – The exponential decay rate for the 1st moment estimations (momentum factor). Must be a scalar.

  • beta2 (float) – The exponential decay rate for the 2nd moment estimations (momentum factor). Must be a scalar.

  • epsilon (float) – A small value added to the denominator for numerical stability (ridge term). Must be a scalar.

  • use_locking (bool) – If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type can be float16 or float32.

  • m (Parameter) - The 1st moment vector in the updating formula, the shape and data type should be the same as var.

  • v (Parameter) - The 2nd moment vector in the updating formula, the shape and data type should be the same as var.

  • vhat (Parameter) - \(\hat v_t\) in the updating formula, the shape and data type value should be the same as var.

  • beta1_power (Union[float, Tensor]) - \(\beta_1^t\) in the updating formula, a scalar tensor with float16 or float32 data type.

  • beta2_power (Union[float, Tensor]) - \(\beta_2^t\) in the updating formula, a scalar tensor with float16 or float32 data type.

  • lr (Union[float, Tensor]) - Scaling factor, a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - The gradient, has the same shape and data type as var.

Outputs:

Tuple of 4 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

  • vhat (Tensor) - The same shape and data type as vhat.

Raises:
  • TypeError – If var, m, v, vhat is not a Parameter.

  • TypeError – If beta1_power, beta2_power, lr is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • TypeError – If dtype of var, m, v, vhat, beta1_power, beta2_power, lr or grad is neither float16 nor float32.

  • ValueError – If m, v, vhat or grad does not have the same shape as var.

  • ValueError – If the shape of beta1_power, beta2_power or lr is not 0 (i.e., they are not scalars).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> class ApplyAdamWithAmsgradNet(nn.Cell):
...     def __init__(self, beta1=0.9, beta2=0.999, epsilon=1e-8, use_locking=False):
...         super(ApplyAdamWithAmsgradNet, self).__init__()
...         self.apply_adam_with_amsgrad = ops.ApplyAdamWithAmsgrad(beta1, beta2, epsilon, use_locking)
...         self.var = Parameter(Tensor(np.array([[0.2, 0.2], [0.2, 0.2]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.1, 0.2], [0.4, 0.3]]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.array([[0.2, 0.1], [0.3, 0.4]]).astype(np.float32)), name="v")
...         self.vhat = Parameter(Tensor(np.array([[0.1, 0.2], [0.6, 0.2]]).astype(np.float32)), name="vhat")
...     def construct(self, beta1_power, beta2_power, lr, grad):
...         out = self.apply_adam_with_amsgrad(self.var, self.m, self.v, self.vhat,
...                                            beta1_power, beta2_power, lr, grad)
...         return out
>>> net = ApplyAdamWithAmsgradNet()
>>> grad = Tensor(np.array([[0.4, 0.2], [0.2, 0.3]]).astype(np.float32))
>>> output = net(Tensor(0.9, mstype.float32), Tensor(0.999, mstype.float32), Tensor(0.01, mstype.float32), grad)
>>> print(net.var.asnumpy())
[[0.19908068 0.1985858 ]
[0.19844866 0.19849943]]
class tinyms.primitives.ApplyAddSign[source]

Updates relevant entries according to the AddSign algorithm.

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ \text{update} = (\alpha + \text{sign\_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t+1} * \text{update} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(lr\) represents scaling factor lr, \(g\) represents grad, \(\alpha\) represents alpha, \(\beta\) represents beta.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. The data type of inputs must be float16 or float32 on Ascend and float16, float32 or float64 on CPU and GPU.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float16, float32 or float64 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float16, float32 or float64 data type.

  • alpha (Union[Number, Tensor]) - Must be a scalar. With float16, float32 or float64 data type.

  • sign_decay (Union[Number, Tensor]) - Must be a scalar. With float16, float32 or float64 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, must be a scalar. With float16, float32 or float64 data type.

  • grad (Tensor) - A tensor of the same shape and data type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Raises:
  • TypeError – If dtype of var, lr, alpha, sign_decay or beta is not float16, float32 or float64.

  • TypeError – If lr, alpha or sign_decay is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_add_sign = ops.ApplyAddSign()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.alpha = 1.0
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_add_sign(self.var, self.m, self.lr, self.alpha, self.sign_decay, self.beta, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99403024e-01,  3.98607016e-01],
 [ 9.98010039e-02,  4.98407990e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.70000052e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000064e-01]]))
class tinyms.primitives.ApplyCenteredRMSProp(use_locking=False)[source]

Optimizer that implements the centered RMSProp algorithm. Please refer to the usage in source code of mindspore.nn.RMSProp.

The updating formulas of ApplyCenteredRMSProp algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ g_{t+1} = \rho g_{t} + (1 - \rho)\nabla Q_{i}(w) \\ s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2 \\ m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} - g_{t+1}^2 + \epsilon}} \nabla Q_{i}(w) \\ w = w - m_{t+1} \end{array}\end{split}\]

where \(w\) represents var, which will be updated. \(g_{t+1}\) represents mean_gradient, \(g_{t}\) is the last moment of \(g_{t+1}\). \(s_{t+1}\) represents mean_square, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) represents moment, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Note

The difference between ApplyCenteredRMSProp and ApplyRMSProp is that the former uses the centered RMSProp algorithm, which normalizes by an estimate of the centered second moment (i.e., the variance), as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Warning

In dense implementation of this algorithm, mean_gradient, mean_square, and moment will update even if the grad is zero. But in this sparse implementation, mean_gradient, mean_square, and moment will not update in iterations during which the grad is zero.

Parameters:

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated.

  • mean_gradient (Tensor) - Mean gradients, must be the same type as var.

  • mean_square (Tensor) - Mean square gradients, must be the same type as var.

  • moment (Tensor) - Delta of var, must be the same type as var.

  • grad (Tensor) - Gradient, must be the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • decay (float) - Decay rate.

  • momentum (float) - Momentum.

  • epsilon (float) - Ridge term.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_gradient, mean_square, moment or grad is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • TypeError – If decay, momentum or epsilon is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_centered_rms_prop = ops.ApplyCenteredRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_grad, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_centered_rms_prop(self.var, mean_grad, mean_square, moment, grad,
...                                            lr, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_grad, mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
>>> print(net.var.asnumpy())
[[0.68377227  0.68377227]
 [0.68377227  0.68377227]]
class tinyms.primitives.ApplyFtrl(use_locking=False)[source]

Updates relevant entries according to the FTRL scheme.

For more details, please refer to mindspore.nn.FTRL.

Note

Currently, only positive numbers are supported on the Ascend platform, and the calculation results for other scenarios are not defined.

Parameters:

use_locking (bool) – If true, use locks for the update operation. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same shape and data type as var.

  • linear (Parameter) - The linear coefficient to be updated, must be same shape and data type as var.

  • grad (Tensor) - Gradient. The data type must be float16 or float32.

  • lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • lr_power (Union[Number, Tensor]) - Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero. Default: -0.5. It must be a float number or a scalar tensor with float16 or float32 data type.

Outputs:
  • var (Tensor) - Represents the updated var. As the input parameters have been updated in place, this value is always zero when the platform is GPU.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, grad, lr, l1, l2 or lr_power is neither float16 nor float32.

  • TypeError – If lr, l1, l2 or lr_power is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the parameter types of var, accum and linear are inconsistent.

  • RuntimeError – If the parameter types of grad, lr, l1, l2, lr_power are inconsistent with var and the precision is greater than var.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                  [0.7, 0.8]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(input_x)
>>> print(net.var.asnumpy())
[[ 0.0390525  0.11492836]
 [ 0.00066425 0.15075898]]
class tinyms.primitives.ApplyGradientDescent[source]

Updates var by subtracting alpha * delta from it.

\[var = var - \alpha * \delta\]

where \(\alpha\) represents alpha, \(\delta\) represents delta.

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change, has the same shape and data type as var.

Outputs:

Tensor, represents the updated var.

Raises:
  • TypeError – If dtype of var or alpha is neither float16 nor float32.

  • TypeError – If delta is not a Tensor.

  • TypeError – If alpha is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var and delta conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_gradient_descent = ops.ApplyGradientDescent()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.alpha = 0.001
...     def construct(self, delta):
...         out = self.apply_gradient_descent(self.var, self.alpha, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.array([[0.1, 0.1], [0.1, 0.1]]).astype(np.float32))
>>> output = net(delta)
>>> print(output)
[[0.9999 0.9999]
 [0.9999 0.9999]]
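
The same update in plain NumPy (a sketch, not the operator implementation):

>>> import numpy as np
>>> var = np.ones([2, 2], np.float32)
>>> delta = np.full([2, 2], 0.1, np.float32)
>>> var - 0.001 * delta   # var - alpha * delta, ≈ 0.9999 everywhere as above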
class tinyms.primitives.ApplyKerasMomentum(use_locking=False, use_nesterov=False)[source]

Update var according to the momentum scheme.

\[\begin{split}\begin{array}{ll} \\ accum = accum * momentum - grad * lr \\ var = \begin{cases} var + accum * momentum - grad * lr, &\text{if use_nesterov} \\ var + accum, &\text{else} \end{cases} \end{array}\end{split}\]

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters:
  • use_locking (bool) – If True, updating of the var and accum tensors will be protected by a lock; Otherwise the behavior is undefined, but may exhibit less contention. Default: False.

  • use_nesterov (bool) – If True, the tensor passed to compute grad will be var + momentum * accum, so in the end, the var you get is actually var + momentum * accum. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. With float16 or float32 data type.

  • accum (Parameter) - Must have the same shape and type as var. With float16 or float32 data type.

  • lr (Union[Number, Tensor]) - Scaling factor. Must be a scalar. With float16 or float32 data type.

  • grad (Tensor) - The gradient. Must have the same shape and type as var. With float16 or float32 data type.

  • momentum (Union[Number, Tensor]) - Momentum. Must be a scalar. With float16 or float32 data type.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If use_locking or use_nesterov is not a bool.

  • TypeError – If var or accum is not a Parameter.

  • TypeError – If lr is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • TypeError – If momentum is neither a Number nor a Tensor.

  • TypeError – If dtype of var, accum, lr, grad, momentum is neither float16 nor float32.

  • ValueError – If accum or grad doesn’t have the same shape as var.

  • ValueError – If the shape size of lr or momentum is not 0.

Supported Platforms:

Ascend

Examples

>>> class ApplyKerasMomentumNet(nn.Cell):
...     def __init__(self, use_locking=False, use_nesterov=False):
...         super(ApplyKerasMomentumNet, self).__init__()
...         self.apply_keras_momentum = P.ApplyKerasMomentum(use_locking, use_nesterov)
...         self.var = Parameter(Tensor(np.array([[0.2, 0.3], [0.1, 0.4]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.2, 0.3], [0.1, 0.4]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad, momentum):
...         out = self.apply_keras_momentum(self.var, self.accum, lr, grad, momentum)
...         return out
...
>>> net = ApplyKerasMomentumNet()
>>> lr = Tensor(0.001, mstype.float32)
>>> grad = Tensor(np.array([[0.3, 0.2], [0.4, 0.1]]).astype(np.float32))
>>> momentum = Tensor(0.99, mstype.float32)
>>> output = net(lr, grad, momentum)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.97700012e-01,  5.96800029e-01],
[ 1.98599994e-01,  7.95899987e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.97699994e-01,  2.96800017e-01],
[ 9.86000001e-02,  3.95900011e-01]]))
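
The printed tensors can be cross-checked with a NumPy sketch of the use_nesterov=False branch (not the operator implementation):

>>> import numpy as np
>>> var = np.array([[0.2, 0.3], [0.1, 0.4]], np.float32)
>>> accum = np.array([[0.2, 0.3], [0.1, 0.4]], np.float32)
>>> grad = np.array([[0.3, 0.2], [0.4, 0.1]], np.float32)
>>> accum = accum * 0.99 - grad * 0.001   # accum = accum * momentum - grad * lr
>>> var = var + accum                     # var = var + accum
>>> var[0, 0], accum[0, 0]                # ≈ (0.3977, 0.1977), matching the example above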
class tinyms.primitives.ApplyMomentum(use_nesterov=False, use_locking=False, gradient_scale=1.0)[source]

Optimizer that implements the Momentum algorithm.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

  • gradient_scale (float) – The scale of the gradient. Default: 1.0.

Inputs:
  • variable (Parameter) - Weights to be updated. Data type must be float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128.

  • accumulation (Parameter) - Accumulated gradient value by moment weight, has the same data type with variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 number or a scalar tensor with float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 number or a scalar tensor with float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 data type.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If use_locking or use_nesterov is not a bool, or if gradient_scale is not a float.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...    def __init__(self):
...        super(Net, self).__init__()
...        self.apply_momentum = ops.ApplyMomentum()
...        self.variable = Parameter(Tensor(np.array([[0.6, 0.4],
...                                            [0.1, 0.5]]).astype(np.float32)), name="variable")
...        self.accumulate = Parameter(Tensor(np.array([[0.6, 0.5],
...                                            [0.2, 0.6]]).astype(np.float32)), name="accumulate")
...    def construct(self, lr, grad, moment):
...        out = self.apply_momentum(self.variable, self.accumulate, lr, grad, moment)
...        return out
>>> net = Net()
>>> lr = Tensor(0.1, mindspore.float32)
>>> moment = Tensor(0.9, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad, moment)
>>> print(output)
[[0.51600003 0.285     ]
[0.072      0.366     ]]
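
For reference, the dense update without Nesterov reduces to accum = accum * momentum + grad followed by var = var - lr * accum; a NumPy sketch consistent with the printed result (not the operator implementation):

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> accum = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> accum = accum * 0.9 + grad   # accumulate with momentum
>>> var = var - 0.1 * accum      # apply the update
>>> var[0, 0]                    # ≈ 0.516, matching the example above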
class tinyms.primitives.ApplyPowerSign[source]

Updates relevant entries according to the PowerSign algorithm.

The PowerSign algorithm was proposed in Neural Optimizer Search with Reinforcement Learning.

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ \text{update} = \exp(\text{logbase} * \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t+1} * \text{update} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(lr\) represents scaling factor lr, \(g\) represents grad, \(\beta\) represents beta.

All of inputs comply with the implicit type conversion rules to make the data types consistent. If lr, logbase, sign_decay or beta is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. If inputs are tensors and have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Note

On Ascend, input data type of float64 is currently not supported.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float64, float32 or float16 data type. If data type of var is float16, all inputs must have the same data type as var. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • m (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - The learning rate value, should be a scalar or Tensor with float64, float32 or float16 data type.

  • logbase (Union[Number, Tensor]) - Should be a scalar or Tensor with float64, float32 or float16 data type.

  • sign_decay (Union[Number, Tensor]) - Should be a scalar or Tensor with float64, float32 or float16 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, should be a scalar or Tensor with float64, float32 or float16 data type.

  • grad (Tensor) - A tensor of the same shape and data type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Raises:
  • TypeError – If dtype of var, lr, logbase, sign_decay, beta or grad is not one of float16, float32 or float64.

  • TypeError – If lr, logbase, sign_decay or beta is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of lr, logbase, sign_decay and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_power_sign = ops.ApplyPowerSign()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.logbase = np.e
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_power_sign(self.var, self.m, self.lr, self.logbase,
...                                        self.sign_decay, self.beta, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.95575690e-01,  3.89676481e-01],
 [ 9.85252112e-02,  4.88201708e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.70000052e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000064e-01]]))
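
The first element of the result can be checked against the formula with scalars (a NumPy sketch, not the operator implementation):

>>> import numpy as np
>>> var, m, g = 0.6, 0.6, 0.3
>>> lr, logbase, sign_decay, beta = 0.001, np.e, 0.99, 0.9
>>> m = beta * m + (1 - beta) * g
>>> update = np.exp(logbase * sign_decay * np.sign(g) * np.sign(m)) * g
>>> round(var - lr * update, 6), round(m, 2)   # ≈ (0.595576, 0.57), as above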
class tinyms.primitives.ApplyProximalAdagrad(use_locking=False)[source]

Updates relevant entries according to the proximal adagrad algorithm. The proximal adagrad algorithm was proposed in Efficient Learning using Forward-Backward Splitting.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If true, the var and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated, must have the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. The data type must be float16 or float32.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a scalar. The data type must be float16 or float32.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a scalar. The data type must be float16 or float32.

  • grad (Tensor) - Gradient with the same shape and dtype as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, lr, l1 or l2 is neither float16 nor float32.

  • TypeError – If lr, l1 or l2 is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_adagrad = ops.ApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.lr = 0.01
...         self.l1 = 0.0
...         self.l2 = 0.0
...     def construct(self, grad):
...         out = self.apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1, self.l2, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.96388459e-01,  3.92964751e-01],
 [ 9.78178233e-02,  4.92815793e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
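
With l1 = l2 = 0 the update reduces to plain Adagrad; the first element checks out in a scalar sketch (not the operator implementation):

>>> import numpy as np
>>> var, accum, g, lr = 0.6, 0.6, 0.3, 0.01
>>> accum += g * g                           # accum += grad * grad
>>> prox_v = var - lr * g / np.sqrt(accum)   # l1 = l2 = 0, so var = prox_v
>>> round(prox_v, 6), accum                  # ≈ (0.596388, 0.69), as above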
class tinyms.primitives.ApplyProximalGradientDescent[source]

Updates relevant entries according to the FOBOS(Forward Backward Splitting) algorithm. Refer to the paper Efficient Learning using Forward-Backward Splitting for more details.

\[\begin{split}\begin{array}{ll} \\ \text{prox_v} = var - \alpha * \delta \\ var = \frac{sign(\text{prox_v})}{1 + \alpha * l2} * \max(\left| \text{prox_v} \right| - \alpha * l1, 0) \end{array}\end{split}\]

where \(\alpha\) represents alpha, \(\delta\) represents delta.

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a scalar. With float32 or float16 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change.

Outputs:

Tensor, represents the updated var.

Raises:
  • TypeError – If dtype of var, alpha, l1 or l2 is neither float16 nor float32.

  • TypeError – If alpha, l1 or l2 is neither a Number nor a Tensor.

  • TypeError – If delta is not a Tensor.

  • RuntimeError – If the data type of var and delta conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_gradient_descent = ops.ApplyProximalGradientDescent()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.alpha = 0.001
...         self.l1 = 0.1
...         self.l2 = 0.1
...     def construct(self, delta):
...         out = self.apply_proximal_gradient_descent(self.var, self.alpha, self.l1, self.l2, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.array([[0.1, 0.1], [0.1, 0.1]]).astype(np.float32))
>>> output = net(delta)
>>> print(output)
[[0.99969995 0.99969995]
 [0.99969995 0.99969995]]
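
The printed value follows directly from the FOBOS formula (a scalar sketch; sign(prox_v) is 1 here):

>>> alpha, l1, l2, var, delta = 0.001, 0.1, 0.1, 1.0, 0.1
>>> prox_v = var - alpha * delta
>>> var = (1 / (1 + alpha * l2)) * max(abs(prox_v) - alpha * l1, 0)
>>> round(var, 6)   # ≈ 0.9997, matching the float32 example above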
class tinyms.primitives.ApplyRMSProp(use_locking=False)[source]

Optimizer that implements the Root Mean Square prop(RMSProp) algorithm. Please refer to the usage in source code of mindspore.nn.RMSProp.

The updating formulas of ApplyRMSProp algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2 \\ m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} + \epsilon}} \nabla Q_{i}(w) \\ w = w - m_{t+1} \end{array}\end{split}\]

where \(w\) represents var, which will be updated. \(s_{t+1}\) represents mean_square, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) represents moment, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Warning

Note that in dense implementation of this algorithm, “mean_square” and “moment” will update even if “grad” is 0, but in this sparse implementation, “mean_square” and “moment” will not update in iterations during which “grad” is 0.

Parameters:

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated.

  • mean_square (Tensor) - Mean square gradients, must be the same type as var.

  • moment (Tensor) - Delta of var, must be the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - Gradient, must be the same type as var.

  • decay (float) - Decay rate. Only constant value is allowed.

  • momentum (float) - Momentum. Only constant value is allowed.

  • epsilon (float) - Ridge term. Only constant value is allowed.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_square, moment or grad is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of decay, momentum or epsilon is not float.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • ValueError – If decay, momentum or epsilon is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_rms_prop = ops.ApplyRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_rms_prop(self.var, mean_square, moment, lr, grad, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
>>> print(net.var.asnumpy())
[[0.990005  0.990005]
 [0.990005  0.990005]]
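
The same value follows from the formulas above with scalar inputs (a NumPy sketch; decay, momentum, epsilon and lr are taken from the example's call):

>>> import numpy as np
>>> w = s = m = g = 1.0
>>> rho, beta, epsilon, eta = 0.0, 1e-10, 0.001, 0.01
>>> s = rho * s + (1 - rho) * g * g
>>> m = beta * m + eta / np.sqrt(s + epsilon) * g
>>> round(w - m, 6)   # ≈ 0.990005, matching the example above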
class tinyms.primitives.ApproximateEqual(tolerance=1e-05)[source]

Returns True if abs(x-y) is smaller than tolerance element-wise, otherwise False.

\[\begin{split}out_i = \begin{cases} & \text{ if } \left | x_{i} - y_{i} \right | < \text{tolerance},\ \ True \\ & \text{ if } \left | x_{i} - y_{i} \right | \ge \text{tolerance},\ \ False \end{cases}\end{split}\]

where tolerance indicates the maximum acceptable deviation.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower precision data type will be converted to the relatively highest precision data type.

Parameters:

tolerance (float) – The maximum deviation that two elements can be considered equal. Default: 1e-05.

Inputs:
  • x (Tensor) - A tensor. Must be one of the following types: float32, float16. The shape is \((N, *)\) where \(*\) means any number of additional dimensions; its rank should be less than 8.

  • y (Tensor) - A tensor of the same type and shape as x.

Outputs:

Tensor, the shape is the same as the shape of x, and the data type is bool.

Raises:
  • TypeError – If tolerance is not a float.

  • RuntimeError – If x and y require a data type conversion of Parameter that is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([2, 3, 6]), mindspore.float32)
>>> approximate_equal = ops.ApproximateEqual(2.)
>>> output = approximate_equal(x, y)
>>> print(output)
[ True  True  False]
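
The comparison is simply an element-wise |x - y| < tolerance check, as plain NumPy confirms:

>>> import numpy as np
>>> np.abs(np.array([1., 2., 3.]) - np.array([2., 3., 6.])) < 2.0
array([ True,  True, False])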
class tinyms.primitives.ArgMaxWithValue(axis=0, keep_dims=False)[source]

Calculates the maximum value along with the given axis for the input tensor, and returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple maximum values, the index of the first maximum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “x”.

Also see mindspore.ops.max().

Parameters:
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to keep the reduced dimension. If true, the output keeps the same number of dimensions as the input; if false, the reduced dimension is removed. Default: False.

Inputs:
  • x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input tensor.

  • index (Tensor) - The index for the maximum value of the input tensor, with dtype int32. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

  • values (Tensor) - The maximum value of input tensor, with the same shape as index, and same dtype as x.

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> index, output = ops.ArgMaxWithValue()(input_x)
>>> print(index, output)
3 0.7
>>> index, output = ops.ArgMaxWithValue(keep_dims=True)(input_x)
>>> print(index, output)
[3] [0.7]
class tinyms.primitives.ArgMinWithValue(axis=0, keep_dims=False)[source]

Calculates the minimum value along with the given axis for the input tensor, and returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple minimum values, the index of the first minimum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “x”.

Also see mindspore.ops.min().

Parameters:
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to keep the reduced dimension. If true, the output keeps the same number of dimensions as the input; if false, the reduced dimension is removed. Default: False.

Inputs:
  • x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\). Complex tensors are not supported.

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input tensor.

  • index (Tensor) - The index for the minimum value of the input tensor, with dtype int32. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

  • values (Tensor) - The minimum value of input tensor, with the same shape as index, and same dtype as x.

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> index, output = ops.ArgMinWithValue()(x)
>>> print(index, output)
0 0.0
>>> index, output = ops.ArgMinWithValue(keep_dims=True)(x)
>>> print(index, output)
[0] [0.0]
class tinyms.primitives.Argmax(axis=-1, output_type=mindspore.int32)[source]

Returns the indices of the maximum value of a tensor across the axis.

Refer to mindspore.ops.argmax() for more details.

Parameters:
  • axis (int) – Axis where the Argmax operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor. The shape is \((N, *)\) where \(*\) means any number of additional dimensions. Supported data types are as follows:

    • Ascend: Float16, Float32.

    • GPU: Float16, Float32.

    • CPU: Float16, Float32, Float64.

Outputs:

Tensor, indices of the max value of input tensor across the axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
>>> output = ops.Argmax(output_type=mindspore.int32)(input_x)
>>> print(output)
[1 0 0]
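
The result matches NumPy's argmax along the same axis:

>>> import numpy as np
>>> np.argmax(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]], np.float32), axis=-1)
array([1, 0, 0])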
class tinyms.primitives.Argmin(axis=-1, output_type=mindspore.int32)[source]

Returns the indices of the minimum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters:
  • axis (int) – Axis where the Argmin operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32 and mindspore.dtype.int64. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

    • Ascend: Float16, Float32, Float64, Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64.

Outputs:

Tensor, whose dtype is determined by output_type.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If output_type is neither int32 nor int64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = ops.Argmin()(input_x)
>>> print(index)
2
class tinyms.primitives.Asin[source]

Computes arcsine of input tensors element-wise.

Refer to mindspore.ops.asin() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> asin = ops.Asin()
>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = asin(x)
>>> print(output)
[0.8330704  0.04001067 0.30469266 0.5943858 ]
class tinyms.primitives.Asinh[source]

Computes inverse hyperbolic sine of the input element-wise.

Refer to mindspore.ops.asinh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> asinh = ops.Asinh()
>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = asinh(x)
>>> print(output)
[-2.3124382  1.1947632  1.8184465  5.298342 ]
class tinyms.primitives.Assert(summarize=3)[source]

Asserts that the given condition is True. If the condition evaluates to False, a summary of the tensors in input_data is printed.

Parameters:

summarize (int, optional) – The number of entries printed from each tensor when the condition evaluates to False. Default: 3.

Inputs:
  • condition (Union[Tensor[bool], bool]) - The condition to be identified.

  • input_data (Union[tuple[Tensor], list[Tensor]]) - The tensors to be printed out when the condition is false.

Raises:
  • TypeError – If summarize is not an int.

  • TypeError – If condition is neither a Tensor nor a bool.

  • TypeError – If input_data is neither a tuple nor a list.

Supported Platforms:

GPU CPU

Examples

>>> a = Tensor(np.array([-1, 0, 1, 2, 3]).astype(np.int32))
>>> b = Tensor(np.array([1, 2, 3, 4, 5]).astype(np.float32))
>>> assert1 = ops.Assert(3)
>>> assert1(False, [a, b])
For 'Assert' condition is false.
input data: [-1 0 1]
input data: [1 2 3]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mindspore/ops/primitive.py", line 294, in __call__
    return _run_op(self, self.name, args)
  File "mindspore/common/api.py", line 99, in wrapper
    results = fn(*arg, **kwargs)
  File "mindspore/ops/primitive.py", line 743, in _run_op
    output = real_run_op(obj, op_name, args)
RuntimeError: assert failed
class tinyms.primitives.Assign[source]

Assigns Parameter with a value.

Refer to mindspore.ops.assign() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> value = Tensor([2.0], mindspore.float32)
>>> variable = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="variable")
>>> assign = ops.Assign()
>>> x = assign(variable, value)
>>> print(variable.asnumpy())
[2.]
class tinyms.primitives.AssignAdd[source]

Updates a Parameter by adding a value to it.

Refer to mindspore.ops.assign_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignAdd = ops.AssignAdd()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int64), name="global_step")
...
...     def construct(self, x):
...         self.AssignAdd(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int64)*100)
>>> output = net(value)
>>> print(net.variable.asnumpy())
[101]
class tinyms.primitives.AssignSub[source]

Updates a Parameter by subtracting a value from it.

Refer to mindspore.ops.assign_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignSub = ops.AssignSub()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
...
...     def construct(self, x):
...         self.AssignSub(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int32)*100)
>>> output = net(value)
>>> print(net.variable.asnumpy())
[-99]
class tinyms.primitives.Atan[source]

Computes the trigonometric inverse tangent of the input element-wise.

Refer to mindspore.ops.atan() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> atan = ops.Atan()
>>> output = atan(x)
>>> print(output)
[0.7853982 0.       ]
class tinyms.primitives.Atan2[source]

Returns arctangent of x/y element-wise.

Refer to mindspore.ops.atan2() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 1]), mindspore.float32)
>>> y = Tensor(np.array([1, 1]), mindspore.float32)
>>> atan2 = ops.Atan2()
>>> output = atan2(x, y)
>>> print(output)
[0.        0.7853982]
class tinyms.primitives.Atanh[source]

Computes inverse hyperbolic tangent of the input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.atanh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -0.5]), mindspore.float32)
>>> atanh = ops.Atanh()
>>> output = atanh(x)
>>> print(output)
[ 0.         -0.54930615]
class tinyms.primitives.AvgPool(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

Average pooling operation.

Refer to mindspore.ops.avg_pool2d() for more details.

Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is ‘same’ or ‘valid’. Default: ‘valid’.

    • same: The height and width of the output are the same as the input divided by ‘strides’ and rounded up.

    • valid: Returns the output of the valid calculation without filling. Redundant pixels that do not satisfy the calculation will be discarded.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If kernel_size or strides is neither int nor tuple.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If pad_mode is neither ‘valid’ nor ‘same’ (case insensitive).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.avgpool_op = ops.AvgPool(pad_mode="VALID", kernel_size=2, strides=1)
...
...     def construct(self, x):
...         result = self.avgpool_op(x)
...         return result
...
>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> net = Net()
>>> output = net(x)
>>> print(output)
[[[[ 2.5   3.5   4.5]
   [ 6.5   7.5   8.5]]
  [[14.5  15.5  16.5]
   [18.5  19.5  20.5]]
  [[26.5  27.5  28.5]
   [30.5  31.5  32.5]]]]
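
A NumPy cross-check of the ‘valid’ pooling above (a sketch; sliding_window_view requires NumPy >= 1.20):

>>> import numpy as np
>>> from numpy.lib.stride_tricks import sliding_window_view
>>> x = np.arange(36, dtype=np.float32).reshape(1, 3, 3, 4)
>>> windows = sliding_window_view(x, (2, 2), axis=(2, 3))   # kernel_size=2, strides=1
>>> windows.mean(axis=(-2, -1))[0, 0]   # first channel
array([[2.5, 3.5, 4.5],
       [6.5, 7.5, 8.5]], dtype=float32)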
class tinyms.primitives.AvgPool3D(kernel_size=1, strides=1, pad_mode='valid', pad=0, ceil_mode=False, count_include_pad=True, divisor_override=0, data_format='NCDHW')[source]

3D Average pooling operation.

Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), AvgPool3D outputs regional average in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

Warning

“kernel_size” is in the range [1, 255]. “strides” is in the range [1, 63].

\[\text{output}(N_i, C_j, d, h, w) = \frac{1}{d_{ker} * h_{ker} * w_{ker}} \sum_{l=0}^{d_{ker}-1} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents depth, height and width are both kernel_size, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be the same as the input. The total number of padding will be calculated in depth, horizontal and vertical directions and evenly distributed to head and tail, top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the tail, bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height, width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int], list[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • ceil_mode (bool) – If True, ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Currently support float16 and float32 data type.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the same data type with x.

Raises:
  • TypeError – If kernel_size, strides or pad is neither an int nor a tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If pad_mode or data_format is not a string.

  • TypeError – If divisor_override is not an int.

  • ValueError – If numbers in kernel_size or strides are not positive.

  • ValueError – If kernel_size or strides is a tuple whose length is not equal to 3.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If element of pad is less than 0.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to 0 or (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float16)
>>> avg_pool3d = ops.AvgPool3D(kernel_size=2, strides=1, pad_mode="valid")
>>> output = avg_pool3d(x)
>>> print(output)
[[[[[ 5.  6.]]]
  [[[17. 18.]]]]]
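
The same sliding-window cross-check works in 3D (a sketch, assuming NumPy >= 1.20; not the operator implementation):

>>> import numpy as np
>>> from numpy.lib.stride_tricks import sliding_window_view
>>> x = np.arange(24, dtype=np.float16).reshape(1, 2, 2, 2, 3)
>>> out = sliding_window_view(x, (2, 2, 2), axis=(2, 3, 4)).mean(axis=(-3, -2, -1))
>>> out[0, 0, 0, 0], out[0, 1, 0, 0]   # ≈ ([5., 6.], [17., 18.]), matching above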
class tinyms.primitives.BCEWithLogitsLoss(reduction='mean')[source]

Adds sigmoid activation function to input logits, and uses the given logits to compute binary cross entropy between the logits and the label.

Sets input logits as \(X\), input label as \(Y\), input weight as \(W\), output as \(L\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})] \end{array}\end{split}\]

\(i\) indicates the \(i^{th}\) sample, \(j\) indicates the category. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

\(\ell\) indicates the method of calculating the loss. There are three methods: the first method is to provide the loss value directly, the second method is to calculate the average value of all losses, and the third method is to calculate the sum of all losses.

This operator will multiply the output by the corresponding weight. The tensor weight assigns different weights to each piece of data in the batch, and the tensor pos_weight adds corresponding weights to the positive examples of each category.

In addition, it can trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:

\[\begin{split}\begin{array}{ll} \\ p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\ L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})] \end{array}\end{split}\]

where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification), n is the number of the sample in the batch and \(P_c\) is the weight of the positive answer for the class c. \(P_c>1\) increases the recall, \(P_c<1\) increases the precision.

Parameters:

reduction (str) – Type of reduction to be applied to loss. The optional values are ‘mean’, ‘sum’, and ‘none’, not case sensitive. If ‘none’, do not perform reduction. Default: ‘mean’.

Inputs:
  • logits (Tensor) - Input logits. Data type must be float16 or float32. Tensor of shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • label (Tensor) - Ground truth label, has the same shape as logits. Data type must be float16 or float32.

  • weight (Tensor) - A rescaling weight applied to the loss of each batch element. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • pos_weight (Tensor) - A weight of positive examples. Must be a vector with length equal to the number of classes. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

Outputs:

Tensor or Scalar, if reduction is ‘none’, it’s a tensor with the same shape and type as input logits. Otherwise, the output is a scalar.

Raises:
  • TypeError – If any input is not Tensor.

  • TypeError – If data type of any input is neither float16 nor float32.

  • TypeError – If data type of reduction is not string.

  • ValueError – If weight or pos_weight can not be broadcast to a tensor with shape of logits.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> loss = ops.BCEWithLogitsLoss()
>>> output = loss(logits, label, weight, pos_weight)
>>> print(output)
0.3463612
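
With unit weight and pos_weight, the result is plain sigmoid cross entropy with mean reduction; a float64 NumPy cross-check (a sketch, not the operator implementation):

>>> import numpy as np
>>> logits = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]])
>>> label = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]])
>>> p = 1 / (1 + np.exp(-logits))                               # sigmoid
>>> loss = -(label * np.log(p) + (1 - label) * np.log(1 - p))   # element-wise
>>> round(float(loss.mean()), 6)   # ≈ 0.346361, matching the example above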
class tinyms.primitives.BNTrainingReduce(data_format='NCHW')[source]

The BNTrainingReduce interface is deprecated, please use the mindspore.ops.BatchNorm instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BNTrainingUpdate(isRef=True, epsilon=1e-05, factor=0.1, data_format='NCHW')[source]

The BNTrainingUpdate interface is deprecated, please use the mindspore.ops.BatchNorm instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BartlettWindow(periodic=True, dtype=mindspore.float32)[source]

Bartlett window function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bartlett_window() for more details.

Parameters:
  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Default: True.

  • dtype (mindspore.dtype, optional) – The desired datatype of returned tensor. Only float16, float32 and float64 are allowed. Default: mstype.float32.

Inputs:
  • window_length (Tensor) - The size of the returned window, with data type int32 or int64. The value should be an integer in [0, 1000000].

Outputs:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(5, mstype.int32)
>>> bartlett_window = ops.BartlettWindow(periodic=True, dtype=mstype.float32)
>>> output = bartlett_window(window_length)
>>> print(output)
[0.  0.4 0.8 0.8 0.4]
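
As a cross-check, the periodic window equals NumPy's symmetric Bartlett window of length window_length + 1 with the last sample dropped:

>>> import numpy as np
>>> np.bartlett(6)[:5]
array([0. , 0.4, 0.8, 0.8, 0.4])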
class tinyms.primitives.BasicLSTMCell(keep_prob=1.0, forget_bias=1.0, state_is_tuple=True, activation='tanh')[source]

It’s similar to operator mindspore.ops.DynamicRNN. BasicLSTMCell will be deprecated in the future. Please use DynamicRNN instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BatchMatMul(transpose_a=False, transpose_b=False)[source]

Computes matrix multiplication between two tensors by batch.

\[\text{output}[..., :, :] = \text{matrix}(x[..., :, :]) * \text{matrix}(y[..., :, :])\]

The number of dimensions of the first input tensor must be not less than 3, and that of the second input must be not less than 2.

Parameters:
  • transpose_a (bool) – If true, the last two dimensions of x is transposed before multiplication. Default: False.

  • transpose_b (bool) – If true, the last two dimensions of y is transposed before multiplication. Default: False.

Inputs:
  • x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions. If transpose_a is True, its shape must be \((*B, C, N)\).

  • y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\). If transpose_b is True, its shape must be \((*B, M, C)\).

Outputs:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Raises:
  • TypeError – If transpose_a or transpose_b is not a bool.

  • ValueError – If length of shape of x is not equal to length of shape of y or length of shape of x is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones(shape=[2, 4, 1, 3]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul()
>>> output = batmatmul(x, y)
>>> print(output.shape)
(2, 4, 1, 4)
>>> x = Tensor(np.ones(shape=[2, 4, 3, 1]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul(transpose_a=True)
>>> output = batmatmul(x, y)
>>> print(output.shape)
(2, 4, 1, 4)
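
The batch semantics match NumPy's matmul, which multiplies the last two axes and broadcasts the leading batch dimensions (a shape-level sketch):

>>> import numpy as np
>>> x = np.ones((2, 4, 1, 3), np.float32)
>>> y = np.ones((2, 4, 3, 4), np.float32)
>>> np.matmul(x, y).shape
(2, 4, 1, 4)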
class tinyms.primitives.BatchNorm(is_training=False, epsilon=1e-05, momentum=0.1, data_format='NCHW')[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over inputs to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon, \(mean\) is the mean of \(x\), \(variance\) is the variance of \(x\).

Warning

  • If the operation is used for inference, and outputs “reserve_space_1” and “reserve_space_2” are available, then “reserve_space_1” has the same value as “mean” and “reserve_space_2” has the same value as “variance”.

  • For Ascend 310, the result accuracy fails to reach 1‰ due to the square root instruction.

Parameters:
  • is_training (bool) – If is_training is True, mean and variance are computed during training. If is_training is False, they’re loaded from checkpoint during inference. Default: False.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • momentum (float) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = (1 - momentum) * running\_mean + momentum * current\_mean\)). Momentum value must be [0, 1]. Default: 0.1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’, and the ‘NHWC’ format is only supported in GPU target. Default: “NCHW”.

Inputs:

If is_training is False, inputs are Tensors.

  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Tensor) - Tensor of shape \((C,)\), with float16 or float32 data type.

  • bias (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • mean (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • variance (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

If is_training is True, scale, bias, mean and variance are Parameters.

  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Parameter) - Parameter of shape \((C,)\), with float16 or float32 data type.

  • bias (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

  • mean (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

  • variance (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

Outputs:

Tuple of 5 Tensors, the normalized inputs and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

  • batch_mean (Tensor) - Tensor of shape \((C,)\).

  • batch_variance (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_1 (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_2 (Tensor) - Tensor of shape \((C,)\).

Raises:
  • TypeError – If is_training is not a bool.

  • TypeError – If dtype of epsilon or momentum is not float.

  • TypeError – If data_format is not a str.

  • TypeError – If input_x, scale, bias, mean or variance is not a Tensor.

  • TypeError – If dtype of input_x, scale is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones([2, 2]), mindspore.float32)
>>> scale = Tensor(np.ones([2]), mindspore.float32)
>>> bias = Tensor(np.ones([2]), mindspore.float32)
>>> mean = Tensor(np.ones([2]), mindspore.float32)
>>> variance = Tensor(np.ones([2]), mindspore.float32)
>>> batch_norm = ops.BatchNorm()
>>> output = batch_norm(input_x, scale, bias, mean, variance)
>>> print(output[0])
[[1. 1.]
 [1. 1.]]
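
In inference mode the operation is just the formula above applied with the given statistics; a NumPy sketch with the example's all-ones inputs:

>>> import numpy as np
>>> x = np.ones([2, 2], np.float32)
>>> gamma, beta, mean, variance, eps = 1.0, 1.0, 1.0, 1.0, 1e-5
>>> y = (x - mean) / np.sqrt(variance + eps) * gamma + beta
>>> y[0, 0]   # ≈ 1.0, matching output[0] above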
class tinyms.primitives.BatchToSpace(block_size, crops)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation divides batch dimension N into blocks of block_size: the output tensor's N dimension is the corresponding number of blocks after division, and its H and W dimensions are the original H and W dimensions multiplied by block_size, minus the given crop amounts, respectively.

Parameters:
  • block_size (int) – The block size of division, has the value not less than 2.

  • crops (Union[list(int), tuple(int)]) – The crop value for H and W dimension, containing 2 subtraction lists. Each list contains 2 integers. All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which corresponds to the input dimension i+2. It is required that \(input\_shape[i+2]*block\_size > crops[i][0]+crops[i][1]\) .

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by product of block_shape. The data type is float16 or float32.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is \((n, c, h, w)\) with block_size and crops. The output shape will be \((n', c', h', w')\), where

\(n' = n//(block\_size*block\_size)\)

\(c' = c\)

\(h' = h*block\_size-crops[0][0]-crops[0][1]\)

\(w' = w*block\_size-crops[1][0]-crops[1][1]\)

Raises:
  • TypeError – If block_size or element of crops is not an int.

  • TypeError – If crops is neither list nor tuple.

  • ValueError – If block_size is less than 2.

Supported Platforms:

Ascend GPU

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space = ops.BatchToSpace(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
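
With zero crops the rearrangement is a reshape plus transpose; a NumPy sketch reproducing the example above (not the operator implementation):

>>> import numpy as np
>>> x = np.array([[[[1.]]], [[[2.]]], [[[3.]]], [[[4.]]]], np.float32)   # (4, 1, 1, 1)
>>> b = 2
>>> n, c, h, w = x.shape
>>> out = x.reshape(b, b, n // (b * b), c, h, w).transpose(2, 3, 4, 0, 5, 1)
>>> out.reshape(n // (b * b), c, h * b, w * b)
array([[[[1., 2.],
         [3., 4.]]]], dtype=float32)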
class tinyms.primitives.BatchToSpaceND(block_shape, crops)[source]

ops.BatchToSpaceND is deprecated from version 2.0 and will be removed in a future version, use ops.batch_to_space_nd instead.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space = ops.BatchToSpaceND(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BatchToSpaceNDV2[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

Refer to mindspore.ops.batch_to_space_nd() for more details.

Supported Platforms:

Ascend

class tinyms.primitives.Bernoulli(seed=-1, offset=0)[source]

Randomly set the elements of output to 0 or 1 with the probability of P which follows the Bernoulli distribution.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bernoulli() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor([0.1, 0.2, 0.3], mindspore.float32)
>>> bernoulli = ops.Bernoulli()
>>> output = bernoulli(input_x, Tensor([1.0]))
>>> print(output)
[1. 1. 1.]
>>> input_p = Tensor([0.0, 1.0, 1.0], mindspore.float32)
>>> output = bernoulli(input_x, input_p)
>>> print(output)
[0. 1. 1.]
class tinyms.primitives.BesselI0[source]

Computes BesselI0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bessel_i0() for more details.

Supported Platforms:

GPU CPU

Examples

>>> bessel_i0 = ops.BesselI0()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i0(x)
>>> print(output)
[1.0144521 1.1797839 1.0241698 1.0020262]
class tinyms.primitives.BesselI0e[source]

Computes BesselI0e of input element-wise.

The formula is defined as:

\[BesselI0e(x) = \exp(-|x|) * bessel\_i0(x)\]

where bessel_i0 is the modified Bessel function of the first kind of order 0.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bessel_i0e = ops.BesselI0e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i0e(x)
>>> print(output)
[0.7979961  0.5144438  0.75117415  0.9157829 ]
class tinyms.primitives.BesselI1[source]

Computes BesselI1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bessel_i1() for more details.

Supported Platforms:

GPU CPU

Examples

>>> bessel_i1 = ops.BesselI1()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i1(x)
>>> print(output)
[0.1208661  0.45177728 0.1568694  0.04504559]
class tinyms.primitives.BesselI1e[source]

Computes BesselI1e of input element-wise.

The formula is defined as:

\[BesselI1e(x) = \exp(-|x|) * bessel\_i1(x)\]

where bessel_i1 is the modified Bessel function of the first kind of order 1.
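
The same check as for BesselI0e, with the order-1 routines (again assuming SciPy is installed):

>>> import numpy as np
>>> from scipy.special import i1, i1e
>>> x = np.array([0.24, 0.83, 0.31, 0.09])
>>> np.allclose(i1e(x), np.exp(-np.abs(x)) * i1(x))
True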

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bessel_i1e = ops.BesselI1e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i1e(x)
>>> print(output)
[0.09507662 0.19699717 0.11505538 0.04116856]
class tinyms.primitives.BesselJ0[source]

Computes BesselJ0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_j0 = ops.BesselJ0()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_j0(x)
>>> print(output)
[0.93846981  0.76519769  0.22389078  -0.39714981]
class tinyms.primitives.BesselJ1[source]

Computes BesselJ1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_j1 = ops.BesselJ1()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_j1(x)
>>> print(output)
[ 0.24226846  0.44005059  0.57672481 -0.06604333]
class tinyms.primitives.BesselK0[source]

Computes BesselK0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k0 = ops.BesselK0()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k0(x)
>>> print(output)
[1.579826  0.5402144 1.3424659 2.5310173]
class tinyms.primitives.BesselK0e[source]

Computes BesselK0e of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k0e = ops.BesselK0e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k0e(x)
>>> print(output)
[2.0083523 1.2388839 1.8303517 2.769374 ]
class tinyms.primitives.BesselK1[source]

Computes BesselK1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k1 = ops.BesselK1()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k1(x)
>>> print(output)
[3.9190812  0.8143549  2.9440577 10.974864]
class tinyms.primitives.BesselK1e[source]

Computes BesselK1e of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k1e = ops.BesselK1e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k1e(x)
>>> print(output)
[ 4.9821286  1.8675754  4.0140023 12.008413 ]
class tinyms.primitives.BesselY0[source]

Computes BesselY0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_y0 = ops.BesselY0()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_y0(x)
>>> print(output)
[-0.44451873  0.08825696  0.51037567  -0.01694074]
class tinyms.primitives.BesselY1[source]

Computes BesselY1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_y1 = ops.BesselY1()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_y1(x)
>>> print(output)
[-1.47147239  -0.78121282  -0.10703243  0.39792571]
class tinyms.primitives.Betainc[source]

Calculates the regularized incomplete beta function \(I_{x}(a, b)\). It is defined as the ratio of the incomplete beta function to the complete beta function:

\[I_{x}(a, b)=\frac{B(x ; a, b)}{B(a, b)}\]

where

\[B(x ; a, b)=\int_{0}^{x} t^{a-1}(1-t)^{b-1} dt\]

is the incomplete beta function and

\[B(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1} dt\]

is the complete beta function.
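
As a cross-check, scipy.special.betainc computes the same regularized function and reproduces the example output below (a sketch that assumes SciPy is installed):

>>> import numpy as np
>>> from scipy.special import betainc
>>> a = np.array([0.3, 0.1, 0.4]); b = np.array([0.4, 0.5, 0.9]); x = np.array([0.2, 0.6, 0.5])
>>> np.allclose(betainc(a, b, x), [0.41462693, 0.8706035, 0.7298298], atol=1e-5)
True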

Inputs:
  • a (Tensor) - Peak location of beta distribution. A Tensor of types: float32, float64.

  • b (Tensor) - Spread of the beta distribution. A Tensor, must have the same dtype and shape as a .

  • x (Tensor) - Upper limit of integration of the incomplete beta function. A Tensor, must have the same dtype and shape as a .

Outputs:

A Tensor, has the same dtype and shape as a .

Raises:
  • TypeError – If dtype of a is neither float32 nor float64.

  • TypeError – If the dtype of b or x is not the same as that of a.

  • ValueError – If the shape of b or x is not the same as that of a.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([0.3, 0.1, 0.4]), mindspore.float32)
>>> b = Tensor(np.array([0.4, 0.5, 0.9]), mindspore.float32)
>>> x = Tensor(np.array([0.2, 0.6, 0.5]), mindspore.float32)
>>> betainc = ops.Betainc()
>>> print(betainc(a, b, x))
[0.41462693 0.8706035  0.7298298 ]
class tinyms.primitives.BiasAdd(data_format='NCHW')[source]

Returns the sum of the input Tensor and the bias Tensor. Before adding, the bias Tensor will be broadcasted to be consistent with the shape of the input Tensor.

Parameters:

data_format (str) – The format of input and output data. It should be ‘NHWC’, ‘NCHW’ or ‘NCDHW’. Default is ‘NCHW’.

Inputs:
  • input_x (Tensor) - The input tensor. The shape can be 2-5 dimensions.

  • bias (Tensor) - The bias tensor, with shape \((C)\). C must be the same as channel dimension C of input_x.

Outputs:

Tensor, with the same shape and data type as input_x.

Raises:
  • TypeError – If data_format is not a str.

  • ValueError – If value of data_format is not in the range of [‘NHWC’,’NCHW’,’NCDHW’].

  • TypeError – If input_x or bias is not a Tensor.

  • TypeError – If dtype of input_x or bias is neither float16 nor float32.

  • TypeError – If dtype of input_x or bias is inconsistent.

  • TypeError – If dimension of input_x is not in the range [2, 5].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3,)), mindspore.float32)
>>> bias_add = ops.BiasAdd()
>>> output = bias_add(input_x, bias)
>>> print(output.shape)
(2, 3)
class tinyms.primitives.BinaryCrossEntropy(reduction='mean')[source]

Computes the binary cross entropy between the logits and the labels.

Sets logits as \(x\), labels as \(y\), output as \(\ell(x, y)\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

Here, \(L\) is the loss over the whole batch, \(l_n\) is the loss of the \(n\)-th sample (\(n\) ranges from 1 to N), and \(w_n\) is the rescaling weight of the \(n\)-th sample. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
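
The definition with reduction set to 'mean' can be reproduced in plain NumPy; the sketch below recomputes the value printed in the example further down (0.38240486, up to float32 rounding):

>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1]); y = np.array([0., 1., 0.]); w = np.array([1., 2., 2.])
>>> l = -w * (y * np.log(x) + (1 - y) * np.log(1 - x))
>>> np.allclose(l.mean(), 0.38240486, atol=1e-6)
True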

Warning

  • The value of \(x\) must range from 0 to 1.

Parameters:

reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’ or ‘sum’. Default: ‘mean’.

Inputs:
  • logits (Tensor) - The predictive value whose data type must be float16 or float32. The shape is \((N, *)\), where \(*\) means any number of additional dimensions.

  • labels (Tensor) - The target value which has the same shape and data type as logits.

  • weight (Tensor, optional) - A rescaling weight applied to the loss of each batch element. And it must have the same shape and data type as logits. Default: None.

Outputs:

Tensor or Scalar. Returns Tensor that has the same dtype and shape as logits if reduction is ‘none’. Otherwise, returns a scalar Tensor.

Raises:
  • TypeError – If dtype of logits, labels or weight (if given) is neither float16 nor float32.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

  • ValueError – If shape of labels is not the same as logits or weight (if given).

  • TypeError – If logits, labels or weight is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.binary_cross_entropy = ops.BinaryCrossEntropy()
...     def construct(self, logits, labels, weight):
...         result = self.binary_cross_entropy(logits, labels, weight)
...         return result
...
>>> net = Net()
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = net(logits, labels, weight)
>>> print(output)
0.38240486
class tinyms.primitives.Bincount[source]

Counts the number of occurrences of each value in an integer array.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • array (Tensor) - A Tensor of type int32, whose value can not be less than zero.

  • size (Tensor) - A non-negative Tensor of type int32.

  • weights (Tensor) - A Tensor with the same shape as array, or a length-0 Tensor, in which case it acts as all weights equal to 1. Must be one of the following types: int32, int64, float32, float64.

Outputs:

A Tensor. Has the same type as weights.

Raises:
  • TypeError – If dtype of array is not int32.

  • TypeError – If dtype of size is not int32.

  • ValueError – If size is negative.

  • ValueError – If weights are empty.

  • ValueError – If size of weights is not zero and the shape of weights is different from the shape of array.

  • TypeError – If dtype of weights is not one of int32, int64, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> array = Tensor(np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]), mindspore.int32)
>>> size = Tensor(5, mindspore.int32)
>>> weights = Tensor(np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), mindspore.float32)
>>> bincount = ops.Bincount()
>>> bins = bincount(array, size, weights)
>>> print(bins)
[0. 1. 2. 3. 4.]
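
For reference, the same result is produced by NumPy's analogous routine (a sketch, not the operator implementation):

>>> import numpy as np
>>> np.bincount(np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]), weights=np.ones(10), minlength=5)
array([0., 1., 2., 3., 4.])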
class tinyms.primitives.BitwiseAnd[source]

Returns bitwise and of two tensors element-wise.

Refer to mindspore.ops.bitwise_and() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_and = ops.BitwiseAnd()
>>> output = bitwise_and(x, y)
>>> print(output)
[ 0  0  1 -1  1  0  1]
class tinyms.primitives.BitwiseOr[source]

Returns bitwise or of two tensors element-wise.

Refer to mindspore.ops.bitwise_or() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_or = ops.BitwiseOr()
>>> output = bitwise_or(x, y)
>>> print(output)
[ 0  1  1 -1 -1  3  3]
class tinyms.primitives.BitwiseXor[source]

Returns bitwise xor of two tensors element-wise.

Refer to mindspore.ops.bitwise_xor() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_xor = ops.BitwiseXor()
>>> output = bitwise_xor(x, y)
>>> print(output)
[ 0  1  0  0 -2  3  2]
class tinyms.primitives.BlackmanWindow(periodic=True, dtype=mindspore.float32)[source]

Blackman window function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.blackman_window() for more details.

Parameters:
  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Default: True.

  • dtype (mindspore.dtype, optional) – the desired data type of the returned tensor. Only float16, float32 and float64 are allowed. Default: mstype.float32.

Inputs:
  • window_length (Tensor) - the size of the returned window, with data type int32 or int64. The value should be an integer in the range [0, 1000000].

Outputs:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(10, mindspore.int32)
>>> blackman_window = ops.BlackmanWindow(periodic = True, dtype = mindspore.float32)
>>> output = blackman_window(window_length)
>>> print(output)
[-2.9802322e-08  4.0212840e-02  2.0077014e-01  5.0978714e-01
  8.4922993e-01  1.0000000e+00  8.4922981e-01  5.0978690e-01
  2.0077008e-01  4.0212870e-02]
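
With periodic=True, the result matches NumPy's symmetric np.blackman evaluated with one extra point and the last sample dropped (a quick check against the example above):

>>> import numpy as np
>>> np.allclose(np.blackman(11)[:-1], output.asnumpy(), atol=1e-6)
True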
class tinyms.primitives.BoundingBoxDecode(max_shape, means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)[source]

Decodes bounding boxes locations.

This operator converts predicted offsets (deltas) back into bounding boxes, which are then used to mark targets in subsequent images.

Parameters:
  • max_shape (tuple) – The max size limit for decoding box calculation.

  • means (tuple) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • wh_ratio_clip (float) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • deltas (Tensor) - Delta of boxes, which has the same shape as anchor_box.

Outputs:

Tensor, decoded boxes. It has the same data type and shape as anchor_box.

Raises:
  • TypeError – If means, stds or max_shape is not a tuple.

  • TypeError – If wh_ratio_clip is not a float.

  • TypeError – If anchor_box or deltas is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[4, 1, 2, 1], [2, 2, 2, 3]], mindspore.float32)
>>> deltas = Tensor([[3, 1, 2, 2], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_decode = ops.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0),
...                                          max_shape=(768, 1280), wh_ratio_clip=0.016)
>>> output = boundingbox_decode(anchor_box, deltas)
>>> print(output)
[[ 4.1953125  0.         0.         5.1953125]
 [ 2.140625   0.         3.859375  60.59375  ]]
class tinyms.primitives.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))[source]

Encodes bounding boxes locations.

This operator will calculate the offset between the predicted bounding boxes and the real bounding boxes, and this offset will be used as a variable for the loss.

Parameters:
  • means (tuple) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • groundtruth_box (Tensor) - Ground truth boxes, which have the same shape as anchor_box.

Outputs:

Tensor, encoded bounding boxes. It has the same data type and shape as input anchor_box.

Raises:
  • TypeError – If means or stds is not a tuple.

  • TypeError – If anchor_box or groundtruth_box is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]], mindspore.float32)
>>> groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_encode = ops.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
>>> output = boundingbox_encode(anchor_box, groundtruth_box)
>>> print(output)
[[ -1.  0.25  0.  0.40551758]
 [ -1.  0.25  0.  0.40551758]]
class tinyms.primitives.Broadcast(root_rank, group='hccl_world_group')[source]

Broadcasts the tensor to the whole group.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters:
  • root_rank (int) – Source rank. Required in all processes except the one that is sending the data.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (tuple[Tensor]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], each Tensor has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the data of the root_rank device.

Raises:

TypeError – If root_rank is not an integer or group is not a string.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with multiple devices.

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.broadcast = ops.Broadcast(1)
...
...     def construct(self, x):
...         return self.broadcast((x,))
...
>>> input_x = Tensor(np.ones([2, 4]).astype(np.int32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1],
 [1, 1, 1, 1]]),)
class tinyms.primitives.BroadcastTo(shape)[source]

Broadcasts input tensor to a given shape.

Refer to mindspore.ops.broadcast_to() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 3)
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> output = ops.BroadcastTo(shape=shape)(x)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>>
>>> shape = (-1, 2)
>>> x = Tensor(np.array([[1], [2]]).astype(np.float32))
>>> output = ops.BroadcastTo(shape=shape)(x)
>>> print(output)
[[1. 1.]
 [2. 2.]]
class tinyms.primitives.Bucketize(boundaries)[source]

Bucketizes input based on boundaries.

Parameters:

boundaries (list[float]) – A sorted list of floats giving the boundaries of the buckets. There is no default value.

Inputs:
  • input (Tensor) - A tensor containing the search value(s).

Outputs:

Tensor, with the same shape as the input, and data type is int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Bucketize(nn.Cell):
...     def __init__(self, boundaries):
...         super().__init__()
...         self.bucketize = ops.Bucketize(boundaries=boundaries)
...     def construct(self, input):
...         return self.bucketize(input)
>>> input = Tensor(np.array([[3, 6, 9], [3, 6, 9]]).astype(np.int32))
>>> boundaries = list(np.array([1., 3., 5., 7., 9.]))
>>> net = Bucketize(boundaries)
>>> output = net(input)
>>> print(output)
[[2 3 5]
 [2 3 5]]
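
The same bucketing rule is implemented by NumPy's np.digitize (a sketch of the equivalent computation):

>>> import numpy as np
>>> np.digitize(np.array([[3, 6, 9], [3, 6, 9]]), bins=[1., 3., 5., 7., 9.])
array([[2, 3, 5],
       [2, 3, 5]])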
class tinyms.primitives.BufferAppend(capacity, buffer_shape, buffer_dtype)[source]

In reinforcement learning, experience data is collected at each step. BufferAppend pushes the data to the end of the buffer under the First-In-First-Out rule; see the pure-Python sketch after the examples below.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • buffer_shape (tuple(shape)) – The shape of each tensor in the buffer.

  • buffer_dtype (tuple(type)) – The data type of each tensor in the buffer.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple of tensors representing the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • exp (tuple(Parameter(Tensor))) - The tuple of tensors representing one list of experience data; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The count means the real available size of the buffer, data type: int32.

  • head (Parameter) - The position of the first data in buffer, data type: int32.

Outputs:

None.

Raises:
  • ValueError – If count or head is not an integer.

  • ValueError – If capacity is not a positive integer.

  • ValueError – If length of data is not equal to length of exp.

  • ValueError – If dim of data is equal to dim of exp, but data[1:] is not equal to the shape in exp.

  • ValueError – If the shape of data[1:] is not equal to the shape in exp.

  • TypeError – If the type in exp is not the same with data.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> exp = [Tensor(np.array([2, 2, 2, 2]), ms.float32), Tensor(np.array([0, 0]), ms.int32),
...        Tensor(np.array([0]), ms.int32), Tensor(np.array([3, 3, 3, 3]), ms.float32)]
>>> batch_exp = [Tensor(np.array([[2, 2, 2, 2], [2, 2, 2, 2]]), ms.float32),
...              Tensor(np.array([[0, 0], [0, 0]]), ms.int32),
...              Tensor(np.array([[0], [0]]), ms.int32),
...              Tensor(np.array([[3, 3, 3, 3], [3, 3, 3, 3]]), ms.float32)]
>>> buffer_append = ops.BufferAppend(capacity, shapes, types)
>>> buffer_append(buffer, exp, count, head)
>>> buffer_append(buffer, batch_exp, count, head)
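
The FIFO rule itself can be sketched in pure Python (an illustration of the assumed ring-buffer semantics, not the operator's implementation):

>>> capacity, count, head = 3, 0, 0
>>> buf = [None] * capacity
>>> for exp in ["e0", "e1", "e2", "e3"]:
...     if count == capacity:
...         buf[head] = exp          # buffer full: overwrite the oldest slot
...         head = (head + 1) % capacity
...     else:
...         buf[count] = exp
...         count += 1
...
>>> print(buf, count, head)
['e3', 'e1', 'e2'] 3 1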
class tinyms.primitives.BufferGetItem(capacity, buffer_shape, buffer_dtype)[source]

Get the data from buffer in the position of input index.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • buffer_shape (tuple(shape)) – The shape of each tensor in the buffer.

  • buffer_dtype (tuple(type)) – The data type of each tensor in the buffer.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple of tensors representing the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The count means the real available size of the buffer, data type: int32.

  • head (Parameter) - The position of the first data in buffer, data type: int32.

  • index (int64) - The position of the data in buffer.

Outputs:

tuple(Tensor). The shape is buffer_shape. The dtype is buffer_dtype.

Raises:
  • ValueError – If count or head is not an integer.

  • ValueError – If capacity is not a positive integer.

  • TypeError – If buffer_shape is not a tuple.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> index = 3
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> buffer_get = ops.BufferGetItem(capacity, shapes, types)
>>> output = buffer_get(buffer, count, head, index)
>>> print(output)
    (Tensor(shape=[4], dtype=Float32, value=
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01]),
     Tensor(shape=[2], dtype=Int32, value= [6, 7]),
     Tensor(shape=[1], dtype=Int32, value= [1]),
     Tensor(shape=[4], dtype=Float32, value=
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01]))
class tinyms.primitives.BufferSample(capacity, batch_size, buffer_shape, buffer_dtype, seed=0, unique=False)[source]

In reinforcement learning, data is randomly sampled from the replay buffer.

Returns a tuple of tensors with the given shape, decided by the given batch_size.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • batch_size (int64) – The size of the sampled data, less than or equal to capacity.

  • buffer_shape (tuple(shape)) – The shape of each tensor in the buffer.

  • buffer_dtype (tuple(type)) – The data type of each tensor in the buffer.

  • seed (int64) – Random seed for sampling. If the default value 0 is used, a random seed is generated in the kernel; set a number other than 0 to keep a specific seed. Default: 0.

  • unique (bool) – Whether the sampled data is strictly unique. Setting it to False gives better performance. Default: False.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents replaybuffer, each tensor is described by the buffer_shape and buffer_type.

  • count (Parameter) - The count means the real available size of the buffer, data type: int32.

  • head (Parameter) - The position of the first data in buffer, data type: int32.

Outputs:

tuple(Tensor). The shape is batch_size * buffer_shape. The dtype is buffer_dtype.

Raises:
  • TypeError – If buffer_shape is not a tuple.

  • ValueError – If batch_size is larger than capacity.

  • ValueError – If capacity is not a positive integer.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> batch_size = 5
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> buffer_sample = ops.BufferSample(capacity, batch_size, shapes, types)
>>> output = buffer_sample(buffer, count, head)
>>> print(output)
    (Tensor(shape=[5, 4], dtype=Float32, value=
        [[ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
        [ 8.00000000e+00, 9.00000000e+00, 1.00000000e+01, 1.10000000e+01],
        [ 1.60000000e+01, 1.70000000e+01, 1.80000000e+01, 1.90000000e+01],
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01],
        [ 3.20000000e+01, 3.30000000e+01, 3.40000000e+01, 3.50000000e+01]]),
     Tensor(shape=[5, 2], dtype=Int32, value=
        [[ 0, 1],
        [ 4, 5],
        [ 8, 9],
        [ 6, 7],
        [16, 17]]),
     Tensor(shape=[5, 1], dtype=Int32, value=
        [[1],
        [1],
        [1],
        [1],
        [1]]),
     Tensor(shape=[5, 4], dtype=Float32, value=
        [[ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
        [ 8.00000000e+00, 9.00000000e+00, 1.00000000e+01, 1.10000000e+01],
        [ 1.60000000e+01, 1.70000000e+01, 1.80000000e+01, 1.90000000e+01],
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01],
        [ 3.20000000e+01, 3.30000000e+01, 3.40000000e+01, 3.50000000e+01]]))
class tinyms.primitives.CTCGreedyDecoder(merge_repeated=True)[source]

Performs greedy decoding on the logits given in inputs.

Refer to mindspore.ops.ctc_greedy_decoder() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = Tensor(np.array([[[0.6, 0.4, 0.2], [0.8, 0.6, 0.3]],
...                           [[0.0, 0.6, 0.0], [0.5, 0.4, 0.5]]]), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> decoded_indices, decoded_values, decoded_shape, log_probability = ops.CTCGreedyDecoder()(inputs,
...                                                                                          sequence_length)
>>> print(decoded_indices)
[[0 0]
 [0 1]
 [1 0]]
>>> print(decoded_values)
[0 1 0]
>>> print(decoded_shape)
[2 2]
>>> print(log_probability)
[[-1.2]
 [-1.3]]
class tinyms.primitives.CTCLoss(preprocess_collapse_repeated=False, ctc_merge_repeated=True, ignore_longer_outputs_than_inputs=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

The underlying implementation of this interface is the third-party baidu-research::warp-ctc. The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

CTCLoss calculates loss between a continuous time series and a target sequence. CTCLoss sums over the probability of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, such that the length of target series must be less than or equal to the length of input.

Parameters:
  • preprocess_collapse_repeated (bool) – If true, repeated labels will be collapsed prior to the CTC calculation. Default: False.

  • ctc_merge_repeated (bool) – If false, during CTC calculation, repeated non-blank labels will not be merged and these labels will be interpreted as individual ones. This is a simplified version of CTC. Default: True.

  • ignore_longer_outputs_than_inputs (bool) – If true, sequences with longer outputs than inputs will be ignored. Default: False.

Inputs:
  • x (Tensor) - The input Tensor must be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_classes)\). num_classes must be num_labels + 1, where num_labels indicates the number of actual labels. Blank labels are reserved, and the default blank label is num_classes - 1. Data type must be float16, float32 or float64.

  • labels_indices (Tensor) - The indices of labels. labels_indices[i, :] = [b, t] means labels_values[i] stores the id for (batch b, time t). The type must be int64 and rank must be 2.

  • labels_values (Tensor) - A 1-D input tensor. The values are associated with the given batch and time. The type must be int32. labels_values[i] must be in the range of [0, num_classes).

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of \((batch\_size, )\). The type must be int32. Each value in the tensor must not be greater than max_time.

Outputs:
  • loss (Tensor) - A tensor containing log-probabilities, the shape is \((batch\_size, )\). The tensor has the same data type as x.

  • gradient (Tensor) - The gradient of loss, has the same shape and data type as x.

Raises:
  • TypeError – If preprocess_collapse_repeated, ctc_merge_repeated or ignore_longer_outputs_than_inputs is not a bool.

  • TypeError – If x, labels_indices, labels_values or sequence_length is not a Tensor.

  • ValueError – If rank of labels_indices is not equal to 2.

  • TypeError – If dtype of x is not one of the following: float16, float32 or float64.

  • TypeError – If dtype of labels_indices is not int64.

  • TypeError – If dtype of labels_values or sequence_length is not int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[0.3, 0.6, 0.6],
...                       [0.4, 0.3, 0.9]],
...
...                      [[0.9, 0.4, 0.2],
...                       [0.9, 0.9, 0.1]]]).astype(np.float32))
>>> labels_indices = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int64)
>>> labels_values = Tensor(np.array([2, 2]), mindspore.int32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> ctc_loss = ops.CTCLoss()
>>> loss, gradient = ctc_loss(x, labels_indices, labels_values, sequence_length)
>>> print(loss)
[ 0.79628  0.5995158 ]
>>> print(gradient)
[[[ 0.27029088  0.36485454  -0.6351454  ]
  [ 0.28140804  0.25462854  -0.5360366 ]]
 [[ 0.47548494  0.2883962    0.04510255 ]
  [ 0.4082751   0.4082751    0.02843709 ]]]
class tinyms.primitives.CTCLossV2(blank=0, reduction='none', zero_infinity=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • blank (int, optional) – The blank label. Default: 0.

  • reduction (str, optional) – Apply specific reduction method to the output. Currently only support ‘none’, not case sensitive. Default: “none”.

  • zero_infinity (bool, optional) – If loss is infinite, this parameter determines whether to set that loss and its correlated gradient to zero. Default: False.

Inputs:
  • log_probs (Tensor) - A tensor of shape \((T, N, C)\), where \(T\) is input length, \(N\) is batch size and \(C\) is number of classes (including blank).

  • targets (Tensor) - A tensor of shape \((N, S)\), where \(S\) is max target length, means the target sequences.

  • input_lengths (Union(Tuple, Tensor)) - A tuple or Tensor of shape \((N)\). It means the lengths of the input.

  • target_lengths (Union(Tuple, Tensor)) - A tuple or Tensor of shape \((N)\). It means the lengths of the target.

Outputs:
  • neg_log_likelihood (Tensor) - A loss value which is differentiable with respect to each input node.

  • log_alpha (Tensor) - The probability of possible trace of input to target.

Raises:
  • TypeError – If zero_infinity is not a bool.

  • TypeError – If reduction is not string.

  • TypeError – If the dtype of log_probs is not float or double.

  • TypeError – If the dtype of targets, input_lengths or target_lengths is not int32 or int64.

  • ValueError – If the rank of log_probs is not 3.

  • ValueError – If the rank of targets is not 2.

  • ValueError – If the shape of input_lengths does not match batch_size \(N\).

  • ValueError – If the shape of target_lengths does not match batch_size \(N\).

  • TypeError – If the types of targets, input_lengths or target_lengths are different.

  • ValueError – If the value of blank is not in the range [0, C).

  • RuntimeError – If any value of input_lengths is larger than \(T\).

  • RuntimeError – If any target_lengths[i] is not in the range [0, input_lengths[i]].

Supported Platforms:

Ascend GPU CPU

Examples

>>> log_probs = Tensor(np.array([[[0.3, 0.6, 0.6]],
...                              [[0.9, 0.4, 0.2]]]).astype(np.float32))
>>> targets = Tensor(np.array([[0, 1]]), mstype.int32)
>>> input_lengths = Tensor(np.array([2]), mstype.int32)
>>> target_lengths = Tensor(np.array([1]), mstype.int32)
>>> CTCLossV2 = ops.CTCLossV2(blank=0, reduction='none', zero_infinity=False)
>>> neg_log_hood, log_alpha = CTCLossV2(
...     log_probs, targets, input_lengths, target_lengths)
>>> print(neg_log_hood)
[-2.2986124]
>>> print(log_alpha)
[[[0.3       0.3            -inf      -inf      -inf]
  [1.2       1.8931472 1.2            -inf      -inf]]]
class tinyms.primitives.Cast[source]

Returns a tensor with the new specified data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The tensor to be cast.

  • type (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is the same as input_x, \((x_1, x_2, ..., x_R)\).

Raises:
  • TypeError – If input_x is neither Tensor nor Number.

  • TypeError – If type is not a Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input_x = Tensor(input_np)
>>> type_dst = mindspore.int32
>>> cast = ops.Cast()
>>> output = cast(input_x, type_dst)
>>> print(output.dtype)
Int32
>>> print(output.shape)
(2, 3, 4, 5)
class tinyms.primitives.Cauchy(size, median=0.0, sigma=1.0)[source]

Creates a tensor of shape size with random numbers drawn from the Cauchy distribution. It is defined as follows:

\[f(x)= \frac{1}{\pi} \frac{\sigma}{(x-\text{median})^2 +\sigma^2}\]
Parameters:
  • size (list[int]) – The size of the tensor.

  • sigma (float, optional) – the scale parameter, specifying the half-width at half-maximum. Default: 1.0.

  • median (float, optional) – the location parameter, specifying the location of the peak of the distribution. Default: 0.0.

Outputs:

Tensor with Cauchy-distributed data. The tensor shape is size, and the data type is float32.

Supported Platforms:

Ascend CPU

Examples

>>> size = [1]
>>> net = ops.Cauchy(size)
>>> y = net()
>>> print(y)
[0.03128606]
class tinyms.primitives.Cdist(p=2.0)[source]

Computes the batched p-norm distance between each pair of row vectors in the two input collections.

Refer to mindspore.ops.cdist() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[[1.0, 1.0], [2.0, 2.0]]]).astype(np.float32))
>>> input_y = Tensor(np.array([[[3.0, 3.0], [3.0, 3.0]]]).astype(np.float32))
>>> op = ops.Cdist(p=2.0)
>>> output = op(input_x, input_y)
>>> print(output)
[[[2.8284273 2.8284273]
  [1.4142137 1.4142137]]]
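
For p=2.0 this is the ordinary Euclidean distance, so the first entry can be checked directly (a plain-NumPy sketch):

>>> import numpy as np
>>> print(np.linalg.norm(np.array([1.0, 1.0]) - np.array([3.0, 3.0])))
2.8284271247461903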
class tinyms.primitives.CeLU(alpha=1.0)[source]

Computes CeLU (Continuously differentiable exponential linear units) of input tensors element-wise.

Refer to mindspore.ops.celu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
>>> celu = ops.CeLU(alpha=1.0)
>>> output = celu(input_x)
>>> print(output)
[-0.86466473 -0.63212055  1.          2.        ]
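
For negative inputs CeLU reduces to \(\alpha(\exp(x/\alpha) - 1)\); with \(\alpha = 1\) the first output value can be checked directly:

>>> import numpy as np
>>> print(np.exp(-2.0) - 1.0)
-0.8646647167633873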
class tinyms.primitives.Ceil[source]

Rounds a tensor up to the closest integer element-wise.

Refer to mindspore.ops.ceil() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> ceil_op = ops.Ceil()
>>> output = ceil_op(x)
>>> print(output)
[ 2.  3. -1.]
class tinyms.primitives.ChannelShuffle(group)[source]

Divides the channels in a tensor of shape \((*, C, H, W)\) into \(g\) groups and rearranges them as \((*, C/g, g, H*W)\), while keeping the original tensor shape in the final output.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.channel_shuffle() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> group = 2
>>> x = Tensor(np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2).astype(np.int16))
>>> channel_shuffle_func = ops.ChannelShuffle(group)
>>> y = channel_shuffle_func(x)
>>> print(y)
[[[[ 0  1]
   [ 2  3]]
  [[ 8  9]
   [10 11]]
  [[ 4  5]
   [ 6  7]]
  [[12 13]
   [14 15]]]]
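
The rearrangement is equivalent to a reshape-transpose-reshape in NumPy (a sketch of the standard channel-shuffle construction, using the x and y from above):

>>> import numpy as np
>>> a = x.asnumpy()
>>> shuffled = a.reshape(1, 2, 2, 2, 2).swapaxes(1, 2).reshape(1, 4, 2, 2)
>>> np.array_equal(shuffled, y.asnumpy())
True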
class tinyms.primitives.CheckNumerics[source]

Checks a tensor for NaN and Inf values. A runtime error is raised if input has NaN or Inf values.

Inputs:
  • x (Tensor) - Input Tensor of any dimension. The data type is float16, float32 or float64.

Outputs:

Tensor, has the same shape and data type as x if x has no NaN or Inf values.

Raises:
  • TypeError – If the data type of x is not float16, float32 or float64.

  • RuntimeError – If x has NaN or Inf values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 3], [2, 4]], dtype=np.float32))
>>> checknumerics = ops.CheckNumerics()
>>> output = checknumerics(x)
>>> print(output)
[[1. 3.]
 [2. 4.]]
class tinyms.primitives.CheckValid[source]

Checks bounding box.

Checks whether the bounding boxes specified by bboxes is valid. Returns True if the box is within borders specified by img_metas, False if not.

Inputs:
  • bboxes (Tensor) - Bounding boxes tensor with shape \((N, 4)\). \(N\) indicates the number of bounding boxes, and the value “4” indicates “x0”, “y0”, “x1”, and “y1”. Data type must be float16 or float32.

  • img_metas (Tensor) - Raw image size information with the format of \((height, width, ratio)\), specifying the valid boundary \((height * ratio, width * ratio)\). Data type must be float16 or float32.

Outputs:

Tensor, with shape of \((N,)\) and dtype of bool, specifying whether the bounding boxes is in the image. “True” indicates valid, while “False” indicates invalid.

Raises:
  • TypeError – If bboxes or img_metas is not a Tensor.

  • TypeError – If dtype of bboxes or img_metas is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.check_valid = ops.CheckValid()
...     def construct(self, x, y):
...         valid_result = self.check_valid(x, y)
...         return valid_result
...
>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> net = Net()
>>> output = net(bboxes, img_metas)
>>> print(output)
[ True False False]
class tinyms.primitives.Cholesky(upper=False)[source]

Performs the Cholesky decomposition on a single or a batch of symmetric positive-definite matrices.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.cholesky() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 1.0], [1.0, 2.0]]), mindspore.float32)
>>> cholesky = ops.Cholesky(upper=False)
>>> output = cholesky(input_x)
>>> print(output)
[[1. 0.]
 [1. 1.]]
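
The same factor is returned by NumPy's reference routine (a quick cross-check of the lower-triangular case):

>>> import numpy as np
>>> print(np.linalg.cholesky(np.array([[1.0, 1.0], [1.0, 2.0]])))
[[1. 0.]
 [1. 1.]]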
class tinyms.primitives.CholeskyInverse(upper=False)[source]

Returns the inverse of a positive-definite matrix, given its Cholesky factor, using Cholesky matrix factorization.

Refer to mindspore.ops.cholesky_inverse() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([[2,0,0], [4,1,0], [-1,1,2]]), mindspore.float32)
>>> net = ops.CholeskyInverse()
>>> y = net(x)
>>> print(y)
[[ 5.8125 -2.625   0.625 ]
 [-2.625   1.25   -0.25  ]
 [ 0.625  -0.25    0.25  ]]
class tinyms.primitives.CholeskySolve(upper=False)[source]

Computes the solution c of a set of linear equations whose positive-definite coefficient matrix is given in terms of its Cholesky factor u.

If upper is set to True, u is upper triangular and c is returned such that:

\[c = (u^{T}u)^{{-1}}b\]

If upper is set to False, u is lower triangular and c is returned such that:

\[c = (uu^{T})^{{-1}}b\]
Parameters:

upper (bool, optional) – A flag indicates whether to treat the Cholesky factor as an upper or a lower triangular matrix. Default: False.

Inputs:
  • x1 (Tensor) - Tensor of shape \((*, N, M)\), indicating 2D or 3D matrices, with float32 or float64 data type.

  • x2 (Tensor) - Tensor of shape \((*, N, N)\), indicating 2D or 3D square matrices composed of upper or lower triangular Cholesky factor, with float32 or float64 data type. x1 and x2 must have the same type.

Outputs:

Tensor, has the same shape and data type as x1.

Raises:
  • TypeError – If upper is not a bool.

  • TypeError – If dtype of x1 and x2 is not one of: float64, float32.

  • TypeError – If x1 is not a Tensor.

  • TypeError – If x2 is not a Tensor.

  • ValueError – If x1 and x2 have different batch size.

  • ValueError – If x1 and x2 have different row numbers.

  • ValueError – If x1 is not 2D or 3D matrices.

  • ValueError – If x2 is not 2D or 3D square matrices.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), mindspore.float32)
>>> x2 = Tensor(np.array([[2, 0, 0], [4, 1, 0], [-1, 1, 2]]), mindspore.float32)
>>> net = ops.CholeskySolve()
>>> y = net(x1, x2)
>>> print(y)
[[ 5.8125 -2.625   0.625 ]
 [-2.625   1.25   -0.25  ]
 [ 0.625  -0.25    0.25  ]]
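
Since x1 is the identity here, the result is simply \((uu^{T})^{-1}\), which NumPy confirms (a sketch using the x2 and y from above):

>>> import numpy as np
>>> u = x2.asnumpy()
>>> np.allclose(np.linalg.inv(u @ u.T), y.asnumpy(), atol=1e-4)
True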
class tinyms.primitives.Coalesce[source]

Returns the coalesced sparse tensor of the input.

Inputs:
  • x_indices (Tensor) - A 2-D Tensor, represents the indices of the nonzero elements of the sparse tensor. Supported data type is int64. Its elements should be non-negative. The shape is \((y, x)\).

  • x_values (Tensor) - A 1-D Tensor, represents the values corresponding to the indices in x_indices. Supported data types are float16 and float32. The shape is \((x,)\).

  • x_shape (Tensor) - A 1-D Tensor, specifies the shape of the sparse tensor. Supported data type is int64. The shape is \((y,)\).

Outputs:
  • y_indices (Tensor) - A 2-D Tensor, represents the indices of the nonzero elements of the sparse tensor. Data type is int64. Its elements are non-negative. The shape is \((y, z)\). z represents the number of different indices in x_indices.

  • y_values (Tensor) - A 1-D Tensor, represents the values corresponding to the indices in y_indices. Data type is the same as x_values’s. The shape is \((z,)\).

  • y_shape (Tensor) - A 1-D Tensor, specifies the shape of the sparse tensor. Data type is int64. The shape is \((y,)\).

Raises:
  • TypeError – If the data type of x_values is neither float32 nor float16.

  • TypeError – If any of the data types of x_indices and x_shape is not int64.

  • ValueError – If any of x_values and x_shape is not a 1-D tensor.

  • ValueError – If x_indices is not a 2-D tensor.

  • ValueError – If sizes of second dimension of x_indices and first dimension of x_values are not the same.

  • ValueError – If sizes of first dimension of x_indices and first dimension of x_shape are not the same.

  • ValueError – If any of the values of elements of x_indices is negative.

  • ValueError – If any of the values of elements of x_indices exceed the limit set by x_shape.

Supported Platforms:

GPU CPU

Examples

>>> x_indices = Tensor([[0, 0, 1], [1, 1, 2]], dtype=mstype.int64)
>>> x_values = Tensor([1, 5, 4], dtype=mstype.float32)
>>> x_shape = Tensor([3, 3], dtype=mstype.int64)
>>> coalesce = ops.Coalesce()
>>> y_indices, y_values, y_shape = coalesce(x_indices, x_values, x_shape)
>>> print(y_indices)
[[0 1]
 [1 2]]
>>> print(y_values)
[6. 4.]
>>> print(y_shape)
[3 3]
class tinyms.primitives.Col2Im(kernel_size, dilation=1, padding=0, stride=1)[source]

Combines an array of sliding local blocks into a large containing tensor. It is usually used to reconstruct an image from a set of image patches (or sliding local blocks).

Consider a batched input tensor containing sliding local blocks, e.g., patches of images, of shape \((N, C, \prod(\text{kernel_size}), L)\), where \(N\) is batch dimension, \(C\) is channel dimension, \(\prod(\text{kernel_size})\) is the block size, and \(L\) is the total number of blocks. This operation combines these local blocks into the large output tensor of shape \((N, C, \text{output_size}[0], \text{output_size}[1], \dots)\) by summing the overlapping values.

\[L = \prod_d \left\lfloor\frac{\text{output_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times (\text{kernel_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor\]

where \(d\) is over all spatial dimensions. The padding, stride and dilation arguments specify how the sliding blocks are retrieved.
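
For instance, with the example below (output_size 8, padding 2, dilation 2, kernel_size 2 and stride 2 in both dimensions), each spatial factor is \(\lfloor (8 + 2 \times 2 - 2 \times (2 - 1) - 1) / 2 + 1 \rfloor = 5\), so \(L = 5 \times 5 = 25\), which matches the last dimension of x.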

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two positive ints for height and width. If the type is int, height equals width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]], optional) – The size of the dilation, should be two positive ints for height and width. If the type is int, height equals width. Default: 1.

  • padding (Union[int, tuple[int], list[int]], optional) – The size of the padding, should be two ints for height and width. If the type is int, height equals width. Default: 0.

  • stride (Union[int, tuple[int], list[int]], optional) – The size of the stride, should be two positive ints for height and width. If the type is int, height equals width. Default: 1.

Inputs:
  • x (Tensor) - 4D tensor with data type float16 or float32.

  • output_size (Tensor) - 1D tensor with 2 elements of data type int32.

Outputs:

Tensor, a 4-D Tensor with the same type as input x.

Raises:
  • TypeError – If dtype of kernel_size , dilation , padding or stride is not in Union[int, tuple[int], list[int]].

  • ValueError – If values in kernel_size , dilation , padding or stride are not greater than zero or any one of them has more than 2 elements.

  • ValueError – If x.shape[2] != kernel_size[0] * kernel_size[1].

  • ValueError – If x.shape[3] does not match the calculated number of sliding blocks.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)
>>> output_size = Tensor(input_data=[8, 8], dtype=mstype.int32)
>>> col2im = ops.Col2Im(kernel_size=[2, 2], dilation=[2, 2], padding=[2, 2], stride=[2, 2])
>>> y = col2im(x, output_size)
>>> print(y.shape)
(16, 16, 8, 8)
class tinyms.primitives.CombinedNonMaxSuppression(pad_per_class=False, clip_boxes=True)[source]

Applies a greedy approach to select a subset of bounding boxes from a list of candidates using NonMaxSuppression, where the boxes are sorted in descending order of their confidence score.

Parameters:
  • clip_boxes (bool, optional) –

    Determines whether to apply bounding box normalization to ensure the coordinates are within the [0, 1] range. Default: True.

    • If True, clip the boxes that fall outside this range.

    • If False, return the box coordinates as they are without any modifications.

  • pad_per_class (bool, optional) –

    Determines whether the output of the non-maximum suppression (NMS) algorithm should be padded or clipped to meet the maximum size constraints. Default: False.

    • If False, the output is clipped to the maximum size of max_total_size.

    • If True, the output is padded up to max_size_per_class * num_classes and clipped if it exceeds max_total_size.

Inputs:
  • boxes (Tensor) - A float32 Tensor with shape \((batch_size, num_boxes, q, 4)\) representing the bounding box coordinates. q indicates mapping relationship between boxes and classes. If q is 1, all classes use the same bounding box. If q is equal to the number of classes, class-specific boxes are applied.

  • scores (Tensor) - A 3-D Tensor of float32 type with the shape \((batch_size, num_boxes, num_classes)\). It contains a score value for each box, with each row of boxes represented by a single score.

  • max_output_size_per_class (Tensor) - The maximum number of boxes that can be selected for each class by the non-maximum suppression algorithm, represented by a scalar Tensor of type int32.

  • max_total_size (Tensor) - A scalar Tensor of type int32 that represents the maximum number of boxes that are kept for all classes.

  • iou_threshold (Tensor) - A scalar Tensor of float32 type that represents the threshold for determining if the IOU overlap between boxes is too high. iou_threshold must be equal or greater than 0 and be equal or smaller than 1.

  • score_threshold (Tensor) - A scalar Tensor of type float32 that represents the threshold for determining when to remove boxes based on their scores.

Outputs:
  • nmsed_boxes (Tensor) - A float32 Tensor of shape (batch_size, num_detection, 4), which contains the non-max suppressed boxes.

  • nmsed_scores (Tensor) - A float32 Tensor of shape (batch_size, num_detection), which contains the scores of the boxes.

  • nmsed_classes (Tensor) - A float32 Tensor of shape (batch_size, num_detection), which contains the classes of the boxes.

  • valid_detections (Tensor) - An int32 Tensor of shape (batch_size,), which indicates the number of valid detections in each batch.

Raises:
  • TypeError – If the dtype of boxes, scores, iou_threshold or score_threshold is not float32.

  • TypeError – If the dtype of max_output_size_per_class or max_total_size is not int32.

  • ValueError – If boxes is not 4D.

  • ValueError – If max_output_size_per_class, max_total_size, iou_threshold or score_threshold is not 0D.

  • ValueError – If scores is not 3D.

  • ValueError – If shape[0] or shape[1] of boxes is not the same as that of scores.

  • ValueError – If shape[2] of boxes is neither the same as shape[2] of scores nor 1.

  • ValueError – If max_total_size < 0.

  • ValueError – If max_output_size_per_class < 0.

  • ValueError – If iou_threshold not in [0,1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> boxes = Tensor(np.array([[[[200, 100, 150, 100]],
...                           [[220, 120, 150, 100]],
...                           [[190, 110, 150, 100]],
...                           [[210, 112, 150, 100]]]])).astype('float32')
>>> scores = Tensor(np.array([[[0.2000, 0.7000, 0.1000], [0.1000, 0.8000, 0.1000], [0.3000, 0.6000, 0.1000],
...                            [0.0500, 0.9000, 0.0500]]])).astype('float32')
>>> max_output_size_per_class = Tensor(4, mstype.int32)
>>> max_total_size = Tensor(1, mstype.int32)
>>> iou_threshold = Tensor(0, mstype.float32)
>>> score_threshold = Tensor(0, mstype.float32)
>>> net = ops.CombinedNonMaxSuppression()
>>> out = net(boxes, scores, max_output_size_per_class, max_total_size, iou_threshold, score_threshold)
>>> print(out)
(Tensor(shape=[1, 1, 4], dtype=Float32, value= [[[1.00000000e+00, 1.00000000e+00, 1.00000000e+00,
                                                  1.00000000e+00]]]),
Tensor(shape=[1, 1], dtype=Float32, value= [[ 8.99999976e-01]]),
Tensor(shape=[1, 1], dtype=Float32, value= [[ 1.00000000e+00]]),
Tensor(shape=[1], dtype=Int32, value= [1]))
class tinyms.primitives.CompareAndBitpack[source]

Compares values of x to threshold and packs the resulting bits into a uint8.

Each comparison returns a boolean: true if x_value > threshold, and false otherwise.

Given an x shaped \((s_0, s_1, ..., s_n)\), the output is a uint8 Tensor shaped \((s_0, s_1, ..., s_n / 8)\).

Inputs:
  • x (Tensor) - Input tensor. Values to compare against threshold and bitpack. The data type must be bool, float16, float32, float64, int8, int16, int32, int64. Note: Currently, the innermost dimension of the tensor must be divisible by 8.

  • threshold (Tensor) - A 0D Tensor, whose data type is same as x.

Outputs:

Tensor, has the uint8 type.

Raises:
  • TypeError – If x or threshold is not a Tensor.

  • TypeError – If the dtype of ‘x’ is not one of: bool, float16, float32, float64, int8, int16, int32, int64.

  • TypeError – If the dtype of threshold is not the same as that of x.

  • ValueError – If threshold is not a 0D Tensor.

  • ValueError – If x is a 0D Tensor.

  • ValueError – If the innermost dimension of x’s shape is not divisible by 8.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32)
>>> threshold = Tensor(6, mindspore.float32)
>>> net = ops.CompareAndBitpack()
>>> output = net(x, threshold)
>>> print(output)
[3]
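
The packing above can be reproduced with NumPy as a sanity check (a minimal sketch; np.packbits packs bits most-significant-first, which is consistent with the output shown):

>>> bits = (np.array([1, 2, 3, 4, 5, 6, 7, 8]) > 6).astype(np.uint8)
>>> print(np.packbits(bits))  # 0b00000011 == 3
[3]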
class tinyms.primitives.Complex[source]

Returns a complex Tensor built from the real part and the imaginary part.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • real (Tensor) - The real input tensor. types: float32, float64.

  • imag (Tensor) - The imaginary part input tensor. types: float32, float64.

Outputs:

Tensor, has the complex type.

Raises:
  • TypeError – If the dtype of input is not one of: float32, float64.

  • TypeError – If the dtypes of the two inputs are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> real = Tensor(np.array([1]), mindspore.float32)
>>> imag = Tensor(np.array([2]), mindspore.float32)
>>> complex = ops.Complex()
>>> output = complex(real, imag)
>>> print(output)
[1.+2.j]
class tinyms.primitives.ComplexAbs[source]

Returns a Tensor that contains the magnitudes of the input.

The complex numbers in input must be of the form \(a + bj\), where \(a\) is the real part and \(b\) is the imaginary part.

\[y = \sqrt{a^2+b^2}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - A Tensor, types: complex64, complex128.

Outputs:

Tensor, has the same shape as x. If the type of x is complex64, the type of output is float32. If the type of x is complex128, the type of output is float64.

Raises:
  • TypeError – If the input is not a Tensor.

  • TypeError – If the input type is not complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(3+4j), mindspore.complex64)
>>> complex_abs = ops.ComplexAbs()
>>> output = complex_abs(x)
>>> print(output)
5.0
class tinyms.primitives.ComputeAccidentalHits(num_true=1)[source]

Compute accidental hits of sampled classes which match target classes.

When a target class matches a sampled class, we call it an “accidental hit”. The result of computing accidental hits contains three parts (index, id, weight), where index represents the row number in true_classes, id represents the position in sampled_candidates, and the weight is -FLOAT_MAX, where FLOAT_MAX indicates the maximum value of the float32 type.

Parameters:

num_true (int) – The number of target classes per training example. Default: 1.

Inputs:
  • true_classes (Tensor) - The target classes. With data type of int32 or int64 and shape \((batch\_size, num\_true)\).

  • sampled_candidates (Tensor) - The sampled candidate classes produced by a candidate sampling operator, with data type of int32 or int64 and shape \((num\_sampled, )\).

Outputs:

Tuple of 3 Tensors.

  • indices (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the same type as true_classes.

  • ids (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the same type as true_classes.

  • weights (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the type float32.

Raises:
  • TypeError – If dtype of num_true is not int.

  • TypeError – If true_classes or sampled_candidates is not a Tensor.

  • TypeError – If dtype of true_classes or sampled_candidates is neither int32 nor int64.

Supported Platforms:

Ascend

Examples

>>> true_classes = np.array([[1, 2], [0, 4], [3, 3]])
>>> sampled_candidates = np.array([0, 1, 2, 3, 4])
>>> sampler = ops.ComputeAccidentalHits(2)
>>> indices, ids, weights = sampler(Tensor(true_classes), Tensor(sampled_candidates))
>>> print(indices, ids, weights)
[0 0 1 1 2 2]
[1 2 0 4 3 3]
[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
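
In this example every class in true_classes also occurs in sampled_candidates, so all six (row, class) pairs are reported as accidental hits; the weights equal -FLOAT_MAX, the minimum value of float32.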
class tinyms.primitives.Concat(axis=0)[source]

Connects tensors along the specified axis.

Refer to mindspore.ops.concat() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> input_x2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> op = ops.Concat()
>>> output = op((input_x1, input_x2))
>>> print(output)
[[0. 1.]
 [2. 1.]
 [0. 1.]
 [2. 1.]]
>>> op = ops.Concat(1)
>>> output = op((input_x1, input_x2))
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]
infer_value(input_x)[source]

Implement Concat infer value

class tinyms.primitives.ConfusionMatrix(num_classes, dtype='int32')[source]

Calculates the confusion matrix from labels and predictions.

Parameters:
  • num_classes (int) – The number of classes.

  • dtype (str) – Data type of confusion matrix. Default: ‘int32’.

Inputs:
  • labels (Tensor) - The real labels, a 1-D tensor. The dtype must be a non-negative integer type.

  • predictions (Tensor) - The predicted labels, a 1-D tensor with the same shape as labels. The dtype must be a non-negative integer type.

  • weights (Tensor) - A 1-D tensor with the same shape as predictions.

Outputs:

Tensor, the confusion matrix, with shape (num_classes, num_classes).

Raises:
  • TypeError – If num_classes is not an int.

  • TypeError – If dtype is not a str.

  • TypeError – If labels, predictions or weights is not a Tensor.

Examples

>>> confusion_matrix = ops.ConfusionMatrix(4)
>>> labels = Tensor([0, 1, 1, 3], mindspore.int32)
>>> predictions = Tensor([1, 2, 1, 3], mindspore.int32)
>>> output = confusion_matrix(labels, predictions)
>>> print(output)
[[0 1 0 0]
 [0 1 1 0]
 [0 0 0 0]
 [0 0 0 1]]
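
Entry (i, j) of the matrix counts samples whose true label is i and whose prediction is j; for instance, the single label-0 sample above was predicted as class 1, producing the 1 at row 0, column 1.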
class tinyms.primitives.Conj[source]

Returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form a + bj, where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form a - bj.

If input is real, it is returned unchanged.

Inputs:
  • input (Tensor) - The input tensor. Must have a numeric type.

Outputs:

Tensor, has the same dtype as the input.

Raises:
  • TypeError – If the dtype of input is not a numeric type.

  • TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3+0.4j), mindspore.complex64)
>>> conj = ops.Conj()
>>> output = conj(x)
>>> print(output)
(1.3-0.4j)
class tinyms.primitives.ConjugateTranspose[source]

Calculate the conjugate matrix of input x which has been transposed according to input perm.

\[y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])\]
Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • perm (tuple[int]) - The permutation to be converted. The elements in perm are composed of the indexes of each dimension of x. The length of perm and the shape of x must be the same. Only constant value is allowed. Must be in the range [0, rank(x)).

Outputs:

Tensor, the type of output tensor is the same as x and the shape of output tensor is decided by the shape of x and the value of perm:

\[y.shape[i] = x.shape[perm[i]]\]

where i is in range [0, rank(x) - 1].

Raises:
  • TypeError – If perm is not a tuple.

  • ValueError – If length of shape of x is not equal to length of shape of perm.

  • ValueError – If perm contains duplicate elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1 + 1j,2 + 2j], [3 + 3j, 4 + 4j]]), mindspore.complex64)
>>> perm = (1, 0)
>>> conjugate_transpose = ops.ConjugateTranspose()
>>> output = conjugate_transpose(x, perm)
>>> print(output)
[[1.-1.j 3.-3.j]
 [2.-2.j 4.-4.j]]
class tinyms.primitives.Conv2D(out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, data_format='NCHW')[source]

2D convolution layer.

Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is height, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel and it has shape \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where group is the group number to split the input in the channel dimension.

If the ‘pad_mode’ is set to be “pad”, the output height and width will be \(\left \lfloor{1 + \frac{H_{in} + \text{padding[0]} + \text{padding[1]} - \text{kernel_size[0]} - (\text{kernel_size[0]} - 1) \times (\text{dilation[0]} - 1) }{\text{stride[0]}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + \text{padding[2]} + \text{padding[3]} - \text{kernel_size[1]} - (\text{kernel_size[1]} - 1) \times (\text{dilation[1]} - 1) }{\text{stride[1]}}} \right \rfloor\) respectively, where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input.

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition.

Note

On Ascend platform, \(group = 1\) must be satisfied.

Parameters:
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means the value is for both the height and the width of the kernel. A tuple of 2 ints means the first value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. The value is currently not used. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be equal to the input x divided by stride. The padding will be calculated as evenly as possible on the top and bottom, left and right; if it cannot be split evenly, the extra padding goes to the bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – Implicit paddings on both sides of the input x. If pad is one integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple with four integers, the paddings of top, bottom, left and right will be equal to pad[0], pad[1], pad[2], and pad[3] accordingly. Default: 0.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: “NCHW”.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set size of kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\), then the shape is \((C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]})\).

Outputs:

Tensor, the value that applied 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> conv2d = ops.Conv2D(out_channel=32, kernel_size=3)
>>> output = conv2d(x, weight)
>>> print(output.shape)
(10, 32, 30, 30)
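
The spatial size of the example output can be verified by hand. Below is a minimal sketch of the “valid”-mode size arithmetic (stride and dilation are both 1 in the example; the variable names are illustrative only):

>>> H_in, k, stride, dilation = 32, 3, 1, 1
>>> print((H_in - dilation * (k - 1) - 1) // stride + 1)
30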
class tinyms.primitives.Conv2DBackpropInput(out_channel, kernel_size, pad_mode='valid', pad=0, pad_list=None, mode=1, stride=1, dilation=1, group=1, data_format='NCHW')[source]

The Conv2DBackpropInput interface is deprecated, please refer to mindspore.ops.Conv2DTranspose if you want to do upsampling.

Supported Platforms:

Deprecated

class tinyms.primitives.Conv2DTranspose(out_channel, kernel_size, pad_mode='valid', pad=0, pad_list=None, mode=1, stride=1, dilation=1, group=1, data_format='NCHW')[source]

Calculates a 2D transposed convolution, which can be regarded as Conv2d applied to the gradient of the input. It is also called deconvolution, although it is not an actual deconvolution: it cannot fully restore the original input data, but it does restore the shape of the original input.

Parameters:
  • out_channel (int) – The dimensionality of the output space.

  • kernel_size (Union[int, tuple[int]]) – The size of the convolution window.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”. Please refer to mindspore.nn.Conv2dTranspose for more specifications about pad_mode.

  • pad (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

  • pad_list (Union[str, None]) – The pad list like (top, bottom, left, right). Default: None.

  • mode (int) – Modes for different convolutions. The value is currently not used. Default: 1.

  • stride (Union[int, tuple[int]]) – The stride to be applied to the convolution filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to be used for the dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default is ‘NCHW’.

Inputs:
  • dout (Tensor) - the gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{out}, H_{out}, W_{out})\).

  • weight (Tensor) - Set size of kernel is \((K_1, K_2)\), then the shape is \((C_{out}, C_{in}, K_1, K_2)\).

  • input_size (Tensor) - A tuple describing the shape of the input, which conforms to the format \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the gradients with respect to the input of convolution. It has the same shape as the input.

Raises:
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> x = Tensor(np.ones([10, 32, 32, 32]))
>>> conv2d_transpose_input = ops.Conv2DTranspose(out_channel=32, kernel_size=3)
>>> output = conv2d_transpose_input(dout, weight, ops.shape(x))
>>> print(output.shape)
(10, 32, 32, 32)
class tinyms.primitives.Conv3D(out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, data_format='NCDHW')[source]

3D convolution layer.

Applies a 3D convolution over an input tensor which is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\) and output shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, and \(H, W\) are the feature height and width respectively. The output value of a layer is calculated as:

\[\operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right), \operatorname{input}\left(N_{i}, k\right))\]

where \(k\) is kernel, \(ccor\) is the cross-correlation , \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the jth channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{X}\) is the input tensor. The shape of full convolution kernel is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where groups is the number of groups to split input in the channel dimension.

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition .

If the ‘pad_mode’ is set to be “pad”, the output depth, height and width will be \(\left \lfloor{1 + \frac{D_{in} + 2 \times \text{padding} - \text{ks_d} - (\text{ks_d} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} - (\text{ks_h} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} - (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) respectively, where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input.

Parameters:
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – Specifies the depth, height and width of the 3D convolution window. It can be a single int or a tuple of 3 integers. Single int means the value is for the depth, height and width of the kernel. A tuple of 3 ints corresponds to the depth, height and width of the kernel respectively.

  • mode (int) – Modes for different convolutions. It is currently not used. Default: 1.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, it can be an int number that represents the depth, height and width of movement or a tuple of three int numbers that represent depth, height and width movement respectively. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to the input x divided by stride. The padding will be calculated as evenly as possible in the head and tail, top and bottom, and left and right directions; if it cannot be split evenly, the extra padding goes to the tail, bottom and right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • dilation (Union[int, tuple[int]], optional) – The data type is int or a tuple of 3 integers \((dilation_d, dilation_h, dilation_w)\). Currently, dilation on depth only supports the case of 1 on Ascend backend. Specifies the dilation rate to use for dilated convolution. If set \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. The value ranges for the depth, height, and width dimensions are [1, D], [1, H], and [1, W], respectively. Default: 1.

  • group (int, optional) – The number of groups into which the filter is divided. in_channels and out_channels must be divisible by group. Default: 1.

  • data_format (str) – The optional value for data format. Currently only “NCDHW” is supported.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\). Currently the input data type only supports float16 and float32.

  • weight (Tensor) - Set size of kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{out}, C_{in}/groups, K_d, K_h, K_w)\). Currently the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \(C_{out}\). Currently, only None is supported.

Outputs:

Tensor, the value that applied 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • ValueError – If out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> conv3d = ops.Conv3D(out_channel=32, kernel_size=(4, 3, 3))
>>> output = conv3d(x, weight)
>>> print(output.shape)
(16, 32, 7, 30, 30)
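
The output shape in the example can be checked with the “valid”-mode size formula applied per dimension (a minimal sketch assuming stride=1 and dilation=1, as in the example):

>>> dims = ((10, 4), (32, 3), (32, 3))  # (input size, kernel size) for D, H, W
>>> print([(d - (k - 1) - 1) // 1 + 1 for d, k in dims])
[7, 30, 30]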
class tinyms.primitives.Conv3DTranspose(in_channel, out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, output_padding=0, data_format='NCDHW')[source]

Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution).

Input is typically of shape \((N, C, D, H, W)\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, \(H\) is height, \(W\) is width.

If the ‘pad_mode’ is set to be “pad”, the depth, height and width of output are defined as:

\[ \begin{align}\begin{aligned}D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times \text{pad}[0] + \text{dilation}[0] \times (\text{kernel_size}[0] - 1) + \text{output_padding}[0] + 1\\H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times \text{pad}[1] + \text{dilation}[1] \times (\text{kernel_size}[1] - 1) + \text{output_padding}[1] + 1\\W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times \text{pad}[2] + \text{dilation}[2] \times (\text{kernel_size}[2] - 1) + \text{output_padding}[2] + 1\end{aligned}\end{align} \]
Parameters:
  • in_channel (int) – The number of channels of the input x.

  • out_channel (int) – The number of output channels.

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 3 integers. Specifies the depth, height and width of the 3D convolution window. Single int means the value is for the depth, height and width of the kernel. A tuple of 3 ints means the first value is for the depth, the second value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. Default is 1. It is currently not used.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to the input x divided by stride. The padding will be calculated as evenly as possible in the head and tail, top and bottom, and left and right directions; if it cannot be split evenly, the extra padding goes to the tail, bottom and right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad and output_padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – Specifies the space to use between kernel elements. Default: 1.

  • group (int) – Splits input into groups. Default: 1. Only 1 is currently supported.

  • output_padding (Union(int, tuple[int])) – Add extra size to each dimension of the output. Default: 0.

  • data_format (str) – The optional value for data format. Currently only ‘NCDHW’ is supported.

Inputs:
  • dout (Tensor) - The gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{in}, D_{out}, H_{out}, W_{out})\). Currently the dout data type only supports float16 and float32.

  • weight (Tensor) - Set size of kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{in}, C_{out}//group, K_d, K_h, K_w)\), where \(group\) is the group parameter above and \(//\) denotes integer division. Currently the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \(C_{out}\). Currently, only None is supported. Default: None.

Outputs:

Tensor, the gradients with respect to the input of convolution 3D. Tensor of shape \((N, C_{out}//group, D_{out}, H_{out}, W_{out})\), where \(group\) is the group parameter above.

Raises:
  • TypeError – If in_channel, out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad, dilation or output_padding is neither an int nor a tuple.

  • ValueError – If in_channel, out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

  • TypeError – If data type of dout and weight is not float16.

  • ValueError – If bias is not None, or if the rank of dout or weight is not 5.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([16, 3, 4, 6, 2]), mindspore.float16)
>>> conv3d_transpose = ops.Conv3DTranspose(in_channel=16, out_channel=3, kernel_size=(4, 6, 2))
>>> output = conv3d_transpose(dout, weight)
>>> print(output.shape)
(32, 3, 13, 37, 33)
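
The example’s output shape follows directly from the formula above. A minimal sketch using the example’s defaults (stride=1, pad=0, dilation=1, output_padding=0; out_dim is an illustrative helper, not part of the API):

>>> def out_dim(in_dim, k):
...     return (in_dim - 1) * 1 - 2 * 0 + 1 * (k - 1) + 0 + 1
>>> print(out_dim(10, 4), out_dim(32, 6), out_dim(32, 2))
13 37 33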
class tinyms.primitives.Cos[source]

Computes cosine of input element-wise.

Refer to mindspore.ops.cos() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> cos = ops.Cos()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cos(x)
>>> print(output)
[0.971338 0.6748758 0.95233357 0.9959527]
class tinyms.primitives.Cosh[source]

Computes hyperbolic cosine of input element-wise.

Refer to mindspore.ops.cosh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> cosh = ops.Cosh()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cosh(x)
>>> print(output)
[1.0289385 1.364684 1.048436 1.0040528]
class tinyms.primitives.CountNonZero(dims=None)[source]

Calculates the total number of non-zero entries in the input tensor along the specified dimensions.

Refer to mindspore.ops.count_nonzero() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([[0, 0, 1], [1, 1, 2], [0, 0, 1]], dtype=mindspore.int64)
>>> countnonzero = ops.CountNonZero(dims=[1])
>>> y = countnonzero(x)
>>> print(y)
[1 3 1]
class tinyms.primitives.CropAndResize(method='bilinear', extrapolation_value=0.0)[source]

Extracts crops from the input image tensor and resizes them.

Note

Since the output shape depends on crop_size, crop_size must be a constant value. For now, the backward of the operator only supports the bilinear method; for other methods, it will return 0.

Parameters:
  • method (str, optional) – An optional string that specifies the sampling method for resizing. It can be “bilinear”, “nearest” or “bilinear_v2”. The option “bilinear” stands for the standard bilinear interpolation algorithm, while “bilinear_v2” may produce better results in some cases. Default: “bilinear”.

  • extrapolation_value (float, optional) – An optional float value used for extrapolation, if applicable. Default: 0.0.

Inputs:
  • x (Tensor) - The input image must be a 4-D tensor of shape \((batch, image\_height, image\_width, depth)\). Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • boxes (Tensor) - A 2-D tensor of shape \((num\_boxes, 4)\). The i-th row of the tensor specifies the coordinates of a box in the box_index[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so that the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values. Types allowed: float32.

  • box_index (Tensor) - A 1-D tensor of shape \((num\_boxes)\) with int32 values in [0, batch). The value of box_index[i] specifies the image that the i-th box refers to. Types allowed: int32.

  • crop_size (Tuple[int]) - A tuple of two int32 elements: (crop_height, crop_width). Only constant value is allowed. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both crop_height and crop_width need to be positive.

Outputs:

A 4-D tensor of shape \((num\_boxes, crop\_height, crop\_width, depth)\) with type: float32.

Raises:
  • TypeError – If x or boxes or box_index is not a Tensor.

  • TypeError – If crop_size is not a Tuple with two int32 elements.

  • TypeError – If dtype of boxes is not float or that of box_index is not int.

  • TypeError – If method is not a str.

  • TypeError – If extrapolation_value is not a float.

  • ValueError – If the shape rank of x is not 4.

  • ValueError – If the shape rank of boxes is not 2.

  • ValueError – If the second dim of boxes is not 4.

  • ValueError – If the shape rank of box_index is not 1.

  • ValueError – If the first dim of box_index is not equal to that of boxes.

  • ValueError – If any element in box_index is out of the range [0, batch).

  • ValueError – If the data of crop_size is not positive.

  • ValueError – If method is not one of ‘bilinear’, ‘nearest’, ‘bilinear_v2’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class CropAndResizeNet(nn.Cell):
...     def __init__(self, crop_size):
...         super(CropAndResizeNet, self).__init__()
...         self.crop_and_resize = ops.CropAndResize()
...         self.crop_size = crop_size
...
...     def construct(self, x, boxes, box_index):
...         return self.crop_and_resize(x, boxes, box_index, self.crop_size)
...
>>> BATCH_SIZE = 1
>>> NUM_BOXES = 5
>>> IMAGE_HEIGHT = 256
>>> IMAGE_WIDTH = 256
>>> CHANNELS = 3
>>> image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
>>> boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
>>> box_index = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
>>> crop_size = (24, 24)
>>> crop_and_resize = CropAndResizeNet(crop_size=crop_size)
>>> output = crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_index))
>>> print(output.shape)
(5, 24, 24, 3)
class tinyms.primitives.Cross(dim=-65530)[source]

Returns the cross product of vectors in dimension dim of x1 and x2.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.cross() for more details.

Parameters:

dim (int) – The specified dimension along which to compute the cross product. Default: -65530.

Inputs:
  • x1 (Tensor) - Input Tensor.

  • x2 (Tensor) - Another input Tensor, must have the same shape and the same type as x1, and the size of their dim dimension should be 3.

Outputs:

Tensor, has the same shape and type as inputs.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.common import dtype as mstype
>>> import mindspore.ops as ops
>>> cross = ops.Cross(dim = 0)
>>> x1 = Tensor([1, 2, 3], mstype.int8)
>>> x2 = Tensor([1, 2, 3], mstype.int8)
>>> output = cross(x1, x2)
>>> print(output)
[0 0 0]
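
The output is all zeros because x1 and x2 are identical: the cross product of parallel vectors is the zero vector.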
class tinyms.primitives.CumProd(exclusive=False, reverse=False)[source]

Computes the cumulative product of the tensor x along axis. For example, if input is a vector of size N, the result will also be a vector of size N, with elements:

\[y_i = x_1 * x_2 * x_3 * ... * x_i\]
Parameters:
  • exclusive (bool) – If true, perform exclusive cumulative product. Default: False.

  • reverse (bool) – If true, reverse the result along axis. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. \((N,*)\) where \(*\) means any number of additional dimensions; its rank should be less than 8.

  • axis (int) - The dimensions to compute the cumulative product. Only constant value is allowed.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If exclusive or reverse is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a, b, c, = 1, 2, 3
>>> x = Tensor(np.array([a, b, c]).astype(np.float32))
>>> op0 = ops.CumProd()
>>> output0 = op0(x, 0) # output=[a, a * b, a * b * c]
>>> op1 = ops.CumProd(exclusive=True)
>>> output1 = op1(x, 0) # output=[1, a, a * b]
>>> op2 = ops.CumProd(reverse=True)
>>> output2 = op2(x, 0) # output=[a * b * c, b * c, c]
>>> op3 = ops.CumProd(exclusive=True, reverse=True)
>>> output3 = op3(x, 0) # output=[b * c, c, 1]
>>> print(output0)
[1. 2. 6.]
>>> print(output1)
[1. 1. 2.]
>>> print(output2)
[6. 6. 3.]
>>> print(output3)
[6. 3. 1.]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [5, 3, 5]]).astype(np.float32))
>>> output4 = op0(x, 0)
>>> output5 = op0(x, 1)
>>> print(output4)
[[ 1.  2.  3.]
 [ 4. 10. 18.]
 [20. 30. 90.]]
>>> print(output5)
[[  1.   2.   6.]
 [  4.  20. 120.]
 [  5.  15.  75.]]
class tinyms.primitives.CumSum(exclusive=False, reverse=False)[source]

Computes the cumulative sum of input tensor along axis.

\[y_i = x_1 + x_2 + x_3 + ... + x_i\]
Parameters:
  • exclusive (bool) – By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output. Default: False.

  • reverse (bool) – If true, perform inverse cumulative sum. Default: False.

Inputs:
  • input (Tensor) - The input tensor to accumulate.

  • axis (int) - The axis to accumulate the tensor’s value. Only constant value is allowed. Must be in the range [-rank(input), rank(input)).

Outputs:

Tensor, the shape of the output tensor is consistent with the input tensor’s.

Raises:
  • TypeError – If exclusive or reverse is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> cumsum = ops.CumSum()
>>> # case 1: along the axis 0
>>> y = cumsum(x, 0)
>>> print(y)
[[ 3.  4.  6. 10.]
 [ 4. 10. 13. 19.]
 [ 8. 13. 21. 26.]
 [ 9. 16. 28. 35.]]
>>> # case 2: along the axis 1
>>> y = cumsum(x, 1)
>>> print(y)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
>>> # Next demonstrate exclusive and reverse, along axis 1
>>> # case 3: exclusive = True
>>> cumsum = ops.CumSum(exclusive=True)
>>> y = cumsum(x, 1)
>>> print(y)
[[ 0.  3.  7. 13.]
 [ 0.  1.  7. 14.]
 [ 0.  4.  7. 15.]
 [ 0.  1.  4. 11.]]
>>> # case 4: reverse = True
>>> cumsum = ops.CumSum(reverse=True)
>>> y = cumsum(x, 1)
>>> print(y)
[[23. 20. 16. 10.]
 [23. 22. 16.  9.]
 [22. 18. 15.  7.]
 [20. 19. 16.  9.]]
class tinyms.primitives.Cummax(axis)[source]

Returns the cumulative maximum of elements and the index.

Refer to mindspore.ops.cummax() for more details.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> cummax = ops.Cummax(axis=0)
>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> output = cummax(x)
>>> print(output[0])
[[ 3.  4.  6. 10.]
 [ 3.  6.  7. 10.]
 [ 4.  6.  8. 10.]
 [ 4.  6.  8. 10.]]
>>> print(output[1])
[[0 0 0 0]
 [0 1 1 0]
 [2 1 2 0]
 [2 1 2 0]]
class tinyms.primitives.Cummin(axis)[source]

Returns the cumulative minimum of elements and the index.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.cummin() for more detail.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> a = Tensor([-0.2284, -0.6628,  0.0975,  0.2680, -1.3298, -0.4220], mindspore.float32)
>>> func = ops.Cummin(axis=0)
>>> output = func(a)
>>> print(output[0])
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]
>>> print(output[1])
[0 1 1 1 4 4]
class tinyms.primitives.CumulativeLogsumexp(exclusive=False, reverse=False)[source]

Compute the cumulative log-sum-exp of the input tensor x along axis. For example, with all parameters at default values, if the input x is a tensor [a, b, c], the output will be [a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))].

Parameters:
  • exclusive (bool, optional) – If true, the last element will be skipped during the calculation and thus an exclusive cumulative log-sum-exp will be performed. For example, this operation will output [-inf, a, log(exp(a) + exp(b))] with tensor [a, b, c] as the input. Note that, for performance reasons, -inf is approximated by the minimum value representable by the floating-point type. Default: False.

  • reverse (bool, optional) – If true, the accumulation will be performed after the elements of x along axis are flipped, and the result will be flipped back afterwards. For example, this operation will output [log(exp(c) + exp(b) + exp(a)), log(exp(c) + exp(b)), c] with tensor [a, b, c] as the input. Default: False.

Inputs:
  • x (Tensor) - The input tensor. Must be one of the following types: float16, float32, float64. The dimension of x must be greater than 0.

  • axis (Tensor) - A 0-D tensor describing the dimension along which to compute the cumulative log-sum-exp. Must be one of the following types: int64, int32, int16. Must be in the range [-rank(x), rank(x)). Default: 0.

Outputs:

Tensor, has the same dtype and shape as the x.

Raises:
  • TypeError – If x or axis is not a Tensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • TypeError – If dtype of axis is not in [int16, int32, int64].

  • TypeError – If exclusive or reverse is not a bool.

  • ValueError – If the dimension of x is not greater than 0.

  • RuntimeError – If axis is out of range [-rank(x), rank(x)).

Supported Platforms:

Ascend CPU GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=False, reverse=False)
>>> output = op(x, Tensor(0))
>>> print(output)
[1.        2.3132617 3.407606 ]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=True, reverse=False)
>>> output = op(x, Tensor(0))
>>> print(output)
[-3.4028235e+38  1.0000000e+00  2.3132617e+00]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=False, reverse=True)
>>> output = op(x, Tensor(0))
>>> print(output)
[3.407606  3.3132617 3.       ]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=True, reverse=True)
>>> output = op(x, Tensor(0))
>>> print(output)
[ 3.3132617e+00  3.0000000e+00 -3.4028235e+38]
class tinyms.primitives.Custom(func, out_shape=None, out_dtype=None, func_type='hybrid', bprop=None, reg_info=None)[source]

Custom primitive is used for user defined operators and is to enhance the expressive ability of built-in primitives. You can construct a Custom object with a predefined function, which describes the computation logic of a user defined operator. You can also construct another Custom object with another predefined function if needed. These Custom objects can then be used directly in neural networks. For a detailed description of user-defined operators, including how to write the parameters correctly, please refer to the Custom Operators Tutorial.

Warning

This is an experimental API that is subject to change.

Note

The supported platforms are determined by the input func_type. The supported platforms are as follows:

  • “hybrid”: supports [“Ascend”, “GPU”, “CPU”].

  • “akg”: supports [“Ascend”, “GPU”, “CPU”].

  • “tbe”: supports [“Ascend”].

  • “aot”: supports [“GPU”, “CPU”].

  • “pyfunc”: supports [“CPU”].

  • “julia”: supports [“CPU”].

  • “aicpu”: supports [“Ascend”].

Parameters:
  • func (Union[function, str]) –

    • function: If func is of function type, then func should be a Python function which describes the computation logic of a user defined operator. The function can be one of the following:

      1. An AKG operator implementation function, which can use ir builder/tvm compute/hybrid grammar.

      2. A TBE operator implementation function.

      3. A pure Python function.

      4. A kernel decorated function written by the Hybrid DSL.

    • str: If func is of str type, then str should be a path of file along with a function name. This could be used when func_type is “aot” or “julia”.

      1. for “aot”:

        Currently “aot” supports GPU/CPU(linux only) platform. “aot” means ahead of time, in which case Custom directly launches a user defined “xxx.so” file as an operator. Users need to compile a handwritten “xxx.cu”/“xxx.cc” file into “xxx.so” ahead of time, and offer the path of the file along with a function name.

        • ”xxx.so” file generation:

          1) GPU Platform: Given user defined “xxx.cu” file (ex. “{path}/add.cu”), use nvcc command to compile it. (ex. “nvcc --shared -Xcompiler -fPIC -o add.so add.cu”)

          2) CPU Platform: Given user defined “xxx.cc” file (ex. “{path}/add.cc”), use g++/gcc command to compile it. (ex. “g++ --shared -fPIC -o add.so add.cc”)

        • Define a “xxx.cc”/”xxx.cu” file:

          “aot” is a cross-platform identifier. The functions defined in “xxx.cc” or “xxx.cu” share the same args. Typically, the function should be as:

          int func(int nparam, void **params, int *ndims, int64_t **shapes, const char **dtypes,
                  void *stream, void *extra)
          

          Parameters:

          • nparam(int): total number of inputs plus outputs; suppose the operator has 2 inputs and 3 outputs, then nparam=5

          • params(void **): a pointer to the array of inputs and outputs’ pointer; the pointer type of inputs and outputs is void * ; suppose the operator has 2 inputs and 3 outputs, then the first input’s pointer is params[0] and the second output’s pointer is params[3]

          • ndims(int *): a pointer to the array of inputs and outputs’ dimension num; suppose params[i] is a 1024x1024 tensor and params[j] is a 77x83x4 tensor, then ndims[i]=2, ndims[j]=3.

          • shapes(int64_t **): a pointer to the array of inputs and outputs’ shapes(int64_t *); the ith input’s jth dimension’s size is shapes[i][j](0<=j<ndims[i]); suppose params[i] is a 2x3 tensor and params[j] is a 3x3x4 tensor, then shapes[i][0]=2, shapes[j][2]=4.

          • dtypes(const char **): a pointer to the array of inputs and outputs’ types(const char *); (ex. “float32”, “float16”, “float”, “float64”, “int”, “int8”, “int16”, “int32”, “int64”, “uint”, “uint8”, “uint16”, “uint32”, “uint64”, “bool”)

          • stream(void *): stream pointer, only used in cuda file

          • extra(void *): used for further extension

          Return Value(int):

          • 0: MindSpore will continue to run if this aot kernel is successfully executed

          • others: MindSpore will raise exception and exit

          Examples: see details in tests/st/ops/graph_kernel/custom/aot_test_files/

        • Use it in Custom:

          Custom(func="{dir_path}/{file_name}:{func_name}",...)
          (ex. Custom(func="./reorganize.so:CustomReorganize", out_shape=[1], out_dtype=mstype.float32,
          "aot"))
          
      2. for “julia”:

        Currently “julia” supports CPU(linux only) platform. Julia uses a JIT compiler and provides a C API to call Julia code, so Custom can directly launch a user defined “xxx.jl” file as an operator. Users need to write a “xxx.jl” file which includes modules and functions, and offer the path of the file along with a module name and function name.

        Examples: see details in tests/st/ops/graph_kernel/custom/julia_test_files/

        • Use it in Custom:

          Custom(func="{dir_path}/{file_name}:{module_name}:{func_name}",...)
          (ex. Custom(func="./add.jl:Add:add", out_shape=[1], out_dtype=mstype.float32, "julia"))
          

  • out_shape (Union[function, list, tuple]) –

    The output shape infer function or the value of output shape of func. Default: None.

    If func has a single output, then the value of the output shape is a list or tuple of int.

    If func has multiple outputs, then the value of the output shape is a tuple, where each item represents the shape of each output.

    The input can be None only when the func_type input is “hybrid”. In this case, the automatic shape inference mechanism will be enabled.

  • out_dtype (Union[function, mindspore.dtype, tuple[mindspore.dtype]]) –

    The output data type infer function or the value of output data type of func. Default: None.

    If func has a single output, then the value of the output data type is a mindspore.dtype.

    If func has multiple outputs, then the value of the output data type is a tuple of mindspore.dtype, where each item represents the data type of each output.

    The input can be None only when the func_type input is “hybrid”. In this case, the automatic type inference mechanism will be enabled.

  • func_type (str) –

    The implementation type of func, should be one of

    [“hybrid”, “akg”, “tbe”, “aot”, “pyfunc”, “julia”, “aicpu”].

    Each func_type only supports specific platforms(targets). Default: “hybrid”. The supported platforms of func_type:

    • ”hybrid”: supports [“Ascend”, “GPU”, “CPU”].

    • ”akg”: supports [“Ascend”, “GPU”, “CPU”].

    • ”tbe”: supports [“Ascend”].

    • ”aot”: supports [“GPU”, “CPU”].

    • ”pyfunc”: supports [“CPU”].

    • ”julia”: supports [“CPU”].

    • ”aicpu”: supports [“Ascend”].

  • bprop (function) – The back propagation function of func. Default: None.

  • reg_info (Union[str, dict, list, tuple]) –

    Represents the registration information(reg info) of func with json format of type str or dict. The reg info specifies supported data types and formats of inputs and outputs, attributes and target of func. Default: None.

    If reg info is a list or tuple, then each item should be with json format of type str or dict, which represents the registration information of func in a specific target. You need to invoke CustomRegOp or the subclass of RegOp to generate the reg info for func. Then you can invoke custom_info_register to bind the reg info to func or just pass the reg info to reg_info parameter. The reg_info parameter takes higher priority than custom_info_register and the reg info in a specific target will be registered only once.

    If reg info is not set, then we will infer the data types and formats from the inputs of Custom operator.

    Please note that, if func_type is “tbe” or the func only supports some specified data types and formats, or it has attribute inputs, then you should set the reg info for func.

Inputs:
  • input (Union(tuple, list)) - The input tuple or list is made up of multiple tensors, and attribute values (optional).

Outputs:

Tensor or tuple[Tensor], execution results.

Raises:
  • TypeError – If the type of func is invalid or the type of register information for func is invalid.

  • ValueError – If func_type is invalid.

  • ValueError – If the register information is invalid, including the target is not supported, the input numbers or the attributes of func differs in different targets.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore.ops import CustomRegOp, custom_info_register, DataType, kernel
>>> from mindspore import dtype as mstype
>>> from mindspore.nn import Cell
>>> input_x = Tensor(np.ones([16, 16]).astype(np.float32))
>>> input_y = Tensor(np.ones([16, 16]).astype(np.float32))
>>>
>>> # Example, func_type = "hybrid"
>>> # This is the default func_type in Custom,
>>> # and both out_shape and out_dtype can be None(default value).
>>> # In this case, the input func must be a function written in the Hybrid DSL
>>> # and decorated by @kernel.
>>> @kernel
... def add_script(a, b):
...     c = output_tensor(a.shape, a.dtype)
...     for i0 in range(a.shape[0]):
...         for i1 in range(a.shape[1]):
...             c[i0, i1] = a[i0, i1] + b[i0, i1]
...     return c
>>>
>>> test_op_hybrid = ops.Custom(add_script)
>>> output = test_op_hybrid(input_x, input_y)
>>> # the result will be a 16 * 16 tensor with all elements 2
>>> print(output.shape)
(16, 16)
>>> # Example, func_type = "tbe"
>>> square_with_bias_op_info = CustomRegOp() \
...     .fusion_type("OPAQUE") \
...     .attr("bias", "required", "float") \
...     .input(0, "x") \
...     .output(0, "y") \
...     .dtype_format(DataType.F32_Default, DataType.F32_Default) \
...     .dtype_format(DataType.F16_Default, DataType.F16_Default) \
...     .target("Ascend") \
...     .get_op_info()
>>>
>>> @custom_info_register(square_with_bias_op_info)
... def square_with_bias(input_x, output_y, bias=0.0, kernel_name="square_with_bias"):
...     import te.lang.cce
...     from te import tvm
...     from topi.cce import util
...
...     shape = input_x.get("shape")
...     dtype = input_x.get("dtype").lower()
...
...     shape = util.shape_refine(shape)
...     data = tvm.placeholder(shape, name="data", dtype=dtype)
...
...     with tvm.target.cce():
...         res0 = te.lang.cce.vmul(data, data)
...         res = te.lang.cce.vadds(res0, bias)
...         sch = te.lang.cce.auto_schedule(res)
...
...     config = {"print_ir": False,
...               "name": kernel_name,
...               "tensor_list": [data, res]}
...
...     te.lang.cce.cce_build_code(sch, config)
>>>
>>> def test_tbe():
...     square_with_bias_op = ops.Custom(square_with_bias, out_shape=lambda x, _: x, \
...                                      out_dtype=lambda x, _: x, func_type="tbe")
...     res = square_with_bias_op(input_x, 1.0)
...     return res
>>>
>>> # Example, func_type = "aicpu"
>>> resize_bilinear_op_info = CustomRegOp("ResizeBilinear") \
...     .fusion_type("OPAQUE") \
...     .input(0, "input", "required") \
...     .output(1, "output", "required") \
...     .attr("align_corners", "required", "bool") \
...     .attr("cust_aicpu", "optional", "str", "aicpu_kernels") \
...     .dtype_format(DataType.F32_Default, DataType.F32_Default) \
...     .dtype_format(DataType.F16_Default, DataType.F32_Default) \
...     .target("Ascend") \
...     .get_op_info()
>>>
>>> @custom_info_register(resize_bilinear_op_info)
... def resize_bilinear_aicpu():
...     return
>>>
>>> def test_aicpu(x):
...     resize_bilinear_op = ops.Custom(resize_bilinear_aicpu, out_shape=[1, 1, 9, 9], \
...                                     out_dtype=mstype.float32, func_type="aicpu")
...     res = resize_bilinear_op(x, True, "aicpu_kernels")
...     return res
>>>
>>> # Example, func_type = "aot"
>>> def test_aot(x, y, out_shapes, out_types):
...     program = ops.Custom("./reorganize.so:CustomReorganize", out_shapes, out_types, "aot")
...     out = program(x, y)
...     return out
>>>
>>> # Example, func_type = "pyfunc"
>>> def func_multi_output(x1, x2):
...     return (x1 + x2), (x1 - x2)
>>>
>>> test_pyfunc = ops.Custom(func_multi_output, lambda x, _: (x, x), lambda x, _: (x, x), "pyfunc")
>>> output = test_pyfunc(input_x, input_y)
>>>
>>> # Example, func_type = "julia"
>>> # julia code:
>>> # add.jl
>>> # module Add
>>> # function add(x, y, z)
>>> #   z .= x + y
>>> #   return z
>>> # end
>>> # end
>>> def test_julia(x, y, out_shapes, out_types):
...     program = ops.Custom("./add.jl:Add:add", out_shapes, out_types, "julia")
...     out = program(x, y)
...     return out
get_bprop()[source]

Get the bprop of the custom op

class tinyms.primitives.DType[source]

Returns the data type of the input tensor as mindspore.dtype.

Inputs:
  • input_x (Tensor) - Input Tensor.

Outputs:

mindspore.dtype, the data type of a tensor.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.DType()(input_tensor)
>>> print(output)
Float32
class tinyms.primitives.DataFormatDimMap(src_format='NHWC', dst_format='NCHW')[source]

Returns the dimension index in the destination data format given in the source data format.

Parameters:
  • src_format (str) – An optional value for source data format. The format can be ‘NHWC’ and ‘NCHW’. Default: ‘NHWC’.

  • dst_format (str) – An optional value for destination data format. The format can be ‘NHWC’ and ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • input_x (Tensor) - A Tensor, each element is used as a dimension index of the source data format. The suggested values are in the range [-4, 4). Only supports int32.

Outputs:

Tensor, the dimension index in the given target data format, with the same data type and shape as input_x.

Raises:
  • TypeError – If src_format or dst_format is not a str.

  • TypeError – If input_x is not a Tensor or its dtype is not int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([0, 1, 2, 3], mindspore.int32)
>>> dfdm = ops.DataFormatDimMap()
>>> output = dfdm(input_x)
>>> print(output)
[0 3 1 2]
class tinyms.primitives.DataFormatVecPermute(src_format='NHWC', dst_format='NCHW')[source]

Converts the input tensor from the src_format to the dst_format by permuting its dimensions.

Parameters:
  • src_format (str, optional) – the source data format, which can be ‘NHWC’ and ‘NCHW’. Default: ‘NHWC’.

  • dst_format (str, optional) – the target data format, which can be ‘NHWC’ and ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • input_x (Tensor) - A Tensor of shape \((4, )\) or \((4, 2)\) in source data format. Supports int32 and int64 datatype.

Outputs:

Tensor, has the same data type and shape as the input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither int32 nor int64.

  • ValueError – If src_format or dst_format is not a str in [‘NHWC’, ‘NCHW’].

  • ValueError – If input_x shape is not \((4, )\) or \((4, 2)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self, src_format="NHWC", dst_format="NCHW"):
...         super().__init__()
...         self.op = ops.DataFormatVecPermute(src_format, dst_format)
...     def construct(self, x):
...         return self.op(x)
...
>>> net = Net()
>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> output = net(x)
>>> print(output)
[1 4 2 3]
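The \((4, 2)\) input shape mentioned in Inputs permutes whole rows in the same way; a minimal sketch reusing the net defined above (illustrative, not from the source; the expected output assumes the same NHWC-to-NCHW row permutation as in the 1-D case):

>>> # Illustrative sketch: a (4, 2) input permutes rows as units.
>>> x2 = Tensor(np.array([[1, 2], [3, 4], [5, 6], [7, 8]]).astype(np.int32))
>>> output = net(x2)
>>> print(output)
[[1 2]
 [7 8]
 [3 4]
 [5 6]]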
class tinyms.primitives.DeformableOffsets(strides, pads, ksize, dilations=(1, 1, 1, 1), data_format='NCHW', deformable_groups=1, modulated=True)[source]

Computes the deformed convolution output with the expected input.

Refer to mindspore.ops.deformable_conv2d() for more details.

Supported Platforms:

Ascend GPU CPU

class tinyms.primitives.Depend[source]

Depend is used for processing dependency operations.

In most scenarios, if operators have IO side effects or memory side effects, they will be executed according to the user’s semantics. In some scenarios, if the two operators A and B have no data dependency but A must be executed before B, we recommend using Depend to specify their execution order. The usage method is as follows:

a = A(x)                --->        a = A(x)
b = B(y)                --->        y = Depend(y, a)
                        --->        b = B(y)
Inputs:
  • value (Tensor) - the real value to return for depend operator.

  • expr (Expression) - the expression to execute with no outputs.

Outputs:

Tensor, the value passed by last operator.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.softmax = ops.Softmax()
...         self.depend = ops.Depend()
...
...     def construct(self, x, y):
...         mul = x * y
...         y = self.depend(y, mul)
...         ret = self.softmax(y)
...         return ret
...
>>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
[[0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]]
class tinyms.primitives.DepthToSpace(block_size)[source]

Rearranges blocks of depth data into spatial dimensions.

This is the reverse operation of SpaceToDepth.

The depth of output tensor is \(input\_depth / (block\_size * block\_size)\).

The output tensor’s height dimension is \(height * block\_size\).

The output tensor’s width dimension is \(width * block\_size\).

The input tensor’s depth must be divisible by block_size * block_size. The data format is “NCHW”.

Parameters:

block_size (int) – The block size used to divide depth data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. It must be a 4-D tensor with shape \((N, C_{in}, H_{in}, W_{in})\). The data type is Number.

Outputs:

Tensor of shape \((N, C_{in} / \text{block_size} ^ 2, H_{in} * \text{block_size}, W_{in} * \text{block_size})\).

Raises:
  • TypeError – If block_size is not an int.

  • ValueError – If block_size is less than 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(1, 12, 1, 1), mindspore.float32)
>>> block_size = 2
>>> depth_to_space = ops.DepthToSpace(block_size)
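>>> # Output channels: 12 / (2 * 2) = 3; H and W are each multiplied by block_size.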
>>> output = depth_to_space(x)
>>> print(output.shape)
(1, 3, 2, 2)
class tinyms.primitives.DepthwiseConv2dNative(channel_multiplier, kernel_size, mode=3, pad_mode='valid', pad=0, stride=1, dilation=1, group=1)[source]

DepthwiseConv2dNative will be deprecated in the future. Please use mindspore.nn.Conv2d instead.

Supported Platforms:

Deprecated
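As a migration sketch (illustrative, not from the source): a depthwise convolution can be expressed with mindspore.nn.Conv2d by setting group equal to the number of input channels, so that each input channel is convolved with its own filters.

>>> # Illustrative migration sketch: depthwise convolution via nn.Conv2d.
>>> conv = nn.Conv2d(in_channels=8, out_channels=8, kernel_size=3, group=8)
>>> x = Tensor(np.ones([1, 8, 16, 16]), mindspore.float32)
>>> print(conv(x).shape)
(1, 8, 16, 16)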

class tinyms.primitives.Diag[source]

Constructs a diagonal tensor with the given diagonal values.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.diag() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([1, 2, 3, 4]).astype('int32')
>>> diag = ops.Diag()
>>> output = diag(input_x)
>>> print(output)
[[1 0 0 0]
 [0 2 0 0]
 [0 0 3 0]
 [0 0 0 4]]
class tinyms.primitives.DiagPart[source]

Extracts the diagonal elements from the given Tensor.

If the input_x is a Tensor of shape \([D_1,..., D_k, D_1,..., D_k]\), then the output will be a Tensor of rank k of shape \([D_1,..., D_k]\) where: \(output[i_1,..., i_k] = input_x[i_1,..., i_k, i_1,..., i_k]\).

Inputs:
  • input_x (Tensor) - The rank of input tensor is 2k(k > 0).

Outputs:

Tensor, the extracted diagonal has the same dtype as the input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • ValueError – If rank of input_x is odd or zero.

  • ValueError – If input_shape[i] is not equal to input_shape[i + len(input_shape)/2].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[1, 0, 0, 0],
...                   [0, 2, 0, 0],
...                   [0, 0, 3, 0],
...                   [0, 0, 0, 4]])
>>> diag_part = ops.DiagPart()
>>> output = diag_part(input_x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.Digamma[source]

Computes the gradient (derivative) of the lgamma function with respect to its input.

\[P(x) = \frac{d}{dx}\ln(\Gamma(x))\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. With type of float16 or float32 or float64.

Outputs:

Tensor, has the same dtype as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of input x is not float16 or float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1.5, 0.5, 9]).astype(np.float16))
>>> digamma = ops.Digamma()
>>> output = digamma(x)
>>> print(output)
[ 0.0365 -1.964   2.14  ]
class tinyms.primitives.Dilation2D(stride, dilation, pad_mode='SAME', data_format='NCHW')[source]

Computes the grayscale dilation of 4-D input and 3-D filters tensors.

Applies a 2D dilation over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(H\) is height, \(W\) is width, \(C\) is channel number. Given kernel size \(ks = (h_{ker}, w_{ker})\), stride \(s = (s_0, s_1)\) and dilation \(d = (d_0, d_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + d_0 \times m, s_1 \times w + d_1 \times n) + \text{filter}(C_j, m, n)\]

Warning

This is an experimental API that is subject to change or deletion.

Note

If the input data type is float32, this operator is still executed in float16 mode.

Parameters:
  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively, or a tuple of four int numbers when data_format is ‘NCHW’ represents [1, 1, stride_height, stride_width].

  • dilation (Union(int, tuple[int])) – The data type is int or a tuple of 2 integers or a tuple of 4 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater or equal to 1 and bounded by the height and width of the input x.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid”. Default: “same”. Both upper and lower case are supported.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input x.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str, optional) – The value for data format, only ‘NCHW’ is supported at present. Default: “NCHW”.

Inputs:
  • x (Tensor) - Input data. A 4-D Tensor, its shape must be \((N, C_{in}, H_{in}, W_{in})\).

  • filter (Tensor) - A three dimension tensor with the same type as input. The shape must be \((C_{in}, H_{filter}, W_{filter})\).

Outputs:

Tensor, the value that applied 2D dilation. The shape is \((N, C_{out}, H_{out}, W_{out})\) which is not necessarily the same as the input x, the type is the same as the input x.

Raises:
  • TypeError – If type of x or filter is not the type in [uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64].

  • TypeError – If stride or dilation is not an int number or a tuple of two or four int numbers.

  • ValueError – If the length of stride or dilation is neither two nor four when they are tuple.

  • ValueError – If stride or dilation shape is not (1, 1, height, width) when it is a tuple of four int numbers.

  • ValueError – If stride is not in the range of [1, 255].

  • ValueError – If dilation is less than 1.

  • ValueError – If pad_mode is not a str of ‘same’, ‘valid’, ‘SAME’ or ‘VALID’.

  • ValueError – If data_format is not the str of ‘NCHW’.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.ones([10, 5, 32, 32]), mindspore.float16)
>>> filter = Tensor(np.ones([5, 3, 3]), mindspore.float16)
>>> dilation2d = ops.Dilation2D(stride=1, dilation=1, pad_mode='VALID')
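>>> # With a 3x3 filter, stride 1 and dilation 1 under 'VALID' padding,
>>> # H_out = W_out = 32 - 3 + 1 = 30.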
>>> output = dilation2d(x, filter)
>>> print(output.shape)
(10, 5, 30, 30)
class tinyms.primitives.Div[source]

Computes the quotient of dividing the first input tensor by the second input tensor element-wise.

\[out_{i} = \frac{x_i}{y_i}\]

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, number.Number, bool]) - The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • y (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Outputs:

Tensor, the shape is the same as that of the inputs x and y after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If neither x nor y is a Tensor.

  • TypeError – If data types of x and y are both Tensor with bool_.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 :has same data type and shape of the two inputs
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> div = ops.Div()
>>> output = div(x, y)
>>> print(output)
[-1.3333334  2.5        2.        ]
>>> # case 2 : different data type and shape of the two inputs
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(2, mindspore.int32)
>>> output = div(x, y)
>>> print(output)
[-2.  2.5  3.]
>>> print(output.dtype)
Float32
class tinyms.primitives.DivNoNan[source]

Performs a safe division between x1 and x2 element-wise, returning 0 where the corresponding element of x2 is zero.

Inputs of x1 and x2 comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}output_{i} = \begin{cases} 0, & \text{ if } x2_{i} = 0\\ x1_{i} / x2_{i}, & \text{ if } x2_{i} \ne 0 \end{cases}\end{split}\]
Inputs:
  • x1 (Union[Tensor, number.Number, bool]) - The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • x2 (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor, should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If neither x1 nor x2 is a number.Number, a bool or a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([-1.0, 0., 1.0, 5.0, 6.0]), mindspore.float32)
>>> x2 = Tensor(np.array([0., 0., 0., 2.0, 3.0]), mindspore.float32)
>>> div_no_nan = ops.DivNoNan()
>>> output = div_no_nan(x1, x2)
>>> print(output)
[0.  0.  0.  2.5 2. ]
class tinyms.primitives.Dropout(keep_prob=0.5, Seed0=0, Seed1=0)[source]

During training, randomly zeroes some of the elements of the input tensor with probability 1-keep_prob from a Bernoulli distribution. It plays the role of reducing neuron correlation and avoiding overfitting.

Refer to mindspore.ops.dropout() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout(keep_prob=0.5)
>>> x = Tensor(np.ones([1, 2, 3, 4, 5]), mindspore.float32)
>>> output, mask = dropout(x)
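>>> # The mask is bit-packed into uint8: 120 elements -> 128 bits -> 16 bytes here.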
>>> print(output.shape, mask.shape, mask.dtype)
(1, 2, 3, 4, 5) (16,) UInt8
class tinyms.primitives.Dropout2D(keep_prob=0.5)[source]

During training, randomly zeroes some channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 4-dimensional tensor with shape NCHW, the channel feature map refers to a 2-dimensional feature map with shape HW).

Dropout2D can improve the independence between channel feature maps.

Note

The keep probability \(keep\_prob\) is equal to \(1 - p\) in mindspore.ops.dropout2d().

Parameters:

keep_prob (float, optional) – The keep probability of a channel, between 0 and 1, e.g. keep_prob = 0.8 means dropping out 20% of channels. Default: 0.5.

Inputs:
  • x (Tensor) - A 4-D tensor with shape \((N, C, H, W)\), where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width. The data type should be int8, int16, int32, int64, float16 or float32.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x and the data type is bool.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not int8, int16, int32, int64, float16, float32 or float64.

  • TypeError – If the data type of keep_prob is not float.

  • ValueError – If keep_prob is out of the range [0.0, 1.0].

  • ValueError – If x shape is not 4D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout2D(keep_prob=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 1, 2, 3)
class tinyms.primitives.Dropout3D(keep_prob=0.5)[source]

During training, randomly zeroes some channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 5-dimensional tensor with shape NCDHW, the channel feature map refers to a 3-dimensional feature map with shape DHW).

Note

The keep probability \(keep\_prob\) is equal to \(1 - p\) in mindspore.ops.dropout3d().

Dropout3D can improve the independence between channel feature maps.

Parameters:

keep_prob (float) – The keep probability of a channel, between 0 and 1, e.g. keep_prob = 0.8 means dropping out 20% of channels. Default: 0.5.

Inputs:
  • x (Tensor) - A 5-D tensor with shape \((N, C, D, H, W)\), where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. The data type should be int8, int16, int32, int64, float16 or float32.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x and the data type is bool.

Raises:
  • TypeError – If the data type of keep_prob is not float.

  • ValueError – If keep_prob is out of the range [0.0, 1.0]; or if the dim of input is not 5-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout3D(keep_prob=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 1, 2, 1, 2)
class tinyms.primitives.DropoutDoMask[source]

The DropoutDoMask interface is deprecated, please use the mindspore.ops.Dropout instead.

Supported Platforms:

Deprecated

class tinyms.primitives.DropoutGenMask(Seed0=0, Seed1=0)[source]

The DropoutGenMask interface is deprecated, please use the mindspore.ops.Dropout instead.

Supported Platforms:

Deprecated

class tinyms.primitives.DynamicGRUV2(direction='UNIDIRECTIONAL', cell_depth=1, keep_prob=1.0, cell_clip=-1.0, num_proj=0, time_major=True, activation='tanh', gate_order='rzh', reset_after=True, is_training=True)[source]

Applies a single-layer gated recurrent unit (GRU) to an input sequence.

\[\begin{split}\begin{array}{ll} r_{t+1} = \sigma(W_{ir} x_{t+1} + b_{ir} + W_{hr} h_{(t)} + b_{hr}) \\ z_{t+1} = \sigma(W_{iz} x_{t+1} + b_{iz} + W_{hz} h_{(t)} + b_{hz}) \\ n_{t+1} = \tanh(W_{in} x_{t+1} + b_{in} + r_{t+1} * (W_{hn} h_{(t)}+ b_{hn})) \\ h_{t+1} = (1 - z_{t+1}) * n_{t+1} + z_{t+1} * h_{(t)} \end{array}\end{split}\]

where \(h_{t+1}\) is the hidden state at time t+1, \(x_{t+1}\) is the input at time t+1, \(h_{t}\) is the hidden state of the layer at time t or the initial hidden state at time 0. \(r_{t+1}\), \(z_{t+1}\), \(n_{t+1}\) are the reset, update, and new gates, respectively. \(W\), \(b\) are the weight parameter and the deviation parameter respectively. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.

Parameters:
  • direction (str) – A string identifying the direction in the operator. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the operator. Default: 1.

  • keep_prob (float) – A float identifying the keep prob in the operator. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the operator. Default: -1.0.

  • num_proj (int) – An integer identifying the number projection in the operator. Default: 0.

  • time_major (bool) – A bool identifying the time major in the operator. Default: True.

  • activation (str) – A string identifying the type of activation function in the operator. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • gate_order (str) – A string identifying the gate order in weight and bias. Default: ‘rzh’. ‘zrh’ is another option. Here, ‘rzh’ means the gate order is: reset gate, update gate, hidden gate. ‘zrh’ means the gate order is: update gate, reset gate, hidden gate.

  • reset_after (bool) – A bool identifying whether to apply reset gate after matrix multiplication. Default: True.

  • is_training (bool) – A bool identifying is training in the operator. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((\text{num_step}, \text{batch_size}, \text{input_size})\). The data type must be float16.

  • weight_input (Tensor) - Input-hidden weight \(W_{\{ir,iz,in\}}\). Tensor of shape \((\text{input_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • weight_hidden (Tensor) - Hidden-hidden weight \(W_{\{hr,hz,hn\}}\). Tensor of shape \((\text{hidden_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • bias_input (Tensor) - Input-hidden bias \(b_{\{ir,iz,in\}}\). Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type with input init_h.

  • bias_hidden (Tensor) - Hidden-hidden bias \(b_{\{hr,hz,hn\}}\). Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type with input init_h.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((\text{batch_size})\). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((\text{batch_size}, \text{hidden_size})\). The data type must be float16 or float32.

Outputs:
  • y (Tensor) - A Tensor of shape:

    • y_shape = \((num\_step, batch\_size, min(hidden\_size, num\_proj))\): If num_proj > 0,

    • y_shape = \((num\_step, batch\_size, hidden\_size)\): If num_proj = 0.

    Has the same data type with input bias_type.

  • output_h (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • update (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • reset (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

  • hidden_new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type with input bias_type.

A note about the bias_type:

  • If bias_input and bias_hidden both are None, bias_type is the data type of init_h.

  • If bias_input is not None, bias_type is the data type of bias_input.

  • If bias_input is None and bias_hidden is not None, bias_type is the data type of bias_hidden.

Raises:
  • TypeError – If direction, activation or gate_order is not a str.

  • TypeError – If cell_depth or num_proj is not an int.

  • TypeError – If keep_prob or cell_clip is not a float.

  • TypeError – If time_major, reset_after or is_training is not a bool.

  • TypeError – If x, weight_input, weight_hidden, bias_input, bias_hidden, seq_length or init_h is not a Tensor.

  • TypeError – If dtype of x, weight_input or weight_hidden is not float16.

  • TypeError – If dtype of init_h is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 8, 64).astype(np.float16))
>>> weight_i = Tensor(np.random.rand(64, 48).astype(np.float16))
>>> weight_h = Tensor(np.random.rand(16, 48).astype(np.float16))
>>> bias_i = Tensor(np.random.rand(48).astype(np.float16))
>>> bias_h = Tensor(np.random.rand(48).astype(np.float16))
>>> init_h = Tensor(np.random.rand(8, 16).astype(np.float16))
>>> dynamic_gru_v2 = ops.DynamicGRUV2()
>>> output = dynamic_gru_v2(x, weight_i, weight_h, bias_i, bias_h, None, init_h)
>>> print(output[0].shape)
(2, 8, 16)
class tinyms.primitives.DynamicRNN(cell_type='LSTM', direction='UNIDIRECTIONAL', cell_depth=1, use_peephole=False, keep_prob=1.0, cell_clip=-1.0, num_proj=0, time_major=True, activation='tanh', forget_bias=0.0, is_training=True)[source]

Applies a recurrent neural network to the input. Only long short-term memory (LSTM) is supported currently.

\[\begin{split}\begin{array}{ll} \\ i_{t+1} = \sigma(W_{ix} x_{t+1} + b_{ix} + W_{ih} h_{(t)} + b_{ih}) \\ f_{t+1} = \sigma(W_{fx} x_{t+1} + b_{fx} + W_{fh} h_{(t)} + b_{fh}) \\ \tilde{c}_{t+1} = \tanh(W_{cx} x_{t+1} + b_{cx} + W_{ch} h_{(t)} + b_{ch}) \\ o_{t+1} = \sigma(W_{ox} x_{t+1} + b_{ox} + W_{oh} h_{(t)} + b_{oh}) \\ c_{t+1} = f_{t+1} * c_{(t)} + i_t * \tilde{c}_{t+1} \\ h_{t+1} = o_{t+1} * \tanh(c_{t+1}) \\ \end{array}\end{split}\]

\(h_{t+1}\) is the hidden state at time t+1. \(x_{t+1}\) is the input at time t+1. \(h_{t}\) is the hidden state of the layer at time t or the initial hidden state at time 0. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ix}, b_{ix}\) are the weight and bias used to transform from input \(x\) to \(i\).

Parameters:
  • cell_type (str) – A string identifying the cell type in the operator. Default: ‘LSTM’. Only ‘LSTM’ is currently supported.

  • direction (str) – A string identifying the direction in the operator. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the operator. Default: 1.

  • use_peephole (bool) – A bool identifying if use peephole in the operator. Default: False.

  • keep_prob (float) – A float identifying the keep prob in the operator. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the operator. Default: -1.0.

  • num_proj (int) – An integer identifying the number projection in the operator. Default: 0.

  • time_major (bool) – A bool specify the data format of x. If it is set to True, the format is \((num\_step, batch\_size, input\_size)\), if it is set to False, the format is \((batch\_size, num\_step, input\_size)\). Default: True. Only supports True at present.

  • activation (str) – A string identifying the type of activation function in the operator. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • forget_bias (float) – A float identifying the forget bias in the operator. Default: 0.0.

  • is_training (bool) – A bool identifying is training in the operator. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((num\_step, batch\_size, input\_size)\). The data type must be float16.

  • w (Tensor) - Weight. Tensor of shape \((input\_size + hidden\_size, 4 * hidden\_size)\). The data type must be float16.

  • b (Tensor) - Bias. Tensor of shape \((4 * hidden\_size)\). The data type must be float16 or float32.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((batch\_size, )\). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((1, batch\_size, hidden\_size)\). The data type must be float16.

  • init_c (Tensor) - Cell state of initial time. Tensor of shape \((1, batch\_size, hidden\_size)\). The data type must be float16.

Outputs:
  • y (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • output_h (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). With data type of float16.

  • output_c (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • i (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • j (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • f (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • o (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

  • tanhct (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type with input b.

Raises:
  • TypeError – If cell_type, direction or activation is not a str.

  • TypeError – If cell_depth or num_proj is not an int.

  • TypeError – If keep_prob, cell_clip or forget_bias is not a float.

  • TypeError – If use_peephole, time_major or is_training is not a bool.

  • TypeError – If x, w, b, seq_length, init_h or init_c is not a Tensor.

  • TypeError – If dtype of x, w, init_h or init_c is not float16.

  • TypeError – If dtype of b is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 16, 64).astype(np.float16))
>>> w = Tensor(np.random.rand(96, 128).astype(np.float16))
>>> b = Tensor(np.random.rand(128).astype(np.float16))
>>> init_h = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> init_c = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> dynamic_rnn = ops.DynamicRNN()
>>> output = dynamic_rnn(x, w, b, None, init_h, init_c)
>>> print(output[0].shape)
(2, 16, 32)
class tinyms.primitives.DynamicShape(dtype=9)[source]

Same as operator TensorShape. DynamicShape will be deprecated in the future. Please use TensorShape instead.

Supported Platforms:

Deprecated
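As a migration sketch (illustrative, not from the source), TensorShape returns the shape of its input as a Tensor:

>>> # Illustrative migration sketch: TensorShape instead of DynamicShape.
>>> x = Tensor(np.ones([3, 2, 1]), mindspore.float32)
>>> print(ops.TensorShape()(x))
[3 2 1]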

class tinyms.primitives.EditDistance(normalize=True)[source]

Computes the Levenshtein Edit Distance. It is used to measure the similarity of two sequences. The inputs are variable-length sequences provided by SparseTensors (hypothesis_indices, hypothesis_values, hypothesis_shape) and (truth_indices, truth_values, truth_shape).

\[\begin{split}\operatorname{lev}_{a, b}(i, j)=\left\{\begin{array}{ll} \max (i, j) \qquad \qquad \qquad \qquad \qquad \quad \ \text { if } \min (i, j)=0 \\ \min \left\{\begin{array}{ll} \operatorname{lev}_{a, b}(i-1, j)+1 & \\ \operatorname{lev}_{a, b}(i, j-1)+1 & \text { otherwise. } \\ \operatorname{lev}_{a, b}(i-1, j-1)+1_{\left(a_{i} \neq b_{j}\right)} \end{array}\right. & \end{array}\right.\end{split}\]

Where \(a\) indicates the hypothesis and \(b\) indicates the truth. For ease of understanding, i and j herein may be considered as the lengths of a and b.

Warning

Unordered truth_indices or hypothesis_indices might lead to unexpected results, so it is suggested to make sure truth_indices and hypothesis_indices are both in ascending order before calling this API.

Parameters:

normalize (bool) – If true, edit distances are normalized by length of truth. Default: True.

Inputs:
  • hypothesis_indices (Tensor) - The indices of the hypothesis list SparseTensor. With int64 data type. The shape of tensor is \((N, R)\).

  • hypothesis_values (Tensor) - The values of the hypothesis list SparseTensor. Must be 1-D vector with length of N.

  • hypothesis_shape (Tensor) - The shape of the hypothesis list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

  • truth_indices (Tensor) - The indices of the truth list SparseTensor. With int64 data type. The shape of tensor is \((M, R)\).

  • truth_values (Tensor) - The values of the truth list SparseTensor. Must be 1-D vector with length of M.

  • truth_shape (Tensor) - The shape of the truth list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

Outputs:

Tensor, a dense tensor with rank R-1 and float32 data type.

Raises:

TypeError – If normalize is not a bool.

Supported Platforms:

Ascend CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> class EditDistance(nn.Cell):
...     def __init__(self, hypothesis_shape, truth_shape, normalize=True):
...         super(EditDistance, self).__init__()
...         self.edit_distance = ops.EditDistance(normalize)
...         self.hypothesis_shape = hypothesis_shape
...         self.truth_shape = truth_shape
...
...     def construct(self, hypothesis_indices, hypothesis_values, truth_indices, truth_values):
...         return self.edit_distance(hypothesis_indices, hypothesis_values, self.hypothesis_shape,
...                                   truth_indices, truth_values, self.truth_shape)
...
>>> hypothesis_indices = Tensor(np.array([[0, 0, 0], [1, 0, 1], [1, 1, 1]]).astype(np.int64))
>>> hypothesis_values = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> hypothesis_shape = Tensor(np.array([1, 1, 2]).astype(np.int64))
>>> truth_indices = Tensor(np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]).astype(np.int64))
>>> truth_values = Tensor(np.array([1, 3, 2, 1]).astype(np.float32))
>>> truth_shape = Tensor(np.array([2, 2, 2]).astype(np.int64))
>>> edit_distance = EditDistance(hypothesis_shape, truth_shape)
>>> output = edit_distance(hypothesis_indices, hypothesis_values, truth_indices, truth_values)
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.Eig(compute_v=False)[source]

Computes the eigenvalues and eigenvectors of a square matrix (or a batch of square matrices).

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

compute_v (bool, optional) – If True, compute both eigenvalues and eigenvectors; If False, just eigenvalues will be computed. Default: False.

Inputs:
  • x (Tensor) - Square matrices of shape \((*, N, N)\), with float32, float64, complex64 or complex128 data type.

Outputs:
  • eigen_values (Tensor) - Shape \((*, N)\). Each inner most vector represents eigenvalues of the corresponding matrix. The eigenvalues may not have an order.

  • eigen_vectors (Tensor) - If compute_v is False, it’s an empty tensor. Otherwise, this tensor has shape \((*, N, N)\), whose columns represent normalized (unit length) eigenvectors of corresponding eigenvalues.

Raises:
  • TypeError – If compute_v is not a bool.

  • TypeError – If dtype of x is not one of: float64, float32, complex64 or complex128.

  • TypeError – If x is not a Tensor.

  • ValueError – If x is not square (or a batch of square matrices).

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 0.0], [0.0, 2.0]]), mindspore.float32)
>>> eig = ops.Eig(compute_v=True)
>>> u, v = eig(input_x)
>>> print(u)
[1.+0.j 2.+0.j]
>>> print(v)
[[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]]
class tinyms.primitives.Einsum(equation)[source]

Sums the product of the elements of the input Tensors along the dimensions specified in the notation, based on the Einstein summation convention (Einsum). You can use this operator to perform diagonal/reduce-sum/transpose/matmul/mul/inner-product operations, etc.

The inputs must be a tuple of tensors. When there is only one input tensor, pass it as a one-element tuple, i.e. (tensor,). The data types of the tensors must all be the same, one of float16, float32 or float64.

Parameters:

equation (str) – An attribute that represents the operation you want to perform. The value can contain only letters ([a-z][A-Z]), commas (,), ellipsis (…), and the arrow (->). The letters represent an input tensor’s dimensions, commas (,) separate tensors, ellipsis (…) indicates the tensor dimensions that you do not care about, the left of the arrow (->) indicates the input tensors, and the right of it indicates the desired output dimension.

Inputs:
  • x (Tuple) - Input tensors used for calculation. The data types of the tensors must be the same.

Outputs:

Tensor, the shape of it can be obtained from the equation, and the data type is the same as input tensors.

Raises:

TypeError – If equation itself is invalid, or the equation does not match the input tensor.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> equation = "i->"
>>> einsum = ops.Einsum(equation)
>>> output = einsum([x])
>>> print(output)
[7.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> equation = "i,i->i"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x, y))
>>> print(output)
[ 2. 8. 12.]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> y = Tensor(np.array([[2.0, 3.0], [1.0, 2.0], [4.0, 5.0]]), mindspore.float32)
>>> equation = "ij,jk->ik"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x, y))
>>> print(output)
[[16. 22.]
[37. 52.]]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->ji"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x,))
>>> print(output)
[[1. 4.]
[2. 5.]
[3. 6.]]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->j"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x,))
>>> print(output)
[5. 7. 9.]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "...->"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x,))
>>> print(output)
[21.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 1.0]), mindspore.float32)
>>> equation = "j,i->ji"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x, y))
>>> print(output)
[[ 2. 4. 1.]
[ 4. 8. 2.]
[ 6. 12. 3.]]
class tinyms.primitives.Elu(alpha=1.0)[source]

Exponential Linear Unit activation function.

Applies the exponential linear unit function element-wise. The activation function is defined as:

\[\begin{split}\text{ELU}(x)= \left\{ \begin{array}{align} \alpha(e^{x} - 1) & \text{if } x \le 0\\ x & \text{if } x \gt 0\\ \end{array}\right.\end{split}\]


Parameters:

alpha (float) – The alpha value of ELU, the data type is float. Only support ‘1.0’ currently. Default: 1.0.

Inputs:
  • input_x (Tensor) - The input of ELU is a Tensor of any dimension with data type of float16, float32 or float64.

Outputs:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If alpha is not a float.

  • TypeError – If dtype of input_x is neither float16, float32 nor float64.

  • ValueError – If alpha is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> elu = ops.Elu()
>>> output = elu(input_x)
>>> print(output)
[[-0.63212055  4.         -0.99966455]
 [ 2.         -0.99326205  9.        ]]
class tinyms.primitives.EmbeddingLookup[source]

Returns a slice of input tensor based on the specified indices.

This Primitive has the similar functionality as GatherV2 operating on axis = 0, but has one more input: offset.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). This represents a Tensor slice, instead of the entire Tensor. Currently, the dimension is restricted to be 2.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Values can be out of range of input_params, and the exceeding part will be filled with 0 in the output. Negative values are not supported, and the result is undefined if values are negative. The data type should be int32 or int64.

  • offset (int) - Specifies the offset value of this input_params slice. Thus the real indices are equal to input_indices minus offset.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\). The data type is the same with input_params.

Raises:
  • TypeError – If dtype of input_indices is not int.

  • ValueError – If length of shape of input_params is greater than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_params = Tensor(np.array([[8, 9], [10, 11], [12, 13], [14, 15]]), mindspore.float32)
>>> input_indices = Tensor(np.array([[5, 2], [8, 5]]), mindspore.int32)
>>> offset = 4
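>>> # Real indices are input_indices - offset = [[1, -2], [4, 1]]; the out-of-range
>>> # indices (-2 and 4) produce rows of zeros in the output.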
>>> output = ops.EmbeddingLookup()(input_params, input_indices, offset)
>>> print(output)
[[[10. 11.]
  [ 0.  0.]]
 [[ 0.  0.]
  [10. 11.]]]
class tinyms.primitives.Eps[source]

Creates a Tensor with the same data type and shape as the input, where each element is the minimum value that the corresponding data type can express.

Inputs:
  • x (Tensor) - Tensor of any dimension used to obtain the minimum value that its data type can be expressed. The data type must be float16, float32 or float64.

Outputs:

Tensor, has the same type and shape as x, but filled with the minimum value expressible by the dtype of x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If data type of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([4, 1, 2, 3], mindspore.float32)
>>> output = ops.Eps()(x)
>>> print(output)
[1.5258789e-05 1.5258789e-05 1.5258789e-05 1.5258789e-05]
class tinyms.primitives.Equal[source]

Computes the equivalence between two tensors element-wise.

Refer to mindspore.ops.equal() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: The shape of two inputs are different
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> equal = ops.Equal()
>>> output = equal(x, 2.0)
>>> print(output)
[False True False]
>>> # case 2: The shape of two inputs are the same
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal = ops.Equal()
>>> output = equal(x, y)
>>> print(output)
[ True  True False]
class tinyms.primitives.EqualCount[source]

Computes the number of the same elements of two tensors.

The two input tensors must have the same data type and shape.

Inputs:
  • x (Tensor) - The first input tensor. If the data type and shape of y are determined, then x must be the same as y, and vice versa. \((N, *)\) where \(*\) means any number of additional dimensions.

  • y (Tensor) - The second input tensor. If the data type and shape of x are determined, then y must be the same as x, and vice versa.

Outputs:

Tensor, with the type same as input tensor and shape as \((1,)\).

Raises:
  • TypeError – If x or y is not a Tensor.

  • ValueError – If shape of x is not equal to shape of y.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal_count = ops.EqualCount()
>>> output = equal_count(x, y)
>>> print(output)
[2]
class tinyms.primitives.Erf[source]

Computes the Gauss error function of x element-wise.

Refer to mindspore.ops.erf() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erf = ops.Erf()
>>> output = erf(x)
>>> print(output)
[-0.8427168   0.          0.8427168   0.99530876  0.99997765]
class tinyms.primitives.Erfc[source]

Computes the complementary error function of x element-wise.

Refer to mindspore.ops.erfc() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erfc = ops.Erfc()
>>> output = erfc(x)
>>> print(output)
[1.8427168e+00 1.0000000e+00 1.5728319e-01 4.6912432e-03 2.2351742e-05]
class tinyms.primitives.Erfinv[source]

Computes the inverse error function of input. The inverse error function is defined in the range (-1, 1).

The formula is defined as:

\[erfinv(erf(x)) = x\]
Inputs:
  • input_x (Tensor) - The input tensor to compute to, with data type float32, float16 or float64.

Outputs:

Tensor, has the same shape and dtype as input_x.

Raises:

TypeError – If dtype of input_x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)
>>> erfinv = ops.Erfinv()
>>> output = erfinv(x)
>>> print(output)
[ 0.          0.47695306 -1.1630805 ]
class tinyms.primitives.EuclideanNorm(keep_dims=False)[source]

Calculates the Euclidean norm (aka the L2 norm) of a Tensor along the specified axes. The specified axes are removed by default.

Parameters:

keep_dims (bool, optional) – whether to retain the reduced dimensions. If true, retains them with length 1. If false, these dimensions are removed. Default: False.

Inputs:
  • x (Tensor) - The input Tensor to reduce.

  • axes (Tensor) - The axes to perform reduction on. Must be one of the following types: int32, int64. It must be in range \([-rank(x), rank(x))\).

Outputs:

Tensor, has the same type as the ‘x’.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([[3, 5], [4, 12]]).astype(np.int32))
>>> axes = Tensor([0])
>>> op = ops.EuclideanNorm(keep_dims=True)
>>> output = op(x, axes)
>>> print(output)
[[5 13]]
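With the default keep_dims=False, the reduced axes are removed instead of kept (a minimal sketch reusing x and axes from above):

>>> # Illustrative sketch: default keep_dims=False removes the reduced axis.
>>> output = ops.EuclideanNorm()(x, axes)
>>> print(output)
[ 5 13]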
class tinyms.primitives.Exp[source]

Returns exponential of a tensor element-wise.

Refer to mindspore.ops.exp() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 1.0, 3.0]), mindspore.float32)
>>> exp = ops.Exp()
>>> output = exp(x)
>>> print(output)
[ 1.        2.718282 20.085537]
class tinyms.primitives.Expand[source]

Expands the Tensor along singleton dimensions (dims with size 1) to match the given desired shape.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.expand() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> shape = Tensor(np.array([3,4]), mindspore.int32)
>>> expand = ops.Expand()
>>> y = expand(x, shape)
>>> print(y)
[[1. 1. 1. 1.]
 [2. 2. 2. 2.]
 [3. 3. 3. 3.]]
class tinyms.primitives.ExpandDims[source]

Adds an additional dimension to input_x at the given axis.

Refer to mindspore.ops.expand_dims() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> expand_dims = ops.ExpandDims()
>>> output = expand_dims(input_tensor, 0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
class tinyms.primitives.Expm1[source]

Returns exponential then minus 1 of a tensor element-wise.

Refer to mindspore.ops.expm1() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 2.0, 3.0, 5.0]), mindspore.float32)
>>> expm1 = ops.Expm1()
>>> output = expm1(x)
>>> print(output)
[  0.         6.389056  19.085537 147.41316 ]
class tinyms.primitives.ExtractGlimpse(centered=True, normalized=True, uniform_noise=True, noise='uniform')[source]

Extracts glimpses (usually rectangular subareas) from the input image Tensor and returns them as windows.

Note

If the extracted windows and the input image only partially overlap, random noise is filled in the non-overlapping areas.

Parameters:
  • centered (bool, optional) – An optional bool. Indicates if the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0, 0) offset corresponds to the upper left corner of the input images. Defaults to True.

  • normalized (bool, optional) – An optional bool. Indicates if the offset coordinates are normalized. Defaults to True.

  • uniform_noise (bool, optional) – An optional bool. Indicates if the noise should be generated using a uniform distribution; if False, the noise type is determined by noise. Defaults to True.

  • noise (str, optional) –

    An optional string that specifies the type of noise to fill. The window is determined by size and offsets. When the window and the input image tensor do not overlap, random noise is filled. The value can be ‘uniform’, ‘gaussian’ or ‘zero’. Default: ‘uniform’.

    • When noise is ‘uniform’ or ‘gaussian’, the result is random.

    • When noise is ‘zero’, uniform_noise must be False and the filling noise will be zero, so that the result is fixed.

    • When uniform_noise is True, noise can only be ‘uniform’. When uniform_noise is False, noise can be ‘uniform’, ‘gaussian’ or ‘zero’.

Inputs:
  • x (Tensor) - A 4-D float tensor of shape \((batch_size, height, width, channels)\). Types allowed: float32.

  • size (Tensor) - A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, following by the glimpse width. Types allowed: int32. The value of size must be greater than zero.

  • offsets (Tensor) - A 2-D integer tensor of shape \((batch_size, 2)\) containing the y, x locations of the center of each window. Types allowed: float32.

Outputs:

A 4-D tensor of shape \((batch_size, glimpse_height, glimpse_width, channels)\) with type: float32.

Raises:
  • TypeError – If centered is not a bool.

  • TypeError – If normalized is not a bool.

  • TypeError – If uniform_noise is not a bool.

  • ValueError – If noise is not uniform, gaussian or zero.

  • ValueError – If the value of size is not constant value.

  • ValueError – If the batch_size of input is inconsistent with the batch_size of offsets.

  • ValueError – If the second dimension of offsets is not 2.

  • ValueError – If the input is not Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[[0.0], [1.0], [2.0]], [[3.0], [4.0], [5.0]], [[6.0], [7.0], [8.0]]]], dtype=mindspore.float32)
>>> size = Tensor((2, 2), dtype=mindspore.int32)
>>> offsets = Tensor([[1, 1]], dtype=mindspore.float32)
>>> extract_glimpse = ops.ExtractGlimpse(centered=False, normalized=False,
...                                      uniform_noise=False, noise="uniform")
>>> output = extract_glimpse(x, size, offsets)
>>> print(output)
[[[[0.]
   [1.]]
  [[3.]
   [4.]]]]
class tinyms.primitives.ExtractImagePatches(ksizes, strides, rates, padding='valid')[source]

Extracts patches from images. The input tensor must be a 4-D tensor and the data format is NCHW.

Parameters:
  • ksizes (Union[tuple[int], list[int]]) – The size of sliding window, must be a tuple or a list of integers, and the format is [1, 1, ksize_row, ksize_col].

  • strides (Union[tuple[int], list[int]]) – Distance between the centers of the two consecutive patches, must be a tuple or list of int, and the format is [1, 1, stride_row, stride_col].

  • rates (Union[tuple[int], list[int]]) – In each extracted patch, the gap between the corresponding dimension pixel positions, must be a tuple or a list of integers, and the format is [1, 1, rate_row, rate_col].

  • padding (str) –

    The type of padding algorithm, is a string whose value is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Means that the patch can take the part beyond the original image, and this part is filled with 0.

    • valid: Means that the taken patch area must be completely covered in the original image.

Inputs:
  • input_x (Tensor) - A 4-D tensor whose shape is [in_batch, in_depth, in_row, in_col] and data type is number.

Outputs:

Tensor, a 4-D tensor whose data type is the same as input_x, and the shape is [out_batch, out_depth, out_row, out_col], where out_batch is the same as in_batch, and

\[out\_depth = ksize\_row * ksize\_col * in\_depth\]

and if ‘padding’ is “valid”:

\[out\_row = floor((in\_row - (ksize\_row + (ksize\_row - 1) * (rate\_row - 1))) / stride\_row) + 1\]

\[out\_col = floor((in\_col - (ksize\_col + (ksize\_col - 1) * (rate\_col - 1))) / stride\_col) + 1\]

if ‘padding’ is “same”:

\[out\_row = floor((in\_row - 1) / stride\_row) + 1\]

\[out\_col = floor((in\_col - 1) / stride\_col) + 1\]
Supported Platforms:

Ascend GPU
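Examples

The entry above gives the output-shape formulas but no example; the following minimal sketch (illustrative, not from the source) checks the “valid” case: for a 3 x 3 input with 2 x 2 patches, stride 1 and rate 1, out_row = out_col = floor((3 - 2) / 1) + 1 = 2 and out_depth = 2 * 2 * 1 = 4.

>>> # Illustrative sketch: check the "valid" output-shape formulas.
>>> input_x = Tensor(np.arange(9).reshape(1, 1, 3, 3), mindspore.float32)
>>> extract = ops.ExtractImagePatches(ksizes=[1, 1, 2, 2], strides=[1, 1, 1, 1],
...                                   rates=[1, 1, 1, 1], padding="VALID")
>>> output = extract(input_x)
>>> print(output.shape)
(1, 4, 2, 2)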

class tinyms.primitives.ExtractVolumePatches(kernel_size, strides, padding)[source]

Extracts patches from the input and puts them in the “depth” output dimension, where the “depth” dimension is the second dimension of the output.

Parameters:
  • kernel_size (Union[int, tuple[int], list[int]]) – A list of ints whose length is 3 or 5. The size of the sliding window for each dimension of input. Must be: \([1, 1, k_d, k_h, k_w]\) or \([k_d, k_h, k_w]\). If \(k_d = k_h = k_w\), you can enter an integer.

  • strides (Union[int, tuple[int], list[int]]) – A list of ints whose length is 3 or 5. How far apart the centers of two consecutive patches are in the input. Must be: \([1, 1, s_d, s_h, s_w]\) or \([s_d, s_h, s_w]\). If \(s_d = s_h = s_w\), you can enter an integer.

  • padding (str) – A string from: “SAME”, “VALID”. The type of padding algorithm to use.

Inputs:
  • input_x (Tensor) - A Tensor. 5-D Tensor with shape \((x_n, x_c, x_d, x_h, x_w)\).

Outputs:

Tensor, has the same type as input. If padding is “VALID”, the shape is \((x_n, k_d * k_h * k_w * x_c, 1 + (x_d - k_d) / s_d, 1 + (x_h - k_h) / s_h, 1 + (x_w - k_w) / s_w)\); if padding is “SAME”, the shape is \(( x_n, k_d * k_h * k_w * x_c, (x_d + s_d - 1) / s_d, (x_h + s_h - 1) / s_h, (x_w + s_w - 1) / s_w)\).

Raises:
  • TypeError – If kernel_size or strides is not a list, a tuple or an int.

  • TypeError – If input_x is not a tensor.

  • TypeError – If padding is not str.

  • ValueError – If the length of kernel_size is neither 3 nor 5 and kernel_size is not an integer.

  • ValueError – If the length of strides is neither 3 nor 5 and strides is not an integer.

  • ValueError – If padding is neither “VALID” nor “SAME”.

  • ValueError – If elements of kernel_size or strides are not positive integer.

  • ValueError – If input_x is not a tensor in dimension 5.

  • ValueError – If input_x’s shape has zero.

  • ValueError – If one of kernel_size or strides’ first two numbers is not 1.

  • ValueError – If padding = “VALID” and \(input\_x - kernel\_size\) is less than 0 in d, h or w dimension.

  • ValueError – If padding = “SAME” and \(padding\_needed = ((input\_x + strides - 1) / strides - 1) * strides + kernel\_size - input\_x\) is less than 0 in d, h or w dimension.

  • ValueError – If x_h is not 1 or x_w is not 1 and \(x_w + padding\_needed - k_w - s_w\) is less than 0.

  • ValueError – If \(x_d * x_h * x_w\) is greater than 2048.

Supported Platforms:

Ascend GPU CPU

Examples

>>> kernel_size = (1, 1, 2, 2, 2)
>>> strides = (1, 1, 1, 1, 1)
>>> padding = "VALID"
>>> input_x = ops.Reshape()(Tensor(np.arange(1, 28), mstype.float16), (1, 1, 3, 3, 3))
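>>> # VALID padding: d, h and w each become 3 - 2 + 1 = 2, and the output "depth"
>>> # dimension is k_d * k_h * k_w * x_c = 2 * 2 * 2 * 1 = 8.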
>>> output_y = ops.ExtractVolumePatches(kernel_size, strides, padding)(input_x)
>>> print(output_y.shape)
(1, 8, 2, 2, 2)
class tinyms.primitives.Eye[source]

Creates a tensor with ones on the diagonal and zeros in the rest.

Refer to mindspore.ops.eye() for more details.

Inputs:
  • n (int) - The number of rows of returned tensor. Constant value only.

  • m (int) - The number of columns of returned tensor. Constant value only.

  • t (mindspore.dtype) - MindSpore’s dtype, the data type of the returned tensor. The data type can be bool or Number. Default: None, in which case the data type of the returned tensor is mindspore.float32.

Outputs:

Tensor, a tensor with ones on the diagonal and the rest of elements are zero. The shape of output depends on the user’s Inputs n and m. And the data type depends on Inputs t.

Supported Platforms:

Ascend GPU CPU

Examples

>>> eye = ops.Eye()
>>> output = eye(2, 2, mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = eye(1, 2, mindspore.float64)
>>> print(output)
[[1. 0.]]
>>> print(output.dtype)
Float64
class tinyms.primitives.FFTWithSize(signal_ndim, inverse, real, norm='backward', onesided=True, signal_sizes=())[source]

Fourier transform, which can be adjusted by the parameters to achieve FFT, IFFT, RFFT or IRFFT.

For fft, it computes the following expression:

\[X[\omega_1, \dots, \omega_d] = \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{-j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}},\]

where \(d\) = signal_ndim is number of dimensions for the signal, and \(N_i\) is the size of signal dimension \(i\).

For ifft, it computes the following expression:

\[X[\omega_1, \dots, \omega_d] = \frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{\ j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}},\]

where \(d\) = signal_ndim is number of dimensions for the signal, and \(N_i\) is the size of signal dimension \(i\).

Note

  • FFT/IFFT requires complex64 or complex128 inputs, return complex64 or complex128 outputs.

  • RFFT requires float32 or float64 inputs, return complex64 or complex128 outputs.

  • IRFFT requires complex64 or complex128 inputs, return float32 or float64 outputs.

Parameters:
  • signal_ndim (int) – The number of dimensions in each signal, this controls how many dimensions of the fourier transform are realized, can only be 1, 2 or 3.

  • inverse (bool) – Whether it is the inverse transformation.

  • real (bool) –

    Whether it is the real transformation.

    • ”inverse:False real:False” corresponds to FFT.

    • ”inverse:True real:False” corresponds to IFFT.

    • ”inverse:False real:True” corresponds to RFFT.

    • ”inverse:True real:True” corresponds to IRFFT.

  • norm (str, optional) –

    The normalization, optional values: [“backward”, “forward”, “ortho”]. Default value: “backward”.

    • ”backward” has the direct transforms unscaled and the inverse transforms scaled by \(1/n\), where n is the number of elements in the input x.

    • ”ortho” has both the direct and inverse transforms scaled by \(1/\sqrt n\).

    • ”forward” has the direct transforms scaled by \(1/n\) and the inverse transforms unscaled.

  • onesided (bool, optional) – Controls whether the input is halved to avoid redundancy. Default: True.

  • signal_sizes (tuple, optional) –

Size of the original signal (the signal before rfft, without the batch dimension). This parameter is required only in IRFFT mode with onesided set to True, and the following conditions must be satisfied. Default: ().

    • The length of signal_sizes is equal to the signal_ndim of the IRFFT: \(len(signal_sizes)=signal_ndim\).

    • The last dimension of signal_sizes divided by 2, plus 1, equals the last dimension of the IRFFT input: \(signal\_sizes[-1]/2+1=x.shape[-1]\).

    • signal_sizes has exactly the same dimensions as the input shape except for the last dimension: \(signal_sizes[:-1]=x.shape[:-1]\).

Inputs:
  • x (Tensor) - The dimension of the input tensor must be greater than or equal to signal_ndim.

Outputs:

A tensor containing the complex-to-complex, real-to-complex or complex-to-real Fourier transform result.

Raises:
  • TypeError – If the input type of FFT/IFFT/IRFFT is not one of: complex64, complex128.

  • TypeError – If the input type of RFFT is not one of: float32, float64.

  • TypeError – If the input type is not Tensor.

  • ValueError – If x dimension is less than signal_ndim.

  • ValueError – If signal_ndim is greater than 3 or less than 1.

  • ValueError – If norm is none of “backward”, “forward” or “ortho”.

Supported Platforms:

GPU CPU

Examples

>>> # case FFT: signal_ndim: 1, inverse: False, real: False.
>>> fft_in = Tensor(np.array([2, 1, 2]), mindspore.complex64)
>>> fft_net = ops.FFTWithSize(signal_ndim=1, inverse=False, real=False)
>>> fft_output = fft_net(fft_in)
>>> print(fft_output)
[5.        +0.j         0.5       +0.86602545j 0.50000006-0.8660255j ]
>>> # case IFFT: signal_ndim: 1, inverse: True, real: False.
>>> ifft_in = fft_output
>>> ifft_net = ops.FFTWithSize(signal_ndim=1, inverse=True, real=False)
>>> ifft_output = ifft_net(ifft_in)
>>> print(ifft_output)
[2.        -1.9868216e-08j 0.99999994+0.0000000e+00j
 1.9999999 +7.9472862e-08j]
>>> # case RFFT2D: signal_ndim: 2, inverse: False, real: True.
>>> rfft_in = Tensor(np.array([[2, 1, 2], [3, 1, 6]]), mindspore.float32)
>>> rfft_net = ops.FFTWithSize(signal_ndim=2, inverse=False, real=True)
>>> rfft_output = rfft_net(rfft_in)
>>> print(rfft_output)
[[ 1.5000000e+01+1.1920929e-07j -2.3841858e-07+5.1961522e+00j]
 [-5.0000000e+00-2.9802322e-08j  9.9999988e-01-3.4641016e+00j]]
>>> # case IRFFT2D: signal_ndim: 2, inverse: True, real: True.
>>> irfft_in = rfft_output
>>> irfft_net = ops.FFTWithSize(signal_ndim=2, inverse=True, real=True, signal_sizes=rfft_in.shape)
>>> irfft_output = irfft_net(irfft_in)
>>> print(irfft_output)
[[2.         1.         2.        ]
 [3.         0.99999994 5.9999995 ]]
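
As a supplementary sketch of the norm parameter, reusing fft_in from the example above: with norm="ortho" both directions are scaled by \(1/\sqrt n\), so a forward-then-inverse round trip reproduces the original signal up to floating-point error.

>>> # Hedged sketch: "ortho" scales both the forward and the inverse
>>> # transform by 1/sqrt(n), so FFT followed by IFFT recovers the input.
>>> ortho_fft = ops.FFTWithSize(signal_ndim=1, inverse=False, real=False, norm="ortho")
>>> ortho_ifft = ops.FFTWithSize(signal_ndim=1, inverse=True, real=False, norm="ortho")
>>> round_trip = ortho_ifft(ortho_fft(fft_in))
>>> print(round_trip.shape)
(3,)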
class tinyms.primitives.FastGeLU[source]

Fast Gaussian Error Linear Units activation function.

Refer to mindspore.ops.fast_gelu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> fast_gelu = ops.FastGeLU()
>>> output = fast_gelu(x)
>>> print(output)
[[-1.5418735e-01  3.9921875e+00 -9.7473649e-06]
 [ 1.9375000e+00 -1.0052517e-03  8.9824219e+00]]
class tinyms.primitives.FastGelu[source]

Same as operator FastGeLU. FastGelu will be deprecated in the future. Please use FastGeLU instead.

class tinyms.primitives.Fill[source]

The Fill interface is deprecated. Please use mindspore.ops.FillV2 instead.

Supported Platforms:

Deprecated

class tinyms.primitives.FillDiagonal(fill_value, wrap=False)[source]

Fills the main diagonal of a Tensor in-place with a specified value and returns the result. The input must have at least 2 dimensions, and when it has more than 2 dimensions all of its dimensions must be of equal length.

Parameters:
  • fill_value (float) – The value to fill the diagonal of input_x.

  • wrap (bool, optional) – Controls whether the diagonal elements continue onto the remaining rows in the case of a tall matrix (a matrix with more rows than columns). The examples below demonstrate how it works on a tall matrix when wrap is set to True. Default: False.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type must be float32, int32 or int64.

Outputs:
  • y (Tensor) - Tensor, has the same shape and data type as the input input_x.

Raises:
  • TypeError – If data type of input_x is not one of the following: float32, int32, int64.

  • ValueError – If the dimension of input_x is not greater than 1.

  • ValueError – If the size of each dimension is not equal, when the dimension is greater than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32))
>>> fill_value = 9.9
>>> fill_diagonal = ops.FillDiagonal(fill_value)
>>> y = fill_diagonal(x)
>>> print(y)
[[9.9 2.  3. ]
 [4.  9.9 6. ]
 [7.  8.  9.9]]
>>> x = Tensor(np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]).astype(np.int32))
>>> fill_value = 9.0
>>> fill_diagonal = ops.FillDiagonal(fill_value)
>>> y = fill_diagonal(x)
>>> print(y)
[[9 0 0]
 [1 9 1]
 [2 2 9]
 [3 3 3]
 [4 4 4]
 [5 5 5]]
>>> x = Tensor(np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3],
...                      [4, 4, 4], [5, 5, 5], [6, 6, 6]]).astype(np.int64))
>>> fill_value = 9.0
>>> wrap = True
>>> fill_diagonal = ops.FillDiagonal(fill_value, wrap)
>>> y = fill_diagonal(x)
>>> print(y)
[[9 0 0]
 [1 9 1]
 [2 2 9]
 [3 3 3]
 [9 4 4]
 [5 9 5]
 [6 6 9]]
class tinyms.primitives.FillV2[source]

Creates a tensor with the shape described by shape and fills it with the value given in value.

Inputs:
  • shape (Union[Tuple[int], Tensor[int]]) - 1-D Tensor or Tuple, specify the shape of output tensor. Its dtype must be int32 or int64.

  • value (Tensor) - A 0-D Tensor, the value used to fill the output tensor y.

Outputs:
  • y (Tensor) - A tensor, its shape and value are described above.

Raises:
  • TypeError – If shape is not a 1-D tensor or tuple.

  • TypeError – If the data type of shape is not int32 or int64.

  • ValueError – If value is not a 0-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> fillV2 = ops.FillV2()
>>> output = fillV2(Tensor([2, 3], mindspore.int32), Tensor(1, mindspore.float32))
>>> print(output)
[[1. 1. 1.]
 [1. 1. 1.]]
>>> output = fillV2(Tensor([3, 3], mindspore.int64), Tensor(0, mindspore.int32))
>>> print(output)
[[0 0 0]
 [0 0 0]
 [0 0 0]]
class tinyms.primitives.Fills[source]

The Fills primitive is deprecated. Please use mindspore.ops.fill() instead.

Supported Platforms:

Deprecated

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> a = Tensor(np.arange(4).reshape((2,2)).astype('float32'))
>>> fills = ops.Fills()
>>> output = fills(a, float(1))
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.Flatten[source]

Flattens a tensor without changing its batch size on the 0-th axis.

Refer to mindspore.ops.flatten() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> flatten = ops.Flatten()
>>> output = flatten(input_x)
>>> print(output.shape)
(1, 24)
class tinyms.primitives.FloatStatus[source]

Determines whether the elements contain Not a Number (NaN), positive infinity or negative infinity. 0 for normal, 1 for overflow.

Inputs:
  • x (Tensor) - The input tensor. The data type must be float16, float32 or float64. \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the shape of \((1,)\), and the dtype is mindspore.dtype.float32.

Raises:

TypeError – If dtype of x is not in [float16, float32, float64].

Supported Platforms:

GPU

Examples

>>> float_status = ops.FloatStatus()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = float_status(x)
>>> print(result)
[1.]
class tinyms.primitives.Floor[source]

Rounds a tensor down to the closest integer element-wise.

Refer to mindspore.ops.floor() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> floor = ops.Floor()
>>> output = floor(x)
>>> print(output)
[ 1.  2. -2.]
class tinyms.primitives.FloorDiv[source]

Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer.

Refer to mindspore.ops.floor_div() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_div = ops.FloorDiv()
>>> output = floor_div(x, y)
>>> print(output)
[ 0  1 -1]
class tinyms.primitives.FloorMod[source]

Computes the remainder of division element-wise; the division is a flooring divide.

Refer to mindspore.ops.floor_mod() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_mod = ops.FloorMod()
>>> output = floor_mod(x, y)
>>> print(output)
[2 1 2]
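
For instance, the last element above follows from the flooring definition: \(-1 \bmod 3 = -1 - 3 \cdot \lfloor -1/3 \rfloor = -1 + 3 = 2\).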
class tinyms.primitives.Fmax[source]

Computes the maximum of input tensors element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.fmax() for more details.

Supported Platforms:

CPU

Examples

>>> x1 = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> x2 = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> fmax = ops.Fmax()
>>> output = fmax(x1, x2)
>>> print(output)
[4. 5. 6.]
class tinyms.primitives.Fmin[source]

Computes the minimum of input tensors element-wise.

Refer to mindspore.ops.fmin() for more details.

Supported Platforms:

Examples

>>> x1 = Tensor(np.array([1.0, 5.0, 3.0]), mstype.float32)
>>> x2 = Tensor(np.array([4.0, 2.0, 6.0]), mstype.float32)
>>> fmin = ops.Fmin()
>>> output = fmin(x1, x2)
>>> print(output)
[1. 2. 3.]
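
The practical difference from a plain minimum is NaN handling. A small sketch, assuming the usual fmin semantics in which a NaN in one input yields the corresponding element of the other input:

>>> # Hedged sketch of the assumed NaN semantics, reusing x2 and fmin above.
>>> x3 = Tensor(np.array([1.0, float('nan'), 3.0]), mstype.float32)
>>> print(fmin(x3, x2))
[1. 2. 3.]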
class tinyms.primitives.FractionalAvgPool(pooling_ratio, pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0)[source]

Performs fractional avg pooling on the input.

Fractional avg pooling is similar to regular avg pooling, but with the added flexibility of allowing the overall reduction ratio N to be a non-integer value. In regular avg pooling, an input set is reduced in size by taking the average value of N x N (usually 2x2) subsections of the set, with the goal of reducing the set by a factor of N, where N is an integer.
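
For example, a pooling ratio of 1.5 on the row dimension reduces a 4-row input to roughly \(\lfloor 4/1.5 \rfloor = 2\) rows, as in the example at the end of this section.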

Warning

“pooling_ratio” currently only supports the row and col dimensions, and its values should be >= 1.0. The first and last elements must be 1.0, because pooling on the batch and channels dimensions is not allowed.

Parameters:
  • pooling_ratio (list(float)) – Decides the shape of the output. It is a list of floats with length >= 4. The pooling ratio of each dimension must be >= 1.0, and currently only the row and col dimensions are supported. The first and last elements must be 1.0, because pooling on the batch and channels dimensions is not allowed.

  • pseudo_random (bool, optional) – Generate the pooling sequence either randomly or pseudo-randomly. If the pseudo_random parameter is set to True, the sequence will be generated in a pseudo-random fashion, otherwise it will be generated randomly. Refer to Fractional Max-Pooling by Benjamin Graham to understand the distinction between the two. Default: False.

  • overlapping (bool, optional) – When set to True, the values at the boundary of adjacent pooling cells will be shared by both cells during pooling process. When set to False, the values are not reused. Default: False.

  • deterministic (bool, optional) – If deterministic is set to True, a fixed pooling region will be used in the computation graph, ensuring that the FractionalAvgPool is deterministic. This is often used in unit tests. When set to False, fixed pool regions will not be used. Default: False.

  • seed (int, optional) – If either seed or seed2 are set to a non-zero value, the random number generator will be seeded using the specified seed. If neither seed nor seed2 are set, the generator will be seeded by a random seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

Inputs:
  • x (Tensor) -The data type must be one of the following types: float32, float64, int32, int64. Tensor of shape \((N, H_{in}, W_{in}, C_{in})\).

Outputs:
  • y (Tensor) - A tensor, the output of FractionalAvgPool, has the same data type with x. Tensor of shape \((N, H_{out}, W_{out}, C_{out})\).

  • row_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary rows.

  • col_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary cols.

Raises:
  • TypeError – If data type of x is not float32, float64, int32, int64.

  • TypeError – If x is not a 4D tensor.

  • ValueError – If any element of x equals 0 or is less than 0.

  • ValueError – If pooling_ratio is a list whose length is not equal to 4.

  • ValueError – If the first and last element of pooling_ratio is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]).reshape([1,4,4,1]).astype(np.int64)
>>> pooling_ratio=[1.0,1.5,1.5,1.0]
>>> fractionalavgpool_op = ops.FractionalAvgPool(pooling_ratio=pooling_ratio)
>>> output = fractionalavgpool_op(Tensor(x))
>>> print(output)
(Tensor(shape=[1, 2, 2, 1], dtype=Int64, value=
[[[[ 3],
   [ 5]],
  [[11],
   [13]]]]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]))
class tinyms.primitives.FractionalMaxPool(pooling_ratio, pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0)[source]

Performs fractional max pooling on the input.

Fractional max pooling is similar to regular max pooling, but with the added flexibility of allowing the overall reduction ratio N to be a non-integer value. In regular max pooling, an input set is reduced in size by taking the maximum value of N x N (usually 2x2) subsections of the set, with the goal of reducing the set by a factor of N, where N is an integer.

In contrast, fractional max pooling uses randomly generated pool sizes that are fairly uniform in size.

Warning

“pooling_ratio” currently only supports the row and col dimensions, and its values should be >= 1.0. The first and last elements must be 1.0, because pooling on the batch and channels dimensions is not allowed.

Parameters:
  • pooling_ratio (list(float)) – Decides the shape of the output. It is a list of floats with length >= 4. The pooling ratio of each dimension must be >= 1.0, and currently only the row and col dimensions are supported.

  • pseudo_random (bool, optional) –

    Generate the pooling sequence either randomly or pseudo-randomly. If the pseudo_random parameter is set to True, the sequence will be generated in a pseudo-random fashion, otherwise it will be generated randomly. Refer to Fractional Max-Pooling by Benjamin Graham to understand the distinction between the two. Default: False.

  • overlapping (bool, optional) – When set to True, the values at the boundary of adjacent pooling cells will be shared by both cells during pooling process. When set to False, the values are not reused. Default: False.

  • deterministic (bool, optional) – If deterministic is set to True, a fixed pooling region will be used in the computation graph, ensuring that the FractionalMaxPool is deterministic. This is often used in unit tests. When set to False, fixed pool regions will not be used. Default: False.

  • seed (int, optional) – If either seed or seed2 are set to a non-zero value, the random number generator will be seeded using the specified seed. If neither seed nor seed2 are set, the generator will be seeded by a random seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

Inputs:
  • x (Tensor) -The data type must be one of the following types: float32, float64, int32, int64. Tensor of shape \((N, H_{in}, W_{in}, C_{in})\).

Outputs:
  • y (Tensor) - the output of FractionalMaxPool, has the same data type with x. Tensor of shape \((N, H_{out}, W_{out}, C_{out})\).

  • row_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary rows.

  • col_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary cols.

Raises:
  • TypeError – If data type of x is not float32, float64, int32, int64.

  • TypeError – If x is not a 4D tensor.

  • ValueError – If any element of x equals 0 or is less than 0.

  • ValueError – If pooling_ratio is a list whose length is not equal to 4.

  • ValueError – If the first and last element of pooling_ratio is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]).reshape([1,4,4,1]).astype(np.int64)
>>> pooling_ratio=[1.0,1.5,1.5,1.0]
>>> fractionalmaxpool_op = ops.FractionalMaxPool(pooling_ratio=pooling_ratio)
>>> output = fractionalmaxpool_op(Tensor(x))
>>> print(output)
(Tensor(shape=[1, 2, 2, 1], dtype=Int64, value=
[[[[ 6],
   [ 8]],
  [[14],
   [16]]]]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]))
class tinyms.primitives.FractionalMaxPool3DWithFixedKsize(ksize, output_shape, data_format='NCDHW')[source]

Applies a 3D fractional max pooling to an input signal composed of multiple input planes. The max-pooling operation is applied in \((kD, kH, kW)\) regions by a stochastic step size determined by the target output size output_shape.

The number of output features is equal to the number of input planes.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

The input and output data format can be “NCDHW” and “NDHWC”. N is the batch size, C is the number of channels, D the feature depth, H is the feature height, and W is the feature width.

Parameters:
  • ksize (Union[float, tuple]) – Size of the pooling window. ksize can be a tuple of three values specifying a shape \((k_D, k_H, k_W)\), or a single int K for \((K, K, K)\).

  • output_shape (Union[int, tuple]) – The target output shape. output_shape can be a tuple of three values specifying a shape \((D_{out}, H_{out}, W_{out})\), or a single int S for \((S, S, S)\).

  • data_format (str, optional) – The optional value for data format. Currently supports ‘NCDHW’ and ‘NDHWC’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - The input of FractionalMaxPool3DWithFixedKsize, which is a 4D or 5D tensor. Tensor of data type: float16, float32, double, int32 or int64. Supported shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((N, D_{in}, H_{in}, W_{in}, C)\).

  • random_samples (Tensor) - The random step of FractionalMaxPool3DWithFixedKsize, which is a 3D tensor. Tensor of data type: float16, float32 or double, with values in the range (0, 1). Supported shape \((N, C, 3)\).

Outputs:
  • y (Tensor) - A tensor, the output of FractionalMaxPool3DWithFixedKsize. Has the same data type with x. Tensor of shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((N, D_{out}, H_{out}, W_{out}, C)\).

  • argmax (Tensor) - A tensor, the indices along with the outputs. Has the same shape as the y and int32 or int64 data type.

Raises:
  • TypeError – If input_x is not a 4D or 5D tensor.

  • TypeError – If random_samples is not a 3D tensor.

  • TypeError – If data type of x is not float16, float32, double, int32, int64.

  • TypeError – If dtype of random_samples is not float16, float32, double.

  • TypeError – If dtype of argmax is not int32, int64.

  • ValueError – If output_shape is a tuple and if output_shape length is not 3.

  • ValueError – If ksize is a tuple and if ksize length is not 3.

  • ValueError – If numbers in output_shape or ksize is not positive.

  • ValueError – If data_format is neither ‘NCDHW’ nor ‘NDHWC’.

  • ValueError – If the first dimension size of input_x and random_samples is not equal.

  • ValueError – If the second dimension size of input_x and random_samples is not equal.

  • ValueError – If the third dimension size of random_samples is not 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...       .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> random_samples = Tensor(np.array([0.7, 0.7, 0.7]).reshape([1, 1, 3]), mstype.float32)
>>> ksize = (1, 1, 1)
>>> output_shape = (1, 1, 2)
>>> net = ops.FractionalMaxPool3DWithFixedKsize(ksize = ksize, output_shape = output_shape)
>>> output, argmax = net(x, random_samples)
>>> print(output)
[[[[[13. 16.]]]]]
>>> print(argmax)
[[[[[12 15]]]]]
class tinyms.primitives.FractionalMaxPoolWithFixedKsize(ksize, output_shape, data_format='NCHW')[source]

Applies a 2D fractional max pooling to an input signal composed of multiple input planes. The max-pooling operation is applied in \((kH, kW)\) regions by a stochastic step size determined by the target output size output_shape.

The number of output features is equal to the number of input planes.

Fractional MaxPooling is described in the paper Fractional Max-Pooling.

Parameters:
  • ksize (Union[int, tuple[int]]) – Size of the pooling window. ksize can be a tuple of two values specifying a shape \((k_H, k_W)\), or a single int K for \((K, K)\).

  • output_shape (Union[int, tuple[int]]) – The target output shape. output_shape can be a tuple of two values specifying a shape \((H_{out}, W_{out})\), or a single int S for \((S, S)\).

  • data_format (str, optional) – The optional value for data format. Currently only ‘NCHW’ is supported. Default: “NCHW”.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, C, H_{in}, W_{in})\), with float16, float32, float64, int32, int64 data type.

  • random_samples (Tensor) - Tensor of shape \((N, C, 2)\), with float16, float32 or float64 data type.

Outputs:
  • y (Tensor) - Has the same type as the input_x. Has the shape \((N, C, H_{out}, W_{out})\).

  • argmax (Tensor) - A tensor whose data type must be int64. Has the same shape as y.

Raises:
  • TypeError – If data type of input_x is not one of the following: float16, float32, float64, int32, int64.

  • TypeError – If data type of random_samples is not one of the following: float16, float32, float64.

  • ValueError – If ksize is not a number and ksize is not a tuple of length 2.

  • ValueError – If output_shape is not a number and output_shape is not a tuple of length 2.

  • ValueError – If the sum of ksize and output_shape, minus 1, is larger than the corresponding dimension of input_x.

  • ValueError – If the dimension of random_samples is not 3.

  • ValueError – If the first dimension size of input_x and random_samples is not equal.

  • ValueError – If the second dimension size of input_x and random_samples is not equal.

  • ValueError – If the third dimension size of random_samples is not 2.

Supported Platforms:

CPU

Examples

>>> # the ksize is an int number and the output_shape is a tuple.
>>> ksize = 2
>>> output_shape = (2,2)
>>> data_format = "NCHW"
>>> input_x = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.9581, 0.4760, 0.9014, 0.8522, 0.3664,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([1, 1, 5, 5]), mstype.float32)
>>> random_samples = Tensor(np.array([[[0.8, 0.8]]]), mstype.float32)
>>> net = ops.FractionalMaxPoolWithFixedKsize(ksize, output_shape, data_format)
>>> y, argmax = net(input_x, random_samples)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
class tinyms.primitives.FusedAdaFactor(enable_scale_parameter=False, enable_first_moment=False, enable_weight_decay=False)[source]

Updates gradients by the Adaptive Learning Rates with Sublinear Memory Cost (Adafactor) algorithm.

The Adafactor algorithm is proposed in Adafactor: Adaptive Learning Rates with Sublinear Memory Cost.

Warning

This is an experimental API that is subject to change or deletion.

Adafactor for weight vector are as follows,

\[\begin{split}\begin{array}{l} \alpha_{t}=\max \left(\epsilon_{2}, \operatorname{RMS}\left(X_{t-1}\right)\right) \rho_{t} \\ G_{t}=\nabla f_{t}\left(X_{t-1}\right) \\ \hat{V}_{t}=\hat{\beta}_{2 t} \hat{V}_{t-1}+\left(1-\hat{\beta}_{2 t}\right)\left(G_{t}^{2}+\epsilon_{1} 1_{n}\right) \\ U_{t}=G_{t} / \sqrt{\hat{V}_{t}} \\ \hat{U}_{t}=U_{t} / \max \left(1, \operatorname{RMS}\left(U_{t}\right) / d\right) \\ X_{t}=X_{t-1}-\alpha_{t} \hat{U}_{t} \end{array}\end{split}\]

Adafactor for weight matrices are as follows,

\[\begin{split}\begin{array}{l} \alpha_{t}=\max \left(\epsilon_{2}, \operatorname{RMS}\left(X_{t-1}\right)\right) \rho_{t} \\ G_{t}=\nabla f_{t}\left(X_{t-1}\right) \\ R_{t}=\hat{\beta}_{2 t} R_{t-1}+\left(1-\hat{\beta}_{2 t}\right)\left(G_{t}^{2}+\epsilon_{1} 1_{n} 1_{m}^{\top}\right) 1_{m} \\ C_{t}=\hat{\beta}_{2 t} C_{t-1}+\left(1-\hat{\beta}_{2 t}\right) 1_{n}^{\top}\left(G_{t}^{2}+\epsilon_{1} 1_{n} 1_{m}^{\top}\right) \\ \hat{V}_{t}=R_{t} C_{t} / 1_{n}^{\top} R_{t} \\ U_{t}=G_{t} / \sqrt{\hat{V}_{t}} \\ \hat{U}_{t}=U_{t} / \max \left(1, \operatorname{RMS}\left(U_{t}\right) / d\right) \\ X_{t}=X_{t-1}-\alpha_{t} U_{t} \end{array}\end{split}\]

Where RMS is:

\[\operatorname{RMS}\left(U_{t}\right)=\operatorname{RMS}_{x \in X}\left(u_{x t}\right)=\sqrt{\operatorname{Mean}_{x \in X}\left(\frac{\left(g_{x t}\right)^{2}}{\hat{v}_{x t}}\right)}\]

\(x\) is each individual parameter, \(t\) is the current step number, \(\alpha_{t}\) is the learning rate, \(f(X)\) is the loss function, \(\epsilon_{1}\) and \(\epsilon_{2}\) are small positive numbers that prevent numerical errors, \(d\) is the clipping threshold, \(\beta_{2}\) is the moment decay, \(\rho\) is the relative step size, \(R\) is the running average of the row sums of the squared gradient, and \(C\) is the running average of the column sums of the squared gradient.

Parameters:
  • enable_weight_decay (bool) – If True, enable weight decay. Default: False.

  • enable_first_moment (bool) – If True, enable the first moment. Default: False.

  • enable_scale_parameter (bool) – If True, enable scaling the learning rate using the parameter. Default: False.

Inputs:
  • epsilon (Tensor) - The pair of regularization constants \((\epsilon_{1}, \epsilon_{2})\) used in the updating formulas.

  • clip_threshold (float) - The threshold of root mean square of final gradient update.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations.

  • weight_decay (float) - The weight decay value, must be a scalar tensor with float data type.

  • learning_rate (float) - The learning rate value.

  • gradient (Tensor) - Gradient.

  • param (Tensor) - Weights to be updated.

  • exp_avg (Tensor) - The exponential moving average of 1st moment optimizer state.

  • exp_avg_sq_row (Tensor) - The exponential moving average of square of gradient square row factor.

  • exp_avg_sq_col (Tensor) - The exponential moving average of square of gradient square col factor.

  • exp_avg_sq (Tensor) - The exponential moving average of square of gradient square.

Outputs:
  • dummy_param (Tensor) - The same shape and data type as param.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> param_shape = [2, 3, 2]
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.opt = ops.FusedAdaFactor()
...         self.param = Parameter(Tensor(np.ones(param_shape), mstype.float32), name="param")
...         self.exp_avg = Parameter(Tensor(np.zeros(param_shape), mstype.float32), name="exp_avg")
...         self.exp_avg_sq = Parameter(Tensor(np.zeros(param_shape), mstype.float32), name="exp_avg_sq")
...         self.exp_avg_sq_row = Parameter(Tensor(np.zeros([2, 3]), mstype.float32), name="exp_avg_sq_row")
...         self.exp_avg_sq_col = Parameter(Tensor(np.zeros([2, 2]), mstype.float32), name="exp_avg_sq_col")
...
...     def construct(self, epsilon, clip_threshold, beta1, beta2, weight_decay, lr, grad):
...         out = self.opt(epsilon, clip_threshold, beta1, beta2, weight_decay, lr, grad, self.param,
...                        self.exp_avg, self.exp_avg_sq_row, self.exp_avg_sq_col, self.exp_avg_sq)
...         return out
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> gradient = Tensor(np.ones(param_shape), mstype.float32)
>>> output = net((1e-30, 1e-3), 1.0, 0.9, 0.8, 1e-2, 0.03, gradient)
class tinyms.primitives.FusedAdaFactorWithGlobalNorm(enable_scale_parameter=False, enable_first_moment=False, enable_weight_decay=False)[source]

Divides the gradient by the global norm before applying the FusedAdaFactor update. Refer to the super class FusedAdaFactor for details.
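
A minimal construction sketch, assuming this operator takes the same initialization arguments as FusedAdaFactor (the Net in the FusedAdaFactor example above would simply swap in this optimizer); the division of the gradient by a global norm is inferred from the class name:

>>> # Hedged sketch: construction mirrors FusedAdaFactor.
>>> opt = ops.FusedAdaFactorWithGlobalNorm(enable_scale_parameter=False)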

class tinyms.primitives.FusedCastAdamWeightDecay(use_locking=False)[source]

Updates gradients by the Adaptive Moment Estimation (AdamWeightDecay) algorithm with weight decay. This operator incorporates type conversion when parameters are initialized with dtype of float16.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. The AdamWeightDecay variant was proposed in Decoupled Weight Decay Regularization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ update = \frac{m}{\sqrt{v} + \epsilon} \\ update = \begin{cases} update + weight\_decay * w & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w = w - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(\beta_1, \beta_2\) represent beta1 and beta2, \(lr\) represents learning_rate, \(w\) represents var, \(decay\) represents weight_decay, \(\epsilon\) represents epsilon.
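
As a worked check against the example below (lr = 0.001, beta1 = 0.9, beta2 = 0.999, decay = 0, with var, m, v and the gradient all ones): \(m = 0.9 \cdot 1 + 0.1 \cdot 1 = 1\), \(v = 1\), \(update = 1/(\sqrt{1} + 10^{-8}) \approx 1\), so each weight becomes \(1 - 0.001 \cdot 1 = 0.999\).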

Parameters:

use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated with the type float16 or float32.

  • m (Tensor) - The 1st moment vector in the updating formula with the type float32.

  • v (Tensor) - The 2nd moment vector in the updating formula with the type float32.

  • lr (float) - \(lr\) in the updating formula.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations.

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • decay (float) - The weight decay value, must be a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the type float16.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.opt = ops.FusedCastAdamWeightDecay()
...         self.var = Parameter(Tensor(np.ones([2, 2]), mstype.float16), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]), mstype.float32), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]), mstype.float32), name="v")
...     def construct(self, lr, beta1, beta2, epsilon, decay, grad, norm):
...         out = self.opt(self.var, self.m, self.v, lr, beta1, beta2, epsilon, decay, grad, norm)
...         return out
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]), mstype.float16)
>>> output = net(0.001, 0.9, 0.999, 1e-8, 0.0, gradient, 1.0)
infer_dtype(var_dtype, m_dtype, v_dtype, lr_dtype, beta1_dtype, beta2_dtype, epsilon_dtype, decay_dtype, grad_dtype, global_norm)[source]

Infers the output data types.

class tinyms.primitives.FusedSparseAdam(use_locking=False, use_nesterov=False)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. This operator is used when the gradient is sparse.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(\beta_1^t\) and \(\beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.
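
The "merges the duplicate value" step can be pictured with a small NumPy sketch (an illustration of the idea only, not the operator itself): rows of the sparse gradient that share an index are accumulated before the dense update is applied to those rows.

>>> # Illustration only: gradient rows sharing an index are summed.
>>> import numpy as np
>>> indices = np.array([0, 0, 1])
>>> grads = np.array([[1.0], [2.0], [3.0]])
>>> merged = np.zeros((2, 1))
>>> np.add.at(merged, indices, grads)  # duplicate index 0 accumulates
>>> print(merged)
[[3.]
 [3.]]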

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and data type as var.

  • v (Parameter) - The 2nd moment vector in the updating formula, has the same shape and data type as var. Mean square gradients, has the same type as var with float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • lr (Tensor) - \(l\) in the updating formula. With float32 data type. The shape is \((1, )\).

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type. The shape is \((1, )\).

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type. The shape is \((1, )\).

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type. The shape is \((1, )\).

  • gradient (Tensor) - Gradient, has the same data type as var, and gradient.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - Gradient indices with int32 data type and indices.shape[0] = gradient.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • m (Tensor) - A Tensor with shape \((1, )\).

  • v (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If dtype of var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient or indices is not float32.

  • RuntimeError – If the data type of all inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adam = ops.FusedSparseAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                                      epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> beta2_power = Tensor(0.999, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.999, mindspore.float32)
>>> epsilon = Tensor(1e-8, mindspore.float32)
>>> gradient = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]), mindspore.float32)
>>> indices = Tensor([0, 1], mindspore.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(net.var.asnumpy())
[[[0.9997121  0.9997121 ]]
 [[0.9997121  0.9997121 ]]
 [[0.99971527 0.99971527]]]
class tinyms.primitives.FusedSparseFtrl(lr, l1, l2, lr_power, use_locking=False)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the FTRL-proximal scheme.

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – Use locks for updating operation if true . Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be same type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, and grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 3 Tensor, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • accum (Tensor) - A Tensor with shape \((1, )\).

  • linear (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If lr, l1, l2 or lr_power is not a float.

  • ValueError – If lr_power is greater than zero.

  • TypeError – If dtype of var is not float32.

  • TypeError – If dtype of indices is not int32.

  • TypeError – If the shape of accum, linear or grad is not the same as that of var.

  • TypeError – If the shape of indices is not the same as the shape of the first dimension of grad.

  • RuntimeError – If the data type of all of inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.FusedSparseFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(net.var.asnumpy())
[[[-0.00598256 -0.00598256]]
 [[-0.00598256 -0.00598256]]
 [[ 1.          1.        ]]]
class tinyms.primitives.FusedSparseLazyAdam(use_locking=False, use_nesterov=False)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the original Adam algorithm, as only the parameter rows referenced by the current indices are updated; this can be seen in the example below, where the row not referenced by indices keeps its original value.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(\beta_1^t\) and \(\beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and data type as var.

  • v (Parameter) - The 2nd moment vector in the updating formula, has the same shape and data type as var. Mean square gradients, has the same type as var with float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • lr (Tensor) - \(l\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type. The shape is \((1, )\).

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type. The shape is \((1, )\).

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type. The shape is \((1, )\).

  • gradient (Tensor) - Gradient value with float32 data type, and gradient.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - Gradient indices with int32 data type and indices.shape[0] = gradient.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • m (Tensor) - A Tensor with shape \((1, )\).

  • v (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If dtype of var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not float32.

  • TypeError – If dtype of indices is not int32.

  • RuntimeError – If the data type of all inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_lazyadam = ops.FusedSparseLazyAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_lazyadam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1,
...                                          beta2, epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> beta2_power = Tensor(0.999, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.999, mindspore.float32)
>>> epsilon = Tensor(1e-8, mindspore.float32)
>>> gradient = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]), mindspore.float32)
>>> indices = Tensor([0, 1], mindspore.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(net.var.asnumpy())
[[[0.9997121  0.9997121 ]]
 [[0.9997121  0.9997121 ]]
 [[1.         1.        ]]]
class tinyms.primitives.FusedSparseProximalAdagrad(use_locking=False)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the proximal adagrad algorithm.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]
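
As a worked instance of these formulas with the values from the example below (var = 1, accum = 1, lr = 0.01, l1 = l2 = 0, grad = 0.1): accum becomes \(1 + 0.1^2 = 1.01\) and var becomes \(1 - 0.01 \cdot 0.1 / \sqrt{1.01} \approx 0.999005\), matching the printed result.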

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If true, the variable and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Tensor) - The learning rate value. The data type must be float32. The shape is \((1, )\).

  • l1 (Tensor) - l1 regularization strength. The data type must be float32. The shape is \((1, )\).

  • l2 (Tensor) - l2 regularization strength. The data type must be float32. The shape is \((1, )\).

  • grad (Tensor) - A tensor of the same data type as var, and grad.shape[1:] = var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 2 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • accum (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, lr, l1, l2 or grad is not float32.

  • TypeError – If dtype of indices is not int32.

  • RuntimeError – If the data type of all inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.FusedSparseProximalAdagrad()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="accum")
...         self.lr = Tensor(0.01, mindspore.float32)
...         self.l1 = Tensor(0.0, mindspore.float32)
...         self.l2 = Tensor(0.0, mindspore.float32)
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(net.var.asnumpy())
[[[0.99900496 0.99900496]]
 [[0.99900496 0.99900496]]
 [[1.         1.        ]]]
class tinyms.primitives.FusedWeightScaleApplyMomentum[source]

Optimizer that implements the Momentum algorithm with weight decay and loss scale.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. Data type conversion of Parameter is not supported; a RuntimeError will be thrown.

Inputs:
  • weight_decay (Tensor) - The weight decay value, must be a scalar tensor with float data type. Default: 0.0.

  • loss_scale (Tensor) - The loss scale value, must be a scalar tensor with float data type. Default: 1.0.

  • variable (Parameter) - Weights to be updated. data type must be float.

  • accumulation (Parameter) - Accumulated gradient value by moment weight. Has the same data type with variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float number or a scalar tensor with float data type.

Outputs:

Tensor, parameters to be updated.

Supported Platforms:

GPU

Examples

Please refer to the usage in mindspore.nn.Momentum, and add weight_decay and loss_scale as inputs.
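
A minimal direct-call sketch, assuming the input order listed above (weight_decay, loss_scale, variable, accumulation, learning_rate, gradient, momentum) and a GPU context:

>>> # Hedged sketch; the argument order follows the Inputs list above.
>>> op = ops.FusedWeightScaleApplyMomentum()
>>> variable = Parameter(Tensor(np.ones([2, 2]), mindspore.float32), name="variable")
>>> accumulation = Parameter(Tensor(np.zeros([2, 2]), mindspore.float32), name="accumulation")
>>> weight_decay = Tensor(0.0001, mindspore.float32)
>>> loss_scale = Tensor(1.0, mindspore.float32)
>>> gradient = Tensor(np.ones([2, 2]), mindspore.float32)
>>> output = op(weight_decay, loss_scale, variable, accumulation, 0.01, gradient, 0.9)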

infer_dtype(d_dtype, s_dtype, v_dtype, a_dtype, l_dtype, g_dtype, m_dtype)[source]

Infers the output data types.

class tinyms.primitives.GLU(axis=-1)[source]

Computes GLU (Gated Linear Unit activation function) of input tensors.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.glu() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> from mindspore import ops, Tensor
>>> from mindspore import dtype as mstype
>>> import numpy as np
>>> axis = 0
>>> x = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([2, 2, 5]), mstype.float32)
>>> glu = ops.GLU(axis=axis)
>>> y = glu(x)
>>> print(y)
[[[0.20028052 0.6916126  0.57412136 0.06512236 0.26307625]
  [0.3682598  0.3093122  0.17306386 0.10212085 0.63814086]]]
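
GLU splits the input into two halves \(a\) and \(b\) along axis and computes \(a \otimes \sigma(b)\); as a check, the first output element above is \(0.3220 \times \sigma(0.4980) \approx 0.2003\).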
class tinyms.primitives.Gamma(seed=0, seed2=0)[source]

Produces random positive floating-point values x, distributed according to probability density function:

\[\text{P}(x|\alpha,\beta) = \frac{\exp(-x/\beta)}{\beta^{\alpha}\cdot\Gamma(\alpha)}\cdot x^{\alpha-1}\]

Note

  • Random seed: a sequence of random numbers is generated by a deterministic algorithm from an initial value, the random seed. If the random seed is the same, the random numbers obtained will not change.

  • Global random seed and operator-level random seed are not set: Use the default value as the random seed.

  • Global random seed is set, but operator-level random seed is not set: A global random seed will splice with a randomly generated seed.

  • Global random seed is not set, operator-level random seed is set: The default global random seed is used, and splices with the operator-level random seed.

  • Both Global random and operator-level random seed are set: The global random seed will splice with the operator-level random seed.

Parameters:
  • seed (int) – The operator-level random seed, used to generate random numbers, must be non-negative. Default: 0.

  • seed2 (int) – The global random seed, which will combine with the operator-level random seed to determine the final generated random number, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • alpha (Tensor) - α is the shape parameter of Gamma distribution, which mainly determines the shape of the curve. It must be greater than 0. The data type is float32.

  • beta (Tensor) - β is the inverse scale parameter of the Gamma distribution, which mainly determines how steep the curve is. It must be greater than 0. The data type is float32.

Outputs:

Tensor. Its shape is the broadcast of the input shape and the shapes of alpha and beta. The dtype is float32.

Raises:
  • TypeError – If data type of seed or seed2 is not int.

  • TypeError – If alpha or beta is not a Tensor.

  • TypeError – If data type of alpha or beta is not float32.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend

Examples

>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> beta = Tensor(np.array([1.0]), mstype.float32)
>>> gamma = ops.Gamma(seed=3)
>>> output = gamma(shape, alpha, beta)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
class tinyms.primitives.Gather(batch_dims=0)[source]

Returns the slice of the input tensor corresponding to the elements of input_indices on the specified axis.

In the typical calculation process of Gather, params represents the input input_params, and indices represents the index to be sliced input_indices.

Refer to mindspore.ops.gather() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: input_indices is a Tensor with shape (5, ).
>>> input_params = Tensor(np.array([1, 2, 3, 4, 5, 6, 7]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 4, 2, 6]), mindspore.int32)
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[1. 3. 5. 3. 7.]
>>> # case2: input_indices is a Tensor with shape (2, 2). When the input_params has one dimension,
>>> # the output shape is equal to the input_indices shape.
>>> input_indices = Tensor(np.array([[0, 2], [2, 6]]), mindspore.int32)
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[ 1. 3.]
 [ 3. 7.]]
>>> # case3: input_indices is a Tensor with shape (2, ). input_params is a Tensor with shape (3, 4) and axis is 0.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[1.  2.  3.  4.]
 [9. 10. 11. 12.]]
>>> # case4: input_indices is a Tensor with shape (3, ).
>>> # input_params is a Tensor with shape (3, 4) and axis is 1, batch_dims is 1.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 1]), mindspore.int32)
>>> axis = 1
>>> batch_dims = 1
>>> output = ops.Gather(batch_dims)(input_params, input_indices, axis)
>>> print(output)
[ 1.  7. 10.]
class tinyms.primitives.GatherD[source]

Gathers elements along an axis specified by dim.

Refer to mindspore.ops.gather_elements() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = ops.GatherD()(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
class tinyms.primitives.GatherNd[source]

Gathers slices from a tensor by indices.

Refer to mindspore.ops.gather_nd() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.GatherNd()
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> output = op(input_x, indices)
>>> print(output)
[-0.1  0.5]
class tinyms.primitives.GatherV2[source]

Same as operator Gather. GatherV2 will be deprecated in the future. Please use Gather instead.

class tinyms.primitives.Gcd[source]

Computes the greatest common divisor of the input tensors element-wise. The shapes of the two inputs should be broadcastable, and their data type should be one of: int32, int64.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The first input tensor.

  • x2 (Tensor) - The second input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is one with higher precision in the two inputs.

Raises:
  • TypeError – If the data type of x1 or x2 is not int32 or int64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([7, 8, 9]))
>>> x2 = Tensor(np.array([14, 6, 12]))
>>> gcd_ = ops.Gcd()
>>> y = gcd_(x1, x2)
>>> print(y)
[7 2 3]
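
As an independent cross-check (not part of this operator's API), NumPy's np.gcd (available since NumPy 1.15) computes the same element-wise result:

>>> import numpy as np
>>> print(np.gcd(np.array([7, 8, 9]), np.array([14, 6, 12])))
[7 2 3]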
class tinyms.primitives.GeLU[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). See also BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

GeLU is defined as follows:

\[GELU(x_i) = x_i*P(X < x_i)\]

where \(P\) is the cumulative distribution function of the standard Gaussian distribution, \(x_i\) is the input element.

Inputs:
  • x (Tensor) - The input of the activation function GeLU, the data type is float16, float32 or float64.

Outputs:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> gelu = ops.GeLU()
>>> result = gelu(x)
>>> print(result)
[0.841192  1.9545976  2.9963627]
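For intuition, the values above can be approximated from the definition \(x_i \cdot P(X < x_i)\) by expressing the Gaussian CDF with erf. This is a reference sketch only: the operator itself may use a tanh-based approximation internally, so late decimal places can differ from the example output:

>>> import math
>>> def gelu_ref(v):
...     # exact Gaussian CDF form: 0.5 * v * (1 + erf(v / sqrt(2)))
...     return 0.5 * v * (1.0 + math.erf(v / math.sqrt(2.0)))
...
>>> print([round(gelu_ref(v), 4) for v in [1.0, 2.0, 3.0]])
[0.8413, 1.9545, 2.996]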
class tinyms.primitives.GeSwitch[source]

Adds control switch to data.

Switch data flows into the false or true branch depending on the condition. If the condition is true, the true branch will be activated; otherwise, the false branch will be activated.

Inputs:
  • data (Union[Tensor, Number]) - The data to be used for switch control.

  • pred (Tensor) - It must be a scalar of type bool with shape (), used as the condition for switch control.

Outputs:

tuple. The output is tuple(false_output, true_output). The elements in the tuple have the same shape as the input data. The false_output connects with the false branch and the true_output connects with the true branch.

Raises:
  • TypeError – If data is neither a Tensor nor a Number.

  • TypeError – If pred is not a Tensor.

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.square = ops.Square()
...         self.add = ops.Add()
...         self.value = Tensor(np.full((1), 3), mindspore.float32)
...         self.switch = ops.GeSwitch()
...         self.merge = ops.Merge()
...         self.less = ops.Less()
...
...     def construct(self, x, y):
...         cond = self.less(x, y)
...         st1, sf1 = self.switch(x, cond)
...         st2, sf2 = self.switch(y, cond)
...         add_ret = self.add(st1, st2)
...         st3, sf3 = self.switch(self.value, cond)
...         sq_ret = self.square(sf3)
...         ret = self.merge((add_ret, sq_ret))
...         return ret[0]
...
>>> x = Tensor(10.0, dtype=mindspore.float32)
>>> y = Tensor(5.0, dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
class tinyms.primitives.Gelu[source]

Same as operator GeLU. Gelu will be deprecated in the future. Please use GeLU instead.

class tinyms.primitives.Geqrf[source]

Decomposes a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R. The process is called QR decomposition: \(A = QR\).

Both Q and R matrices are stored in the same output tensor y. The elements of R are stored on and above the diagonal, whereas elementary reflectors (or Householder vectors) implicitly defining matrix Q are stored below the diagonal.

This function returns two tensors (y, tau).

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - Tensor of shape \((*, m, n)\), input must be a matrix greater than or equal to 2D, with dtype of float32, float64, complex64, complex128.

Outputs:
  • y (Tensor) - Tensor of shape \((*, m, n)\), has the same dtype as the x.

  • tau (Tensor) - Tensor of shape \((*, p)\) and \(p = min(m, n)\), has the same dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If the dtype of x is not one of float32, float64, complex64 or complex128.

  • ValueError – If the dimension of x is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-2.0, -1.0], [1.0, 2.0]]).astype(np.float32))
>>> geqrf = ops.Geqrf()
>>> y, tau = geqrf(input_x)
>>> print(y)
[[ 2.236068   1.7888544]
 [-0.236068   1.3416407]]
>>> print(tau)
[1.8944271 0.       ]
class tinyms.primitives.Ger[source]

Computes the Ger product of x1 and x2, i.e., the outer product of the two arrays. If x1 is a 1D Tensor of shape \((m,)\) and x2 is a 1D Tensor of shape \((n,)\), then the output is a 2D Tensor of shape \((m, n)\).

Refer to mindspore.ops.ger() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([1., 2., 3., 4.], mindspore.float32)
>>> x2 = Tensor([1., 2., 3.], mindspore.float32)
>>> ger = ops.Ger()
>>> output = ger(x1, x2)
>>> print(output)
[[ 1.  2.  3.]
 [ 2.  4.  6.]
 [ 3.  6.  9.]
 [ 4.  8. 12.]]
class tinyms.primitives.GetNext(types, shapes, output_num, shared_name)[source]

Returns the next element in the dataset queue.

Note

The GetNext operation needs to be associated with a network, and it also depends on the ‘dataset’ interface; for example, please refer to mindspore.dataset.MnistDataset. It can’t be used directly as a single operation. For details, please refer to the mindspore.connect_network_with_dataset source code.

Parameters:
  • types (list[mindspore.dtype]) – The type of the outputs.

  • shapes (list[tuple[int]]) – The dimensionality of the outputs.

  • output_num (int) – The output number, length of types and shapes.

  • shared_name (str) – Queue name to fetch the data.

Inputs:

No inputs.

Outputs:

tuple[Tensor], the output of dataset. The shape is described in shapes and the type is described in types.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> from mindspore import dataset as ds
>>> from mindspore.common import dtype as mstype
>>> data_path = "/path/to/MNIST_Data/train/"
>>> train_dataset = ds.MnistDataset(data_path, num_samples=10)
>>> dataset_helper = mindspore.DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> dataset = dataset_helper.iter.dataset
>>> dataset_types, dataset_shapes = dataset_helper.types_shapes()
>>> queue_name = dataset.__transfer_dataset__.queue_name
>>> get_next = ops.GetNext(dataset_types, dataset_shapes, len(dataset_types), queue_name)
>>> data, label = get_next()
>>> relu = ops.ReLU()
>>> result = relu(data.astype(mstype.float32))
>>> print(result.shape)
(28, 28, 1)
class tinyms.primitives.Greater[source]

Compares the values of the input parameters \(x\) and \(y\) element-wise; the output is a Tensor of bool values.

Refer to mindspore.ops.gt() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater = ops.Greater()
>>> output = greater(x, y)
>>> print(output)
[False True False]
infer_value(x, y)[source]

Infer value for Greater.

class tinyms.primitives.GreaterEqual[source]

Computes the boolean value of \(x >= y\) element-wise.

Refer to mindspore.ops.ge() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater_equal = ops.GreaterEqual()
>>> output = greater_equal(x, y)
>>> print(output)
[True True False]
class tinyms.primitives.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=False)[source]

This operation samples the 2D input_x by using interpolation based on the flow field grid, which is usually generated by mindspore.ops.affine_grid().

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • interpolation_mode (str, optional) – An optional string specifying the interpolation method. The optional values are “bilinear” or “nearest”. Default: “bilinear”.

  • padding_mode (str, optional) –

    An optional string specifying the pad method. The optional values are “zeros”, “border” or “reflection”. Default: “zeros”. When the sampling grid is outside input’s bounds, effects of various padding modes are as follows:

    • ”zeros”: Pads the input tensor with zeros.

    • ”border”: Pads the input tensor with the values of the pixels on the border of the tensor.

    • ”reflection”: Pads the input tensor by reflecting the values of the pixels at the boundary of the tensor.

  • align_corners (bool, optional) – An optional bool. When set to True, the centers of the corner pixels of the input and output tensors are aligned; when set to False, they are not aligned. Default: False.

Inputs:
  • input_x (Tensor) - A 4-D tensor with dtype of float16 or float32 and shape of \((N, C, H_{in}, W_{in})\).

  • grid (Tensor) - A 4-D tensor whose dtype is the same as input_x and whose shape is \((N, H_{out}, W_{out}, 2)\). Used to specify the sampling pixel locations normalized by the input spatial dimensions.

Outputs:

A 4-D Tensor whose dtype is the same as input_x and whose shape is \((N, C, H_{out}, W_{out})\).

Raises:
  • TypeError – If input_x or grid is not a Tensor.

  • TypeError – If the dtypes of input_x and grid are inconsistent.

  • TypeError – If the dtype of input_x or grid is not a valid type.

  • TypeError – If align_corners is not a boolean value.

  • ValueError – If the rank of input_x or grid is not equal to 4.

  • ValueError – If the first dimension of input_x is not equal to that of grid.

  • ValueError – If the fourth dimension of grid is not equal to 2.

  • ValueError – If interpolation_mode is neither “bilinear” nor “nearest”, or is not a string.

  • ValueError – If padding_mode is not one of “zeros”, “border” or “reflection”, or is not a string.

Supported Platforms:

Ascend GPU CPU

Examples

>>> gridsampler = ops.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=True)
>>> input_x = Tensor(np.arange(16).reshape((2, 2, 2, 2)).astype(np.float32))
>>> grid = Tensor(np.arange(-9, 9, 0.5).reshape((2, 3, 3, 2)).astype(np.float32))
>>> output = gridsampler(input_x, grid)
>>> print(output)
[[[[ 0.     0.     0.   ]
   [ 0.     0.     0.   ]
   [ 0.     0.     0.5  ]]
  [[ 0.     0.     0.   ]
   [ 0.     0.     0.   ]
   [ 0.     1.5    4.5  ]]]
 [[[10.     8.25   1.375]
   [ 0.     0.     0.   ]
   [ 0.     0.     0.   ]]
  [[14.    11.25   1.875]
   [ 0.     0.     0.   ]
   [ 0.     0.     0.   ]]]]
class tinyms.primitives.GridSampler3D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=False)[source]

Given an input and a grid, the output is calculated using the input values and pixel positions in the grid. Only volume (5-D) input is supported.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.grid_sample() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> gridsampler = ops.GridSampler3D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=True)
>>> input_x = Tensor(np.arange(32).reshape((2, 2, 2, 2, 2)).astype(np.float32))
>>> grid = Tensor(np.arange(-0.2, 1, 0.1).reshape((2, 2, 1, 1, 3)).astype(np.float32))
>>> output = gridsampler(input_x, grid)
>>> print(output)
[[[[[ 3.3     ]]
   [[ 4.35    ]]]
  [[[11.300001]]
   [[12.349999]]]]
 [[[[21.4     ]]
   [[22.449999]]]
  [[[29.4     ]]
   [[30.449999]]]]]
class tinyms.primitives.HSVToRGB[source]

Transforms a single image or a batch of images from the HSV to the RGB color space. Each pixel’s HSV value is converted to its corresponding RGB value. Note that the function is only well-defined for input pixel values in the range [0, 1]. The image format should be “NHWC”.

Inputs:
  • x (Tensor) - The input image must be a 4-D tensor of shape \((batch, image\_height, image\_width, channel)\). Number of channel must be 3. Types allowed: float16, float32, float64.

Outputs:

A 4-D tensor of shape \((batch, image\_height, image\_width, channel)\) with same type of input.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If the dtype of x is not float16, float32 or float64.

  • ValueError – If rank of the x is not equal to 4.

  • ValueError – If the last dimension of x is not equal to 3.

Supported Platforms:

GPU CPU

Examples

>>> image = np.array([0.5, 0.5, 0.5]).astype(np.float32).reshape([1, 1, 1, 3])
>>> hsv_to_rgb = ops.HSVToRGB()
>>> output = hsv_to_rgb(Tensor(image))
>>> print(output)
[[[[0.25 0.5  0.5 ]]]]
class tinyms.primitives.HShrink(lambd=0.5)[source]

Hard Shrink activation function.

Refer to mindspore.ops.hardshrink() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> input_x = Tensor(np.array([[0.5,  1,  2.0], [0.0533, 0.0776, -2.1233]]), ms.float32)
>>> hshrink = ops.HShrink()
>>> output = hshrink(input_x)
>>> print(output)
[[ 0.      1.      2.    ]
[ 0.      0.     -2.1233]]
class tinyms.primitives.HSigmoid[source]

Hard sigmoid activation function.

Refer to mindspore.ops.hardsigmoid() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> hsigmoid = ops.HSigmoid()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hsigmoid(input_x)
>>> print(result)
[0.3333 0.1666 0.5    0.8335 0.6665]
class tinyms.primitives.HSwish[source]

Hard swish activation function.

Refer to mindspore.ops.hardswish() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> hswish = ops.HSwish()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hswish(input_x)
>>> print(result)
[-0.3333  -0.3333  0  1.666  0.6665]
class tinyms.primitives.HammingWindow(periodic=True, alpha=0.54, beta=0.46, dtype=mindspore.float32)[source]

Computes the hamming window function with input window length.

\[w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right),\]

where \(N\) is the full window size.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • periodic (bool, optional) –

    a flag that determines whether the returned window trims off the last duplicate value from the symmetric window. Default: True.

    • If True, returns a window to be used as a periodic function; in the above formula, \(N = \text{length} + 1\).

    • If False, returns a symmetric window, \(N = \text{length}\).

  • alpha (float, optional) – The coefficient \(\alpha\) in the equation above. Default: 0.54.

  • beta (float, optional) – The coefficient \(\beta\) in the equation above. Default: 0.46.

  • dtype (mindspore.dtype, optional) – An optional data type of mstype.float16, mstype.float32 and mstype.float64. Default: mstype.float32.

Inputs:
  • length (Tensor) - a positive integer tensor controlling the returned window size, must be 1D.

Outputs:

Tensor, a 1-D tensor containing the window, whose shape is \((\text{length},)\).

Raises:
  • TypeError – If length is not a Tensor.

  • TypeError – If dtype of length is not integer data type.

  • TypeError – If periodic is not a bool.

  • TypeError – If alpha is not a float.

  • TypeError – If beta is not a float.

  • TypeError – If dtype is not mindspore.float16, mindspore.float32 or mindspore.float64.

  • ValueError – If dimension of length is not 1.

  • ValueError – If data of length is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: periodic=True.
>>> length = Tensor(np.array([6]).astype(np.int32))
>>> hamming_window = ops.HammingWindow(periodic=True)
>>> y = hamming_window(length)
>>> print(y)
[0.08000001 0.31       0.77000004 1.         0.77000004 0.31      ]
>>> # case 2: periodic=False.
>>> length = Tensor(np.array([7]).astype(np.int32))
>>> hamming_window = ops.HammingWindow(periodic=False)
>>> y = hamming_window(length)
>>> print(y)
[0.08000001 0.31       0.77000004 1.         0.77000004 0.31       0.08000001]
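As a sanity check against the formula above, the symmetric result of case 2 can be reproduced in NumPy with \(N = \text{length} = 7\) (this sketch reuses y from case 2):

>>> import numpy as np
>>> n = np.arange(7)
>>> ref = 0.54 - 0.46 * np.cos(2 * np.pi * n / (7 - 1))
>>> print(np.allclose(ref, y.asnumpy()))
True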
class tinyms.primitives.Heaviside[source]

Applies the Heaviside step function for input x element-wise.

\[\begin{split}\text { heaviside }(\text { x, values })=\left\{\begin{array}{ll} 0, & \text { if x }<0 \\ \text { values, } & \text { if x }==0 \\ 1, & \text { if x }>0 \end{array}\right.\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. With real number data type.

  • values (Tensor) - The values to use where x is zero. It should be broadcastable with x and have the same dtype as x.

Outputs:

Tensor, has the same type as x and values.

Raises:
  • TypeError – If x or values is not Tensor.

  • TypeError – If the data types of x and values are different.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1.5, 0., 2.]))
>>> values = Tensor(np.array([0.5]))
>>> heaviside = ops.Heaviside()
>>> y = heaviside(x, values)
>>> print(y)
[0.  0.5 1. ]
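
The same piecewise definition is implemented by NumPy's np.heaviside, where the second argument supplies the value used at zero; a quick cross-check of the example:

>>> import numpy as np
>>> print(np.heaviside(np.array([-1.5, 0., 2.]), 0.5))
[0.  0.5 1. ]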
class tinyms.primitives.Histogram(bins=100, min=0.0, max=0.0)[source]

Computes the histogram of Tensor element distribution.

The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

Elements lower than min and higher than max are ignored.

Parameters:
  • bins (int, optional) – Number of histogram bins. If specified, must be positive. Default: 100.

  • min (float, optional) – An optional float of the lower end of the range (inclusive). Default value is 0.0.

  • max (float, optional) – An optional float of the upper end of the range (inclusive). Default value is 0.0.

Inputs:
  • x (Tensor) - the input tensor, type support list: [float16, float32, int32].

Outputs:

Tensor, 1-D Tensor with type int32.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([1., 2, 1])
>>> op = ops.Histogram(bins=4, min=0.0, max=3.0)
>>> y = op(x)
>>> print(y)
[0 2 1 0]
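
Because out-of-range elements are ignored, the example can be cross-checked with np.histogram over the same closed range, which splits [0, 3] into four equal-width bins:

>>> import numpy as np
>>> hist, _ = np.histogram(np.array([1., 2., 1.]), bins=4, range=(0.0, 3.0))
>>> print(hist)
[0 2 1 0]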
class tinyms.primitives.HistogramFixedWidth(nbins, dtype='int32')[source]

Returns a rank 1 histogram counting the number of entries in values that fall into every bin. The bins are equal width and determined by the inputs range and the arguments nbins.

Parameters:
  • nbins (int) – The number of histogram bins, the type is a positive integer.

  • dtype (str, optional) – An optional attribute. The dtype must be str. Default: “int32”.

Inputs:
  • x (Tensor) - Numeric Tensor. Must be one of the following types: int32, float32, float16.

  • range (Tensor) - Must have the same data type as x, and the shape is \((2,)\). x <= range[0] will be mapped to histogram[0], x >= range[1] will be mapped to histogram[-1].

Outputs:

1-D Tensor, whose length is nbins, with dtype int32.

Raises:
  • TypeError – If dtype is not a str or nbins is not an int.

  • ValueError – If nbins is less than 1.

  • ValueError – If dtype is not ‘int32’.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor([-1.0, 0.0, 1.5, 2.0, 5.0, 15], mindspore.float16)
>>> range_op = Tensor([0.0, 5.0], mindspore.float16)
>>> hist = ops.HistogramFixedWidth(5)
>>> output = hist(x, range_op)
>>> print(output)
[2 1 1 0 2]
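
Since values below range[0] map to the first bin and values above range[1] map to the last bin, the example can be reproduced in NumPy by clipping to the range edges before binning. This is a sketch for intuition, not the operator's implementation:

>>> import numpy as np
>>> vals = np.clip(np.array([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0]), 0.0, 5.0)
>>> hist, _ = np.histogram(vals, bins=5, range=(0.0, 5.0))
>>> print(hist)
[2 1 1 0 2]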
class tinyms.primitives.HistogramSummary[source]

This operator will calculate the histogram of a tensor and put it to a summary file with protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight, see MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_context
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.HistogramSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
>>> set_context(mode=mindspore.GRAPH_MODE)
>>> summary = SummaryDemo()(Tensor([1, 2]), Tensor([3, 4]))
>>> print(summary)
[4 6]
class tinyms.primitives.HookBackward(hook_fn, cell_id='')[source]

This operation is used as a tag to hook gradient in intermediate variables. Note that this function is only supported in pynative mode.

Note

The hook function must be defined like hook_fn(grad) -> new gradient or None, where ‘grad’ is the gradient passed to the primitive. The ‘grad’ may be modified by returning a new gradient, which is then passed to the next primitive. The difference between a hook function and the callback of InsertGradientOf is that the hook function is executed in the Python environment, while the callback will be parsed and added to the graph.

Parameters:
  • hook_fn (Function) – Python function. hook function.

  • cell_id (str, optional) – Used to identify whether the function registered by the hook is actually registered on the specified cell object. For example, ‘nn.Conv2d’ is a cell object. The default value of cell_id is the empty string (“”), in which case the system will automatically register a value of cell_id. The value of cell_id currently does not support custom values.

Inputs:
  • input (Tensor) - The variable to hook.

Outputs:
  • output (Tensor) - Returns input directly. HookBackward does not affect the forward result.

Raises:
  • TypeError – If input is not a tensor.

  • TypeError – If hook_fn is not a function of python.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> from mindspore import Tensor
>>> from mindspore.ops import GradOperation
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> def hook_fn(grad):
...     print(grad)
...
>>> hook = ops.HookBackward(hook_fn)
>>> def hook_test(x, y):
...     z = x * y
...     z = hook(z)
...     z = z * y
...     return z
...
>>> grad_all = GradOperation(get_all=True)
>>> def backward(x, y):
...     return grad_all(hook_test)(x, y)
...
>>> output = backward(Tensor(1, ms.float32), Tensor(2, ms.float32))
(Tensor(shape=[], dtype=Float32, value= 2),)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4), Tensor(shape=[], dtype=Float32, value= 4))
class tinyms.primitives.Hypot[source]

Computes the hypotenuse of the input tensors element-wise, treating them as the legs of a right triangle. The shapes of the two inputs should be broadcastable, and their data type should be one of: float32, float64.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The first input tensor.

  • x2 (Tensor) - The second input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is one with higher precision in the two inputs.

Raises:
  • TypeError – If the data type of x1 or x2 is not float32 or float64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([3., 5., 7.]))
>>> x2 = Tensor(np.array([4., 12., 24.]))
>>> hypot_ = ops.Hypot()
>>> y = hypot_(x1, x2)
>>> print(y)
[ 5. 13. 25.]
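
The element-wise result matches NumPy's np.hypot, which computes \(\sqrt{x_1^2 + x_2^2}\) under the same broadcasting rules:

>>> import numpy as np
>>> print(np.hypot(np.array([3., 5., 7.]), np.array([4., 12., 24.])))
[ 5. 13. 25.]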
class tinyms.primitives.IOU(mode='iou')[source]

Calculates intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

Refer to mindspore.ops.iou() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> iou = ops.IOU(mode='iou')
>>> anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> output = iou(anchor_boxes, gt_boxes)
>>> print(output.shape)
(3, 3)
class tinyms.primitives.Identity[source]

Returns a Tensor with the same shape and contents as input.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Outputs:

Tensor, the shape and data type are the same as the input x, \((x_1, x_2, ..., x_R)\).

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> output = ops.Identity()(x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.IdentityN[source]

Returns a tuple of tensors with the same shapes and contents as the input.

This op can be used to override the gradient for complicated functions. For example, suppose \(y = f(x)\) and we wish to apply a custom function g for backprop such that \(dx=g(dy)\).

Inputs:
  • x (Union[tuple[Tensor], list[Tensor]]) - Input, the data type is RealNumber.

Outputs:

tuple(Tensor), the shapes and data types of the tensors are the same as the input x.

Raises:
  • TypeError – If x is not a tuple(Tensor) or list(Tensor).

  • TypeError – If input x type is not RealNumber.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = [Tensor(np.array([1, 2, 3, 4]), mstype.int64), Tensor(np.array([4, 3, 1, 1]), mstype.int64)]
>>> output = ops.IdentityN()(x)
>>> print(np.allclose(output[0].asnumpy(), x[0].asnumpy()))
True
>>> print(np.allclose(output[1].asnumpy(), x[1].asnumpy()))
True
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 2, 3, 4]), Tensor(shape=[4], dtype=Int64, value= [4, 3, 1, 1]))
class tinyms.primitives.Igamma[source]

Calculates lower regularized incomplete Gamma function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.igamma() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> igamma = ops.Igamma()
>>> output = igamma(a, x)
>>> print (output)
[0.593994  0.35276785  0.21486944  0.13337152]
class tinyms.primitives.Igammac[source]

Compute the upper regularized incomplete Gamma function Q(a, x).

Refer to mindspore.ops.igammac() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> igammac = ops.Igammac()
>>> output = igammac(a, x)
>>> print (output)
[0.40600586 0.6472318  0.7851304  0.8666283 ]
class tinyms.primitives.Im2Col(ksizes, strides=1, dilations=1, pads=0)[source]

Extracts sliding local blocks from a batched input tensor.

Consider a batched input tensor of shape \((N, C, *)\), where \(N\) is the batch dimension, \(C\) is the channel dimension, and \(*\) represent arbitrary spatial dimensions. This operation flattens each sliding ksizes-sized block within the spatial dimensions of input x into a column (i.e., last dimension) of a 4-D output tensor of shape \((N, C, \prod(\text{kernel_size}), L)\), where \(C \times \prod(\text{kernel_size})\) is the total number of values within each block (a block has \(\prod(\text{kernel_size})\) spatial locations each containing a C-channeled vector), and \(L\) is the total number of such blocks:

\[L = \prod_d \left\lfloor\frac{\text{spatial_size}[d] + 2 \times \text{pads}[d] - \text{dilations}[d] \times (\text{kernel_size}[d] - 1) - 1}{\text{strides}[d]} + 1\right\rfloor,\]

where \(\text{spatial_size}\) is formed by the spatial dimensions of input x (\(*\) above), and \(d\) is over all spatial dimensions.

Therefore, indexing output at the last dimension (column dimension) gives all values within a certain block.

The pads, strides and dilations arguments specify how the sliding blocks are retrieved.

Note

Currently, only 4-D input tensors (batched image-like tensors) are supported.

Parameters:
  • ksizes (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two int for height and width. If type is int, height equals width. Must be specified.

  • strides (Union[int, tuple[int], list[int]], optional) – The stride of the window, should be two int for height and width. If type is int, height equals width. Default: 1.

  • dilations (Union[int, tuple[int], list[int]], optional) – The dilation of the window, should be two int for height and width. If type is int, height equals width. Default: 1.

  • pads (Union[int, tuple[int], list[int]], optional) –

    The pad of the window, a tuple or list of one, two or four int for height and width. Default: 0.

    • If one int, \(pad\_height = pad\_width\).

    • If two int, \(pad\_height = pads[0]\), \(pad\_width = pads[1]\).

    • If four int, \(pads = [pad\_height\_top, pad\_height\_bottom, pad\_width\_left, pad\_width\_right]\).

Inputs:
  • x (Tensor) - The input tensor; only 4-D input tensors (batched image-like tensors) are supported. All real number data types are supported.

Outputs:

Tensor, a 4-D Tensor with same type of input x.

Raises:
  • TypeError – If the data type of ksizes is not in Union[int, tuple[int], list[int]].

  • TypeError – If the data type of strides is not in Union[int, tuple[int], list[int]].

  • TypeError – If the data type of dilations is not in Union[int, tuple[int], list[int]].

  • TypeError – If the data type of pads is not in Union[int, tuple[int], list[int]].

  • ValueError – If any ksizes value is not greater than zero, or its number of elements is more than 2.

  • ValueError – If any strides value is not greater than zero, or its number of elements is more than 2.

  • ValueError – If any dilations value is not greater than zero, or its number of elements is more than 2.

  • ValueError – If any pads value is less than zero.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(input_data=np.random.rand(4, 4, 32, 32), dtype=mstype.float64)
>>> im2col = ops.Im2Col(ksizes=3, strides=1, dilations=1)
>>> y = im2col(x)
>>> print(y.shape)
(4, 4, 9, 900)
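The output shape in this example follows directly from the formula for \(L\): with a \(32 \times 32\) spatial size, \(3 \times 3\) kernel, stride 1, dilation 1 and no padding, each spatial dimension yields 30 block positions, so \(L = 30 \times 30 = 900\), and the column size is \(\prod(\text{kernel_size}) = 9\). A small sketch of the per-dimension count (num_blocks is a hypothetical helper, not part of the API):

>>> def num_blocks(size, pad, dilation, k, stride):
...     # floor((size + 2*pad - dilation*(k - 1) - 1) / stride) + 1
...     return (size + 2 * pad - dilation * (k - 1) - 1) // stride + 1
...
>>> print(num_blocks(32, 0, 1, 3, 1) ** 2)
900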
class tinyms.primitives.Imag[source]

Returns a new tensor containing the imaginary part of the input. If the input is real, zeros are returned.

Inputs:
  • input (Tensor) - The input tensor.

Outputs:

Tensor, the shape is the same as the input.

Raises:

TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3+0.4j), mindspore.complex64)
>>> imag = ops.Imag()
>>> output = imag(x)
>>> print(output)
0.4
class tinyms.primitives.ImageSummary[source]

This operator will put an image tensor to a summary file with protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight, see MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of image, the rank of tensor must be 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.summary = ops.ImageSummary()
...
...     def construct(self, x):
...         name = "image"
...         self.summary(name, x)
...         return x
...
class tinyms.primitives.InTopK(k)[source]

Determines whether the targets are in the top k predictions.

Refer to mindspore.ops.intopk() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
>>> x2 = Tensor(np.array([1, 3]), mindspore.int32)
>>> in_top_k = ops.InTopK(3)
>>> output = in_top_k(x1, x2)
>>> print(output)
[ True  False]
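
The result can be reasoned out by hand: row 0's three largest values sit at indices 1, 4 and 2, which contains the target 1, while row 1's top-3 indices are 1, 4 and 0, which does not contain the target 3. A NumPy sketch of the same check:

>>> import numpy as np
>>> x_np = np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]])
>>> topk_idx = np.argsort(-x_np, axis=1)[:, :3]
>>> print([t in row for t, row in zip([1, 3], topk_idx)])
[True, False]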
class tinyms.primitives.IndexAdd(axis, use_lock=True, check_index_bound=True)[source]

Adds tensor y to specified axis and indices of tensor x. The axis should be in [-len(x.dim), len(x.dim) - 1], and indices should be in [0, the size of x - 1] at the axis dimension.

Parameters:
  • axis (int) – The dimension along which to index.

  • use_lock (bool) – Whether to enable a lock to protect the updating process of variable tensors. If true, when updating the value of x, this process will be protected by a lock by using atomic operation. If false, the result may be unpredictable. Default: True.

  • check_index_bound (bool) – If true, check index boundary. If false, don’t check index boundary. Default: True.

Inputs:
  • x (Parameter) - The input Parameter to add to.

  • indices (Tensor) - Add the value of x and y along the dimension of the axis according to the specified index value, with data type int32. The indices must be 1D with the same size as the size of y in the axis dimension. The values of indices should be in [0, b), where the b is the size of x in the axis dimension.

  • y (Tensor) - The input tensor with the value to add. Must have same data type as x. The shape must be the same as x except the axis th dimension.

Outputs:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a Parameter.

  • TypeError – If indices or y is not a Tensor.

  • ValueError – If axis is out of x rank’s range.

  • ValueError – If x rank is not the same as y rank.

  • ValueError – If shape of indices is not 1D or size of indices is not equal to dimension of y[axis].

  • ValueError – If y’s shape is not the same as x except the axis th dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.index_add = ops.IndexAdd(axis=1)
...         self.x = Parameter(Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32),
...                 name="name_x")
...         self.indices = Tensor(np.array([0, 2]), mindspore.int32)
...
...     def construct(self, y):
...         return self.index_add(self.x, self.indices, y)
...
>>> y = Tensor(np.array([[0.5, 1.0], [1.0, 1.5], [2.0, 2.5]]), mindspore.float32)
>>> net = Net()
>>> output = net(y)
>>> print(output)
[[ 1.5  2.   4. ]
 [ 5.   5.   7.5]
 [ 9.   8.  11.5]]
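
For intuition, the same update can be expressed with NumPy fancy indexing (a sketch only; IndexAdd itself updates the Parameter under an optional lock, as described above):

>>> import numpy as np
>>> ref = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32)
>>> ref[:, [0, 2]] += np.array([[0.5, 1.0], [1.0, 1.5], [2.0, 2.5]], dtype=np.float32)
>>> print(ref)
[[ 1.5  2.   4. ]
 [ 5.   5.   7.5]
 [ 9.   8.  11.5]]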
class tinyms.primitives.IndexFill[source]

Fills the elements under the dim dimension of the input Tensor x with the input value by selecting the indices in the order given in index.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.index_fill() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> index_fill = ops.IndexFill()
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32))
>>> index = Tensor([0, 2], mindspore.int32)
>>> value = Tensor(-2.0, mindspore.float32)
>>> y = index_fill(x, 1, index, value)
>>> print(y)
[[-2. 2. -2.]
 [-2. 5. -2.]
 [-2. 8. -2.]]
class tinyms.primitives.InplaceAdd(indices)[source]

Adds v into specified rows of x. Computes y = x; y[i,] += v.

Refer to mindspore.ops.inplace_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceAdd = ops.InplaceAdd(indices)
>>> output = inplaceAdd(x, input_v)
>>> print(output)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
class tinyms.primitives.InplaceIndexAdd(axis)[source]

Adds Tensor updates to specified axis and indices of Tensor var element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.inplace_index_add() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> var = Parameter(Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32))
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceIndexAdd = ops.InplaceIndexAdd(axis=0)
>>> var = inplaceIndexAdd(var, indices, updates)
>>> print(var)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
class tinyms.primitives.InplaceSub(indices)[source]

Subtracts input_v from the specified rows of x. Computes \(y = x\); \(y[i,] -= input\_v\).

Refer to mindspore.ops.inplace_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceSub = ops.InplaceSub(indices)
>>> output = inplaceSub(x, input_v)
>>> print(output)
[[0.5 1. ]
 [2.  2.5]
 [5.  6. ]]
class tinyms.primitives.InplaceUpdate(indices)[source]

The InplaceUpdate interface is deprecated. Please use mindspore.ops.InplaceUpdateV2 instead.

Supported Platforms:

Deprecated

class tinyms.primitives.InplaceUpdateV2[source]

Updates specified values in x to v according to indices.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.inplace_update() for more details.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplace_update_v2 = ops.InplaceUpdateV2()
>>> output = inplace_update_v2(x, indices, v)
>>> print(output)
[[0.5 1. ]
 [1.  1.5]
 [5.  6. ]]
class tinyms.primitives.InsertGradientOf(f)[source]

Attaches callback to the graph node that will be invoked on the node’s gradient.

Parameters:

f (Function) – MindSpore’s Function. Callback function.

Inputs:
  • input_x (Any) - The graph node to attach to.

Outputs:

Tensor, returns input_x directly. InsertGradientOf does not affect the forward result.

Raises:

TypeError – If f is not a function of MindSpore.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops, jit
>>> a = Tensor(np.array([1.0]).astype(np.float32))
>>> b = Tensor(np.array([0.2]).astype(np.float32))
>>> def clip_gradient(dx):
...     ret = dx
...     if ret > a:
...         ret = a
...
...     if ret < b:
...         ret = b
...
...     return ret
...
>>> clip = ops.InsertGradientOf(clip_gradient)
>>> grad_all = ops.GradOperation(get_all=True)
>>> def InsertGradientOfClipDemo():
...     def clip_test(x, y):
...         x = clip(x)
...         y = clip(y)
...         c = x * y
...         return c
...
...     @jit
...     def f(x, y):
...         return clip_test(x, y)
...
...     def fd(x, y):
...         return grad_all(clip_test)(x, y)
...
...     print("forward: ", f(Tensor(np.array([1.1]).astype(np.float32)),
...         Tensor(np.array([0.1]).astype(np.float32))))
...     print("clip_gradient:", fd(Tensor(np.array([1.1]).astype(np.float32)),
...         Tensor(np.array([0.1]).astype(np.float32))))
>>> InsertGradientOfClipDemo()
forward: [0.11000001]
clip_gradient: (Tensor(shape=[1], dtype=Float32, value= [ 2.00000003e-01]),
                Tensor(shape=[1], dtype=Float32, value= [ 1.00000000e+00]))
class tinyms.primitives.Inv[source]

Computes the reciprocal of the input tensor element-wise.

Refer to mindspore.ops.inv() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inv = ops.Inv()
>>> x = Tensor(np.array([0.25, 0.4, 0.31, 0.52]), mindspore.float32)
>>> output = inv(x)
>>> print(output)
[4.        2.5       3.2258065 1.923077 ]
class tinyms.primitives.Invert[source]

Flips all bits of input tensor element-wise.

Refer to mindspore.ops.invert() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> invert = ops.Invert()
>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> output = invert(x)
>>> print(output)
[-26 -5 -14 -10]
class tinyms.primitives.InvertPermutation[source]

Computes the inverse of an index permutation.

This operator is mainly used to calculate the inverse of an index permutation. It requires a 1-dimensional integer sequence x, which represents the indices of a zero-based array, and exchanges each value with its index position. In other words, for output y and input x, this operation calculates the following values:

\(y[x[i]] = i, \quad i \in [0, 1, \ldots, \text{len}(x)-1]\).

Note

These values must include 0. There must be no duplicate values and the values can not be negative.

Inputs:
  • input_x (Union(tuple[int], list[int])) - The input is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\) representing the indices. The values must include 0. There can be no duplicate values or negative values. Only constant value is allowed. The maximum value must be equal to the length of input_x minus 1.

Outputs:

tuple[int]. It has the same length as the input.

Raises:
  • TypeError – If input_x is neither tuple nor list.

  • TypeError – If element of input_x is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> invert = ops.InvertPermutation()
>>> input_data = (3, 4, 0, 2, 1)
>>> output = invert(input_data)
>>> print(output)
(2, 4, 3, 0, 1)
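
The relation \(y[x[i]] = i\) translates into a few lines of plain Python, which reproduces the example:

>>> perm = (3, 4, 0, 2, 1)
>>> inv = [0] * len(perm)
>>> for i, p in enumerate(perm):
...     inv[p] = i
...
>>> print(tuple(inv))
(2, 4, 3, 0, 1)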
class tinyms.primitives.IsClose(rtol=1e-05, atol=1e-08, equal_nan=True)[source]

Returns a tensor of Boolean values indicating whether two input tensors are element-wise equal within a given tolerance.

Refer to mindspore.ops.isclose() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import IsClose
>>> input = Tensor(np.array([1.3, 2.1, 3.2, 4.1, 5.1]), mindspore.float16)
>>> other = Tensor(np.array([1.3, 3.3, 2.3, 3.1, 5.1]), mindspore.float16)
>>> isclose = IsClose()
>>> output = isclose(input, other)
>>> print(output)
[ True False False False  True]
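
With the default tolerances, element i is considered close when \(|input_i - other_i| \le atol + rtol \times |other_i|\); NumPy's np.isclose applies the same rule and reproduces the example:

>>> a = np.array([1.3, 2.1, 3.2, 4.1, 5.1], dtype=np.float16)
>>> b = np.array([1.3, 3.3, 2.3, 3.1, 5.1], dtype=np.float16)
>>> print(np.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=True))
[ True False False False  True]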
class tinyms.primitives.IsFinite[source]

Determines which elements are finite for each position.

Refer to mindspore.ops.isfinite() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_finite = ops.IsFinite()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_finite(x)
>>> print(output)
[False  True False]
class tinyms.primitives.IsInf[source]

Determines which elements are inf or -inf for each position.

Refer to mindspore.ops.isinf() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_inf = ops.IsInf()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_inf(x)
>>> print(output)
[False False True]
class tinyms.primitives.IsNan[source]

Determines which elements are NaN for each position.

Refer to mindspore.ops.isnan() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_nan = ops.IsNan()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_nan(x)
>>> print(output)
[ True False False]
class tinyms.primitives.KLDivLoss(reduction='mean')[source]

Computes the Kullback-Leibler divergence between the logits and the labels.

For tensors of the same shape \(x\) and \(target\), the updating formulas of KLDivLoss algorithm are as follows,

\[L(x, target) = target \cdot (\log target - x)\]

Then,

\[\begin{split}\ell(x, target) = \begin{cases} L(x, target), & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L(x, target)), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L(x, target)) / x.\operatorname{shape}[0], & \text{if reduction} = \text{'batchmean';}\\ \operatorname{sum}(L(x, target)), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(x\) represents logits, \(target\) represents labels, and \(\ell(x, target)\) represents output.

Note

  • On Ascend, float64 dtype is not currently supported.

  • The output aligns with the mathematical definition of Kullback-Leibler divergence only when reduction is set to ‘batchmean’.

Parameters:

reduction (str) –

Specifies the reduction to be applied to the output. Default: ‘mean’.

  • On Ascend, the value of reduction must be one of ‘batchmean’, ‘none’ or ‘sum’.

  • On GPU, the value of reduction must be one of ‘mean’, ‘none’ or ‘sum’.

  • On CPU, the value of reduction must be one of ‘mean’, ‘batchmean’, ‘none’ or ‘sum’.

Inputs:
  • logits (Tensor) - The input Tensor. The data type must be float16, float32 or float64.

  • labels (Tensor) - The label Tensor which has the same shape and data type as logits.

Outputs:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as logits. Otherwise it is a scalar.

Raises:
  • TypeError – If reduction is not a str.

  • TypeError – If logits or labels is not a Tensor.

  • TypeError – If dtype of logits or labels is not currently supported.

  • ValueError – If shape of logits is not the same as labels.

  • RuntimeError – If logits or labels is a scalar when reduction is ‘batchmean’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.kldiv_loss = ops.KLDivLoss(reduction='sum')
...     def construct(self, logits, labels):
...         result = self.kldiv_loss(logits, labels)
...         return result
...
>>> net = Net()
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> output = net(logits, labels)
>>> print(output)
-0.7
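
The ‘sum’ result can be recomputed by hand from \(L(x, target) = target \cdot (\log target - x)\): only the middle position has a nonzero label, contributing \(1 \cdot (\log 1 - 0.7) = -0.7\), while terms with a zero target contribute 0 by convention:

>>> import math
>>> pairs = zip([0.2, 0.7, 0.1], [0., 1., 0.])
>>> print(round(sum(t * (math.log(t) - x) if t > 0 else 0.0 for x, t in pairs), 4))
-0.7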
class tinyms.primitives.L2Loss[source]

Calculates half of the L2 norm, but does not take the square root of the result.

Set input as x and output as loss.

\[loss = \frac{\sum x ^ 2}{2}\]
Inputs:
  • input_x (Tensor) - Tensor for computing the L2 norm. Data type must be float16, float32 or float64.

Outputs:

Tensor, a scalar Tensor with the same data type as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float16)
>>> l2_loss = ops.L2Loss()
>>> output = l2_loss(input_x)
>>> print(output)
7.0
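
The example value follows directly from the formula: \((1^2 + 2^2 + 3^2) / 2 = 14 / 2 = 7\).

>>> print((1 ** 2 + 2 ** 2 + 3 ** 2) / 2)
7.0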
class tinyms.primitives.L2Normalize(axis=0, epsilon=0.0001)[source]

L2 Normalization Operator.

This operator will normalize the input using the given axis. The function is shown as follows:

\[\text{output} = \frac{x}{\sqrt{\max\left( \sum_{i}^{}\left | x_i \right | ^2, \epsilon\right)}}\]

where \(\epsilon\) is epsilon and \(\sum_{i}^{}\left | x_i \right | ^2\) calculate the sum of squares of the input x along the dimension axis.

Note

On Ascend, input data type of float64 is currently not supported.

Parameters:
  • axis (Union[list(int), tuple(int), int]) – Specify the axis for calculating the L2 norm. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-4.

Inputs:
  • x (Tensor) - Input to compute the normalization. Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions. Data type must be float16, float32 or float64.

Outputs:

Tensor, with the same type and shape as the x.

Raises:
  • TypeError – If axis is not one of the following: list, tuple or int.

  • TypeError – If epsilon is not a float.

  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • ValueError – If dimension of x is not greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> l2_normalize = ops.L2Normalize()
>>> x = Tensor(np.random.randint(-256, 256, (2, 3, 4)), mindspore.float32)
>>> output = l2_normalize(x)
>>> print(output.shape)
(2, 3, 4)
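
A NumPy sketch of the epsilon-guarded formula above with the default axis=0, reusing x and output from the example (a loose tolerance is used since backends may compute the reciprocal square root approximately):

>>> import numpy as np
>>> x_np = x.asnumpy()
>>> ref = x_np / np.sqrt(np.maximum(np.sum(x_np ** 2, axis=0, keepdims=True), 1e-4))
>>> print(np.allclose(ref, output.asnumpy(), rtol=1e-3))
True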
class tinyms.primitives.LARSUpdate(epsilon=1e-05, hyperpara=0.001, use_clip=False)[source]

Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient.

For more details, please refer to mindspore.nn.LARS.

Parameters:
  • epsilon (float) – Term added to the denominator to improve numerical stability. Default: 1e-05.

  • hyperpara (float) – Trust coefficient for calculating the local learning rate. Default: 0.001.

  • use_clip (bool) – Whether to use clip operation for calculating the local learning rate. Default: False.

Inputs:
  • weight (Tensor) - A tensor, representing the weight. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • gradient (Tensor) - The gradient of weight, which has the same shape and dtype with weight.

  • norm_weight (Tensor) - A scalar tensor, representing the sum of squares of weight.

  • norm_gradient (Tensor) - A scalar tensor, representing the sum of squares of gradient.

  • weight_decay (Union[Number, Tensor]) - Weight decay. It must be a scalar tensor or number.

  • learning_rate (Union[Number, Tensor]) - Learning rate. It must be a scalar tensor or number.

Outputs:

Tensor, represents the new gradient.

Raises:
  • TypeError – If epsilon or hyperpara is not a float.

  • TypeError – If use_clip is not a bool.

  • TypeError – If weight, gradient, norm_weight or norm_gradient is not a Tensor.

  • TypeError – If weight_decay or learning_rate is neither a Number nor a Tensor.

  • TypeError – If shape of gradient is not the same as weight.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.lars = ops.LARSUpdate()
...         self.reduce = ops.ReduceSum()
...         self.square = ops.Square()
...     def construct(self, weight, gradient):
...         w_square_sum = self.reduce(self.square(weight))
...         grad_square_sum = self.reduce(self.square(gradient))
...         grad_t = self.lars(weight, gradient, w_square_sum, grad_square_sum, 0.0, 1.0)
...         return grad_t
...
>>> weight = Tensor(np.array([[0.5, 0.8, 0.2], [0.6, 0.4, 0.2]]).astype(np.float32))
>>> gradient = Tensor(np.array([[0.4, 0.4, 0.5], [0.2, 0.4, 0.3]]).astype(np.float32))
>>> net = Net()
>>> output = net(Tensor(weight), Tensor(gradient))
>>> print(output)
[[0.0005265  0.0005265 0.00065813]
 [0.00026325 0.0005265 0.00039488]]
class tinyms.primitives.LRN(depth_radius=5, bias=1.0, alpha=1.0, beta=0.5, norm_region='ACROSS_CHANNELS')[source]

Local Response Normalization.

\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]

where \(a_{c}\) is the value of the pixel at position \(c\) in the feature map, \(n/2\) is depth_radius, \(k\) is bias, \(\alpha\) is alpha, and \(\beta\) is beta.

Parameters:
  • depth_radius (int) – Half-width of the 1-D normalization window with the shape of 0-D. Default: 5.

  • bias (float) – An offset (usually positive to avoid dividing by 0). Default: 1.0.

  • alpha (float) – A scale factor, usually positive. Default: 1.0.

  • beta (float) – An exponent. Default: 0.5.

  • norm_region (str) – Specifies normalization region. Options: “ACROSS_CHANNELS”. Default: “ACROSS_CHANNELS”.

Inputs:
  • x (Tensor) - A 4-D Tensor with float16 or float32 data type.

Outputs:

Tensor, with the same shape and data type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> lrn = ops.LRN()
>>> output = lrn(x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
class tinyms.primitives.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)[source]

Performs the Long Short-Term Memory (LSTM) on the input.

For detailed information, please refer to mindspore.nn.LSTM.

Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • num_layers (int) – Number of layers of stacked LSTM.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh.

  • bidirectional (bool) – Specifies whether it is a bidirectional LSTM.

  • dropout (float) – If not 0, appends a Dropout layer on the outputs of each LSTM layer except the last layer. The range of dropout is [0.0, 1.0].

Inputs:
  • input (Tensor) - Tensor of shape \((seq\_len, batch\_size, input\_size)\) or \((batch\_size, seq\_len, input\_size)\).

  • h (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • c (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • w (Tensor) - A weight Tensor.

Outputs:

Tuple, a tuple contains (output, h_n, c_n, reserve, state).

  • output (Tensor) - Tensor of shape \((seq\_len, batch\_size, num\_directions * hidden\_size)\).

  • h_n (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • c_n (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • reserve (Tensor) - Tensor of shape \((r, 1)\).

  • state (Tensor) - Random number generator state and its shape is \((s, 1)\).

Raises:
  • TypeError – If input_size, hidden_size or num_layers is not an int.

  • TypeError – If has_bias or bidirectional is not a bool.

  • TypeError – If dropout is not a float.

  • ValueError – If dropout is not in range [0.0, 1.0].

Supported Platforms:

GPU CPU

Examples

>>> input_size = 10
>>> hidden_size = 2
>>> num_layers = 1
>>> seq_len = 5
>>> batch_size = 2
>>>
>>> net = ops.LSTM(input_size, hidden_size, num_layers, True, False, 0.0)
>>> input_tensor = Tensor(np.ones([seq_len, batch_size, input_size]).astype(np.float32))
>>> h0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> c0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> w = Tensor(np.ones([112, 1, 1]).astype(np.float32))
>>> output, hn, cn, _, _ = net(input_tensor, h0, c0, w)
>>> print(output)
[[[0.9640267  0.9640267 ]
  [0.9640267  0.9640267 ]]
 [[0.9950539  0.9950539 ]
  [0.9950539  0.9950539 ]]
 [[0.99932843 0.99932843]
  [0.99932843 0.99932843]]
 [[0.9999084  0.9999084 ]
  [0.9999084  0.9999084 ]]
 [[0.9999869  0.9999869 ]
  [0.9999869  0.9999869 ]]]
class tinyms.primitives.LayerNorm(begin_norm_axis=1, begin_params_axis=1, epsilon=1e-07)[source]

Applies Layer Normalization to the input tensor.

This operator will normalize the input tensor on the given axis. LayerNorm is described in the paper Layer Normalization.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters:
  • begin_norm_axis (int) – The begin axis of the input_x to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • begin_params_axis (int) – The begin axis of the parameter input (gamma, beta) to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • epsilon (float) – A value added to the denominator for numerical stability. Default: 1e-7.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\). The input of LayerNorm.

  • gamma (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter \(\gamma\) as the scale on norm.

  • beta (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter \(\beta\) as the offset on norm.

Outputs:

tuple[Tensor], tuple of 3 tensors, the normalized input and the updated parameters.

  • output_x (Tensor) - The normalized input, has the same type and shape as the input_x. The shape is \((N, C)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Raises:
  • TypeError – If begin_norm_axis or begin_params_axis is not an int.

  • TypeError – If epsilon is not a float.

  • TypeError – If input_x, gamma or beta is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> gamma = Tensor(np.ones([3]), mindspore.float32)
>>> beta = Tensor(np.ones([3]), mindspore.float32)
>>> layer_norm = ops.LayerNorm()
>>> output, mean, variance = layer_norm(input_x, gamma, beta)
>>> print(output)
[[-0.2247448  1.         2.2247448]
 [-0.2247448  1.         2.2247448]]
>>> print(mean)
[[2.]
 [2.]]
>>> print(variance)
[[0.6666667]
 [0.6666667]]
class tinyms.primitives.Lcm[source]

Computes the least common multiple of the input tensors element-wise. The shapes of the two inputs should be broadcastable, and their data types should be one of: int32, int64.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The first input tensor.

  • x2 (Tensor) - The second input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision of the two inputs.

Raises:
  • TypeError – If data type x1 or x2 is not int32 or int64.

  • ValueError – If shape of two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([7, 8, 9]))
>>> x2 = Tensor(np.array([14, 6, 12]))
>>> lcm_ = ops.Lcm()
>>> y = lcm_(x1, x2)
>>> print(y)
[14 24 36]
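
Because the inputs only need to be broadcastable, a length-1 tensor can be combined with a vector; a NumPy sketch of the same element-wise rule:

>>> import numpy as np
>>> print(np.lcm(np.array([4]), np.array([6, 8, 10])))   # broadcast [4] against the vector
[12  8 20]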
class tinyms.primitives.LeftShift[source]

Shifts the value at each position of the tensor left by several bits. The inputs are two tensors whose dtypes must be consistent and whose shapes must be broadcastable. The output does not support implicit type conversion.

\[\begin{aligned} &out_{i} =x_{i} << y_{i} \end{aligned}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The target tensor, whose dtype can be int8, int16, int32, int64, uint8, uint16, uint32 or uint64. It will be shifted left by x2 element-wise.

  • x2 (Tensor) - The tensor must have the same dtype as x1. And the tensor must have the same shape as x1 or could be broadcast with x1.

Outputs:
  • output (Tensor) - The output tensor, has the same dtype as x1. And the shape of the output tensor is the same shape as x1, or the same shape as x1 and x2 after broadcasting.

Supported Platforms:

Ascend GPU CPU

Examples

>>> left_shift = ops.LeftShift()
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.int8))
>>> x2 = Tensor(np.array([0, 1, -1]).astype(np.int8))
>>> output = left_shift(x1, x2)
>>> print(output)
[1 4 3]
class tinyms.primitives.Lerp[source]

Does a linear interpolation of two tensors start and end based on a float or tensor weight.

Refer to mindspore.ops.lerp() for more details.

Inputs:
  • start (Tensor) - The tensor with the starting points. Data type must be float16 or float32.

  • end (Tensor) - The tensor with the ending points. Data type must be the same as start.

  • weight (Union[float, Tensor]) - The weight for the interpolation formula. Must be a float or a scalar tensor with float16 or float32 data type.

Outputs:

Tensor, has the same type and shape as input start.

Supported Platforms:

Ascend GPU CPU

Examples

>>> start = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> end = Tensor(np.array([10., 10., 10., 10.]), mindspore.float32)
>>> lerp = ops.Lerp()
>>> output = lerp(start, end, 0.5)
>>> print(output)
[5.5 6. 6.5 7. ]
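
The interpolation follows the standard rule output = start + weight * (end - start); a NumPy sketch for the same inputs:

>>> import numpy as np
>>> start = np.array([1., 2., 3., 4.])
>>> end = np.array([10., 10., 10., 10.])
>>> print(start + 0.5 * (end - start))
[5.5 6.  6.5 7. ]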
class tinyms.primitives.Less[source]

Computes the boolean value of \(x < y\) element-wise.

Refer to mindspore.ops.less() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less = ops.Less()
>>> output = less(x, y)
>>> print(output)
[False False  True]
class tinyms.primitives.LessEqual[source]

Computes the boolean value of \(x <= y\) element-wise.

Refer to mindspore.ops.le() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less_equal = ops.LessEqual()
>>> output = less_equal(x, y)
>>> print(output)
[ True False  True]
class tinyms.primitives.Lgamma[source]

Computes the natural logarithm of the absolute value of the gamma function on input.

Refer to mindspore.ops.lgamma() for more details.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 3.2, 8.5]), mindspore.float32)
>>> lgamma = ops.Lgamma()
>>> output = lgamma(x)
>>> print(output)
[0.5723649 0.8854049 9.549267 ]
class tinyms.primitives.LinSpace[source]

Returns a Tensor of num evenly spaced values in the interval [start, stop] (including both start and stop); the length of the output Tensor is num.

Refer to mindspore.ops.linspace() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> linspace = ops.LinSpace()
>>> start = Tensor(1, mindspore.float32)
>>> stop = Tensor(10, mindspore.float32)
>>> num = 5
>>> output = linspace(start, stop, num)
>>> print(output)
[ 1.    3.25  5.5   7.75 10.  ]
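
The same values can be obtained with NumPy, which uses the identical inclusive-endpoint rule:

>>> import numpy as np
>>> print(np.linspace(1, 10, 5, dtype=np.float32))
[ 1.    3.25  5.5   7.75 10.  ]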
class tinyms.primitives.ListDiff(out_idx=mindspore.int32)[source]

This operation computes the difference between two numeric lists.

It generates a list of all elements that are present in list x but not in list y. The output list out retains the same order as the original x including duplicate elements.

Additionally, this class outputs a list idx that identifies the position of each element in out within the original x. That is to say: out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1] .

Parameters:

out_idx (mindspore.dtype, optional) – The dtype of idx, either mstype.int32 or mstype.int64. Default: mstype.int32.

Inputs:
  • x - Values to keep. A 1-D Tensor.

  • y - Values to remove. A 1-D Tensor. Must have the same type as x. 1-D.

Outputs:
  • out - The kept values. A 1-D Tensor. Has the same type as x.

  • idx - The original index of kept values. A 1-D Tensor of type out_idx.

Raises:
  • ValueError – If x or y shape is not 1D.

  • TypeError – If x or y is not a Tensor.

  • TypeError – If the data type of x or y is not int or uint.

  • TypeError – If x has different data type with y.

  • TypeError – If attr out_idx not in [mstype.int32, mstype.int64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1, 7, 1), dtype=mindspore.dtype.int32) # [1, 2, 3, 4, 5, 6]
>>> y = Tensor([1, 3, 5], dtype=mindspore.dtype.int32)
>>> op = ops.ListDiff() # out_idx default is mindspore.dtype.int32
>>> out, idx = op(x, y)
>>> print(out)
[2 4 6]
>>> print(idx)
[1 3 5]
class tinyms.primitives.Log[source]

Returns the natural logarithm of a tensor element-wise.

Refer to mindspore.ops.log() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log = ops.Log()
>>> output = log(x)
>>> print(output)
[0.        0.6931472 1.3862944]
class tinyms.primitives.Log1p[source]

Returns the natural logarithm of one plus the input tensor element-wise.

Refer to mindspore.ops.log1p() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log1p = ops.Log1p()
>>> output = log1p(x)
>>> print(output)
[0.6931472 1.0986123 1.609438 ]
class tinyms.primitives.LogMatrixDeterminant[source]

Calculates the sign and logarithm of the determinant of one or more square matrices.

Refer to mindspore.ops.slogdet() for more details.

Supported Platforms:

Examples

>>> input_x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> op = ops.LogMatrixDeterminant()
>>> sign, output = op(input_x)
>>> print(sign)
[-1.   1.]
>>> print(output)
[2.80336046e+00    3.04452229e+00]
class tinyms.primitives.LogNormalReverse(mean=1.0, std=2.0)[source]

Fills the elements of the input tensor with log normal values initialized by given mean and std:

\[f(x;\mu,\sigma)=\frac{1}{x\sigma\sqrt{2\pi}}e^{-\frac{(\ln x-\mu)^2}{2\sigma^2}}\]

where \(\mu\) and \(\sigma\) are the mean and standard deviation of the log-normal distribution, respectively.

Parameters:
  • mean (float, optional) – the mean of normal distribution. With float data type. Default: 1.0.

  • std (float, optional) – the std of normal distribution. With float data type. Default: 2.0.

Inputs:
  • input (Tensor) - The tensor to be generated with log-normal distribution. Must be one of the following types: float16, float32, float64.

Outputs:

Tensor. A Tensor with the same type and shape as input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3,4),mstype.float64)
>>> mean = 2.0
>>> std = 1.0
>>> lognormalreverse = ops.LogNormalReverse(mean, std)
>>> output = lognormalreverse(x)
>>> result = output.shape
>>> print(result)
(3, 4)
class tinyms.primitives.LogSoftmax(axis=-1)[source]

Log Softmax activation function.

Refer to mindspore.ops.log_softmax() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> log_softmax = ops.LogSoftmax()
>>> output = log_softmax(logits)
>>> print(output)
[-4.4519143 -3.4519143 -2.4519143 -1.4519144 -0.4519144]
class tinyms.primitives.LogSpace(steps=10, base=10, dtype=mindspore.float32)[source]

Generates a 1-D Tensor with a length of steps. The tensor’s values are uniformly distributed on a logarithmic scale, ranging from \(base^{start}\) to \(base^{end}\), including both endpoints. The logarithmic scale is based on the specified base.

\[\begin{split}\begin{aligned} &step = (end - start)/(steps - 1)\\ &output = [base^{start}, base^{start + 1 * step}, ... , base^{start + (steps-2) * step}, base^{end}] \end{aligned}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • steps (int, optional) – The steps must be a non-negative integer. Default: 10.

  • base (int, optional) – The base must be a non-negative integer. Default: 10.

  • dtype (mindspore.dtype, optional) – The dtype of the output, one of mindspore.float16, mindspore.float32 or mindspore.float64. Default: mindspore.float32.

Inputs:
  • start (Tensor) - Start value of interval, with shape of 0-D, dtype is float16, float32 or float64.

  • end (Tensor) - End value of interval, with shape of 0-D, dtype is float16, float32 or float64.

Outputs:

Tensor has the shape \((steps,)\). Its datatype is set by the attr dtype.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If steps is not an int.

  • TypeError – If base is not an int.

  • TypeError – If dtype is not mindspore.float16, mindspore.float32 or mindspore.float64.

  • ValueError – If steps is not a non-negative integer.

  • ValueError – If base is not a non-negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logspace = ops.LogSpace(steps = 10, base = 10, dtype=mindspore.float32)
>>> start = Tensor(1, mindspore.float32)
>>> end = Tensor(10, mindspore.float32)
>>> output = logspace(start, end)
>>> print(output)
[1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10]
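
Equivalently, the output is base raised to a linspace of exponents; a NumPy sketch:

>>> import numpy as np
>>> print((10.0 ** np.linspace(1, 10, 10)).astype(np.float32))
[1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10]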
class tinyms.primitives.LogUniformCandidateSampler(num_true=1, num_sampled=5, unique=True, range_max=5, seed=0)[source]

Generates random labels with a log-uniform distribution for sampled_candidates.

Randomly samples a tensor of sampled classes from the range of integers [0, range_max).

Refer to mindspore.ops.log_uniform_candidate_sampler() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> sampler = ops.LogUniformCandidateSampler(2, 5, True, 5)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1, 7], [0, 4], [3, 3]])))
>>> print(output1, output2, output3)
[3 2 0 4 1]
[[0.92312991 0.49336370]
 [0.99248987 0.65806371]
 [0.73553443 0.73553443]]
[0.73553443 0.82625800 0.99248987 0.65806371 0.92312991]
class tinyms.primitives.LogicalAnd[source]

Computes the “logical AND” of two tensors element-wise.

Refer to mindspore.ops.logical_and() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_and = ops.LogicalAnd()
>>> output = logical_and(x, y)
>>> print(output)
[ True False False]
class tinyms.primitives.LogicalNot[source]

Computes the “logical NOT” of a tensor element-wise.

Refer to mindspore.ops.logical_not() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> logical_not = ops.LogicalNot()
>>> output = logical_not(x)
>>> print(output)
[False  True False]
class tinyms.primitives.LogicalOr[source]

Computes the “logical OR” of two tensors element-wise.

Refer to mindspore.ops.logical_or() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_or = ops.LogicalOr()
>>> output = logical_or(x, y)
>>> print(output)
[ True  True  True]
class tinyms.primitives.LogicalXor[source]

Computes the “logical XOR” of two tensors element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.logical_xor() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_xor = ops.LogicalXor()
>>> output = logical_xor(x, y)
>>> print(output)
[False  True  True]
class tinyms.primitives.Logit(eps=-1.0)[source]

Calculates the logit of a tensor element-wise. Elements in x are clamped to [eps, 1-eps].

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.logit() for more details.

Parameters:

eps (float, optional) – The epsilon. The input clamp bound is defined as [eps, 1-eps]. Default: -1.0.

Inputs:
  • x (Tensor) - The input tensor.

Outputs:

Tensor, with the same shape and dtype as the x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.1, 0.2, 0.3]).astype(np.float32))
>>> op = ops.Logit(eps=1e-5)
>>> output = op(x)
>>> print(output)
[-2.1972246 -1.3862944 -0.8472978]
class tinyms.primitives.LowerBound(out_type=mindspore.int32)[source]

Finds the index of the lower bound of values in the sorted sequence sorted_x element-wise.

Parameters:

out_type (mindspore.dtype, optional) – An optional data type of mindspore.dtype.int32 and mindspore.dtype.int64. Default: mindspore.dtype.int32.

Inputs:
  • sorted_x (Tensor) - The input tensor whose dtype is real number and the data of each row must be sorted in ascending order. The rank must be 2.

  • values (Tensor) - The input tensor whose dtype is the same as sorted_x and the first dimension of the shape of values must be equal to that of sorted_x . The rank must be 2.

Outputs:

Tensor, whose dtype is determined by out_type and whose shape is the same as that of values.

Raises:
  • TypeError – If sorted_x is not a Tensor.

  • TypeError – If values is not a Tensor.

  • TypeError – If out_type is invalid.

  • TypeError – If the type of sorted_x is not the same as that of values.

  • ValueError – If rank of the sorted_x is not equal to 2.

  • ValueError – If rank of the values is not equal to 2.

  • ValueError – If the first dimension of the shape of sorted_x is not equal to that of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> lowerbound = ops.LowerBound(out_type = mindspore.int32)
>>> sorted_x = Tensor(np.arange(12).reshape(3, 4).astype(np.int8))
>>> values = Tensor(np.array([[3], [4], [8]]).astype(np.int8))
>>> output = lowerbound(sorted_x, values)
>>> print(output)
[[3]
 [0]
 [0]]
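
Row by row, this matches NumPy's searchsorted with side='left'; a sketch, assuming each row is searched independently:

>>> import numpy as np
>>> sorted_x = np.arange(12).reshape(3, 4)
>>> values = np.array([[3], [4], [8]])
>>> out = np.stack([np.searchsorted(row, v, side='left')
...                 for row, v in zip(sorted_x, values)])
>>> print(out)
[[3]
 [0]
 [0]]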
class tinyms.primitives.LpNorm(axis, p=2, keep_dims=False, epsilon=1e-12)[source]

Returns the matrix norm or vector norm of a given tensor.

\[output = sum(abs(input)**p)**(1/p)\]
Parameters:
  • axis (int,list,tuple) – Specifies which dimension or dimensions of input to calculate the norm across.

  • p (int, optional) – The order of norm. Default: 2.

  • keep_dims (bool, optional) – Whether the output tensors have dim retained or not. Default: False.

  • epsilon (float, optional) – A value added to the denominator for numerical stability. Default: 1e-12.

Inputs:
  • input (Tensor) - Input tensor.

Outputs:

Tensor, has the same dtype as input, its shape depends on axis. For example, if the shape of input is \((2, 3, 4)\), axis is \([0, 1]\), output shape will be \((4,)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of: float16, float32.

  • TypeError – If p is not an int.

  • TypeError – If axis is not an int, a tuple or a list.

  • TypeError – If axis is a tuple or a list, but the element of axis is not an int.

  • TypeError – If keep_dims is not a bool.

  • ValueError – If the element of axis is out of the range \([-r, r)\), where \(r\) is the rank of input.

  • ValueError – If the length of shape of axis is bigger than the length of shape of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))
>>> op = ops.LpNorm(axis=[0, 1], p=2, keep_dims=False)
>>> output = op(input_x)
>>> print(output)
[ 9.165152 10.954452]
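
A NumPy sketch of the norm formula above for the same input:

>>> import numpy as np
>>> x = np.array([[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]]], np.float32)
>>> print(np.sum(np.abs(x) ** 2, axis=(0, 1)) ** 0.5)   # p = 2 over axes (0, 1)
[ 9.165152 10.954452]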
class tinyms.primitives.Lstsq(fast=True, l2_regularizer=0.0)[source]

Computes the solutions of the least squares and minimum norm problems of full-rank matrix x of size \((m \times n)\) and matrix a of size \((m \times k)\).

If \(m \geq n\), Lstsq solves the least-squares problem:

\[\begin{array}{ll} \min_y & \|xy-a\|_2 \end{array}\]

If \(m < n\), Lstsq solves the least-norm problem:

\[\begin{array}{llll} \min_y & \|y\|_2 & \text{subject to} & xy = a \end{array}\]
Parameters:
  • fast (bool, optional) –

    Solving algorithm. Default: True.

    • If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition.

    • If fast is False, an algorithm based on numerically robust completed orthogonal decomposition is used.

  • l2_regularizer (float, optional) – L2 regularization coefficient. Default: 0.0.

Inputs:
  • x (Tensor) - \((m \times n)\) matrix x. The input tensor whose data type is float16, float32 or float64.

  • a (Tensor) - \((m \times k)\) matrix a. The input tensor whose data type is float16, float32 or float64.

Outputs:

Tensor, the least squares or minimum norm problems solution, which has shape \((n \times k)\). The data type is the same with x.

Raises:
  • TypeError – If the input x or a is not a Tensor.

  • TypeError – If dtype of x or a is not one of: float16, float32, float64.

  • TypeError – If the dtypes of x and a are not the same.

  • ValueError – If the dimension of x is not equal to 2.

  • ValueError – If the dimension of a is not equal to 2 or 1.

  • ValueError – If the length of x_dims[0] is not equal to the length of a_dims[0].

Supported Platforms:

CPU

Examples

>>> x = Tensor(np.array([[2,1,5],[3,5,1],[1,1,1]]),mindspore.float32)
>>> a = Tensor(np.array([[10,5],[15,8],[7,4]]),mindspore.float32)
>>> op = ops.Lstsq()
>>> output = op(x, a)
>>> print(output)
[[17.000002  11.000002 ]
 [-6.5000005 -4.500001 ]
 [-3.500002  -2.5000017]]
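
For the square case above, the fast path agrees with solving the normal equations; a NumPy sketch (a numerical illustration, not the operator's actual Cholesky implementation):

>>> import numpy as np
>>> x = np.array([[2., 1., 5.], [3., 5., 1.], [1., 1., 1.]])
>>> a = np.array([[10., 5.], [15., 8.], [7., 4.]])
>>> y = np.linalg.solve(x.T @ x, x.T @ a)   # (x^T x) y = x^T a
>>> print(np.round(y, 4))
[[17.  11. ]
 [-6.5 -4.5]
 [-3.5 -2.5]]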
class tinyms.primitives.LuSolve[source]

Computes the solution y to the system of linear equations \(Ay = b\) , given LU decomposition A and column vector b.

LU decomposition of a matrix can be generated from mindspore.scipy.linalg.lu() .

Note

The batch dimensions of lu_pivots must match the batch dimensions of lu_data; the number of batch dimensions and the size of each dimension must be the same. For example, if lu_data is \((3, 3, 2, 2)\) and lu_pivots is \((3, 3, 2)\), then lu_data’s batch dimensions are \((3, 3)\) and lu_pivots’s batch dimensions are \((3, 3)\).

The batch dimensions of lu_data must match the batch dimensions of x; the batch dimensions may have different lengths, but from right to left the corresponding dimensions must be equal. For example, if lu_data is \((3, 3, 2, 2)\) and x is \((2, 3, 3, 2, 1)\), then lu_data’s batch dimensions are \((3, 3)\) and x’s batch dimensions are \((2, 3, 3)\).

Inputs:
  • x (Tensor) - Column vector b in the above equation. It has shape \((*, m, k)\), where \(*\) is batch dimensions, with data type float32, float16.

  • lu_data (Tensor) - LU decomposition. It has shape \((*, m, m)\), where * is batch dimensions, that can be decomposed into an upper triangular matrix U and a lower triangular matrix L, with data type float32, float16.

  • lu_pivots (Tensor) - Permutation matrix P of LU decomposition. It has shape \((*, m)\), where \(*\) is batch dimensions, that can be converted to a permutation matrix P, with data type int32.

Outputs:

Tensor, the same data type as the x and lu_data.

Raises:
  • TypeError – If dtype of x or lu_data is not one of: float32, float16.

  • TypeError – If dtype of lu_pivots is not: int32.

  • TypeError – If x, lu_data or lu_pivots is not Tensor.

  • TypeError – If dtype of x is not same as dtype of lu_data.

  • ValueError – If the batch dimensions of lu_pivots does not match the batch dimensions of lu_data.

  • ValueError – If the dimension of x is less than 2, the dimension of lu_data is less than 2, or the dimension of lu_pivots is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1], [3], [3]]), mindspore.float32)
>>> lu_data = Tensor(np.array([[2, 1, 1], [0.5, 1, 1.5], [0.5, 0, 2.5]]), mindspore.float32)
>>> lu_pivots = Tensor(np.array([2, 2, 3]), mindspore.int32)
>>> net = ops.LuSolve()
>>> y = net(x, lu_data, lu_pivots)
>>> print(y)
[[ 1.9000002]
 [-1.4000001]
 [ 0.6      ]]
class tinyms.primitives.LuUnpack(unpack_data=True, unpack_pivots=True)[source]

Converts LU_data and LU_pivots back into P, L and U matrices, where P is a permutation matrix, L is a lower triangular matrix, and U is an upper triangular matrix. Typically, LU_data and LU_pivots are generated from the LU decomposition of a matrix.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.lu_unpack() for more details.

Supported Platforms:

GPU CPU

Examples

>>> LU_data = Tensor(np.array([[[-0.3806, -0.4872,  0.5536],
...                             [-0.1287,  0.6508, -0.2396],
...                             [ 0.2583,  0.5239,  0.6902]],
...                             [[ 0.6706, -1.1782,  0.4574],
...                             [-0.6401, -0.4779,  0.6701],
...                             [ 0.1015, -0.5363,  0.6165]]]), mstype.float32)
>>> LU_pivots = Tensor(np.array([[1, 3, 3],
...                              [2, 3, 3]]), mstype.int32)
>>> lu_unpack = ops.LuUnpack()
>>> pivots, L, U = lu_unpack(LU_data, LU_pivots)
>>> print(pivots)
[[[1. 0. 0.]
  [0. 0. 1.]
  [0. 1. 0.]]

 [[0. 0. 1.]
  [1. 0. 0.]
  [0. 1. 0.]]]
>>> print(L)
[[[ 1.      0.      0.    ]
  [-0.1287  1.      0.    ]
  [ 0.2583  0.5239  1.    ]]

 [[ 1.      0.      0.    ]
  [-0.6401  1.      0.    ]
  [ 0.1015 -0.5363  1.    ]]]
>>> print(U)
[[[-0.3806 -0.4872  0.5536]
  [ 0.      0.6508 -0.2396]
  [ 0.      0.      0.6902]]

 [[ 0.6706 -1.1782  0.4574]
  [ 0.     -0.4779  0.6701]
  [ 0.      0.      0.6165]]]
class tinyms.primitives.MapCacheIdx[source]

MapCacheIdx merges SearchCacheIdx, CacheSwapHashmap and UpdateCache together. Given an indices tensor as input, it outputs the cache indices found by searching the hashmap.

class tinyms.primitives.MapUniform[source]

Map a tensor by using formula : value = key % group_num * per_group_size + key // group_num.

Inputs:
  • input (Tensor) - Input Tensor.

  • per_group_size (int) - The size of each group.

  • group_num (int) - The number of group.

Outputs:

Tensor, has the same dtype and shape as the input.

Supported Platforms:

CPU

Examples

>>> input_x = Tensor(np.array([0, 1, 2, 3, 4, 5, 6, 7]))
>>> per_group_size = 4
>>> group_num = 2
>>> map_uniform = ops.MapUniform()
>>> output = map_uniform(input_x, per_group_size, group_num)
>>> print(output)
[0, 4, 1, 5, 2, 6, 3, 7]
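
The mapping formula can be checked in plain Python:

>>> per_group_size, group_num = 4, 2
>>> print([k % group_num * per_group_size + k // group_num for k in range(8)])
[0, 4, 1, 5, 2, 6, 3, 7]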
class tinyms.primitives.MaskedFill[source]

Fills elements with value where mask is True.

Note

If value is a Python floating-point number, it is converted to float32 by default. In that case, if input_x is a float16 Tensor, it is converted to float32 for the calculation and the result is converted back to float16 on the CPU and Ascend platforms, which may cause a performance penalty. On the GPU platform a TypeError may be raised. Therefore, it is recommended that value be a Tensor with the same dtype as input_x.

Refer to mindspore.ops.masked_fill() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> mask = Tensor(np.array([True, True, False, True]), mindspore.bool_)
>>> output = ops.MaskedFill()(input, mask, 0.5)
>>> print(output)
[0.5 0.5 3.  0.5]
class tinyms.primitives.MaskedSelect[source]

Returns a new 1-D Tensor which indexes the x tensor according to the boolean mask. The shapes of the mask tensor and the x tensor don’t need to match, but they must be broadcastable.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • mask (Tensor[bool]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

A 1-D Tensor, with the same type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int32)
>>> mask = Tensor(np.array([1, 0, 1, 0]), mindspore.bool_)
>>> output = ops.MaskedSelect()(x, mask)
>>> print(output)
[1 3]
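
For equal shapes this is ordinary boolean-mask indexing; a NumPy sketch:

>>> import numpy as np
>>> print(np.array([1, 2, 3, 4])[np.array([True, False, True, False])])
[1 3]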
class tinyms.primitives.MatMul(transpose_a=False, transpose_b=False)[source]

Multiplies matrix a and matrix b.

\[(Output)_{i j}=\sum_{k=1}^{p} a_{i k} b_{k j}=a_{i 1} b_{1 j}+a_{i 2} b_{2 j}+\cdots+a_{i p} b_{p j}, p\in N\]

where the \(i,j\) indicates the output of the i-th row and j-th column element.

Note

If \(N * M\) cannot be divided by 16, the performance will be poor in the Ascend environment.

Parameters:
  • transpose_a (bool) – If true, a is transposed before multiplication. Default: False.

  • transpose_b (bool) – If true, b is transposed before multiplication. Default: False.

Inputs:
  • a (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((N, C)\). If transpose_a is True, its shape must be \((C, N)\) after transpose.

  • b (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((C, M)\). If transpose_b is True, its shape must be \((M, C)\) after transpose.

Outputs:

Tensor, the shape of the output tensor is \((N, M)\).

Raises:
  • TypeError – If transpose_a or transpose_b is not a bool.

  • ValueError – If the column of matrix dimensions of a is not equal to the row of matrix dimensions of b.

  • ValueError – If length of shape of a or b is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
>>> b = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> matmul = ops.MatMul()
>>> output = matmul(a, b)
>>> print(output)
[[3. 3. 3. 3.]]
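
When an operand is already stored transposed, the transpose_a / transpose_b flags avoid an explicit Transpose; a short sketch reusing the tensors above:

>>> b_t = Tensor(np.ones(shape=[4, 3]), mindspore.float32)   # b stored as (M, C)
>>> matmul_tb = ops.MatMul(transpose_b=True)
>>> print(matmul_tb(a, b_t))
[[3. 3. 3. 3.]]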
class tinyms.primitives.MatrixBandPart[source]

Extracts the central diagonal band of each matrix in a tensor, with all values outside the central band set to zero.

Refer to mindspore.ops.matrix_band_part() for more details.

Supported Platforms:

Examples

>>> matrix_band_part = ops.MatrixBandPart()
>>> x = np.ones([2, 4, 4]).astype(np.float32)
>>> output = matrix_band_part(Tensor(x), 2, 1)
>>> print(output)
[[[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]
 [[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]]
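
For non-negative bounds, position \((i, j)\) is kept when \(i - j \le lower\) and \(j - i \le upper\); a NumPy sketch of one matrix from the example above (the keep-all behavior of negative bounds is omitted):

>>> import numpy as np
>>> i, j = np.indices((4, 4))
>>> band = ((i - j) <= 2) & ((j - i) <= 1)   # lower=2, upper=1
>>> print(band.astype(np.float32))
[[1. 1. 0. 0.]
 [1. 1. 1. 0.]
 [1. 1. 1. 1.]
 [0. 1. 1. 1.]]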
class tinyms.primitives.MatrixDeterminant[source]

Calculates the value of the determinant for one or more square matrices.

Refer to mindspore.ops.det() for more details.

Supported Platforms:

Examples

>>> input_x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> op = ops.MatrixDeterminant()
>>> output = op(input_x)
>>> print(output)
[-16.5 21. ]
class tinyms.primitives.MatrixDiagPartV3(align='RIGHT_LEFT')[source]

Returns the diagonal part of a tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.matrix_diag_part() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3, 4],
...                      [5, 6, 7, 8],
...                      [9, 8, 7, 6]]), mindspore.float32)
>>> k =Tensor(np.array([1, 3]), mindspore.int32)
>>> padding_value = Tensor(np.array(9), mindspore.float32)
>>> matrix_diag_part_v3 = ops.MatrixDiagPartV3(align='RIGHT_LEFT')
>>> output = matrix_diag_part_v3(x, k, padding_value)
>>> print(output)
[[9. 9. 4.]
 [9. 3. 8.]
 [2. 7. 6.]]
>>> print(output.shape)
(3, 3)
class tinyms.primitives.MatrixDiagV3(align='RIGHT_LEFT')[source]

Constructs a diagonal matrix or a batch of diagonal matrices from a given input Tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.matrix_diag() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 9, 0],
...                      [1, 2, 3],
...                      [0, 4, 5]]), mindspore.float32)
>>> k =Tensor(np.array([-1, 1]), mindspore.int32)
>>> num_rows = Tensor(np.array(3), mindspore.int32)
>>> num_cols = Tensor(np.array(3), mindspore.int32)
>>> padding_value = Tensor(np.array(11), mindspore.float32)
>>> matrix_diag_v3 = ops.MatrixDiagV3(align='LEFT_RIGHT')
>>> output = matrix_diag_v3(x, k, num_rows, num_cols, padding_value)
>>> print(output)
[[ 1.  8. 11.]
 [ 4.  2.  9.]
 [11.  5.  3.]]
>>> print(output.shape)
(3, 3)
class tinyms.primitives.MatrixExp[source]

Computes the matrix exponential of a square matrix. Supports batched inputs.

Refer to mindspore.ops.matrix_exp() for more details.

Supported Platforms:

Examples

>>> matrix_exp = ops.MatrixExp()
>>> x = Tensor(np.array([[1, 2], [0, 1]]), mindspore.float32)
>>> output = matrix_exp(x)
>>> print(output)
[[2.7182817 5.436563 ]
 [0.        2.7182817]]
class tinyms.primitives.MatrixInverse(adjoint=False)[source]

Returns the inverse of the input matrix. If the matrix is not invertible, an error may be reported or an unknown result may be returned.

Note

The parameter adjoint currently only supports False, because complex numbers are not supported at present.

Parameters:

adjoint (bool) – An optional bool. Default: False.

Inputs:
  • x (Tensor) - A matrix to be calculated. The matrix must be at least two dimensions, and the last two dimensions must be the same size.

Outputs:

Tensor, has the same type and shape as input x.

Raises:
  • TypeError – If adjoint is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If the last two dimensions of x are not the same size.

  • ValueError – If the dimension of x is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[-0.710504  , -1.1207525],
...                       [-1.7651395 , -1.7576632]],
...                      [[ 0.52412605,  1.9070215],
...                       [ 1.3384849 ,  1.4274558]]]), mindspore.float32)
>>> matrix_inverse = ops.MatrixInverse(adjoint=False)
>>> output = matrix_inverse(x)
>>> print(output)
[[[ 2.4095478  -1.5364188 ]
  [-2.419797    0.9740167 ]]
 [[-0.79111797  1.0569006 ]
  [ 0.74180895 -0.2904787 ]]]
class tinyms.primitives.MatrixLogarithm[source]

Returns the matrix logarithm of one or more square matrices.

Inputs:
  • x (Tensor) - The input tensor with shape \([..., M, M]\). Must be one of the following types: complex64, complex128. Its rank must be between 2 and 7.

Outputs:
  • y (Tensor) - has the same shape and type as input.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not one of: complex64, complex128.

  • ValueError – If the dimension of x is less than 2.

  • ValueError – If the sizes of the last two dimensions of x are not equal.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([[1 + 2j, 2 + 1j], [4 + 1j, 5 + 2j]])
>>> matrix_logarithm = ops.MatrixLogarithm()
>>> y = matrix_logarithm(x)
>>> print(y)
[[0.69155775+1.71618359j 0.64665196-0.34928196j]
 [1.02426074-0.88736831j 1.44677531+0.6400109j ]]
class tinyms.primitives.MatrixPower(n)[source]

Calculates the n-th power of a batch of square matrices. When n equals 0, it returns a group of identity matrices. If n is negative, it computes the inverse of each matrix (if possible) raised to the power of abs(n).

Parameters:

n (int) – The exponent, a required int.

Inputs:
  • x (Tensor) - A 3-D Tensor. Supported data types are float16 and float32. The shape is \((b, m, m)\), representing b square matrices of size \(m \times m\).

Outputs:
  • y (Tensor) - A 3-D Tensor. Data type and shape are the same as x’s.

Raises:
  • TypeError – If the data type of n is not int.

  • TypeError – If the data type of x is neither float32 nor float16.

  • TypeError – If x is not a Tensor.

  • ValueError – If x is not a 3-D tensor.

  • ValueError – If shape[1] and shape[2] of x are not the same.

  • ValueError – If n is negative and the input x contains singular matrices.

Supported Platforms:

Examples

>>> x = Tensor([[[0, 1], [-1, 0]], [[1, 0], [0, -1]]], dtype=ms.float32)
>>> matrix_power = ops.MatrixPower(n=2)
>>> y = matrix_power(x)
>>> print(y)
[[[-1.  0.]
  [-0. -1.]]
 [[ 1.  0.]
  [ 0.  1.]]]
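
For comparison, NumPy's matrix_power applied per batch element reproduces the result above:

>>> import numpy as np
>>> x = np.array([[[0., 1.], [-1., 0.]], [[1., 0.], [0., -1.]]])
>>> print(np.stack([np.linalg.matrix_power(m, 2) for m in x]))
[[[-1.  0.]
  [-0. -1.]]

 [[ 1.  0.]
  [ 0.  1.]]]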
class tinyms.primitives.MatrixSetDiagV3(align='RIGHT_LEFT')[source]

Updates the diagonal part of a batched tensor. It takes a Tensor x and diagonal as input and returns a Tensor in which the specified diagonal values in the innermost matrices are replaced by the values in diagonal.

Diagonals shorter than max_diag_len need to be padded, where max_diag_len is the length of the longest diagonal. The dimension \(shape[-2]\) of diagonal must be equal to num_diags, calculated by \(num\_diags = k[1] - k[0] + 1\). The dimension \(shape[-1]\) of diagonal must be equal to the longest diagonal length max_diag_len, calculated by \(max\_diag\_len = min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\).

Assume x is an n-D Tensor with shape \((d_1, d_2, ..., d_{n-2}, d_{n-1}, d_n)\). If k is an integer or \(k[0] == k[1]\), diagonal is an (n-1)-D Tensor with shape \((d_1, d_2, ..., d_{n-2}, max\_diag\_len)\) Otherwise, it has the same rank as x with shape \((d_1, d_2, ..., d_{n-2}, num\_diags, max\_diag\_len)\).

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

align (str, optional) –

Specifies how superdiagonals and subdiagonals should be aligned. Supported values: “RIGHT_LEFT”, “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Default: “RIGHT_LEFT”.

  • When set to “RIGHT_LEFT”, the alignment of superdiagonals will be towards the right side (padding the row on the left), while subdiagonals will be towards the left side (padding the row on the right)

  • When set to “LEFT_RIGHT”, the alignment of superdiagonals will be towards the left side (padding the row on the right), while subdiagonals will be towards the right side (padding the row on the left)

  • When set to “LEFT_LEFT”, the alignment of both superdiagonals and subdiagonals will be towards the left side(padding the row on the right).

  • When set to “RIGHT_RIGHT”, the alignment of both superdiagonals and subdiagonals will be towards the right side(padding the row on the left).

Inputs:
  • x (Tensor) - A n-D Tensor, where \(n >= 2\).

  • diagonal (Tensor) - A Tensor with the same dtype as x. Its rank depends on k. If k is an integer or \(k[0] == k[1]\), its dimension is \(n-1\). Otherwise, it has dimension \(n\).

  • k (Tensor) - Diagonal offset(s), a Tensor of type int32. k can either be a single integer, which represents a single diagonal, or a pair of integers that specify the low and high ends of a matrix band; in this case, k[0] must not be greater than k[1]. The value of k is restricted to the range \((-x.shape[-2], x.shape[-1])\). k must be a constant Tensor in Graph mode.

    • k > 0 refers to a superdiagonal.

    • k = 0 refers to the main diagonal.

    • k < 0 refers to subdiagonals.

Outputs:

Tensor. The same type and shape as x.

Raises:
  • TypeError – If any input is not Tensor.

  • TypeError – If input x and diagonal are not the same dtype.

  • TypeError – If k is not int32 dtype.

  • ValueError – If align is not a string or not in the valid range.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If the rank of x is not greater than or equal to 2.

  • ValueError – If size of k is not equal to 1 or 2.

  • ValueError – If k[1] is not greater than or equal to k[0] when the size of k is 2.

  • ValueError – If the rank of diagonal does not match the rank of the input x.

  • ValueError – If the shape of diagonal does not match the shape of the input x.

  • ValueError – If the diagonal \(shape[-2]\) is not equal to num_diags calculated by \(k[1] - k[0] + 1\) .

  • ValueError – If the value of k is not in \((-x.shape[-2], x.shape[-1])\).

  • ValueError – If the diagonal \(shape[-1]\) is not equal to the max_diag_len calculated by \(min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\) .

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[7, 7, 7, 7],
...                      [7, 7, 7, 7],
...                      [7, 7, 7, 7]]), mindspore.float32)
>>> diagonal = Tensor(np.array([[0, 9, 1],
...                             [6, 5, 8],
...                             [1, 2, 3],
...                             [4, 5, 0]]), mindspore.float32)
>>> k =Tensor(np.array([-1, 2]), mindspore.int32)
>>> matrix_set_diag_v3 = ops.MatrixSetDiagV3(align='RIGHT_LEFT')
>>> output = matrix_set_diag_v3(x, diagonal, k)
>>> print(output)
[[1. 6. 9. 7.]
 [4. 2. 5. 1.]
 [7. 5. 3. 8.]]
>>> print(output.shape)
(3, 4)
class tinyms.primitives.MatrixSolve(adjoint=False)[source]

Solves systems of linear equations.

Parameters:

adjoint (bool, optional) – Indicates whether the adjoint (conjugate transpose) of matrix is used during the computation; if False, matrix is used as is. Default: False.

Inputs:
  • matrix (Tensor) - A tensor of shape \((..., M, M)\), is a matrix of coefficients for a system of linear equations.

  • rhs (Tensor) - A tensor of shape \((..., M, K)\), is a matrix of the resulting values of a system of linear equations. rhs must have the same type as matrix.

Outputs:

Tensor, a matrix composed of solutions to a system of linear equations, which has the same type and shape as rhs.

Raises:
  • TypeError – If adjoint is not the type of bool.

  • TypeError – If the type of matrix is not one of the following dtype: mstype.float16, mstype.float32, mstype.float64, mstype.complex64, mstype.complex128.

  • TypeError – If the type of matrix is not the same as that of rhs.

  • ValueError – If the rank of matrix is less than 2.

  • ValueError – If the dimension of matrix is not the same as that of rhs.

  • ValueError – If the inner-most 2 dimensions of matrix are not the same.

  • ValueError – If the inner-most 2 dimensions of rhs do not match matrix.

Supported Platforms:

Ascend CPU

Examples

>>> matrix = Tensor(np.array([[1.0  , 4.0],
...                       [2.0 , 7.0]]), mindspore.float32)
>>> rhs = Tensor(np.array([[1.0]  , [3.0]]), mindspore.float32)
>>> matrix_solve = ops.MatrixSolve(adjoint = False)
>>> output = matrix_solve(matrix, rhs)
>>> print(output)
[[5.0], [-1.0]]
class tinyms.primitives.MatrixSolveLs(fast=True)[source]

Solves one or more linear least-squares problems.

If fast is True, the solution is computed by solving the normal equations using Cholesky decomposition. If fast is False, an algorithm based on the numerically robust complete orthogonal decomposition is used; this path is typically 6-7 times slower than the fast path, and l2_regularizer is ignored.

Parameters:

fast (bool) – An optional bool. Defaults to True.

Inputs:
  • matrix (Tensor) - A Tensor. Must be one of the following data types: float64, float32, complex64, complex128. Shape is \((*, M, N)\).

  • rhs (Tensor) - A Tensor. Must have the same data type as matrix. Shape is \((*, M, K)\). matrix and rhs should have the same dimensions except the last one.

  • l2_regularizer (Tensor) - A Tensor of type float64. Scalar tensor.

Outputs:

Tensor of shape \((*, N, K)\) with the same data type as matrix.

Raises:
  • TypeError – If matrix, rhs or l2_regularizer is not tensor.

  • TypeError – If either of matrix and rhs is not float32, float64, complex64 or complex128.

  • TypeError – If l2_regularizer is not float64.

  • TypeError – If fast is not bool.

  • ValueError – If dimensions of matrix or rhs is less than 2.

  • ValueError – If the shape of matrix does not match the shape of rhs.

Supported Platforms:

CPU

Examples

>>> matrix_solve_ls = ops.MatrixSolveLs(fast=True)
>>> matrix = Tensor([[3, 0, 0, 0], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]], mstype.float32)
>>> rhs = Tensor(np.array([[4], [2], [4], [2]]), mstype.float32)
>>> l2 = Tensor(0.0, mstype.float64)
>>> output = matrix_solve_ls(matrix, rhs, l2)
>>> print(output)
[[ 1.3333334]
 [-0.6666667]
 [ 2.6666665]
 [-1.3333333]]
class tinyms.primitives.MatrixTriangularSolve(lower=True, adjoint=False)[source]

Returns a new tensor with the solution of a linear equation system with an upper or lower triangular matrix.

Note

Only GPU platforms now support the broadcast mechanism.

Parameters:
  • lower (bool, optional) – If True, the innermost matrices in matrix are lower triangular. Default: True.

  • adjoint (bool, optional) – Indicates whether the adjoint (conjugate transpose) of matrix is used during the computation; if False, matrix is used as is. Default: False.

Inputs:
  • matrix (Tensor) - Tensor of shape \((*, M, M)\), with float32, float64, complex64 and complex128 data type.

  • rhs (Tensor) - Tensor of shape \((*, M, N)\), with float32, float64, complex64 and complex128 data type.

Outputs:

Tensor, has the shape of \((*, M, N)\) and the same data type as matrix.

Raises:
  • TypeError – If matrix or rhs is not a Tensor.

  • TypeError – If lower or adjoint is not bool.

  • ValueError – For GPU platform, if the batch sizes of matrix and rhs do not satisfy broadcasting rules. For other platforms, if the batch sizes of matrix and rhs are not equal.

  • ValueError – If the inner-most 2 dimensions of matrix are not equal.

  • ValueError – If the second-last dimensions of matrix and rhs are not equal.

Supported Platforms:

Ascend GPU CPU

Examples

>>> matrix_triangular_solve = ops.MatrixTriangularSolve(lower=True, adjoint=False)
>>> matrix = np.array([[3, 0, 0, 0], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
>>> rhs = np.array([[1, 0],[2, 2],[1, 5],[0, 3]])
>>> output = matrix_triangular_solve(Tensor(matrix, mindspore.float32), Tensor(rhs, mindspore.float32))
>>> print(output)
[[ 0.33333334  0.        ]
 [ 1.3333333   2.        ]
 [ 0.6666666   5.        ]
 [-2.3333333  -4.        ]]
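
With lower=True and adjoint=False this is a plain forward substitution, so SciPy's solve_triangular (if available) reproduces the result of the example above:

>>> from scipy.linalg import solve_triangular
>>> y = solve_triangular(matrix.astype(np.float64), rhs.astype(np.float64), lower=True)
>>> print(np.allclose(y, output.asnumpy(), atol=1e-5))
True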
class tinyms.primitives.MaxPool(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

Max pooling operation.

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents not only the height of movement but also the width of movement, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value of pad mode is “same” or “valid”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top, bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If kernel_size or strides is neither int nor tuple.

  • ValueError – If pad_mode is neither ‘valid’ nor ‘same’ (case-insensitive).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If length of shape of input is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_op = ops.MaxPool(pad_mode="VALID", kernel_size=2, strides=1)
>>> output = maxpool_op(x)
>>> print(output)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
class tinyms.primitives.MaxPool3D(kernel_size=1, strides=1, pad_mode='VALID', pad_list=0, ceil_mode=None, data_format='NCDHW')[source]

Applies a 3D max pooling over an input Tensor which can be regarded as a composition of 3D planes.

Typically the input is of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows:

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value of pad mode is “same”, “valid” or “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top, bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of “pad” will be padded to the input Tensor borders. “pad_list” must be greater than or equal to 0.

  • pad_list (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equals to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • ceil_mode (Union[bool, None]) – Whether to use ceil instead of floor to calculate the output shape. Only effective in “pad” mode. When pad_mode is “pad” and ceil_mode is None, ceil_mode will be set to False. Default: None.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Data type must be float16, float32 or float64.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the data type of x.

Raises:
  • TypeError – If kernel_size or strides is neither an int nor a tuple.

  • TypeError – If pad_mode or data_format is not a string.

  • ValueError – If numbers in kernel_size or strides are not positive.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad_mode is ‘same’ or ‘valid’ while ceil_mode is not None.

  • ValueError – If kernel_size or strides is a tuple whose length is not equal to 3.

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float32)
>>> max_pool3d = ops.MaxPool3D(kernel_size=2, strides=1, pad_mode="valid")
>>> output = max_pool3d(x)
>>> print(output)
[[[[[10. 11.]]]
  [[[22. 23.]]]]]
class tinyms.primitives.MaxPool3DWithArgmax(ksize, strides, pads, dilation=(1, 1, 1), ceil_mode=False, data_format='NCDHW', argmax_type=mindspore.int64)[source]

Performs a 3D max pooling on the input Tensor and returns both max values and indices.

Typically the input is a Tensor with shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given ksize \(ks = (d_{ker}, h_{ker}, w_{ker})\) and strides \(s = (s_0, s_1, s_2)\), the operation is as follows.

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]

The output is a Tensor with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\) and its depth, height and width are:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \frac{D_{in} + 2 \times \text{pads}[0] - \text{dilation}[0] \times (\text{ksize}[0] - 1) - 1} {\text{stride}[0]} + 1 \\ H_{out} = \frac{H_{in} + 2 \times \text{pads}[1] - \text{dilation}[1] \times (\text{ksize}[1] - 1) - 1} {\text{stride}[1]} + 1 \\ W_{out} = \frac{W_{in} + 2 \times \text{pads}[2] - \text{dilation}[2] \times (\text{ksize}[2] - 1) - 1} {\text{stride}[2]} + 1 \\ \end{array}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively.

  • pads (Union[int, tuple[int]]) – An int number that represents the padding applied to the depth, height and width of the input, or a tuple of three int numbers that represent depth, height and width of padding respectively.

  • dilation (Union[int, tuple[int]]) – Controls the spacing between the kernel points. Default: (1, 1, 1).

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

  • argmax_type (mindspore.dtype) – The dtype for argmax. Default: mstype.int64.

Inputs:
  • x (Tensor) - Tensor of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

Outputs:

Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int32 or int64.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 5.

  • TypeError – If ksize , strides , pads or dilation is not int or tuple.

  • ValueError – If ksize or strides is less than 1.

  • ValueError – If pads is less than 0.

  • ValueError – If data_format is not ‘NCDHW’.

  • ValueError – If argmax_type is not mindspore.int64 or mindspore.int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(2 * 1 * 2 * 2 * 2).reshape((2, 1, 2, 2, 2)), mindspore.float32)
>>> max_pool3d_with_arg_op = ops.MaxPool3DWithArgmax(ksize=2, strides=1, pads=1)
>>> output_tensor, argmax = max_pool3d_with_arg_op(x)
>>> print(output_tensor.shape)
(2, 1, 3, 3, 3)
>>> print(argmax.shape)
(2, 1, 3, 3, 3)
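
The spatial size 3 in the output shape (2, 1, 3, 3, 3) follows from the shape formula above; a quick check in plain Python:

>>> d_in, k, s, p, dil = 2, 2, 1, 1, 1
>>> print((d_in + 2 * p - dil * (k - 1) - 1) // s + 1)
3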
class tinyms.primitives.MaxPoolWithArgmax(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

ops.MaxPoolWithArgmax is deprecated from version 2.0 and will be removed in a future version, use ops.MaxPoolWithArgmaxV2 instead.

Supported Platforms:

Deprecated

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_arg_op = ops.MaxPoolWithArgmax(pad_mode="VALID", kernel_size=2, strides=1)
>>> output_tensor, argmax = maxpool_arg_op(x)
>>> print(output_tensor)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
class tinyms.primitives.MaxPoolWithArgmaxV2(kernel_size, strides=None, pads=0, dilation=(1, 1), ceil_mode=False, argmax_type=mindspore.int64)[source]

Performs max pooling on the input Tensor and returns both max values and indices.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and argmax value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents not only the height of movement but also the width of movement, or a tuple of two int numbers that represent height and width of movement respectively. Default: None, meaning that strides = kernel_size.

  • pads (Union[int, tuple[int]]) – An int number that represents the padding applied to the height and width of the input, or a tuple of two int numbers that represent height and width of padding respectively. Default: 0.

  • dilation (Union[int, tuple[int]]) – Controls the spacing between the kernel points. Default: (1, 1).

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • argmax_type (mindspore.dtype) – The dtype for argmax. Default: mstype.int64.

Inputs:
  • x (Tensor) - Tensor of shape \((N_{in}, C_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

Outputs:

Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int32 or int64.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 4.

  • TypeError – If kernel_size , strides , pads or dilation is not int or tuple.

  • ValueError – If kernel_size, strides or dilation is less than 1.

  • ValueError – If pads is less than 0.

  • ValueError – If argmax_type is not mindspore.int64 or mindspore.int32.

  • TypeError – If ceil_mode is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(20 * 16 * 50 * 32).reshape((20, 16, 50, 32)), mindspore.float32)
>>> maxpool_arg_v2_op = ops.MaxPoolWithArgmaxV2(kernel_size=(3, 2), strides=(2, 1))
>>> output_tensor, argmax = maxpool_arg_v2_op(x)
>>> print(output_tensor.shape)
(20, 16, 24, 31)
>>> print(argmax.shape)
(20, 16, 24, 31)
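
As a cross-check on the printed shapes, the spatial output size can be reproduced with the usual pooling shape arithmetic. The helper below is a minimal sketch (pool_out_size is a hypothetical name, assuming floor rounding unless ceil_mode is set):

>>> import math
>>> def pool_out_size(size, kernel, stride, pad=0, dilation=1, ceil_mode=False):
...     # effective kernel extent grows with dilation
...     eff = dilation * (kernel - 1) + 1
...     num = size + 2 * pad - eff
...     return (math.ceil(num / stride) if ceil_mode else num // stride) + 1
...
>>> pool_out_size(50, 3, 2), pool_out_size(32, 2, 1)
(24, 31)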
class tinyms.primitives.MaxUnpool2D(ksize, strides=0, pads=0, output_shape=(), data_format='NCHW')[source]

Calculates the partial inverse of MaxPool2D operation.

Since MaxPool2D loses non-maximal values, it is not fully invertible. Therefore, MaxUnpool2D takes the output of MaxPool2D, including the indices of the maximal values, and computes a partial inverse where all non-maximal values are set to zero. Typically the input is of shape \((N, C, H_{in}, W_{in})\) , the output is of shape \((N, C, H_{out}, W_{out})\) , the operation is as follows:

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H_{in} - 1) \times strides[0] - 2 \times pads[0] + ksize[0] \\ W_{out} = (W_{in} - 1) \times strides[1] - 2 \times pads[1] + ksize[1] \\ \end{array}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • strides (Union[int, tuple[int]], optional) –

    The strides of kernel moving. If strides is 0 or (0, 0), then strides equal to ksize . Default: 0.

    • An int number that represents the height and width of movement are both strides .

    • A tuple of two int numbers that represent height and width of movement respectively.

  • pads (Union[int, tuple[int]], optional) –

    The pad value to be filled. Default: 0.

    • If pads is an integer, the paddings of height and width are the same, equal to pads.

    • If pads is a tuple of two integers, the padding of height and width equal to pads[0] and pads[1] correspondingly.

  • output_shape (tuple[int], optional) –

    The target output size is an optional input. Default: ().

    • If \(output\_shape == ()\), then the shape of the output is computed by ksize, strides and pads.

    • If \(output\_shape != ()\) , then output_shape must be \((N, C, H, W)\) or \((N, H, W, C)\) and output_shape must belong to \([(N, C, H_{out} - strides[0], W_{out} - strides[1]), (N, C, H_{out} + strides[0], W_{out} + strides[1])]\).

  • data_format (str, optional) – The optional value for data format. Currently support ‘NCHW’ and ‘NHWC’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, H_{in}, W_{in})\) or \((N, H_{in}, W_{in}, C)\).

  • argmax (Tensor) - Max values’ index represented by the argmax. Tensor of shape must be same with input x. Values of argmax must belong to \([0, H_{in} \times W_{in} - 1]\). Data type must be in int32 or int64.

Outputs:

Tensor, with shape \((N, C, H_{out}, W_{out})\) or \((N, H_{out}, W_{out}, C)\). Has the same data type with x.

Raises:
  • TypeError – If data type of x or argmax is not supported.

  • TypeError – If ksize, strides or pads is neither int nor tuple.

  • ValueError – If numbers in strides (also support 0 and (0, 0)) or ksize is not positive.

  • ValueError – If numbers in pads is negative.

  • ValueError – If ksize, strides or pads is a tuple whose length is not equal to 2.

  • ValueError – If data_format is not a str or is neither NCHW nor NHWC.

  • ValueError – If the length of output_shape is neither 0 nor 4.

  • ValueError – If output_shape is not close to output size computed by attr ksize, strides and pads.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[0, 1], [8, 9]]]]).astype(np.float32))
>>> argmax = Tensor(np.array([[[[0, 1], [2, 3]]]]).astype(np.int64))
>>> maxunpool2d = ops.MaxUnpool2D(ksize=1, strides=1, pads=0)
>>> output = maxunpool2d(x, argmax)
>>> print(output.asnumpy())
[[[[0. 1.]
    [8. 9.]]]]
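
Conceptually, the operator scatters each input value to the flat position recorded in argmax and fills every other position with zero. A minimal NumPy sketch of this idea (unpool2d_ref is a hypothetical helper, assuming flat indices over the output plane as in the example above):

>>> import numpy as np
>>> def unpool2d_ref(x, argmax, out_h, out_w):
...     # scatter the pooled values into a zero plane at their argmax positions
...     out = np.zeros((out_h * out_w,), dtype=x.dtype)
...     out[argmax.reshape(-1)] = x.reshape(-1)
...     return out.reshape(out_h, out_w)
...
>>> unpool2d_ref(np.array([[0., 1.], [8., 9.]], dtype=np.float32),
...              np.array([[0, 1], [2, 3]]), 2, 2)
array([[0., 1.],
       [8., 9.]], dtype=float32)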
class tinyms.primitives.MaxUnpool3D(ksize, strides=0, pads=0, output_shape=(), data_format='NCDHW')[source]

Computes the inverse of mindspore.ops.MaxPool3D.

MaxUnpool3D keeps the maximal value and set all position of non-maximal values to zero. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), the output is of shape \((N, C, D_{out}, H_{out}, W_{out})\), the operation is as follows.

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D_{in} - 1) \times strides[0] - 2 \times pads[0] + ksize[0] \\ H_{out} = (H_{in} - 1) \times strides[1] - 2 \times pads[1] + ksize[1] \\ W_{out} = (W_{in} - 1) \times strides[2] - 2 \times pads[2] + ksize[2] \\ \end{array}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • strides (Union[int, tuple[int]], optional) –

    The distance of kernel moving. Default: 0.

    • If it is an int number, the depth, height and width of movement are all equal to strides.

    • If it is a tuple of three int numbers, they represent depth, height and width of movement respectively.

    • If strides is 0 or (0, 0, 0), then strides equal to ksize.

  • pads (Union[int, tuple[int]], optional) –

    The pad value to be filled. Default: 0.

    • If pads is an integer, the paddings of depth, height and width are the same, equal to pads.

    • If pads is a tuple of three integers, the padding of depth, height and width equal to pads[0], pads[1] and pads[2] correspondingly.

  • output_shape (tuple[int], optional) – The target output size. Default: (). If \(output\_shape == ()\), then the shape of the output is computed by ksize, strides and pads as shown above. If \(output\_shape != ()\), then output_shape format must be \((N, C, D, H, W)\) or \((N, D, H, W, C)\) and output_shape must be in range \([(N, C, D_{out} - strides[0], H_{out} - strides[1], W_{out} - strides[2]), (N, C, D_{out} + strides[0], H_{out} + strides[1], W_{out} + strides[2])]\).

  • data_format (str, optional) – The optional value for data format. Currently support ‘NCDHW’ and ‘NDHWC’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((N, D_{in}, H_{in}, W_{in}, C)\).

  • argmax (Tensor) - Max values’ index. Tensor that has the same shape as x. Values of argmax must be in range \([0, D_{in} \times H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((N, D_{out}, H_{out}, W_{out}, C)\). Has the same data type with x.

Raises:
  • TypeError – If data type of x or argmax is Number.

  • TypeError – If ksize, strides or pads is neither int nor tuple.

  • ValueError – If numbers in strides or ksize is negative.

  • ValueError – If numbers in pads is negative.

  • ValueError – If ksize, strides or pads is a tuple whose length is not equal to 3.

  • ValueError – If data_format is not a str or is neither NCDHW nor NDHWC.

  • ValueError – If the length of output_shape is neither 0 nor 5.

  • ValueError – If output_shape is not close to output size range computed by attr ksize, strides, pads.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[[0, 1], [8, 9]]]]]).astype(np.float32))
>>> argmax = Tensor(np.array([[[[[0, 1], [2, 3]]]]]).astype(np.int64))
>>> maxunpool3d = ops.MaxUnpool3D(ksize=1, strides=1, pads=0)
>>> output = maxunpool3d(x, argmax)
>>> print(output.asnumpy())
[[[[[0. 1.]
    [8. 9.]]]]]
class tinyms.primitives.Maximum[source]

Computes the maximum of input tensors element-wise.

Refer to mindspore.ops.maximum() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> maximum = ops.Maximum()
>>> output = maximum(x, y)
>>> print(output)
[4. 5. 6.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = maximum(x, y)
>>> print(output.dtype)
Float32
class tinyms.primitives.Merge[source]

Merges all input data to one.

One and only one of the inputs must be selected as the output.

Inputs:
  • inputs (Union(Tuple, List)) - The data to be merged. All tuple elements must have the same data type.

Outputs:

tuple. Output is tuple(data, output_index). The data has the same shape as the elements of inputs.

Raises:

TypeError – If inputs is neither Tuple nor list.

Examples

>>> merge = ops.Merge()
>>> input_x = Tensor(np.linspace(0, 8, 8).reshape(2, 4), mindspore.float32)
>>> input_y = Tensor(np.random.randint(-4, 4, (2, 4)), mindspore.float32)
>>> result = merge((input_x, input_y))
class tinyms.primitives.Meshgrid(indexing='xy')[source]

Generates coordinate matrices from given coordinate tensors.

Refer to mindspore.ops.meshgrid() for more details.

Parameters:

indexing (str, optional) – Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. Valid options: ‘xy’ or ‘ij’. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for ‘xy’ indexing and (M, N, P) for ‘ij’ indexing.

Inputs:
  • input (Union[tuple]) - A Tuple of N 1-D Tensor objects. The length of input should be greater than 1. The data type is Number.

Outputs:

Tensors, a tuple of N N-D Tensor objects. The data type is the same as the inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
>>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
>>> inputs = (x, y, z)
>>> meshgrid = ops.Meshgrid(indexing='xy')
>>> output = meshgrid(inputs)
>>> print(output)
(Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5]],
  [[6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6]],
  [[7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]]]))
class tinyms.primitives.Minimum[source]

Computes the minimum of input tensors element-wise.

Refer to mindspore.ops.minimum() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> minimum = ops.Minimum()
>>> output = minimum(x, y)
>>> print(output)
[1. 2. 3.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = minimum(x, y)
>>> print(output.dtype)
Float32
class tinyms.primitives.MirrorPad(mode='REFLECT')[source]

Pads the input tensor according to the paddings and mode.

Parameters:

mode (str) – Specifies the padding mode. The optional values are “REFLECT” and “SYMMETRIC”. Default: “REFLECT”.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • paddings (Tensor) - Paddings requires constant tensor. The value of paddings is a matrix(list), and its shape is \((N, 2)\). N is the rank of input data. All elements of paddings are int type. For the input in the D th dimension, paddings[D, 0] indicates how many sizes to be extended ahead of the input tensor in the D th dimension, and paddings[D, 1] indicates how many sizes to be extended behind the input tensor in the D th dimension. Both paddings[D, 0] and paddings[D, 1] must be no greater than input_x.dim_size(D) (or input_x.dim_size(D) - 1) if mode is SYMMETRIC (if REFLECT, respectively).

Outputs:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it uses a way of symmetrical copying through the axis of symmetry to fill in. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]]. For a more intuitive understanding, please see the example below.

  • If mode is “SYMMETRIC”, the filling method is similar to the “REFLECT”. It is also copied according to the symmetry axis, except that it includes the symmetry axis. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]]. For a more intuitive understanding, please see the example below.

Raises:
  • TypeError – If input_x or paddings is not a Tensor.

  • TypeError – If mode is not a str.

  • ValueError – If paddings.size is not equal to 2 * rank of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, nn, ops
>>> # case1: mode="REFLECT"
>>> class Net(nn.Cell):
...    def __init__(self, mode):
...        super(Net, self).__init__()
...        self.pad = ops.MirrorPad(mode=mode)
...        self.paddings = Tensor([[1, 1], [2, 2]])
...    def construct(self, input_x):
...        return self.pad(input_x, self.paddings)
...
>>> input_x = Tensor([[1,2,3], [4,5,6], [7,8,9]])
>>> pad = Net("REFLECT")
>>> output = pad(input_x)
>>> print(output)
[[6 5 4 5 6 5 4]
 [3 2 1 2 3 2 1]
 [6 5 4 5 6 5 4]
 [9 8 7 8 9 8 7]
 [6 5 4 5 6 5 4]]
>>> # case2: mode="SYMMETRIC"
>>> pad = Net("SYMMETRIC")
>>> output = pad(input_x)
>>> print(output)
[[2 1 1 2 3 3 2]
 [2 1 1 2 3 3 2]
 [5 4 4 5 6 6 5]
 [8 7 7 8 9 9 8]
 [8 7 7 8 9 9 8]]
class tinyms.primitives.Mish[source]

Computes MISH(A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.

The function is shown as follows:

\[\text{output} = x \times \tanh(\log(1 + \exp(x)))\]

See more details in A Self Regularized Non-Monotonic Neural Activation Function.

Inputs:
  • x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> mish = ops.Mish()
>>> output = mish(x)
>>> print(output.shape)
(2, 3)
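
Since the example prints only the shape, a scalar value can be checked against the formula directly (a reference computation, not the operator itself):

>>> import math
>>> x0 = 4.0
>>> round(x0 * math.tanh(math.log1p(math.exp(x0))), 4)  # x * tanh(softplus(x))
3.9974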
class tinyms.primitives.Mod[source]

Computes the remainder of dividing the first input tensor by the second input tensor element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} \text{ % } y_{i}\]

Warning

  • The input data does not support 0.

  • When the elements of the input exceed 2048, the accuracy of the operator cannot guarantee a precision of two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If shape is expressed as \((D_1, D_2, ..., D_n)\), then \(D_1 \times D_2 \times ... \times D_n \le 1000000\) and \(n \le 8\).

Inputs:
  • x (Union[Tensor, numbers.Number, bool]) - The first input is a number, a bool or a tensor whose data type is number.

  • y (Union[Tensor, numbers.Number, bool]) - When the first input is a tensor, The second input could be a number, a bool or a tensor whose data type is number. When the first input is a number or a bool the second input must be a tensor whose data type is number.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If neither x nor y is one of the following: Tensor, number, bool.

  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If the shape x and y cannot be broadcasted to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> mod = ops.Mod()
>>> output = mod(x, y)
>>> print(output)
[-1.  1.  0.]
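
As the example shows for -4.0 % 3.0, the result takes the sign of the dividend (truncated division). NumPy's fmod follows the same convention and can serve as a cross-check:

>>> import numpy as np
>>> np.fmod(np.array([-4.0, 5.0, 6.0]), np.array([3.0, 2.0, 3.0]))
array([-1.,  1.,  0.])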
class tinyms.primitives.Mul[source]

Multiplies two tensors element-wise.

Refer to mindspore.ops.mul() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> mul = ops.Mul()
>>> output = mul(x, y)
>>> print(output)
[ 4. 10. 18.]
class tinyms.primitives.MulNoNan[source]

Computes x * y element-wise. If y is zero, no matter what x is, it will return 0.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcasted. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}output_{ij} = \begin{cases} 0, & y_{ij} = 0;\\ x_{ij} * y_{ij}, & otherwise. \end{cases}\end{split}\]

Note

The shapes of x and y should be the same or can be broadcasted. This is noncommutative: if y is NaN or infinite and x is 0, the result will be NaN.

Inputs:
  • x (Union[Tensor]) - The first input is a tensor whose data type is one of int32, int64, float16, float32, float64, complex64, complex128 currently or scalar.

  • y (Union[Tensor]) - The second input is a tensor whose data type is one of int32, int64, float16, float32, float64, complex64, complex128 currently or scalar.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the one with higher precision among the two inputs.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type and shape of two inputs, there are some 0 in y.
>>> x = Tensor(np.array([[-1.0, 6.0, np.inf], [np.nan, -7.0, 4.0]]), mindspore.float32)
>>> y = Tensor(np.array([[-1.0, 4.0, 0], [0, -3.0, 1.0]]), mindspore.float32)
>>> mul_no_nan = ops.MulNoNan()
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 1. 24. 0.]
[ 0. 21. 4.]]
>>> # case 2 : the shape of two inputs is same, there are some 0 in x, y.
>>> x = Tensor(np.array([[-1.0, 6.0, 0], [0, np.nan, 4.0]]), mindspore.float32)
>>> y = Tensor(np.array([[-1.0, 4.0, np.inf], [np.nan, 0, 1.0]]), mindspore.float32)
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 1. 24. nan]
 [nan  0. 4.]]
>>> print(output.dtype)
Float32
>>> # case 3 : the y is a scalar.
>>> x = Tensor(np.array([[-1.0, 6.0, 0], [0, np.nan, 4.0]]), mindspore.float32)
>>> y = Tensor(0, mindspore.float32)
>>> output = mul_no_nan(x, y)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]]
class tinyms.primitives.MultiMarginLoss(p=1, margin=1.0, reduction='mean')[source]

Creates a loss function that minimizes the hinge loss for multi-class classification tasks. The loss is calculated by comparing the input and output of the function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.multi_margin_loss() for more details.

Parameters:
  • p (int, optional) – The norm degree for pairwise distance. Should be 1 or 2. Default: 1.

  • margin (float, optional) – A parameter to change pairwise distance. Default: 1.0.

  • reduction (str, optional) –

    Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

    • ’none’: no reduction will be applied.

    • ’mean’: the sum of the output will be divided by the number of elements in the output.

    • ’sum’: the output will be summed.

Inputs:
  • inputs (Tensor) - Input , with shape \((N, C)\). Data type only support float32, float16 or float64.

  • target (Tensor) - Ground truth labels, with shape \((N,)\). Data type only support int64. The value of target should be non-negative, less than C.

  • weight (Tensor) - The rescaling weight to each class with shape \((C,)\). Data type only support float16, float32 or float64.

Outputs:

Tensor, When reduction is ‘none’, the shape is \((N,)\). Otherwise, it is a scalar. Has the same data type with inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones(shape=[3, 3]), mindspore.float32)
>>> target = Tensor(np.array([1, 2, 1]), mindspore.int64)
>>> weight = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> loss = ops.MultiMarginLoss()
>>> output = loss(x, target, weight)
>>> print(output)
0.6666667
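
The printed value can be reproduced by hand from the hinge-loss definition: with p=1, unit weights and 'mean' reduction, each wrong class of a sample contributes \(\max(0, margin - x_{target} + x_{j}) / C\):

>>> # every entry of x is 1, so each of the 2 wrong classes contributes
>>> # max(0, 1 - 1 + 1) / 3 = 1/3 per sample; the mean over samples stays 2/3
>>> round(2 / 3, 7)
0.6666667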
class tinyms.primitives.MultilabelMarginLoss(reduction='mean')[source]

Creates a loss criterion that minimizes the hinge loss for multi-class classification tasks. It takes a 2D mini-batch Tensor \(x\) as input and a 2D Tensor \(y\) containing target class indices.

Refer to mindspore.ops.multilabel_margin_loss() for more details.

Supported Platforms:

Ascend GPU

Examples

>>> loss = ops.MultilabelMarginLoss()
>>> x = Tensor(np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[1, 2, 0, 3], [2, 3, -1, 1]]), mindspore.int32)
>>> output = loss(x, target)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 0.325), Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1], [0, 0, 1, 1]]))
class tinyms.primitives.Multinomial(seed=0, seed2=0, dtype=mindspore.int32)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of tensor input.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

  • dtype (dtype) – The type of output, must be int32 or int64. Default: int32.

Inputs:
  • x (Tensor) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions.

  • num_samples (int) - number of samples to draw, must be a nonnegative number.

Outputs:

Tensor with the same rows as x, each row has num_samples sampled indices.

Raises:
  • TypeError – If neither seed nor seed2 is an int.

  • TypeError – If dtype of num_samples is not int.

  • TypeError – If dtype is not int32 or int64.

  • ValueError – If seed or seed2 is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[0., 9., 4., 0.]], mstype.float32)
>>> multinomial = ops.Multinomial(seed=10)
>>> output = multinomial(x, 2)
>>> print(output)
[[1 1]]
class tinyms.primitives.MultinomialWithReplacement(numsamples, replacement=False)[source]

Returns a tensor where each row contains numsamples indices sampled from the multinomial distribution with replacement. It differs from Multinomial in that it allows the same outcome to be chosen multiple times.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.multinomial_with_replacement() for more details.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • numsamples (int) – number of samples to draw, must be a nonnegative number.

  • replacement (bool, optional) – Whether to draw with replacement or not. Default: False.

Inputs:
  • x (Tensor) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions.

  • seed (Tensor) - If seed is set to -1, and offset is set to 0, the random number generator is seeded by a random seed. Otherwise, it is seeded by the given seed. Supported dtype: int64.

  • offset (Tensor) - Offset used to avoid seed collision. Supported dtype: int64.

Outputs:

Tensor with the same rows as x, each row has numsamples sampled indices.

Supported Platforms:

CPU

Examples

>>> x = Tensor([[0., 9., 4., 0.]], mstype.float32)
>>> seed = Tensor(2, mstype.int64)
>>> offset = Tensor(5, mstype.int64)
>>> multinomialwithreplacement = ops.MultinomialWithReplacement(numsamples=2,replacement=True)
>>> output = multinomialwithreplacement(x, seed, offset)
>>> print(output)
[[1 1]]
class tinyms.primitives.Mvlgamma(p)[source]

Calculates the multivariate log-gamma function element-wise for a given dimension p.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.mvlgamma() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)
>>> op = ops.Mvlgamma(p=3)
>>> y = op(x)
>>> print(y)
[[ 2.694925   5.402975   9.140645 ]
 [ 5.402975   1.5963125 13.640454 ]]
class tinyms.primitives.NLLLoss(reduction='mean')[source]

Gets the negative log likelihood loss between logits and labels.

The nll loss with reduction=none can be described as:

\[\ell(x, t)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=-w_{t_{n}} x_{n, t_{n}}, \quad w_{c}=\text{weight}[c]\]

where \(x\) is the logits, \(t\) is the labels, \(w\) is the weight, N is the batch size, \(c\) belonging to [0, C-1] is class index, where \(C\) is the number of classes.

If reduction is not ‘none’ (default ‘mean’), then

\[\begin{split}\ell(x, t)=\left\{\begin{array}{ll} \sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{t n}} l_{n}, & \text { if reduction }=\text { 'mean'; } \\ \sum_{n=1}^{N} l_{n}, & \text { if reduction }=\text { 'sum' } \end{array}\right.\end{split}\]
Parameters:

reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type only supports float32 or float16.

  • labels (Tensor) - Ground truth labels, with shape \((N,)\), where each value belong to \([0, C-1]\). Data type only supports int32 or int64.

  • weight (Tensor) - The rescaling weight to each class, with shape \((C,)\) and data type only supports float32 or float16.

Outputs:

Tuple of 2 tensors composed with loss and total_weight.

  • loss (Tensor) - When reduction is ‘none’ and logits is a 2D tensor, the loss shape is \((N,)\). Otherwise, the loss is a scalar. The data type is the same with input’s.

  • total_weight (Tensor) - The total_weight is a scalar. The data type is the same with weight’s.

Raises:
  • TypeError – If dtype of logits or weight is neither float16 nor float32.

  • TypeError – If dtype of labels is neither int32 nor int64.

  • ValueError – If logits is not a one- or two-dimensional Tensor, or if labels or weight is not a one-dimensional Tensor. When logits is a two-dimensional Tensor, its first dimension must equal the size of labels and its second dimension must equal the size of weight. When logits is a one-dimensional Tensor, the sizes of logits, labels and weight must all be equal.

  • ValueError – If the value of labels exceed \([0, C-1]\), where \(C\) is the number of classes.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[0.5488135, 0.71518934],
...                           [0.60276335, 0.5448832],
...                           [0.4236548, 0.6458941]]).astype(np.float32))
>>> labels = Tensor(np.array([0, 0, 0]).astype(np.int32))
>>> weight = Tensor(np.array([0.3834415, 0.79172504]).astype(np.float32))
>>> nll_loss = ops.NLLLoss(reduction="mean")
>>> loss, weight = nll_loss(logits, labels, weight)
>>> print(loss)
-0.52507716
>>> print(weight)
1.1503246
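
Both printed values follow from the formulas above; a NumPy cross-check (a reference computation, not the operator itself):

>>> import numpy as np
>>> logits = np.array([[0.5488135, 0.71518934],
...                    [0.60276335, 0.5448832],
...                    [0.4236548, 0.6458941]], dtype=np.float32)
>>> weight = np.array([0.3834415, 0.79172504], dtype=np.float32)
>>> labels = np.array([0, 0, 0])
>>> w = weight[labels]                          # per-sample weights w_{t_n}
>>> losses = -w * logits[np.arange(3), labels]  # l_n = -w_{t_n} * x_{n, t_n}
>>> bool(np.isclose(losses.sum() / w.sum(), -0.52507716))
True
>>> bool(np.isclose(w.sum(), 1.1503246))
True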
class tinyms.primitives.NMSWithMask(iou_threshold=0.5)[source]

Non-maximum Suppression. When an object detection problem is performed in the computer vision field, object detection algorithms generate multiple bounding boxes. The box with the highest score is kept, the overlap between the other boxes and the current box is calculated, and boxes are deleted based on a certain threshold (IOU). On the Ascend platform, the input box score is ignored and boxes are selected only based on the IOU between them, which means that if you want to remove boxes with lower scores, you need to sort the input boxes by score in descending order in advance. The IOU is as follows:

\[\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\]

Warning

Only supports up to 2864 input boxes at one time.

Parameters:

iou_threshold (float) – Specifies the threshold of overlap boxes with respect to IOU. Default: 0.5.

Inputs:
  • bboxes (Tensor) - The shape of tensor is \((N, 5)\). Input bounding boxes. N is the number of input bounding boxes. Every bounding box contains 5 values, the first 4 values are the coordinates(x0, y0, x1, y1) of bounding box which represents the point of top-left and bottom-right, and the last value is the score of this bounding box. The data type must be float16 or float32.

Outputs:

tuple[Tensor], tuple of three tensors, they are output_boxes, output_idx and selected_mask.

  • output_boxes (Tensor) - The shape of tensor is \((N, 5)\). On GPU and CPU platform, it is a sorted list of bounding boxes by sorting the input bboxes in descending order of score. On Ascend platform, it is same as input bboxes.

  • output_idx (Tensor) - The shape of tensor is \((N,)\). The indexes list of output_boxes.

  • selected_mask (Tensor) - The shape of tensor is \((N,)\). A mask list of valid output bounding boxes. Apply this mask on output_boxes to get the list of bounding boxes after non-max suppression calculation, or apply this mask on output_idx to get the indexes list of bounding boxes after non-max suppression calculation.

Raises:
  • ValueError – If the iou_threshold is not a float number.

  • ValueError – if the first dimension of input Tensor is less than or equal to 0.

  • TypeError – if the dtype of the bboxes is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bbox = np.array([[100.0, 100.0, 50.0, 68.0, 0.63], [150.0, 75.0, 165.0, 115.0, 0.55],
...                  [12.0, 190.0, 288.0, 200.0, 0.9], [28.0, 130.0, 106.0, 172.0, 0.3]])
>>> bbox[:, 2] += bbox[:, 0]
>>> bbox[:, 3] += bbox[:, 1]
>>> inputs = Tensor(bbox, mindspore.float32)
>>> nms = ops.NMSWithMask(0.1)
>>> output_boxes, indices, mask = nms(inputs)
>>> indices_np = indices.asnumpy()
>>> print(indices_np[mask.asnumpy()])
[0 1 2]
class tinyms.primitives.NPUAllocFloatStatus[source]

Allocates a flag to store the overflow status.

The flag is a tensor whose shape is \((8,)\) and data type is mindspore.dtype.float32.

Note

Please refer to the Examples of mindspore.ops.NPUGetFloatStatus.

Outputs:

Tensor, has the shape of \((8,)\).

Supported Platforms:

Ascend

Examples

>>> alloc_status = ops.NPUAllocFloatStatus()
>>> output = alloc_status()
>>> print(output)
[0. 0. 0. 0. 0. 0. 0. 0.]
class tinyms.primitives.NPUClearFloatStatus[source]

Clears the flag which stores the overflow status.

Note

The flag is in a register on the Ascend device. It will be reset and can not be reused again after the NPUClearFloatStatus is called. In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator, you need to ensure that the NPUClearFloatStatus and your compute have been executed. We use mindspore.ops.Depend to ensure the execution order.

Please refer to the Examples of mindspore.ops.NPUGetFloatStatus.

Inputs:
  • x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape as x. All the elements in the tensor will be zero.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import ops
>>> from mindspore.common import dtype as mstype
>>> from mindspore.common.tensor import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.alloc_status = ops.NPUAllocFloatStatus()
...         self.get_status = ops.NPUGetFloatStatus()
...         self.clear_status = ops.NPUClearFloatStatus()
...         self.sub = ops.Sub()
...         self.neg = ops.Neg()
...
...     def construct(self, x):
...         init = self.alloc_status()
...         clear_status = self.clear_status(init)
...         x = ops.depend(x, clear_status)
...         res = self.sub(x, self.neg(x))
...         init = ops.depend(init, res)
...         get_status = self.get_status(init)
...         res = ops.depend(res, get_status)
...         return res
>>>
>>> value = 5
>>> data = np.full((2, 3), value, dtype=np.float16)
>>> x = Tensor(data, dtype=mstype.float16)
>>> net = Net()
>>> res = net(x)
>>> print(res)
[[10. 10. 10.]
 [10. 10. 10.]]
class tinyms.primitives.NPUGetFloatStatus[source]

mindspore.ops.NPUGetFloatStatus updates the flag which is the output tensor of mindspore.ops.NPUAllocFloatStatus with the latest overflow status.

Note

The flag is a tensor whose shape is \((8,)\) and data type is mindspore.dtype.float32. If the sum of the flag equals 0, no overflow has happened. If the sum of the flag is greater than 0, an overflow has happened. In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator, you need to ensure that the NPUClearFloatStatus and your compute have been executed. We use mindspore.ops.Depend to ensure the execution order.

Inputs:
  • x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape as x. All the elements in the tensor will be zero.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import ops
>>> from mindspore.common import dtype as mstype
>>> from mindspore.common.tensor import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.alloc_status = ops.NPUAllocFloatStatus()
...         self.get_status = ops.NPUGetFloatStatus()
...         self.clear_status = ops.NPUClearFloatStatus()
...         self.sub = ops.Sub()
...         self.neg = ops.Neg()
...
...     def construct(self, x):
...         init = self.alloc_status()
...         clear_status = self.clear_status(init)
...         x = ops.depend(x, clear_status)
...         res = self.sub(x, self.neg(x))
...         init = ops.depend(init, res)
...         get_status = self.get_status(init)
...         res = ops.depend(res, get_status)
...         return res
>>>
>>> value = 5
>>> data = np.full((2, 3), value, dtype=np.float16)
>>> x = Tensor(data, dtype=mstype.float16)
>>> net = Net()
>>> res = net(x)
>>> print(res)
[[10. 10. 10.]
 [10. 10. 10.]]
class tinyms.primitives.NanToNum(nan=0.0, posinf=None, neginf=None)[source]

Replaces NaN, positive infinity and negative infinity values in the input Tensor with the values specified by nan, posinf and neginf respectively.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.nan_to_num() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> nan_to_num = ops.NanToNum()
>>> x = Tensor(np.array([float('nan'), float('inf'), -float('inf'), 3.14]), mindspore.float32)
>>> output = nan_to_num(x)
>>> print(output)
[ 0.0000000e+00  3.4028235e+38 -3.4028235e+38  3.1400001e+00]
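
With posinf and neginf left as None, positive and negative infinity are mapped to the largest and smallest finite values of the input dtype, which is what the printed float32 limits show:

>>> import numpy as np
>>> print(np.finfo(np.float32).max, np.finfo(np.float32).min)
3.4028235e+38 -3.4028235e+38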
class tinyms.primitives.Neg[source]

Returns a tensor with negative values of the input tensor element-wise.

Refer to mindspore.ops.neg() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> neg = ops.Neg()
>>> x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = neg(x)
>>> print(output)
[-1.  -2.   1.  -2.   0.   3.5]
class tinyms.primitives.NeighborExchange(send_rank_ids, recv_rank_ids, recv_shapes, send_shapes, recv_type, group='hccl_world_group')[source]

NeighborExchange is a collective operation.

NeighborExchange sends data from the local rank to the ranks in send_rank_ids, while receiving data from recv_rank_ids.

Note

The user needs to preset communication environment variables before running the following example, please check the details on the official website of MindSpore.

This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are in the same subnet, please check the details.

Parameters:
  • send_rank_ids (list(int)) – Ranks which the data is sent to.

  • recv_rank_ids (list(int)) – Ranks which the data is received from.

  • recv_shapes (tuple(list(int))) – Data shape which is received from recv_rank_ids.

  • send_shapes (tuple(list(int))) – Data shape which is sent to the send_rank_ids.

  • recv_type (type) – Data type which is received from recv_rank_ids.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (tuple[Tensor]) - Shapes are same as args of send_shapes.

Outputs:

Tuple tensor, shapes are same as args of recv_shapes.

Supported Platforms:

Ascend

Examples

>>> # This example should be run with 2 devices. Refer to the tutorial > Distributed Training on mindspore.cn
>>> import os
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.neighborexchange = ops.NeighborExchange(send_rank_ids=[1], recv_rank_ids=[1],
...                                                      recv_shapes=([2, 2],), send_shapes=([3, 3],),
...                                                      recv_type=ms.float32)
...
...
...     def construct(self, x):
...         out = self.neighborexchange((x,))
...         return out
...
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend')
>>> init()
>>> net = Net()
>>> input_x = Tensor(np.ones([3, 3]), dtype = ms.float32)
>>> output = net(input_x)
>>> print(output)
[[2. 2.], [2. 2.]]
class tinyms.primitives.NeighborExchangeV2(send_rank_ids, send_lens, recv_rank_ids, recv_lens, data_format, group='hccl_world_group')[source]

NeighborExchangeV2 is a collective communication operation.

NeighborExchangeV2 sends data from the local rank to the ranks in send_rank_ids, while receiving data from recv_rank_ids. Please refer to Distributed Set Communication Primitives - NeighborExchangeV2 to learn about how the data is exchanged between neighborhood devices.

Note

This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are in the same subnet, please check the details.

Parameters:
  • send_rank_ids (list(int)) – Ranks which the data is sent to. The 8 rank_ids represent 8 directions; if data is not sent in one direction, set it to -1.

  • recv_rank_ids (list(int)) – Ranks which the data is received from. The 8 rank_ids represent 8 directions; if data is not received from one direction, set it to -1.

  • send_lens (list(int)) – Data lens which are sent to the send_rank_ids, 4 numbers represent the lens of [send_top, send_bottom, send_left, send_right].

  • recv_lens (list(int)) – Data lens which are received from recv_rank_ids, 4 numbers represent the lens of [recv_top, recv_bottom, recv_left, recv_right].

  • data_format (str) – Data format, only supports NCHW now.

  • group (str, optional) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”, which means “hccl_world_group” in Ascend, and “nccl_world_group” in GPU.

Inputs:
  • input_x (Tensor) - The Tensor before being exchanged. It has a shape of \((N, C, H, W)\).

Outputs:

The Tensor after being exchanged. If input shape is \((N, C, H, W)\), output shape is \((N, C, H+recv\_top+recv\_bottom, W+recv\_left+recv\_right)\).

Raises:
  • TypeError – If group is not a string or any one of send_rank_ids, recv_rank_ids, send_lens, recv_lens is not a list.

  • ValueError – If send_rank_ids or recv_rank_ids has value less than -1 or has repeated values.

  • ValueError – If send_lens, recv_lens has value less than 0.

  • ValueError – If data_format is not “NCHW”.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 2 devices.

>>> import os
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.neighborexchangev2 = ops.NeighborExchangeV2(send_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                                          send_lens=[0, 1, 0, 0],
...                                                          recv_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                                          recv_lens=[0, 1, 0, 0],
...                                                          data_format="NCHW")
...
...     def construct(self, x):
...         out = self.neighborexchangev2(x)
...         return out
...
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend')
>>> init()
>>> input_x = Tensor(np.ones([1, 1, 2, 2]), dtype = ms.float32)
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
[[[[1. 1.], [1. 1.], [2. 2.]]]]
class tinyms.primitives.NextAfter[source]

Returns the next representable floating-point value after x1 towards x2 element-wise.

Say there are two float32 numbers \(a, b\), and let \(eps\) be the smallest representable increment of the float32 data type. If \(a < b\), then the next representable value of \(a\) towards \(b\) is \(a+eps\); if \(a > b\), then the next representable value of \(a\) towards \(b\) is \(a-eps\).

\[out_{i} = nextafter({x1_{i}, x2_{i}})\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

  • x2 (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

Outputs:

Tensor, has the same shape and data type as x1.

Raises:
  • TypeError – If neither x1 nor x2 is a Tensor.

  • TypeError – If the dtype of x1 and x2 is not one of: float32, float64.

  • TypeError – If the dtypes of x1 and x2 are not same.

  • ValueError – If x1’s shape is not the same as x2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> nextafter = ops.NextAfter()
>>> x1 = Tensor(np.asarray([0.0]), mindspore.float32)
>>> x2 = Tensor(np.asarray([0.1]), mindspore.float32)
>>> output = nextafter(x1, x2)
>>> print(output)
[1.e-45]
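
The printed value is the smallest positive subnormal float32, and NumPy's nextafter reproduces it:

>>> import numpy as np
>>> print(np.nextafter(np.float32(0.0), np.float32(0.1)))
1e-45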
class tinyms.primitives.NoRepeatNGram(ngram_size=1)[source]

Updates the probability of occurrence of words with its corresponding n-grams.

During beam search, if consecutive ngram_size words exist in the generated word sequence, the consecutive ngram_size words will be avoided during subsequent prediction. For example, when ngram_size is 3 and the generated word sequence is [1, 2, 3, 2, 3], the next predicted word will not be 2 and its value in log_probs will be replaced with -FLOAT_MAX, because otherwise the 3 consecutive words [2, 3, 2] would appear twice in the word sequence.
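
A minimal pure-Python sketch of this banning rule (banned_next_tokens is a hypothetical helper illustrating the idea, not the operator's implementation):

>>> def banned_next_tokens(seq, ngram_size):
...     # tokens that would complete an n-gram already present in seq
...     # (assumes ngram_size > 1 and len(seq) >= ngram_size)
...     prefix = tuple(seq[-(ngram_size - 1):])
...     banned = set()
...     for i in range(len(seq) - ngram_size + 1):
...         if tuple(seq[i:i + ngram_size - 1]) == prefix:
...             banned.add(seq[i + ngram_size - 1])
...     return banned
...
>>> banned_next_tokens([1, 2, 3, 2, 3], 3)
{2}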

Parameters:

ngram_size (int) – Size of n-grams, must be greater than 0. Default: 1.

Inputs:
  • state_seq (Tensor) - n-gram word series, a 3-D tensor with shape: \((batch\_size, beam\_width, m)\).

  • log_probs (Tensor) - Probability of occurrence of n-gram word series, a 3-D tensor with shape: \((batch\_size, beam\_width, vocab\_size)\). The value of log_probs will be replaced with -FLOAT_MAX when n-grams repeated.

Outputs:
  • log_probs (Tensor) - The output Tensor with same shape and type as original log_probs.

Raises:
  • TypeError – If ngram_size is not an int.

  • TypeError – If neither state_seq nor log_probs is a Tensor.

  • TypeError – If the dtype of state_seq is not int.

  • TypeError – If the dtype of log_probs is not float.

  • ValueError – If ngram_size is less than zero.

  • ValueError – If ngram_size is greater than m.

  • ValueError – If state_seq or log_probs is not a 3-D Tensor.

  • ValueError – If the batch_size of state_seq and log_probs are not equal.

  • ValueError – If the beam_width of state_seq and log_probs are not equal.

Supported Platforms:

Ascend GPU CPU

Examples

>>> no_repeat_ngram = ops.NoRepeatNGram(ngram_size=3)
>>> state_seq = Tensor([[[1, 2, 1, 2, 5, 1, 2],
...                      [9, 3, 9, 5, 4, 1, 5]],
...                     [[4, 8, 6, 4, 5, 6, 4],
...                      [4, 8, 8, 4, 3, 4, 8]]], dtype=mindspore.int32)
>>> log_probs = Tensor([[[0.7, 0.8, 0.6, 0.9, 0.2, 0.8, 0.4, 0.6, 0.2, 0.7],
...                      [0.4, 0.5, 0.6, 0.7, 0.8, 0.1, 0.9, 0.8, 0.7, 0.1]],
...                     [[0.9, 0.7, 0.6, 0.3, 0.5, 0.3, 0.5, 0.4, 0.8, 0.6],
...                      [0.5, 0.8, 0.8, 0.7, 0.7, 0.8, 0.2, 0.7, 0.9, 0.7]]], dtype=mindspore.float32)
>>> output = no_repeat_ngram(state_seq, log_probs)
>>> print(output)
[[[ 6.9999999e-01 -3.4028235e+38  6.0000002e-01  8.9999998e-01
    2.0000000e-01 -3.4028235e+38  4.0000001e-01  6.0000002e-01
    2.0000000e-01  6.9999999e-01]
  [ 4.0000001e-01  5.0000000e-01  6.0000002e-01  6.9999999e-01
    8.0000001e-01  1.0000000e-01  8.9999998e-01  8.0000001e-01
    6.9999999e-01  1.0000000e-01]]
 [[ 8.9999998e-01  6.9999999e-01  6.0000002e-01  3.0000001e-01
    5.0000000e-01 -3.4028235e+38  5.0000000e-01  4.0000001e-01
    8.0000001e-01  6.0000002e-01]
  [ 5.0000000e-01  8.0000001e-01  8.0000001e-01  6.9999999e-01
    6.9999999e-01  8.0000001e-01  2.0000000e-01  6.9999999e-01
   -3.4028235e+38  6.9999999e-01]]]
class tinyms.primitives.NonDeterministicInts(dtype=mindspore.int64)[source]

Generates some integers that match the given type.

Returns a tensor with the given shape, whose random numbers are drawn from the range that the given data type can represent.

Warning

The value of shape must be greater than zero. The number of elements of output can not exceed 1000000.

Parameters:

dtype (mindspore.dtype, optional) – The data type of the output. The supported values are: mstype.int32 and mstype.int64. Default: mstype.int64.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. The supported values are: int32 and int64.

Outputs:

Tensor. Its shape is specified by the input shape. Its type is specified by dtype.

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If dtype is not mstype.int32 or mstype.int64.

  • ValueError – If shape has negative elements.

  • ValueError – If shape has less than 2 elements.

  • ValueError – If shape is not a 1-D tensor.

  • ValueError – If the number of elements of output is more than 1000000.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = Tensor((3,4), mstype.int32)
>>> ndints = ops.NonDeterministicInts(dtype=mstype.int32)
>>> output = ndints(shape)
>>> print(output.shape)
(3, 4)
class tinyms.primitives.NonMaxSuppressionV3[source]

Selects a subset of bounding boxes in a greedy manner, based on their descending score. It removes boxes that have high intersection-over-union (IOU) overlap with previously selected boxes, and eliminates boxes with scores lower than a given threshold.

Warning

When input max_output_size is negative, it will be treated as 0.

Note

  • This algorithm does not depend on the location of the origin in the coordinate system.

  • This algorithm remains unaffected by orthogonal transformations and translations of the coordinate system, which means that translating or reflecting the coordinate system will result in the same boxes being chosen by the algorithm.

Inputs:
  • boxes (Tensor) - A 2-D Tensor of shape \((num\_boxes, 4)\).

  • scores (Tensor) - A 1-D Tensor of shape \((num\_boxes)\) where each element represents a single score associated with each box (i.e., each row of the boxes Tensor). It is required that the number of scores in scores must be equal to the number of boxes in boxes. The supported data type is float32.

  • max_output_size (Union[Tensor, Number.Int]) - A scalar integer Tensor representing the maximum number of boxes to be selected by non max suppression. The supported data type is int32.

  • iou_threshold (Union[Tensor, Number.Float]) - A scalar float Tensor represents the threshold used for determining if the intersection over union (IOU) between boxes is too high. Data type of iou_threshold is float32 and must be in range [0, 1].

  • score_threshold (Union[Tensor, Number.Float]) - A scalar float Tensor represents the threshold for determining when to remove boxes based on score. The supported data type is float32.

Outputs:

A 1-D integer Tensor of shape \((M)\) representing the selected indices from the boxes tensor, where M <= max_output_size.

Raises:
  • TypeError – If the dtype of boxes and scores are different.

  • TypeError – If the dtype of iou_threshold and score_threshold are different.

  • TypeError – If boxes is not tensor or its dtype is not float16 or float32.

  • TypeError – If scores is not tensor or its dtype is not float16 or float32.

  • TypeError – If max_output_size is not a tensor or scalar, or its data type is not int32 or int64.

  • TypeError – If iou_threshold is not a tensor or scalar, or its type is neither float16 nor float32.

  • TypeError – If score_threshold is not a tensor or scalar, or its type is neither float16 nor float32.

  • ValueError – If the size of shape of boxes is not 2 or the second value of its shape is not 4.

  • ValueError – If the size of shape of scores is not 1.

  • ValueError – If any of the size of shape of max_output_size, iou_threshold, score_threshold is not 0.

Supported Platforms:

Ascend GPU

Examples

>>> boxes = Tensor(np.array([[1, 2, 3, 4], [1, 3, 3, 4], [1, 3, 4, 4],
...                          [1, 1, 4, 4], [1, 1, 3, 4]]), mstype.float32)
>>> scores = Tensor(np.array([0.4, 0.5, 0.72, 0.9, 0.45]), mstype.float32)
>>> max_output_size = Tensor(5, mstype.int32)
>>> iou_threshold = Tensor(0.5, mstype.float32)
>>> score_threshold = Tensor(0, mstype.float32)
>>> nonmaxsuppression = ops.NonMaxSuppressionV3()
>>> output = nonmaxsuppression(boxes, scores, max_output_size, iou_threshold, score_threshold)
>>> print(output)
[3 2 0]
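
For intuition, the greedy selection described above can be sketched in NumPy. greedy_nms is a hypothetical reference (assuming (y1, x1, y2, x2) box corners), not the operator's implementation; it reproduces the indices selected in the example:

>>> import numpy as np
>>> def greedy_nms(boxes, scores, iou_thr, score_thr, max_out):
...     def iou(a, b):
...         lt, rb = np.maximum(a[:2], b[:2]), np.minimum(a[2:], b[2:])
...         wh = np.clip(rb - lt, 0, None)
...         area = lambda z: (z[2] - z[0]) * (z[3] - z[1])
...         return wh[0] * wh[1] / (area(a) + area(b) - wh[0] * wh[1])
...     keep = []
...     for i in np.argsort(-scores):  # visit boxes by descending score
...         if scores[i] < score_thr or len(keep) == max_out:
...             continue
...         if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
...             keep.append(int(i))
...     return keep
...
>>> boxes = np.array([[1, 2, 3, 4], [1, 3, 3, 4], [1, 3, 4, 4],
...                   [1, 1, 4, 4], [1, 1, 3, 4]], dtype=np.float32)
>>> scores = np.array([0.4, 0.5, 0.72, 0.9, 0.45], dtype=np.float32)
>>> greedy_nms(boxes, scores, 0.5, 0.0, 5)
[3, 2, 0]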
class tinyms.primitives.NonMaxSuppressionWithOverlaps[source]

Selects a subset of bounding boxes in a greedy manner by prioritizing those with higher scores and removing those with high overlaps with previously selected boxes. Boxes with scores lower than the score threshold are also removed. The overlap values between boxes are represented as an N-by-N square matrix, which can be customized to define different overlap criteria such as intersection over union or intersection over area.

Note

  • This algorithm does not depend on the location of the origin in the coordinate system.

  • This algorithm remains unaffected by orthogonal transformations and translations of the coordinate system, which means that translating or reflecting the coordinate system will result in the same boxes being chosen by the algorithm.

Inputs:
  • overlaps (Tensor) - A 2-D Tensor of shape \((num\_boxes, num\_boxes)\), representing the n-by-n box overlap values. Types allowed: float16, float32 and float64.

  • scores (Tensor) - A 1-D Tensor of shape \((num\_boxes)\) where each element represents a single score associated with each box (i.e., each row of the boxes Tensor). It is required that the number of scores in scores must be equal to the number of boxes in boxes. The supported data type is float32.

  • max_output_size (Union[Tensor, Number.Int]) - A scalar integer Tensor representing the maximum number of boxes to be selected by non max suppression, and max_output_size must be equal to or greater than 0. Types allowed: int32.

  • overlap_threshold (Union[Tensor, Number.Float]) - A scalar value, represented by a 0-D float Tensor, which is used as a threshold to determine if two boxes overlap too much. Types allowed: float16, float32 and float64.

  • score_threshold (Union[Tensor, Number.Float]) - A 0-D float Tensor representing the threshold for deciding when to remove boxes based on score. It has the same dtype as overlap_threshold.

Outputs:

A 1-D integer Tensor of shape \((M)\) representing the selected indices from the boxes Tensor, where M <= max_output_size. Its data type is int32.

Raises:
  • TypeError – If the dtype of overlaps, scores, overlap_threshold and score_threshold is not float16, float32 or float64.

  • TypeError – If overlaps or scores is not a Tensor.

  • TypeError – If max_output_size is not a Tensor or scalar, or if its data type is not int32.

  • TypeError – If overlap_threshold is not a Tensor or scalar, or if its type is not float16, float32 or float64.

  • TypeError – If score_threshold is not a Tensor or scalar, or if its type is not float16, float32 or float64.

  • ValueError – If the size of shape of overlaps is not 2 or the second value of its shape is not equal to the first value of its shape.

  • ValueError – If the size of shape of scores is not 1.

  • ValueError – If any of the size of shape of max_output_size, overlap_threshold, score_threshold is not 0.

  • ValueError – If max_output_size is negative.

  • ValueError – If the shape of scores is not equal to the shape of the dim0 or dim1 of overlaps.

Supported Platforms:

Ascend GPU CPU

Examples

>>> overlaps = Tensor(np.array([[0.6964692, 0.28613934, 0.22685145, 0.5513148],
...                     [0.71946895, 0.42310646, 0.9807642, 0.6848297],
...                     [0.4809319, 0.39211753, 0.343178, 0.7290497],
...                     [0.43857226, 0.059677895, 0.39804426, 0.7379954]
...                     ]), mstype.float32)
>>> scores = Tensor(np.array([0.18249173, 0.17545176, 0.53155136, 0.53182757]), mstype.float32)
>>> max_output_size = Tensor(4, mstype.int32)
>>> overlap_threshold = Tensor(0.1, mstype.float32)
>>> score_threshold = Tensor(0.2, mstype.float32)
>>> nonmaxsuppression = ops.NonMaxSuppressionWithOverlaps()
>>> output = nonmaxsuppression(overlaps, scores, max_output_size, overlap_threshold, score_threshold)
>>> print(output)
[3]
class tinyms.primitives.NonZero[source]

Return a tensor of the positions of all non-zero values.

Refer to mindspore.ops.nonzero() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import NonZero
>>> x = Tensor(np.array([[[1,  0], [-5, 0]]]), mindspore.int32)
>>> nonzero = NonZero()
>>> output = nonzero(x)
>>> print(output)
[[0 0 0]
 [0 1 0]]
>>> x = Tensor(np.array([1, 0, 2, 0, 3]), mindspore.int32)
>>> nonzero = NonZero()
>>> output = nonzero(x)
>>> print(output)
[[0]
 [2]
 [4]]
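
NumPy's argwhere uses the same one-row-per-nonzero layout and can serve as a cross-check for the 1-D case above:

>>> import numpy as np
>>> np.argwhere(np.array([1, 0, 2, 0, 3]))
array([[0],
       [2],
       [4]])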
class tinyms.primitives.NotEqual[source]

Computes the non-equivalence of two tensors element-wise.

Refer to mindspore.ops.ne() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(x, 2.0)
>>> print(output)
[ True False  True]
>>>
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(x, y)
>>> print(output)
[False False  True]
class tinyms.primitives.NthElement(reverse=False)[source]

Computes the n-th smallest values for the last dimension of the input Tensor.

  • When input is a 1-D Tensor (i.e. Vector), it finds the nth-smallest value in the vector and outputs its value as a scalar Tensor.

  • When input is matrices or has higher rank, it finds the nth-smallest value in each row (or vector along the last dimension) and outputs these values in a Tensor with shape of values.shape = input.shape[:-1].

Parameters:

reverse (bool, optional) – An optional bool. If set to True, it finds the \(n\)-th largest value in the vector instead of the \(n\)-th smallest. Default: False.

Inputs:
  • input (Tensor) - Input Tensor with 1-D or higher dimension.

  • n (Union[int, Tensor]) - If the n is a Tensor, it should be a 0-D Tensor, dtype is int32. Valid range of n is \([0, input.shape[-1])\) where \(input.shape[-1]\) is last dimension size of input.

Outputs:
  • values (Tensor) - Its shape satisfies: values.shape = input.shape[:-1]. The dtype is the same as input.

Raises:
  • TypeError – If the type of input is out of the valid list.

  • TypeError – If n is not int32 or not a Tensor.

  • ValueError – If n is out of \([0, input.shape[-1])\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[1,2,3],[4,5,6]]) , mstype.int8)
>>> n = 1
>>> net = ops.NthElement()
>>> out = net(input, n)
>>> print(out)
[2 5]
class tinyms.primitives.NuclearNorm(dim=None, keepdim=False)[source]

Returns the matrix nuclear norm of a given Tensor.

The attribute dim specifies which two dimensions of the input x to calculate the nuclear norm across. If dim is None, the nuclear norm will be calculated across all dimensions of the input; because the nuclear norm is the sum of the singular values of a matrix, the input must in that case be 2-dimensional. That is, if the input is 2-dimensional, we compute the nuclear norm of the input matrix, and dim should be None. If dim is set, it must be in the valid range given below; otherwise the operator will not work. If the input is 3-dimensional or above, the attribute dim is required; it specifies which two dimensions of the input to calculate the nuclear norm across.

According to dim, the input Tensor is reordered: the two dimensions named by dim are moved to the end, while the relative order of the other dimensions is unchanged. An SVD is then performed on each slice of the adjusted Tensor, and the singular values of each slice/matrix are summed to obtain its nuclear norm.
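
As a plain numpy illustration of this procedure (assuming dim=[0, 2] on the \((2, 2, 3)\) input from the example below, so the remaining dimension 1 indexes the slices):

>>> import numpy as np
>>> x = np.array([[[1., 2., 3.], [4., 5., 6.]],
...               [[7., 8., 9.], [10., 11., 12.]]])
>>> slices = np.transpose(x, (1, 0, 2))  # move dims 0 and 2 to the end
>>> print([round(float(np.linalg.svd(m, compute_uv=False).sum()), 3) for m in slices])
[15.408, 21.712]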

Parameters:
  • dim (Union[list(int), tuple(int)], optional) – Specifies which two dimensions of x to calculate the matrix nuclear norm across. If dim is None, the nuclear norm will be calculated across all dimensions of x. The length of dim should be 2. The values in dim should be in the range \([-x\_rank, x\_rank)\), where x_rank is the dimension of Tensor x. dim[0] and dim[1] must not point to the same dimension. Default: None.

  • keepdim (bool, optional) – Whether the output Tensor has dim retained or not. Default: False.

Inputs:
  • x (Tensor) - Input to compute the matrix nuclear norm. The dimension of x should be greater than or equal to 2. Data type must be float32 or float64.

Outputs:

Tensor. If keepdim is True, a Tensor with the dimensions in dim reduced to 1 is returned; otherwise a Tensor with the dimensions in dim removed is returned. The data type is the same as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float32 nor float64.

  • TypeError – If dtype of dim is neither list(int) nor tuple(int).

  • TypeError – If dtype of keepdim is not bool.

  • ValueError – If dimension of Tensor x is less than 2.

  • ValueError – If the length of dim is not 2 when dim is set.

  • ValueError – If the dimension of Tensor x is not 2 when dim is not set.

  • ValueError – If dim[0] and dim[1] point to the same dimension.

  • ValueError – If dim[0] or dim[1] is not in the range \([-x\_rank, x\_rank)\), where x_rank is the dimension of Tensor x.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
...                           [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]), ms.float32)
>>> dim = [0, 2]
>>> keepdim = True
>>> nuclearnorm = nn_ops.NuclearNorm(dim = dim,keepdim = keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[[[15.407588]
[21.711605]]]
>>> keepdim = False
>>> nuclearnorm = nn_ops.NuclearNorm(dim = dim,keepdim = keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[15.407588 21.711605]
>>> dim = [0, 1]
>>> keepdim = True
>>> nuclearnorm = nn_ops.NuclearNorm(dim = dim,keepdim = keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[[[14.212674 15.81139  17.492853]]]
>>> keepdim = False
>>> nuclearnorm = nn_ops.NuclearNorm(dim = dim,keepdim = keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[14.212674 15.81139  17.492853]
class tinyms.primitives.OneHot(axis=-1)[source]

Computes a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters:

axis (int) – Position to insert the value. e.g. If shape of indices is \((N, C)\), and axis is -1, the output shape will be \((N, C, D)\), If axis is 0, the output shape will be \((D, N, C)\). Default: -1.
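
In plain numpy terms, with the default axis=-1 and scalar on/off values 1.0 and 0.0, the result is row-indexing into an identity matrix; a minimal sketch mirroring the example below:

>>> import numpy as np
>>> indices, depth = np.array([0, 1, 2]), 3
>>> print(np.eye(depth)[indices])
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]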

Inputs:
  • indices (Tensor) - A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\). Data type must be uint8, int32 or int64.

  • depth (int) - A scalar defining the depth of the one-hot dimension.

  • on_value (Tensor) - A value to fill in output when indices[j] = i. Support uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, bool, complex64, complex128.

  • off_value (Tensor) - A value to fill in output when indices[j] != i. Has the same data type as on_value.

Outputs:

Tensor, one-hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth}, X_{axis+1}, \ldots, X_n)\).

Raises:
  • TypeError – If axis or depth is not an int.

  • TypeError – If dtype of indices is not uint8, int32 or int64.

  • TypeError – If indices, on_value or off_value is not a Tensor.

  • ValueError – If axis is not in range [-1, len(indices_shape)].

  • ValueError – If depth is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> onehot = ops.OneHot()
>>> output = onehot(indices, depth, on_value, off_value)
>>> print(output)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
class tinyms.primitives.Ones[source]

Creates a tensor filled with value ones.

Refer to mindspore.ops.ones() for more details.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor.

  • type (mindspore.dtype) - The specified type of output tensor.

Outputs:

Tensor, whose shape is given by the shape input and whose dtype is given by the type input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> ones = ops.Ones()
>>> output = ones((2, 2), mindspore.float32)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ones((3, 3), mindspore.float32)
>>> print(output)
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
class tinyms.primitives.OnesLike[source]

Returns a Tensor with a value of 1, whose shape and data type are the same as the input.

Refer to mindspore.ops.ones_like() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> oneslike = ops.OnesLike()
>>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = oneslike(input_x)
>>> print(output)
[[1 1]
 [1 1]]
class tinyms.primitives.Orgqr[source]

Calculates the explicit representation of the orthogonal matrix \(Q\) returned by mindspore.ops.Geqrf.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.orgqr() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-114.6, 10.9, 1.1], [-0.304, 38.07, 69.38], [-0.45, -0.17, 62.]]), mindspore.float32)
>>> tau = Tensor(np.array([1.55, 1.94, 0.0]), mindspore.float32)
>>> net = ops.Orgqr()
>>> y = net(x, tau)
>>> print(y)
[[-0.54999995 -0.2128925   0.8137956 ]
 [ 0.47119996 -0.8752807   0.08240613]
 [ 0.69749993  0.42560163  0.57772595]]
class tinyms.primitives.PReLU[source]

Parametric Rectified Linear Unit activation function.

Refer to mindspore.ops.prelu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.prelu = ops.PReLU()
...     def construct(self, x, weight):
...         result = self.prelu(x, weight)
...         return result
...
>>> x = Tensor(np.arange(-6, 6).reshape((2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> net = Net()
>>> output = net(x, weight)
>>> print(output)
[[[-0.60 -0.50]
  [-2.40 -1.80]
  [ 0.60  0.30]]
 [[ 0.00  1.00]
  [ 2.00  3.00]
  [ 4.0   5.00]]]
class tinyms.primitives.Pack(axis=0)[source]

Same as operator Stack. Pack will be deprecated in the future. Please use Stack instead.
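
A minimal sketch of the recommended replacement (same semantics via the Stack operator):

>>> x = Tensor(np.array([1, 2]).astype(np.float32))
>>> y = Tensor(np.array([3, 4]).astype(np.float32))
>>> stack = ops.Stack(axis=0)
>>> print(stack([x, y]))
[[1. 2.]
 [3. 4.]]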

class tinyms.primitives.Pad(paddings)[source]

Pads the input tensor according to the paddings.

Refer to mindspore.ops.pad() for more details. Use mindspore.ops.pad() instead if paddings has negative values.

Parameters:

paddings (tuple) – The shape of parameter paddings is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in the D-th dimension, paddings[D, 0] indicates how many units to extend ahead of the input tensor in the D-th dimension, and paddings[D, 1] indicates how many units to extend behind the input tensor in the D-th dimension.
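
The paddings layout matches numpy.pad’s pad_width argument; a quick numpy cross-check of the shape produced in the example below:

>>> import numpy as np
>>> x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
>>> print(np.pad(x, ((1, 2), (2, 1))).shape)
(5, 6)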

Inputs:
  • input_x (Tensor) - Tensor to be padded. It has shape \((N, *)\), where \(*\) means any number of additional dimensions.

Outputs:

Tensor, the tensor after padding.

Raises:
  • TypeError – If paddings is not a tuple.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If shape of paddings is not \((N, 2)\).

  • ValueError – If paddings.size is not equal to 2 * the rank of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> pad_op = ops.Pad(((1, 2), (2, 1)))
>>> output = pad_op(input_x)
>>> print(output)
[[ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.  -0.1  0.3  3.6  0. ]
 [ 0.   0.   0.4  0.5 -3.2  0. ]
 [ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.   0.   0.   0.   0. ]]
class tinyms.primitives.PadV3(mode='constant', paddings_contiguous=True)[source]

Pads the input Tensor according to the paddings, mode and paddings_contiguous.

Parameters:
  • mode (str, optional) –

    An optional string indicates padding mode, support “constant”, “reflect”, “edge”, “circular”. Default: “constant”. The effects of various padding modes are as follows:

    • ”constant”: Pads the input Tensor with value specified by constant_value.

    • ”reflect”: Pads the input Tensor by reflecting the values of the pixels at the boundary of the Tensor.

    • ”edge”: Pads the input Tensor with the values of the pixels on the border of the Tensor.

    • ”circular”: Circular padding mode. In this mode, the pixels from one edge of the image are wrapped around to the opposite edge, such that the pixel on the right edge of the image is replaced with the pixel on the left edge, and the pixel on the bottom edge is replaced with the pixel on the top edge.

  • paddings_contiguous (bool, optional) – An optional bool indicating the layout of paddings. If True, paddings is arranged as [begin0, end0, begin1, end1, …]; if False, paddings is arranged as [begin0, begin1, …, end0, end1, …]. Default: True.
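
A tiny plain-Python sketch of how the non-contiguous layout maps back to the contiguous one (the helper name is hypothetical, not part of the API):

>>> def to_contiguous(p):  # hypothetical helper, not part of the API
...     half = len(p) // 2
...     return [v for pair in zip(p[:half], p[half:]) for v in pair]
>>> print(to_contiguous([1, 0, 1, 0]))  # the paddings from case 2 below
[1, 1, 0, 0]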

Inputs:
  • x (Tensor) - Tensor to be padded. It has shape \((N, *)\), where \(*\) means any number of additional dimensions.

  • paddings (Tensor) - Specifies the number of zeros to be padded before and after each dimension of the input Tensor x. It’s a 1D Tensor of type int32 or int64.

  • constant_value (Tensor, optional) - Padding value to use in ‘constant’ mode, if not specified, 0 is used instead. It has the same type as x.

Outputs:

Tensor, the tensor after padding.

Raises:
  • TypeError – If x or paddings is not a Tensor.

  • TypeError – If paddings_contiguous is not a bool.

  • ValueError – If mode is not a str or not one of the supported modes.

  • ValueError – If mode is “constant” and the number of elements of paddings is not even.

  • ValueError – If mode is “constant” and the number of elements of paddings is larger than the input dimension multiplied by 2.

  • ValueError – If mode is “edge”, “reflect” or “circular” and the number of elements of paddings is not 2, 4 or 6.

  • ValueError – If mode is “edge”, “reflect” or “circular”, x has 3 dimensions and the number of elements of paddings is not 2.

  • ValueError – If mode is “edge”, “reflect” or “circular”, x has 4 dimensions and the number of elements of paddings is not 4.

  • ValueError – If mode is “circular”, x has 5 dimensions and the number of elements of paddings is not 6.

  • ValueError – If mode is “edge”, “reflect” or “circular” and x has fewer than 3 dimensions.

  • ValueError – If mode is “edge” or “circular” and x has more than 5 dimensions.

  • ValueError – If mode is “reflect” and x has more than 4 dimensions.

  • ValueError – If mode is “reflect” and a padding size is larger than the corresponding dimension of x.

  • ValueError – If, after padding, any dimension of the output shape is not greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: mode="reflect", paddings_contiguous=True
>>> class Net(nn.Cell):
...    def __init__(self, mode, paddings_contiguous):
...        super(Net, self).__init__()
...        self.pad = ops.PadV3(mode=mode, paddings_contiguous=paddings_contiguous)
...        self.paddings = Tensor([1, 1])
...    def construct(self, x):
...        return self.pad(x, self.paddings)
...
>>> x = Tensor([[[0., 1.]]])
>>> pad = Net(mode="reflect", paddings_contiguous=True)
>>> output = pad(x)
>>> print(output)
[[[1. 0. 1. 0.]]]
>>> # case2: mode="constant", padding_contigous=False
>>> class Net(nn.Cell):
...    def __init__(self, mode, paddings_contiguous):
...        super(Net, self).__init__()
...        self.pad = ops.PadV3(mode=mode, paddings_contiguous=paddings_contiguous)
...        self.paddings = Tensor([1, 0, 1, 0])
...        self.value = Tensor(1.5)
...    def construct(self, x):
...        return self.pad(x, self.paddings, self.value)
...
>>> x = Tensor([[0., 1., 2.]])
>>> pad = Net(mode="constant", paddings_contiguous=False)
>>> output = pad(x)
>>> print(output)
[[[1.5, 0., 1., 2., 1.5]]])
class tinyms.primitives.Padding(pad_dim_size=8)[source]

Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0.

Refer to mindspore.ops.padding() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
>>> pad_dim_size = 4
>>> output = ops.Padding(pad_dim_size)(x)
>>> print(output)
[[ 8.  0.  0.  0.]
 [10.  0.  0.  0.]]
class tinyms.primitives.ParallelConcat[source]

Concats input tensors along the first dimension.

The difference between Concat and ParallelConcat is that Concat requires all of the inputs to be computed before the operation begins, but doesn’t require that the input shapes be known during graph construction. ParallelConcat copies pieces of the input into the output as they become available; in some situations this can provide a performance benefit.

Note

The input tensors are all required to have size 1 in the first dimension.

Inputs:
  • values (tuple, list) - A tuple or a list of input tensors. The data type and shape of these tensors must be the same and their rank should not be less than 1. The supported data type is Number on CPU; the same holds for Ascend, except float64, complex64 and complex128.

Outputs:

Tensor, data type is the same as values.

Raises:
  • TypeError – If any type of the inputs is not a Tensor.

  • TypeError – If the data type of these tensors are not the same.

  • ValueError – If any tensor.shape[0] is not 1.

  • ValueError – If rank of any Tensor in values is less than 1.

  • ValueError – If the shape of these tensors are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
>>> data2 = Tensor(np.array([[2, 1]]).astype(np.int32))
>>> op = ops.ParallelConcat()
>>> output = op((data1, data2))
>>> print(output)
[[0 1]
 [2 1]]
class tinyms.primitives.ParameterizedTruncatedNormal(seed=0, seed2=0)[source]

Returns a tensor of the specified shape filled with truncated normal values. When shape is \((batch\_size, *)\), the shape of mean, stdevs, min and max should be \(()\) or \((batch\_size, )\).

Note

  • The value in tensor min must be strictly less than max at any position after broadcasting.

  • When seed or seed2 is assigned a non-zero value, that value will be used as the seed. Otherwise, a random seed will be used instead.

Parameters:
  • seed (int, optional) – Random number seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. It has shape \((batch\_size, *)\) where \(*\) is an additional dimension with a length of no less than 1. Its type must be one of the following types: int32 and int64.

  • mean (Tensor) - The parameter defines the mean of the truncated normal distribution. It has shape \(()\) or \((batch\_size, )\). Its type must be one of the following types: float16, float32, float64.

  • stdevs (Tensor) - The parameter defines the standard deviation for truncation of the normal distribution. It must be greater than 0 and have the same shape and type as mean.

  • min (Tensor) - The parameter defines the minimum of the truncated normal distribution. It must have the same shape and type as mean.

  • max (Tensor) - The parameter defines the maximum of the truncated normal distribution. It must have the same shape and type as mean.

Outputs:

Tensor. Its shape is specified by the input shape and it has the same type as mean.

Raises:
  • TypeError – If the data type of shape, mean, stdevs, min or max is not allowed.

  • TypeError – If mean, stdevs, min, max don’t have the same type.

  • TypeError – If any of shape, mean, stdevs, min and max is not Tensor.

  • ValueError – When shape is \((batch\_size, *)\), if the shape of mean, stdevs, min or max is not \(()\) or \((batch\_size, )\).

  • ValueError – If shape elements are not positive.

  • ValueError – If stdevs elements are not positive.

  • ValueError – If shape has less than 2 elements.

  • ValueError – If shape is not a 1-D tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = Tensor(np.array([2, 3]), mstype.int32)
>>> mean = Tensor(np.array([0]), mstype.float32)
>>> stdevs = Tensor(np.array([1]), mstype.float32)
>>> min = Tensor(np.array([-100]), mstype.float32)
>>> max = Tensor(np.array([100]),  mstype.float32)
>>> seed = 1
>>> seed2 = 2
>>> parameterized_truncated_normal = ops.ParameterizedTruncatedNormal(seed=seed, seed2=seed2)
>>> output = parameterized_truncated_normal(shape, mean, stdevs, min, max)
>>> print(output)
[[-0.54974616 -1.4028727   1.5827523 ]
 [ 0.25759354 -1.9593946  -1.5078077 ]]
class tinyms.primitives.Partial[source]

Makes a partial function instance. Partial functions can be used to derive specialized functions from general functions by fixing the values of a certain number of arguments.

Inputs:
  • args (Union[FunctionType, Tensor]) - The function and bind arguments.

Outputs:

FunctionType, partial function bound with arguments.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> def show_input(x, y, z):
...     return x, y, z
>>> partial = ops.Partial()
>>> partial_show_input = partial(show_input, Tensor(1))
>>> output1 = partial_show_input(Tensor(2), Tensor(3))
>>> print(output1)
(Tensor(shape=[], dtype=Int64, value= 1), Tensor(shape=[], dtype=Int64, value= 2), Tensor(shape=[], dtype=Int64,
 value= 3))
>>> output2 = partial_show_input(Tensor(3), Tensor(4))
>>> print(output2)
(Tensor(shape=[], dtype=Int64, value= 1), Tensor(shape=[], dtype=Int64, value= 3), Tensor(shape=[], dtype=Int64,
 value= 4))
class tinyms.primitives.Pdist(p=2.0)[source]

Computes the p-norm distance between each pair of row vectors in the input.

Refer to mindspore.ops.pdist() for more details.
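
Equivalently, the output lists pairwise row distances in condensed order (0, 1), (0, 2), (1, 2); a numpy cross-check of the example below:

>>> import numpy as np
>>> x = np.array([[1., 1.], [2., 2.], [3., 3.]])
>>> print([round(float(np.linalg.norm(x[i] - x[j])), 4) for i in range(3) for j in range(i + 1, 3)])
[1.4142, 2.8284, 1.4142]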

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]).astype(np.float32))
>>> op = ops.Pdist(p=2.0)
>>> y = op(x)
>>> print(y)
[1.4142135 2.828427  1.4142135]
class tinyms.primitives.Poisson(seed=0, seed2=0)[source]

Produces random non-negative integer values i, distributed according to the discrete probability function:

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • mean (Tensor) - The μ parameter of the distribution. It defines the mean number of occurrences of the event, and must be greater than 0. With float32 data type.

Outputs:

Tensor. Its shape must be the broadcasted shape of shape and the shape of mean. The dtype is int32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

Supported Platforms:

deprecated

Examples

>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mstype.float32)
>>> poisson = ops.Poisson(seed=5)
>>> output = poisson(shape, mean)
>>> result = output.shape
>>> print(result)
(4, 2)
class tinyms.primitives.Polar[source]

Converts polar coordinates to Cartesian coordinates.

Refer to mindspore.ops.polar() for more details.
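
The mapping is \(abs \cdot e^{i \cdot angle}\); a numpy cross-check of the example below:

>>> import numpy as np
>>> out = np.array([1., 2.]) * np.exp(1j * np.array([3., 4.]))
>>> print(np.allclose(out, [-0.9899925 + 0.14112001j, -1.30728724 - 1.51360499j]))
True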

Supported Platforms:

GPU CPU

Examples

>>> polar = ops.Polar()
>>> x1 = Tensor(np.array([1, 2]), mindspore.float64)
>>> x2 = Tensor(np.array([3, 4]), mindspore.float64)
>>> output = polar(x1, x2)
>>> print(output)
[-0.9899925 +0.14112001j -1.30728724-1.51360499j]
class tinyms.primitives.Polygamma[source]

Computes the \(a\)-th derivative of the polygamma function on \(x\).

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.polygamma() for more details.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1.0, -0.5]), mindspore.float32)
>>> a = Tensor(np.array(1), mindspore.int64)
>>> polygamma = ops.Polygamma()
>>> output = polygamma(a, x)
>>> print(output)
[1.644934 8.934802]
>>> a = Tensor(np.array(2), mindspore.int64)
>>> output = polygamma(a, x)
>>> print(output)
[-2.404114  -0.8287967]
>>> a = Tensor(np.array(3), mindspore.int64)
>>> output = polygamma(a, x)
>>> print(output)
[  6.4939404 193.40909  ]
>>> a = Tensor(np.array(4), mindspore.int64)
>>> output = polygamma(a, x)
>>> print(output)
[-24.886265   -3.4742498]
class tinyms.primitives.PopulationCount[source]

Computes element-wise population count (a.k.a. bitsum, bitcount).

Refer to mindspore.ops.population_count() for more details.
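
Per element, this equals counting set bits in the binary representation; a plain-Python cross-check of the example below:

>>> print([bin(v).count("1") for v in (0, 1, 3)])
[0, 1, 2]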

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([0, 1, 3], mindspore.int16)
>>> output = ops.PopulationCount()(input_x)
>>> print(output)
[0 1 2]
class tinyms.primitives.Pow[source]

Calculates each element in x to the power of y.

Refer to mindspore.ops.pow() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = 3.0
>>> pow = ops.Pow()
>>> output = pow(x, y)
>>> print(output)
[ 1.  8. 64.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> pow = ops.Pow()
>>> output = pow(x, y)
>>> print(output)
[ 1. 16. 64.]
class tinyms.primitives.Print[source]

Print the inputs to stdout.

Refer to mindspore.ops.print_() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class PrintDemo(nn.Cell):
...     def __init__(self):
...         super(PrintDemo, self).__init__()
...         self.print = ops.Print()
...
...     def construct(self, x, y):
...         self.print('Print Tensor x and Tensor y:', x, y)
...         return x
...
>>> x = Tensor(np.ones([2, 1]).astype(np.int32))
>>> y = Tensor(np.ones([2, 2]).astype(np.int32))
>>> net = PrintDemo()
>>> result = net(x, y)
Print Tensor x and Tensor y:
Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [1]])
Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [1, 1]])
class tinyms.primitives.Pull[source]

Pulls weight from parameter server.

Inputs:
  • key (Tensor) - The key of the weight.

  • weight (Tensor) - The weight to be updated.

Outputs:

None.

class tinyms.primitives.Push(optim_type='ApplyMomentum', only_shape_indices=None)[source]

Pushes the inputs of the corresponding optimizer to parameter server.

Parameters:
  • optim_type (string) – The optimizer type. Default: ‘ApplyMomentum’.

  • only_shape_indices (list) – The indices of input of which only shape will be pushed to parameter server. Default: None.

Inputs:
  • optim_inputs (tuple) - The inputs for this kind of optimizer.

  • optim_input_shapes (tuple) - The shapes of the inputs.

Outputs:

Tensor, the key of the weight which needs to be updated.

class tinyms.primitives.PyExecute[source]

Execute Python expression.

class tinyms.primitives.PyFunc(fn, in_types, in_shapes, out_types, out_shapes, stateful=True)[source]

Execute Python function.

PyFunc encapsulates a Python function as an operator that can be compiled into the computation graph. Unlike normal operators, it cannot be exported to MindIR, as it is executed in the current Python context. Because only the weights of the network are stored in the checkpoint, a network that includes PyFunc can save a checkpoint and load it back into the network, but any Python function state will be lost.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • fn (function) – Python function whose inputs and outputs should be Python built-in scalars or numpy ndarrays.

  • in_types (list[mindspore.dtype]) – The type of the inputs.

  • in_shapes (list[tuple[int]]) – The dimensionality of the inputs. An empty list represents a scalar; otherwise it represents a numpy array.

  • out_types (list[mindspore.dtype]) – The type of the outputs.

  • out_shapes (list[tuple[int]]) – The dimensionality of the outputs. An empty list represents a scalar; otherwise it represents a numpy array.

  • stateful (bool) – Whether the function is stateful or not. If True, the execution order is the same as in the model definition.

Inputs:
  • input_x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors.

Outputs:

tuple[Tensor], the execution results of the Python function.

Raises:
  • TypeError – The Python function execution failed.

  • TypeError – The attributes(in_types/in_shapes/out_types/out_shapes) are inconsistent with Python function specifications.

Supported Platforms:

CPU

Examples

>>> def func(x1, x2):
...     return x1 + x2
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> x2 = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> op = ops.PyFunc(func, [x1.dtype, x2.dtype], [x1.shape, x2.shape], [x1.dtype], [x1.shape])
>>> output = op((x1, x2))
>>> print(output[0].asnumpy())
[2. 4. 6.]
class tinyms.primitives.Qr(full_matrices=False)[source]

Returns the QR decomposition of one or more matrices. If full_matrices is True, compute full-sized q and r; if False (the default), compute the p columns of q, where p is the minimum of the two innermost dimensions of x.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

full_matrices (bool, optional) – Whether compute full-sized QR decomposition. Default: False.

Inputs:
  • x (Tensor) - A matrix to be decomposed. It must be at least two-dimensional. Supported types: float16, float32, float64, complex64, complex128. Define the shape of x as \((..., m, n)\), and let p be the minimum of m and n.

Outputs:
  • q (Tensor) - The orthonormal matrices of x. If full_matrices is true, the shape is \((m, m)\), else the shape is \((m, p)\). The dtype of q is same as x.

  • r (Tensor) - The upper triangular matrices of x. If full_matrices is true, the shape is \((m, n)\), else the shape is \((p, n)\). The dtype of r is same as x.
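
A numpy cross-check under the same full_matrices=False setting (numpy’s default 'reduced' mode; individual column signs of q and r may differ from the operator’s output):

>>> import numpy as np
>>> x = np.array([[20., -31, 7], [4, 270, -90], [-8, 17, -32]])
>>> q, r = np.linalg.qr(x)  # mode='reduced' by default
>>> print(np.allclose(q @ r, x))
True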

Supported Platforms:

Ascend GPU CPU

Examples

>>> qr_op = ops.Qr(full_matrices=False)
>>> x = Tensor([[20., -31, 7], [4, 270, -90], [-8, 17, -32]], mstype.float32)
>>> q, r = qr_op(x)
>>> print(q)
[[-0.912871    0.16366126  0.37400758]
 [-0.18257418 -0.9830709  -0.01544376]
 [ 0.36514837 -0.08238228  0.92729706]]
>>> print(r)
[[ -21.908903  -14.788506  -1.6431675]
[    0.       -271.9031    92.25824  ]
[    0.          0.       -25.665514 ]]
class tinyms.primitives.Quantile(dim=None, keep_dims=False, ignore_nan=False)[source]

Computes the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points.

Refer to mindspore.ops.quantile() and mindspore.ops.nanquantile() for more details.
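
The default behaviour matches numpy’s linear-interpolation quantiles; a quick cross-check of the example below:

>>> import numpy as np
>>> x = np.array([0.07, -0.5446, 0.9214])
>>> print(np.allclose(np.quantile(x, [0, 0.5, 1]), [-0.5446, 0.07, 0.9214]))
True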

Supported Platforms:

Examples

>>> quantile = ops.Quantile()
>>> input = Tensor(np.array([0.0700, -0.5446,  0.9214]), mindspore.float32)
>>> q = Tensor(np.array([0, 0.5, 1]), mindspore.float32)
>>> output = quantile(input, q)
>>> print(output)
[-0.5446  0.07  0.9214]
class tinyms.primitives.RGBToHSV[source]

Transforms a single image or a batch of images from the RGB to the HSV color space. Each pixel’s RGB value is converted to its corresponding HSV value. Note that the function is only well-defined for input pixel values in the range [0, 1].

Note

Last dimension of input images must be size 3.

Inputs:
  • images (Tensor) - 1-D or higher rank RGB data Tensor to convert, last dimension must be size 3. Must be one of the following types: float16, float32, float64.

Outputs:

A Tensor, has the same type and shape as input images.

Raises:
  • TypeError – If images is not tensor or its dtype is not float.

  • ValueError – If the rank of images is less than 1.

  • ValueError – If the last value of shape of images is not 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> images =  np.array([0.25, 0.5, 0.5]).astype(np.float32).reshape([1, 1, 1, 3])
>>> rgb_to_hsv = ops.RGBToHSV()
>>> output = rgb_to_hsv(Tensor(images))
>>> print(output)
[[[[0.5 0.5 0.5]]]]
class tinyms.primitives.RNNTLoss(blank_label=0)[source]

Computes the RNNTLoss and its gradient with respect to the softmax outputs.

Parameters:

blank_label (int) – blank label. Default: 0.

Inputs:
  • acts (Tensor) - Tensor of shape \((B, T, U, V)\). Data type must be float16 or float32.

  • labels (Tensor) - Tensor of shape \((B, U-1)\). Data type is int32.

  • input_lengths (Tensor) - Tensor of shape \((B,)\). Data type is int32.

  • label_lengths (Tensor) - Tensor of shape \((B,)\). Data type is int32.

Outputs:
  • costs (Tensor) - Tensor of shape \((B,)\). Data type is int32.

  • grads (Tensor) - Has the same shape and dtype as acts.

Raises:
  • TypeError – If acts, labels, input_lengths or label_lengths is not a Tensor.

  • TypeError – If dtype of acts is neither float16 nor float32.

  • TypeError – If dtype of labels, input_lengths or label_lengths is not int32.

Supported Platforms:

Ascend

Examples

>>> B, T, U, V = 1, 2, 3, 5
>>> blank = 0
>>> acts = np.random.random((B, T, U, V)).astype(np.float32)
>>> labels = np.array([[1, 2]]).astype(np.int32)
>>> input_length = np.array([T] * B).astype(np.int32)
>>> label_length = np.array([len(l) for l in labels]).astype(np.int32)
>>> rnnt_loss = ops.RNNTLoss(blank_label=0)
>>> costs, grads = rnnt_loss(Tensor(acts), Tensor(labels), Tensor(input_length), Tensor(label_length))
>>> print(costs.shape)
(1,)
>>> print(grads.shape)
(1, 2, 3, 5)
class tinyms.primitives.ROIAlign(pooled_height, pooled_width, spatial_scale, sample_num=2, roi_end_mode=1)[source]

Computes the Region of Interest (RoI) Align operator.

The operator computes the value of each sampling point by bilinear interpolation from the nearby grid points on the feature map. No quantization is performed on any coordinates involved in the RoI, its bins, or the sampling points. The details of (RoI) Align operator are described in Mask R-CNN.

Parameters:
  • pooled_height (int) – The output features height.

  • pooled_width (int) – The output features width.

  • spatial_scale (float) – A scaling factor that maps the raw image coordinates to the input feature map coordinates. Suppose the height of a RoI is ori_h in the raw image and fea_h in the input feature map, the spatial_scale must be fea_h / ori_h.

  • sample_num (int) – Number of sampling points. Default: 2.

  • roi_end_mode (int) – Number must be 0 or 1. If roi_end_mode=0, use the legacy implementation. If roi_end_mode=1, end pixel of the roi_box will be shifted by +1*spatial_scale. Default: 1.

Inputs:
  • features (Tensor) - The input features, whose shape must be \((N, C, H, W)\).

  • rois (Tensor) - The shape is \((rois\_n, 5)\). With data type of float16 or float32. rois_n represents the number of RoI. The size of the second dimension must be 5 and the 5 columns are \((image\_index, top\_left\_x, top\_left\_y, bottom\_right\_x, bottom\_right\_y)\). image_index represents the index of image. top_left_x and top_left_y represent the x, y coordinates of the top left corner of corresponding RoI, respectively. bottom_right_x and bottom_right_y represent the x, y coordinates of the bottom right corner of corresponding RoI, respectively.

Outputs:

Tensor, the shape is \((rois\_n, C, pooled\_height, pooled\_width)\).

Raises:
  • TypeError – If pooled_height, pooled_width, sample_num or roi_end_mode is not an int.

  • TypeError – If spatial_scale is not a float.

  • TypeError – If features or rois is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> features = Tensor(np.array([[[[1., 2.], [3., 4.]]]]), mindspore.float32)
>>> rois = Tensor(np.array([[0, 0.2, 0.3, 0.2, 0.3]]), mindspore.float32)
>>> roi_align = ops.ROIAlign(2, 2, 0.5, 2)
>>> output = roi_align(features, rois)
>>> print(output)
[[[[1.775 2.025]
   [2.275 2.525]]]]
class tinyms.primitives.RaggedRange(Tsplits)[source]

Returns a RaggedTensor containing the specified sequences of numbers.

Parameters:

Tsplits (mindspore.dtype) – An mindspore.dtype from: mindspore.int32, mindspore.int64.

Inputs:
  • starts (Tensor) - The starts of each range, whose type is int32, int64, float32 or float64, and shape is 0D or 1D.

  • limits (Tensor) - The limits of each range, whose type and shape should be the same as input starts.

  • deltas (Tensor) - The deltas of each range, whose type and shape should be the same as input starts; each element in the tensor should not be equal to 0.

Outputs:
  • rt_nested_splits (Tensor) - The nested splits of the return RaggedTensor, and type of the tensor is Tsplits, shape of the tensor is equal to shape of input starts plus 1.

  • rt_dense_values (Tensor) - The dense values of the returned RaggedTensor, whose type is the same as input starts. Let i index the elements of starts, limits and deltas; then (see the numpy sketch below):

    • if the type of starts, limits and deltas is int32 or int64, the size of the output rt_dense_values equals \(sum((abs(limits[i] - starts[i]) + abs(deltas[i]) - 1) / abs(deltas[i]))\) with integer division, i.e. a ceiling of the element count per range,

    • if the type of starts, limits and deltas is float32 or float64, the size of the output rt_dense_values equals \(sum(ceil(abs((limits[i] - starts[i]) / deltas[i])))\).
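
A numpy sketch of these size rules, using the example inputs below:

>>> import numpy as np
>>> starts, limits, deltas = np.array([2, 5, 8]), np.array([3, 5, 12]), np.array([1, 1, 1])
>>> sizes = np.ceil(np.abs((limits - starts) / deltas)).astype(np.int64)
>>> print(np.concatenate(([0], np.cumsum(sizes))))  # rt_nested_splits
[0 1 1 5]
>>> print(np.concatenate([np.arange(s, l, d) for s, l, d in zip(starts, limits, deltas)]))
[ 2  8  9 10 11]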

Raises:
  • TypeError – If any input is not Tensor.

  • TypeError – If the type of starts is not one of the following dtype: int32, int64, float32, float64.

  • TypeError – If the types of starts, limits and deltas are not the same.

  • TypeError – If the type of Tsplits is not one of the following dtype: mstype.int32, mstype.int64.

  • ValueError – If the inputs starts, limits, and deltas are not 0D or 1D.

  • ValueError – If any element of deltas is equal to 0.

  • ValueError – If the shapes of starts, limits and deltas are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> raggedrange = ops.RaggedRange(Tsplits=mstype.int64)
>>> starts = Tensor(np.array([2, 5, 8]).astype(np.int32))
>>> limits = Tensor(np.array([3, 5, 12]).astype(np.int32))
>>> deltas = Tensor(np.array([1, 1, 1]).astype(np.int32))
>>> (rt_nested_splits, rt_dense_values) = raggedrange(starts, limits, deltas)
>>> print(rt_nested_splits)
[0 1 1 5]
>>> print(rt_dense_values)
[ 2  8  9 10 11]
class tinyms.primitives.RandomCategorical(dtype=mindspore.int64)[source]

Generates random samples from a given categorical distribution tensor.

Parameters:

dtype (mindspore.dtype) – The type of output. Its value must be one of mindspore.int16, mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Inputs:
  • logits (Tensor) - The input tensor. 2-D Tensor with shape \((batch\_size, num\_classes)\).

  • num_sample (int) - Number of samples to be drawn. Only a constant value is allowed.

  • seed (int) - Random seed. Default: 0. Only a constant value is allowed.

Outputs:
  • output (Tensor) - The output Tensor with shape \((batch\_size, num\_samples)\).

Raises:
  • TypeError – If dtype is not one of the following: mindspore.int16, mindspore.int32, mindspore.int64.

  • TypeError – If logits is not a Tensor.

  • TypeError – If num_sample or seed is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...   def __init__(self, num_sample):
...     super(Net, self).__init__()
...     self.random_categorical = ops.RandomCategorical(mindspore.int64)
...     self.num_sample = num_sample
...   def construct(self, logits, seed=0):
...     return self.random_categorical(logits, self.num_sample, seed)
...
>>> x = np.random.random((10, 5)).astype(np.float32)
>>> net = Net(8)
>>> output = net(Tensor(x))
>>> result = output.shape
>>> print(result)
(10, 8)
class tinyms.primitives.RandomChoiceWithMask(count=256, seed=0, seed2=0)[source]

Generates a random sample as index tensor with a mask tensor from a given tensor.

Refer to mindspore.ops.choice_with_mask() for more details.

Parameters:
  • count (int, optional) – Number of items expected to be generated, must be greater than 0. Default: 256.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: 0.

  • seed2 (int, optional) – Second seed to avoid collision. Default: 0.

Inputs:
  • input_x (Tensor[bool]) - The input tensor. The input tensor rank must be greater than or equal to 1 and less than or equal to 5.

Outputs:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output shape is 2-D.

  • mask (Tensor) - The output shape is 1-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> rnd_choice_mask = ops.RandomChoiceWithMask()
>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool_))
>>> output_y, output_mask = rnd_choice_mask(input_x)
>>> result = output_y.shape
>>> print(result)
(256, 2)
>>> result = output_mask.shape
>>> print(result)
(256,)
class tinyms.primitives.RandomGamma(seed=0, seed2=0)[source]

Produces random positive floating-point values x, distributed according to the probability density function:

\[\text{P}(x|α,β) = \frac{\exp(-x/β)}{Γ(α)β^{α}}x^{α-1}\]

Note

  • Random seed: A set of regular random numbers can be obtained through some complex mathematical algorithms, and the random seed is the initial value of this random number. If the random seed is the same, the random number obtained will not change.

  • Global random seed and operator-level random seed are not set: Use the default value as the random seed.

  • Global random seed is set, but operator-level random seed is not set: A global random seed will splice with a randomly generated seed.

  • Global random seed is not set, operator-level random seed is set: The default global random seed is used, and splices with the operator-level random seed.

  • Both Global random and operator-level random seed are set: The global random seed will splice with the operator-level random seed.

Parameters:
  • seed (int, optional) – The operator-level random seed, used to generate random numbers, must be non-negative. Default: 0.

  • seed2 (int, optional) – The global random seed, which combines with the operator-level random seed to determine the final generated random number, must be non-negative. Default: 0.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. It must be constant value.

  • alpha (Tensor) - α is the shape parameter of the Gamma distribution, which mainly determines the shape of the density curve. It must be greater than 0 and have data type float32.

Outputs:

Tensor. Its shape is the concatenation of the value of the input shape and the shape of alpha. The dtype is the same type as alpha.

Raises:
  • TypeError – If data type of seed or seed2 is not int.

  • TypeError – If shape or alpha is not a Tensor.

  • TypeError – If data type of alpha is not float32.

  • ValueError – If shape is not a constant value.

Supported Platforms:

CPU

Examples

>>> shape = Tensor(np.array([3, 1, 2]), mstype.int32)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> gamma = ops.RandomGamma(seed=3)
>>> output = gamma(shape, alpha)
>>> result = output.shape
>>> print(result)
(3, 1, 2, 2, 2)
class tinyms.primitives.RandomPoisson(seed=0, seed2=0, dtype=mindspore.int64)[source]

Produces random non-negative values i, distributed according to the discrete probability function:

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • seed (int, optional) – Random number seed. If either seed or seed2 are set to be non-zero, the seed is set by the given seed. Otherwise, it is seeded by a random seed. Default: 0.

  • seed2 (int, optional) – A second seed to avoid seed collision. Default: 0.

  • dtype (mindspore.dtype, optional) – The type of output. Default: mstype.int64.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated, 1-D Tensor, whose dtype must be in [int32, int64].

  • rate (Tensor) - The μ parameter of the distribution. It defines the mean number of occurrences of the event. Its type must be in [float16, float32, float64, int32, int64].

Outputs:

Tensor. Its shape is \((*shape, *rate.shape)\). Its type is specified by dtype.

Raises:
  • TypeError – If shape is not a Tensor or its dtype is not int32 or int64.

  • TypeError – If dtype is not int32 or int64.

  • ValueError – If shape is not a 1-D tensor.

  • ValueError – If shape elements are negative.

Supported Platforms:

GPU CPU

Examples

>>> shape = Tensor(np.array([2, 3]), mstype.int32)
>>> rate = Tensor(np.array([2, 2]), mstype.int32)
>>> seed = 0
>>> seed2 = 0
>>> random_poisson = ops.RandomPoisson(seed=seed, seed2=seed2)
>>> output = random_poisson(shape,rate)
>>> print(output.shape)
(2, 3, 2)
class tinyms.primitives.RandomShuffle(seed=0, seed2=0)[source]

Randomly shuffles a Tensor along its first dimension.

Parameters:
  • seed (int, optional) – Random seed. If seed or seed2 is set to non-zero, the random number generator will be seeded by the given seed. Otherwise, it will be seeded randomly. The seed must be non-negative. Default: 0.

  • seed2 (int, optional) – A second seed to avoid seed collision. If seed is 0, the seed2 will be used as the seed of the random generator. It must be non-negative. Default: 0.

Inputs:
  • x (Tensor) - The Tensor to be shuffled.

Outputs:

Tensor. The shape and type are the same as the input x.

Raises:

TypeError – If data type of seed or seed2 is not int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mstype.float32)
>>> shuffle = ops.RandomShuffle(seed=1, seed2=1)
>>> output = shuffle(x)
>>> print(output.shape)
(4,)
class tinyms.primitives.Randperm(max_length=1, pad=-1, dtype=mindspore.int32)[source]

Generates n random samples from 0 to n-1 without repeating. If max_length > n, the last max_length-n elements will be filled with pad.
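
A numpy sketch of the padding behaviour (with n=20, max_length=30, pad=-1, as in the example below):

>>> import numpy as np
>>> n, max_length, pad = 20, 30, -1
>>> out = np.concatenate([np.random.permutation(n), np.full(max_length - n, pad)])
>>> print(out.shape)
(30,)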

Parameters:
  • max_length (int) – Number of items expected to be generated, must be greater than 0. Default: 1.

  • pad (int) – The pad value to be filled. Default: -1.

  • dtype (mindspore.dtype) – The type of output. Default: mindspore.int32.

Inputs:
  • n (Tensor) - The input tensor with shape \((1,)\) and dtype int32 or int64. n must be in range [0, max_length].

Outputs:
  • output (Tensor) - The output Tensor with shape: (max_length,) and type: dtype.

Supported Platforms:

Ascend GPU

Examples

>>> # The result of every execution is different because this operator will generate n random samples.
>>> randperm = ops.Randperm(max_length=30, pad=-1)
>>> n = Tensor([20], dtype=mindspore.int32)
>>> output = randperm(n)
>>> print(output)
[15 6 11 19 14 16 9 5 13 18 4 10 8 0 17 2 1 12 3 7
 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
class tinyms.primitives.Range(maxlen=1000000)[source]

Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.

Refer to mindspore.ops.range() for more details.

Parameters:

maxlen (int, optional) – Memory that can fit maxlen many elements will be allocated for the output. Optional, must be positive, defaults to 1000000. If the output has more than maxlen elements, a runtime error will occur.

Inputs:
  • start (Tensor) - A scalar Tensor. The first number in the sequence. Must have type: int32, int64, float32 or float64.

  • limit (Tensor) - A scalar Tensor. Upper limit of the sequence, exclusive. Must have type: int32, int64, float32 or float64.

  • delta (Tensor) - A scalar Tensor. Number that increments start. Must have type: int32, int64, float32 or float64.

Outputs:

A 1-D Tensor, with the same type as the inputs.

Supported Platforms:

GPU CPU

Examples

>>> start = Tensor(0, mstype.int32)
>>> limit = Tensor(10, mstype.int32)
>>> delta = Tensor(4, mstype.int32)
>>> output = ops.Range()(start, limit, delta)
>>> print(output)
[0 4 8]
infer_value(start_value, limit_value, delta_value)[source]

Infer the value of input for Range.

class tinyms.primitives.Rank[source]

Returns the rank of a tensor.

Refer to mindspore.ops.rank() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> rank = ops.Rank()
>>> output = rank(input_tensor)
>>> print(output)
2
>>> print(type(output))
<class 'int'>
class tinyms.primitives.ReLU[source]

Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise.

Refer to mindspore.ops.relu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu = ops.ReLU()
>>> output = relu(input_x)
>>> print(output)
[[0. 4. 0.]
 [2. 0. 9.]]
class tinyms.primitives.ReLU6[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.

Refer to mindspore.ops.relu6() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu6 = ops.ReLU6()
>>> result = relu6(input_x)
>>> print(result)
[[0. 4. 0.]
 [2. 0. 6.]]
class tinyms.primitives.ReLUV2[source]

The ReLUV2 interface is deprecated. Please use mindspore.ops.ReLU instead.

Rectified Linear Unit activation function.

It returns element-wise \(\max(0, x)\); specifically, the neurons with negative output will be suppressed and the active neurons will stay the same.

\[\text{ReLU}(x) = (x)^+ = \max(0, x)\]
Inputs:
  • input_x (Tensor) - The input tensor must be a 4-D tensor.

Outputs:
  • output (Tensor) - Has the same type and shape as the input_x.

  • mask (Tensor) - A tensor, but it is meaningless.

Supported Platforms:

deprecated

Examples

>>> input_x = Tensor(np.array([[[[1, -2], [-3, 4]], [[-5, 6], [7, -8]]]]), mindspore.float32)
>>> relu_v2 = ops.ReLUV2()
>>> output, _= relu_v2(input_x)
>>> print(output)
[[[[1. 0.]
   [0. 4.]]
  [[0. 6.]
   [7. 0.]]]]
class tinyms.primitives.Real[source]

Returns a Tensor that is the real part of the input. If input is real, it is returned unchanged.

Inputs:
  • input (Tensor) - The input tensor.

Outputs:

Tensor, the shape is the same as the input.

Raises:

TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3+0.4j), mindspore.complex64)
>>> real = ops.Real()
>>> output = real(x)
>>> print(output)
1.3
class tinyms.primitives.RealDiv[source]

Divides the first input tensor by the second input tensor in floating-point type element-wise.

Refer to mindspore.ops.div() for more details.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> realdiv = ops.RealDiv()
>>> output = realdiv(x, y)
>>> print(output)
[0.25 0.4  0.5 ]
class tinyms.primitives.Reciprocal[source]

Returns reciprocal of a tensor element-wise.

\[out_{i} = \frac{1}{x_{i}}\]
Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> reciprocal = ops.Reciprocal()
>>> output = reciprocal(x)
>>> print(output)
[1.   0.5  0.25]
class tinyms.primitives.ReduceAll(keep_dims=False)[source]

Reduces a dimension of a tensor by the “logical AND” of all elements in the dimension, by default. It can also reduce a dimension of x along the given axis. Whether the dimensions of the output and input are the same is determined by keep_dims.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[bool]) - The input tensor. The dtype of the tensor to be reduced is bool. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical and” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAll(keep_dims=True)
>>> # case 1: Reduces a dimension by the "logicalAND" of all elements in the dimension.
>>> output = op(x)
>>> print(output)
[[False]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[ True False]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[False]
 [ True]]
class tinyms.primitives.ReduceAny(keep_dims=False)[source]

Reduces a dimension of a tensor by the “logical OR” of all elements in the dimension, by default. It can also reduce a dimension of x along the given axis. Whether the dimensions of the output and input are the same is determined by keep_dims.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[bool]) - The input tensor. The dtype of the tensor to be reduced is bool. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical or” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAny(keep_dims=True)
>>> # case 1: Reduces a dimension by the "logical OR" of all elements in the dimension.
>>> output = op(x)
>>> print(output)
[[ True]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[ True  True]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[ True]
 [ True]]
class tinyms.primitives.ReduceMax(keep_dims=False)[source]

Reduces a dimension of a tensor by taking the maximum value in the dimension, by default. It can also reduce a dimension of x along the given axis. Whether the dimensions of the output and input are the same is determined by keep_dims.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMax(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the maximum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[9.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[7. 7. 7. 7. 7. 7.]
  [8. 8. 8. 8. 8. 8.]
  [9. 9. 9. 9. 9. 9.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[3. 3. 3. 3. 3. 3.]]
 [[6. 6. 6. 6. 6. 6.]]
 [[9. 9. 9. 9. 9. 9.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
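
To see the effect of keep_dims, the same axis-1 reduction with keep_dims=False drops the reduced dimension instead of keeping it with length 1; a minimal sketch using the x from case 1:

>>> op_nokeep = ops.ReduceMax(keep_dims=False)
>>> output = op_nokeep(x, 1)
>>> print(output.shape)
(3, 6)
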
class tinyms.primitives.ReduceMean(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by averaging the elements in each of them; it can also reduce only the dimensions of x given in axis. Whether the output keeps the same number of dimensions as the input is controlled by keep_dims.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMean(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by averaging all elements in the dimension.
>>> x = Tensor(np.array([[[2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2]],
... [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
... [[6, 6, 6, 6, 6, 6], [8, 8, 8, 8, 8, 8], [10, 10, 10, 10, 10, 10]]]),
... mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[5.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along the axis 0
>>> output = op(x, 0)
>>> print(output)
[[[4. 4. 4. 4. 4. 4.]
  [5. 5. 5. 5. 5. 5.]
  [6. 6. 6. 6. 6. 6.]]]
>>> # case 3: Reduces a dimension along the axis 1
>>> output = op(x, 1)
>>> print(output)
[[[2. 2. 2. 2. 2. 2.]]
 [[5. 5. 5. 5. 5. 5.]]
 [[8. 8. 8. 8. 8. 8.]]]
>>> # case 4: Reduces a dimension along the axis 2
>>> output = op(x, 2)
>>> print(output)
[[[ 2.]
  [ 2.]
  [ 2.]]
 [[ 4.]
  [ 5.]
  [ 6.]]
 [[ 6.]
  [ 8.]
  [10.]]]
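
axis may also be a tuple, reducing several dimensions in a single call; a minimal sketch using the op and x from case 1:

>>> output = op(x, (1, 2))
>>> print(output)
[[[2.]]
 [[5.]]
 [[8.]]]
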
class tinyms.primitives.ReduceMin(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by taking the minimum value in each of them; it can also reduce only the dimensions of x given in axis. Whether the output keeps the same number of dimensions as the input is controlled by keep_dims.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMin(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the minimum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[1.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]
  [2. 2. 2. 2. 2. 2.]
  [3. 3. 3. 3. 3. 3.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]]
 [[4. 4. 4. 4. 4. 4.]]
 [[7. 7. 7. 7. 7. 7.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
class tinyms.primitives.ReduceOp[source]

Operation options for reducing tensors. This is an enumerated type, not an operator.

The main calling methods are as follows:

  • SUM: ReduceOp.SUM.

  • MAX: ReduceOp.MAX.

  • MIN: ReduceOp.MIN.

  • PROD: ReduceOp.PROD.

There are four kinds of operation options, “SUM”, “MAX”, “MIN”, and “PROD”.

  • SUM: Take the sum.

  • MAX: Take the maximum.

  • MIN: Take the minimum.

  • PROD: Take the product.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial.

This example should be run with multiple devices.

>>> import numpy as np
>>> import mindspore
>>> from mindspore.communication import init
>>> from mindspore import Tensor, ops, nn
>>> from mindspore.ops import ReduceOp
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
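
Choosing a different reduction only changes the enum member passed to the communication operator; for example, a minimal sketch that still assumes the same multi-device setup:

>>> allreduce_max = ops.AllReduce(ReduceOp.MAX)
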
class tinyms.primitives.ReduceProd(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by multiplying the elements in each of them; it can also reduce only the dimensions of x given in axis. Whether the output keeps the same number of dimensions as the input is controlled by keep_dims.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceProd(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by multiplying all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[2.2833798e+33]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[ 28.  28.  28.  28.  28.  28.]
  [ 80.  80.  80.  80.  80.  80.]
  [162. 162. 162. 162. 162. 162.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[  6.   6.   6.   6.   6.   6.]]
 [[120. 120. 120. 120. 120. 120.]]
 [[504. 504. 504. 504. 504. 504.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.00000e+00]
  [6.40000e+01]
  [7.29000e+02]]
 [[4.09600e+03]
  [1.56250e+04]
  [4.66560e+04]]
 [[1.17649e+05]
  [2.62144e+05]
  [5.31441e+05]]]
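
As a sanity check on case 4, each row contains six identical factors, so every output element is the row value raised to the sixth power; for instance, \(9^6 = 531441\), which matches the final entry printed as 5.31441e+05.
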
class tinyms.primitives.ReduceScatter(op='sum', group='hccl_world_group')[source]

Reduces and scatters tensors from the specified communication group. For more details about it, please refer to Distributed Set Communication Primitives - ReduceScatter .

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters:
  • op (str) – Specifies an operation used for element-wise reductions, like SUM and MAX. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - Input Tensor, suppose it has a shape \((N, *)\), where * means any number of additional dimensions. N must be divisible by rank_size. rank_size refers to the number of cards in the communication group.

Outputs:

Tensor, it has the same dtype as input_x with a shape of \((N/rank\_size, *)\).

Raises:
  • TypeError – If either op or group is not a string.

  • ValueError – If the first dimension of the input cannot be divided by the rank_size.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial.

This example should be run with 2 devices.

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.reducescatter = ops.ReduceScatter(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.reducescatter(x)
...
>>> input_ = Tensor(np.ones([8, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
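
In this example, each of the 2 ranks holds an all-ones \((8, 8)\) tensor: the element-wise sum across ranks yields 2 everywhere, and the result is then scattered along the first dimension, so every rank receives a \((8/2, 8) = (4, 8)\) slice.
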
class tinyms.primitives.ReduceStd(axis=(), unbiased=True, keep_dims=False)[source]

Returns the standard-deviation and mean of the input Tensor along dimension(s) specified by axis.

Parameters:
  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Let r be rank of input_x, it should be in the range \([-r,r)\).

  • unbiased (bool, optional) – Whether to use Bessel’s correction. If True, the unbiased estimation with Bessel’s correction is used. If False, the biased estimation is used to calculate the standard deviation. Default: True.

  • keep_dims (bool, optional) – Whether the output Tensor has dim retained or not. If True, keep these reduced dimensions specified by axis and the length is 1. If False, don’t keep these dimensions. Default: False.

Inputs:
  • input_x (Tensor[Number]) - The input Tensor, it has dtype Number with shape \((N, *)\) where \(*\) means any number of additional dimensions.

Outputs:

Tuple(output_std, output_mean) containing the standard deviation and mean.

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> op = ops.ReduceStd(axis=1, unbiased=True, keep_dims=False)
>>> output = op(input_x)
>>> output_std, output_mean = output[0], output[1]
>>> print(output_std)
[1.        2.5166113]
>>> print(output_mean)
[2.        1.3333334]
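
The same statistics can be checked against NumPy, where ddof=1 applies Bessel's correction and therefore corresponds to unbiased=True; a minimal comparison sketch:

>>> a = np.array([[1, 2, 3], [-1, 1, 4]], dtype=np.float32)
>>> print(np.allclose(np.std(a, axis=1, ddof=1), output_std.asnumpy()))
True
>>> print(np.allclose(np.mean(a, axis=1), output_mean.asnumpy()))
True
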
class tinyms.primitives.ReduceSum(keep_dims=False, skip_mode=False)[source]

By default, reduces all dimensions of a tensor by summing the elements in each of them; it can also reduce only the dimensions of x given in axis. Whether the output keeps the same number of dimensions as the input is controlled by keep_dims.

Parameters:
  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

  • skip_mode (bool) – If true and axis is an empty tuple or empty list, the ReduceSum operation is skipped. If true and axis has other values, the ReduceSum calculation is performed normally. If false, the reduction is always performed. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions when skip_mode is false. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), keep_dims is False, and skip_mode is False, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is (), and skip_mode is True, the ReduceSum operation is not performed, output tensor is equal to the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int) or list(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If skip_mode is not a bool.

  • TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceSum(keep_dims=True)
>>> output = op(x, 1)
>>> output.shape
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by summing all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[270.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[12. 12. 12. 12. 12. 12.]
  [15. 15. 15. 15. 15. 15.]
  [18. 18. 18. 18. 18. 18.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[ 6.  6.  6.  6.  6.  6.]]
 [[15. 15. 15. 15. 15. 15.]]
 [[24. 24. 24. 24. 24. 24.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[ 6.]
  [12.]
  [18.]]
 [[24.]
  [30.]
  [36.]]
 [[42.]
  [48.]
  [54.]]]
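
A minimal sketch of skip_mode, using the x from case 1: when axis is an empty tuple and skip_mode is True, the input passes through unreduced.

>>> op_skip = ops.ReduceSum(keep_dims=False, skip_mode=True)
>>> output = op_skip(x, ())
>>> print(output.shape)
(3, 3, 6)
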
infer_value(input_x, axis)[source]

return reduce op value

class tinyms.primitives.Renorm(p, dim, maxnorm)[source]

Renormalizes the sub-tensors along dimension dim so that the p-norm of each sub-tensor does not exceed maxnorm. If the p-norm of a sub-tensor is already less than maxnorm, its values are left unchanged; otherwise, each value of the sub-tensor is divided by the p-norm of the sub-tensor and then multiplied by maxnorm.

Refer to mindspore.ops.renorm() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), mindspore.float32)
>>> y = ops.Renorm(p=1, dim=0, maxnorm=5.)(x)
>>> print(y)
[[1.        1.        1.       ]
 [1.6666666 1.6666666 1.6666666]
 [1.6666667 1.6666667 1.6666667]]
class tinyms.primitives.Reshape[source]

Rearranges the input Tensor based on the given shape.

Refer to mindspore.ops.reshape() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> reshape = ops.Reshape()
>>> output = reshape(input_x, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
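
One component of the target shape may be -1, in which case that dimension is inferred from the total element count; a minimal sketch with the input_x above:

>>> output = reshape(input_x, (-1, 2))
>>> print(output.shape)
(3, 2)
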
infer_value(x, shape)[source]

infer value

class tinyms.primitives.ResizeArea(align_corners=False)[source]

Resize images to a certain size using area interpolation.

The resizing process only changes the two dimensions of images, which represent the width and height of images.

Warning

The values of size must be greater than zero.

Parameters:

align_corners (bool, optional) – A boolean flag that specifies whether to align the centers of the four corner pixels of the input and output tensors. When this flag is set to True, the corner pixels of the output tensor are aligned with the corner pixels of the input tensor, which preserves the values at the corner pixels. Default: False.

Inputs:
  • images (Tensor) - Input images must be a 4-D tensor with shape \((batch, height, width, channels)\). The format must be “NHWC”. Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • size (Tensor) - The new size of the output image. Must be a 1-D tensor of 2 elements: new_height, new_width. Types allowed: int32.

Outputs:

A 4-D tensor of shape \((batch, new\_height, new\_width, channels)\) with type float32.

Raises:
  • TypeError – If dtype of images is not supported.

  • TypeError – If dtype of size is not int32.

  • TypeError – If dtype of align_corners is not bool.

  • ValueError – If the num of inputs is not 2.

  • ValueError – If the dimension of images is not 4.

  • ValueError – If the dimension of size is not 1.

  • ValueError – If the element num of size is not 2.

  • ValueError – If any value of size is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> images = Tensor([[[[2], [4], [6], [8]], [[10], [12], [14], [16]]]], mindspore.float16)
>>> size = Tensor([1, 2], mindspore.int32)
>>> resizearea = ops.ResizeArea()
>>> output = resizearea(images, size)
>>> print(output.asnumpy())
[[[[ 7.]
   [11.]]]]
class tinyms.primitives.ResizeBicubic(align_corners=False, half_pixel_centers=False)[source]

Resize images to size using bicubic interpolation.

Parameters:
  • align_corners (bool, optional) – If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Default: False.

  • half_pixel_centers (bool, optional) – Whether to use half-pixel center alignment. If set to True, align_corners should be False. Default: False.

Inputs:
  • images (Tensor) - The input image must be a 4-D tensor of shape \((batch, channels, height, width)\). The format must be NCHW. Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • size (Tensor) - A 1-D tensor of shape [2], with 2 elements: new_height, new_width. Types allowed: int32.

Outputs:

A 4-D tensor of shape \((batch, channels, new\_height, new\_width)\) with type float32.

Raises:
  • TypeError – If images type is not allowed.

  • TypeError – If size type is not int32.

  • TypeError – If align_corners type is not bool.

  • TypeError – If half_pixel_centers type is not bool.

  • ValueError – If images dim is not 4.

  • ValueError – If size dim is not 1.

  • ValueError – If the element number of size is not 2.

  • ValueError – If any size value is not positive.

  • ValueError – If align_corners and half_pixel_centers value are both True.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class NetResizeBicubic(nn.Cell):
...     def __init__(self):
...         super(NetResizeBicubic, self).__init__()
...         align_corners = False
...         half_pixel_centers = False
...         self.resize = ops.ResizeBicubic(align_corners, half_pixel_centers)
...
...     def construct(self, images, size):
...         return self.resize(images, size)
...
>>> images = Tensor(np.array([1, 2, 3, 4]).reshape(1, 2, 2, 1).astype(np.float32))
>>> size = Tensor([1, 4], mindspore.int32)
>>> resizebicubic = NetResizeBicubic()
>>> output = resizebicubic(images, size)
>>> print(output)
[[[[1.     ]
   [1.5    ]
   [2.     ]
   [2.09375]]]]
class tinyms.primitives.ResizeBilinear(size, align_corners=False, half_pixel_centers=False)[source]

This API is deprecated, please use the mindspore.ops.ResizeBilinearV2 instead. For general resizing with other interpolation methods, refer to mindspore.ops.interpolate() for more details.

Note

Dynamic shape feature is not supported for now.

Supported Platforms:

Ascend GPU CPU

class tinyms.primitives.ResizeBilinearV2(align_corners=False, half_pixel_centers=False)[source]

Resizes an image to a certain size using the bilinear interpolation.

The resizing only affects the lower two dimensions which represent the height and width.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • align_corners (bool, optional) – If true, rescale input by \((new\_height - 1) / (height - 1)\), which exactly aligns the 4 corners of images and resized images. If false, rescale by \(new\_height / height\). Default: False.

  • half_pixel_centers (bool, optional) – Whether half pixel center. If set to True, align_corners should be False. Default: False.

Inputs:
  • x (Tensor): Image to be resized. Input images must be a 4-D tensor with shape \((batch, channels, height, width)\), with data type of float32 or float16.

  • size (Union[tuple[int], list[int], Tensor]): The new size of the images. A tuple or list or Tensor of 2 int elements \((new\_height, new\_width)\).

Outputs:

Tensor, resized image. 4-D with shape \((batch, channels, new\_height, new\_width)\), with the same data type as input x.

Raises:
  • TypeError – If align_corners is not a bool.

  • TypeError – If half_pixel_centers is not a bool.

  • TypeError – If align_corners and half_pixel_centers are both True.

  • ValueError – If half_pixel_centers is True and device_target is CPU.

  • ValueError – If dim of x is not 4.

  • ValueError – If size is Tensor and its dim is not 1.

  • ValueError – If size contains other than 2 elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.float32)
>>> output = ops.ResizeBilinearV2()(x, (5, 5))
>>> print(output)
[[[[1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]]]]
class tinyms.primitives.ResizeLinear1D(coordinate_transformation_mode='align_corners')[source]

Resizes the input tensor x using the linear interpolation method.

For general resize, refer to mindspore.ops.interpolate() for more details.

Warning

  • This is an experimental API that is subject to change.

  • Currently, the Ascend platform only supports scenarios where the input size is Tuple or List.

Parameters:

coordinate_transformation_mode (str) – Describes how to transform a coordinate in the resized tensor to a coordinate in the original tensor. The other option is ‘half_pixel’. Default: ‘align_corners’.

Inputs:
  • x (Tensor) - The 3-D tensor to be resized, with shape [batch, channel, width]. Must be one of the following types: uint8, int8, int16, int32, int64, float16, float32, double.

  • size (Union[Tuple[int], List[int], Tensor[int]]) - Describes the new width of x. A tuple, list or 1-D tensor with a single int element \((new\_width)\).

Outputs:

A 3-D tensor of shape [batch, channel, new_width], with the same type as x.

Raises:
  • TypeError – If dtype of x is not in the support list.

  • TypeError – If size is not in Union[Tuple[int], List[int], Tensor[int]].

  • TypeError – If coordinate_transformation_mode is not a string.

  • TypeError – If coordinate_transformation_mode is not in the support list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[1, 2, 3], [4, 5, 6]]], mindspore.float32)
>>> size = (6,)
>>> resize_linear_1d = ops.ResizeLinear1D(coordinate_transformation_mode="align_corners")
>>> output = resize_linear_1d(x, size)
>>> print(output)
[[[1. 1.4 1.8 2.2 2.6 3.]
  [4. 4.4 4.8 5.2 5.6 6.]]]
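
The other coordinate transformation mode is selected the same way; the interpolated values differ from align_corners, but the output shape does not (a minimal sketch):

>>> resize_half_pixel = ops.ResizeLinear1D(coordinate_transformation_mode="half_pixel")
>>> output = resize_half_pixel(x, size)
>>> print(output.shape)
(1, 2, 6)
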
class tinyms.primitives.ResizeNearestNeighbor(size, align_corners=False)[source]

Resizes the input tensor to a given size by using the nearest neighbor algorithm. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters:
  • size (Union[tuple, list]) – The target size. The dimension of size must be 2.

  • align_corners (bool) – Whether the centers of the 4 corner pixels of the input and output tensors are aligned. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor. The shape of the tensor is \((N, C, H, W)\).

Outputs:

Tensor, the shape of the output tensor is \((N, C, NEW\_H, NEW\_W)\). The data type is the same as the input_x.

Raises:
  • TypeError – If size is neither tuple nor list.

  • TypeError – If align_corners is not a bool.

  • ValueError – If length of size is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_tensor = Tensor(np.array([[[[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]]]), mindspore.float32)
>>> size = (2, 2)
>>> output = ops.ResizeNearestNeighbor(size=size)(input_tensor)
>>> print(output)
[[[[-0.1  0.3]
   [ 0.4  0.5]]]]
class tinyms.primitives.ResizeNearestNeighborV2(align_corners=False, half_pixel_centers=False, data_format='NHWC')[source]

Resizes the input tensor to specific size by using the nearest neighbor algorithm.

The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters:
  • align_corners (bool, optional) – If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Default: False.

  • half_pixel_centers (bool, optional) – Whether half pixel center. If set to True, align_corners should be False. Default: False.

  • data_format (str, optional) – An optional string that describes the format of the input x. Default: NHWC.

Inputs:
  • x (Tensor) - 4-D with shape \((batch, height, width, channels)\) or \((batch, channels, height, width)\) depending on the attr ‘data_format’. Supported types: int8, uint8, int16, uint16, int32, int64, float16, float32, float64.

  • size (Tensor) - The new size for the images. A 1-D int32 Tensor of 2 elements: [new_height, new_width].

Outputs:
  • y (Tensor) - The resized images. A 4-D with shape \((batch, new\_height, new\_width, channels)\) or \((batch, channels, new\_height, new\_width)\) depending on the attr data_format. It has the same dtype as x.

Raises:
  • TypeError – If x or size is not a Tensor.

  • TypeError – If the data type of x is not in supported list.

  • TypeError – If the data type of size is not int32.

  • TypeError – If align_corners or half_pixel_centers is not bool.

  • TypeError – If data_format is not string.

  • ValueError – If data_format is not in [NHWC, NCHW].

  • ValueError – If any value of size is non-positive.

  • ValueError – If the dimension of x is not 4.

  • ValueError – If the dimension of size is not 1.

  • ValueError – If the elements number of size is not 2.

  • ValueError – If attr half_pixel_centers and align_corners are True at the same time.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.ones((1, 4, 4, 1)), mstype.float32)
>>> size = Tensor([2, 2], mstype.int32)
>>> resize = ops.ResizeNearestNeighborV2()
>>> output = resize(input_tensor, size)
>>> print(output)
[[[[1.]
   [1.]]
  [[1.]
   [1.]]]]
>>> print(output.shape)
(1, 2, 2, 1)
class tinyms.primitives.ReverseSequence(seq_dim, batch_dim=0)[source]

Reverses variable length slices.

Parameters:
  • seq_dim (int) – The dimension where reversal is performed. Required.

  • batch_dim (int) – The input is sliced in this dimension. Default: 0.

Inputs:
  • x (Tensor) - The input to reverse, supporting all number types including bool.

  • seq_lengths (Tensor) - Must be a 1-D vector with int32 or int64 types.

Outputs:

Tensor, with the same shape and data type as x.

Raises:
  • TypeError – If seq_dim or batch_dim is not an int.

  • ValueError – If value of batch_dim is equal to or greater than length of shape of x .

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 2. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=0, batch_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 5. 9.]
 [4. 2. 6.]
 [7. 8. 3.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([2, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[2. 1. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([3, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[3. 2. 1.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([4, 4]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[4. 3. 2. 1.]
 [8. 7. 6. 5.]]
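
In each case above, slice i along batch_dim has only its first seq_lengths[i] elements reversed along seq_dim; elements beyond that length keep their original positions.
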
class tinyms.primitives.ReverseV2(axis)[source]

Reverses specific dimensions of a tensor.

Warning

The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input_x”.

Parameters:

axis (Union[tuple(int), list(int)]) – The indices of the dimensions to reverse.

Inputs:
  • input_x (Tensor) - The target tensor. The data type is Number except float64. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If axis is neither list nor tuple.

  • TypeError – If element of axis is not an int.

  • ValueError – If there are multiple identical axes in axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> op = ops.ReverseV2(axis=[1])
>>> output = op(input_x)
>>> print(output)
[[4 3 2 1]
 [8 7 6 5]]
>>> op = ops.ReverseV2(axis=[1, 0])
>>> output = op(input_x)
>>> print(output)
[[8 7 6 5]
 [4 3 2 1]]
class tinyms.primitives.RightShift[source]

Shifts the value at each position of Tensor input_x to the right by the corresponding number of bits in Tensor input_y. The inputs are two tensors whose dtypes must be consistent and whose shapes must be broadcastable.

\[\begin{aligned} &out_{i} =x_{i} >> y_{i} \end{aligned}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • input_x (Tensor) - The target tensor, will be shifted to the right by input_y bits element-wise.

  • input_y (Tensor) - Number of bits shifted, the tensor must have the same type as input_x.

Outputs:
  • output (Tensor) - The output tensor, has the same type as input_x.

Raises:
  • TypeError – If input_x or input_y is not tensor.

  • TypeError – If input_x and input_y could not be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> rightshift = ops.RightShift()
>>> input_x = Tensor(np.array([1, 2, 3]).astype(np.uint8))
>>> input_y = Tensor(np.array([1, 1, 1]).astype(np.uint8))
>>> output = rightshift(input_x, input_y)
>>> print(output)
[0 1 1]
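
The printed result follows element-wise from the formula: \(1 >> 1 = 0\), \(2 >> 1 = 1\), and \(3 >> 1 = 1\).
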
class tinyms.primitives.Rint[source]

Returns the integer closest to each element of input_x.

Inputs:
  • input_x (Tensor) - The target tensor, which must be one of the following types: float16, float32, float64. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x.

Raises:

TypeError – If dtype of input_x is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([-1.6, -0.1, 1.5, 2.0]), mindspore.float32)
>>> op = ops.Rint()
>>> output = op(input_x)
>>> print(output)
[-2.  0.  2.  2.]
>>> input_x = Tensor(np.array([[-2.0, -1.9, -1.8, -1.7, -1.6],
...                            [-2.0, -1.9, -1.8, -1.7, -1.6]]), mindspore.float32)
>>> output = op(input_x)
>>> print(output)
[[-2. -2. -2. -2. -2.]
 [-2. -2. -2. -2. -2.]]
class tinyms.primitives.Roll(shift, axis)[source]

Rolls the elements of a tensor along an axis.

Refer to mindspore.ops.roll() for more details.

Parameters:
  • shift (Union[list(int), tuple(int), int]) – Specifies the number of places by which elements are shifted positively (towards larger indices) along the specified dimension. Negative shifts will roll the elements in the opposite direction.

  • axis (Union[list(int), tuple(int), int]) – Specifies the dimension indexes of shape to be rolled.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([0, 1, 2, 3, 4]).astype(np.float32))
>>> op = ops.Roll(shift=2, axis=0)
>>> output = op(input_x)
>>> print(output)
[3. 4. 0. 1. 2.]
>>> input_x = Tensor(np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]).astype(np.float32))
>>> op = ops.Roll(shift=-1, axis=0)
>>> output = op(input_x)
>>> print(output)
[[5. 6. 7. 8. 9.]
 [0. 1. 2. 3. 4.]]
class tinyms.primitives.Round[source]

Rounds a tensor element-wise to the nearest integer, with halfway values rounded to the nearest even integer (round half to even).

Refer to mindspore.ops.round() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> round = ops.Round()
>>> output = round(x)
>>> print(output)
[ 1.  2.  2.  2. -4.]
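
Note the halfway cases in the output: 1.5 and 2.5 both round to 2, and -4.5 rounds to -4, because ties go to the nearest even integer.
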
class tinyms.primitives.Rsqrt[source]

Computes reciprocal of square root of input tensor element-wise.

\[out_{i} = \frac{1}{\sqrt{x_{i}}}\]
Inputs:
  • x (Tensor) - The input of Rsqrt. Its rank must be in [0, 7] inclusive and each element must be a non-negative number.

Outputs:

Tensor, has the same type and shape as x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor([[4, 4], [9, 9]], mindspore.float32)
>>> rsqrt = ops.Rsqrt()
>>> output = rsqrt(input_tensor)
>>> print(output)
[[0.5        0.5       ]
 [0.33333334 0.33333334]]
class tinyms.primitives.SGD(dampening=0.0, weight_decay=0.0, nesterov=False)[source]

Computes the stochastic gradient descent. Momentum is optional.

Nesterov momentum is based on the formula from the paper On the importance of initialization and momentum in deep learning.

Note

If parameters are not grouped, the weight_decay in optimizer will be applied on the network parameters without ‘beta’ or ‘gamma’ in their names. Users can group parameters to change the strategy of decaying weight. When parameters are grouped, each group can set weight_decay. If not, the weight_decay in optimizer will be applied. For more details, please refer to mindspore.nn.SGD.

Parameters:
  • dampening (float) – The dampening for momentum. Default: 0.0.

  • weight_decay (float) – Weight decay (L2 penalty). Default: 0.0.

  • nesterov (bool) – Enable Nesterov momentum. Default: False.

Inputs:
  • parameters (Tensor) - Parameters to be updated. With float16 or float32 data type.

  • gradient (Tensor) - Gradient, with float16 or float32 data type.

  • learning_rate (Tensor) - Learning rate, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32)

  • accum (Tensor) - Accum(velocity) to be updated. With float16 or float32 data type.

  • momentum (Tensor) - Momentum, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32).

  • stat (Tensor) - States to be updated with the same shape as gradient, with float16 or float32 data type.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If dampening or weight_decay is not a float.

  • TypeError – If nesterov is not a bool.

  • TypeError – If parameters, gradient, learning_rate, accum, momentum or stat is not a Tensor.

  • TypeError – If dtype of parameters, gradient, learning_rate, accum, momentum or stat is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sgd = ops.SGD()
>>> parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
>>> gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
>>> learning_rate = Tensor(0.01, mindspore.float32)
>>> accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
>>> momentum = Tensor(0.1, mindspore.float32)
>>> stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
>>> output = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
>>> print(output.asnumpy())
[1.99 -0.4903 1.695 3.9801]
class tinyms.primitives.STFT(n_fft, hop_length, win_length, normalized, onesided, return_complex)[source]

Applies Short-time Fourier transform (STFT) on input signal.

STFT segments the signal into narrow time intervals and takes the Fourier transform of each segment to quantify the change of a nonstationary signal’s frequency and phase content over time.

Refer to mindspore.ops.stft() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore as ms
>>> from mindspore.ops import STFT
>>> import numpy as np
>>> x = ms.Tensor(np.random.rand(2,7192), ms.float32)
>>> window = ms.Tensor(np.random.rand(64), ms.float32)
>>> stft = STFT(64, 16, 64, False, True, True)
>>> output = stft(x, window)
>>> print(output.shape)
(2, 33, 446)
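
The output shape can be read off the arguments: onesided=True keeps \(64/2 + 1 = 33\) frequency bins, and with hop_length 16 a signal of length 7192 yields \(\lfloor (7192 - 64) / 16 \rfloor + 1 = 446\) frames (assuming no centering padding, which is consistent with the printed shape).
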
class tinyms.primitives.SampleDistortedBoundingBoxV2(seed=0, seed2=0, aspect_ratio_range=(0.75, 1.33), area_range=(0.05, 1.0), max_attempts=100, use_image_if_no_bounding_boxes=False)[source]

Creates a single bounding box that is randomly distorted for an image.

It is often used for object localization and image recognition tasks. In such tasks, bounding box annotations are supplied in addition to ground-truth labels, and data augmentation techniques are often used to randomly distort an image while preserving its content.

This function takes the image_size, bounding_boxes, and a series of constraints as input, and outputs a randomly distorted localization of an object (i.e., bounding box) based on these inputs.

The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into mindspore.ops.Slice to crop the image, while the last one is the generated distorted bounding box.

Parameters:
  • seed (int, optional) – Random number seed. If either seed or seed2 is set to a non-zero value, the random number generator is seeded by the given value. Otherwise, a random seed is used. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

  • aspect_ratio_range (Union[list(float), tuple(float)], optional) – Specifies the valid range of aspect ratios for the cropped area. Aspect ratio of area = area_width / area_height. The value of this attribute should be positive. Default: (0.75, 1.33).

  • area_range (Union[list(float), tuple(float)], optional) – The cropped area of the image must contain a fraction of the supplied image within this range. The value of this attribute should be in range (0.0, 1.0]. Default: (0.05, 1.0).

  • max_attempts (int, optional) – A positive integer specifying the number of attempts that will be made to generate a cropped region of the image based on the given constraints. If the maximum number of attempts is exceeded without success, the function will return the entire original image. Default: 100.

  • use_image_if_no_bounding_boxes (bool, optional) – Controls the behavior when no bounding boxes are supplied (bounding_boxes has shape \((0, N, 4)\) or \((batch, 0, 4)\)). If this attribute is set True, an implicit bounding box covering the whole input is assumed; if it is set False, an error is raised. Default: False.

Inputs:
  • image_size (Tensor) - 1-D Tensor, containing [height, width, channels]. The value of this input tensor should be positive.

  • bounding_boxes (Tensor) - 3-D Tensor with shape \((batch, N, 4)\) describing the N bounding boxes associated with the image. The value of this input tensor should be in range [0.0, 1.0]. The data type is float32.

  • min_object_covered (Tensor) - The least fraction of a supplied bounding box that the cropped area must cover. The value should be between 0.0 and 1.0, inclusive. If the value is 0, the cropped area does not need to overlap with any of the supplied bounding boxes. The data type is float32.

Outputs:
  • begin (Tensor) - A 1-D Tensor, containing [offset_height, offset_width, 0]. The data type is same as image_size.

  • size (Tensor) - A 1-D Tensor, containing [target_height, target_width, -1]. The data type is same as image_size. When the data type of image_size is uint8, the last value of size, which is originally -1, will be forced to 255.

  • bboxes (Tensor) - A 3-D Tensor with shape \((1, 1, 4)\), containing the distorted bounding box. The data type is float32.

Raises:
  • TypeError – If image_size is not a Tensor.

  • TypeError – If bounding_boxes is not a Tensor.

  • TypeError – If min_object_covered is not a Tensor.

  • TypeError – If seed or seed2 is not an int.

  • TypeError – If aspect_ratio_range is not a list or a tuple with type float.

  • TypeError – If area_range is not a list or a tuple with type float.

  • TypeError – If use_image_if_no_bounding_boxes is not a bool.

  • ValueError – If the dimension of image_size is not 1.

  • ValueError – If the elements of image_size is not 3.

  • ValueError – If the dimension of bounding_boxes is not 3.

  • ValueError – If the elements of each bounding box in bounding_boxes is not 4.

  • ValueError – If the elements of min_object_covered is not 1.

  • ValueError – If the elements of aspect_ratio_range list or tuple is not 2.

  • ValueError – If the values of aspect_ratio_range is not positive.

  • ValueError – If the second value of aspect_ratio_range is less than or equal to the first one.

  • ValueError – If the elements of area_range list or tuple is not 2.

  • ValueError – If the values of area_range is out of range (0.0, 1.0].

  • ValueError – If the second value of area_range is less than or equal to the first one.

  • ValueError – If the value of max_attempts is not a positive int.

  • ValueError – If use_image_if_no_bounding_boxes is False and no bounding boxes supplied.

  • RuntimeError – If the values of image_size is not positive.

  • RuntimeError – If the values of bounding_boxes is out of range [0.0, 1.0].

  • RuntimeError – If the bounding_boxes cannot make up bounding box.

  • RuntimeError – If the value of min_object_covered is out of range [0.0, 1.0].

Supported Platforms:

Ascend CPU

Examples

>>> image_size = Tensor([640, 480, 3], mindspore.int32)
>>> bounding_boxes = Tensor([[[0.38, 0.17, 0.95, 0.40]]], mindspore.float32)
>>> min_object_covered = Tensor([0.8], mindspore.float32)
>>> sample_distorted_bounding_box_v2 = \
...   ops.SampleDistortedBoundingBoxV2(seed=1, seed2=1, aspect_ratio_range=(0.9, 1.1),
...                                    area_range=(0.1,1.0), max_attempts=100,
...                                    use_image_if_no_bounding_boxes=False)
>>> output = sample_distorted_bounding_box_v2(image_size, bounding_boxes, min_object_covered)
>>> begin, size, bboxes = output[0], output[1], output[2]
>>> print(begin)
[133   1   0]
>>> print(size)
[502 457  -1]
>>> print(bboxes)
[[[0.2078125  0.00208333 0.9921875  0.95416665]]]
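
As noted above, begin and size parameterize a crop of the original image; a minimal sketch of the follow-up crop (the all-zeros stand-in image and the conversion to Python tuples, with -1 in size expanded to the remaining extent, are assumptions for illustration):

>>> image = Tensor(np.zeros([640, 480, 3]), mindspore.float32)  # stand-in for the real image
>>> crop_begin = tuple(int(v) for v in begin.asnumpy())
>>> crop_size = tuple(int(v) if int(v) != -1 else int(d) - b
...                   for v, b, d in zip(size.asnumpy(), crop_begin, image.shape))
>>> cropped = ops.Slice()(image, crop_begin, crop_size)
>>> print(cropped.shape)
(502, 457, 3)
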
class tinyms.primitives.ScalarCast[source]

Casts the input scalar to another type.

Refer to mindspore.ops.scalar_cast() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> scalar_cast = ops.ScalarCast()
>>> output = scalar_cast(255.0, mindspore.int32)
>>> print(output)
255
class tinyms.primitives.ScalarSummary[source]

This operator will put a scalar to a summary file with protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight, see MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of scalar, and the dim of value must be 0 or 1.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_context
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.ScalarSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         name = "x"
...         self.summary(name, x)
...         x = self.add(x, y)
...         return x
>>> set_context(mode=mindspore.GRAPH_MODE)
>>> summary = SummaryDemo()(Tensor(3), Tensor(4))
>>> print(summary)
7
class tinyms.primitives.ScalarToArray[source]

The ScalarToArray primitive is deprecated. Please use the mindspore.ops.ScalarToTensor instead.

class tinyms.primitives.ScalarToTensor[source]

Converts a scalar to a Tensor, and converts the data type to the specified type.

Refer to mindspore.ops.scalar_to_tensor() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScalarToTensor()
>>> data = 1
>>> output = op(data, mindspore.float32)
>>> print(output)
1.0
class tinyms.primitives.ScaleAndTranslate(kernel_type='lanczos3', antialias=True)[source]

Scales and translates the input image tensor.

Note

  • Input images must be a 4-D tensor.

  • Input size, scale and translation must be a 1-D tensor with two elements.

Parameters:
  • kernel_type (str, optional) – Decides which image filtering algorithm to use. Valid options: [“lanczos1”, “lanczos3”, “lanczos5”, “gaussian”, “box”, “triangle”, “keyscubic”, “mitchellcubic”]. Default: “lanczos3”.

  • antialias (bool, optional) – Decides whether to use antialiasing. Default: True.

Inputs:
  • images (Tensor) - A 4-D tensor of shape \((batch, image\_height, image\_width, channel)\).

  • size (Tensor) - The size of the output image after scale and translate operations. A 1-D tensor with two positive elements whose dtype is int32 and shape must be \((2,)\).

  • scale (Tensor) - Indicates the zoom factor. A 1-D tensor with two positive elements whose dtype is float32 and shape must be \((2,)\).

  • translation (Tensor) - Translate the pixel value. A 1-D tensor with two elements whose dtype is float32 and shape must be \((2,)\).

Outputs:

A 4-D tensor with type: float32 and shape \((batch, size[0], size[1], channel)\).

Raises:
  • TypeError – If kernel_type is not str.

  • TypeError – If antialias is not bool.

  • TypeError – If images is not tensor with valid dtype.

  • TypeError – If size is not a tensor of int32.

  • TypeError – If scale is not a tensor of float32.

  • TypeError – If translation is not a tensor of float32.

  • ValueError – If kernel_type is not in [“lanczos1”, “lanczos3”, “lanczos5”, “gaussian”, “box”, “triangle”, “keyscubic”, “mitchellcubic”].

  • ValueError – If the rank of images is not 4.

  • ValueError – If the shape of size is not \((2,)\).

  • ValueError – If the shape of scale is not \((2,)\).

  • ValueError – If the shape of translation is not \((2,)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScaleAndTranslate()
>>> image = Tensor(np.array([[[[9.0], [5.0], [2.0], [1.0]],
...                           [[6.0], [1.0], [9.0], [7.0]]]]), mindspore.float32)
>>> size = Tensor(np.array([2, 2]).astype(np.int32))
>>> scale = Tensor(np.array([1, 1]).astype(np.float32))
>>> translation = Tensor(np.array([1, 1]).astype(np.float32))
>>> output = op(image, size, scale, translation)
>>> print(output)
[[[[0.]
   [0.]]
  [[0.]
   [9.]]]]
class tinyms.primitives.ScatterAdd(use_locking=False)[source]

Updates the value of the input tensor through the addition operation.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{+}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Note

This is an in-place update operator. Therefore, the input_x will be updated after the operation is completed.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. If true, input_x will be protected by the lock. Otherwise, the calculation result is undefined. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • indices (Tensor) - The index to do add operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor doing the add operation with input_x, the data type is same as input_x, the shape is indices.shape + x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + x.shape[1:].

  • RuntimeError – If data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[1. 1. 1.]
 [3. 3. 3.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [3.0, 3.0, 3.0] + [7.0, 7.0, 7.0] = [10.0, 10.0, 10.0]
>>> # input_x[1] = [10.0, 10.0, 10.0] + [9.0, 9.0, 9.0] = [19.0, 19.0, 19.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 1.  1.  1.]
 [19. 19. 19.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
>>> # input_x[1] = [8.0, 8.0, 8.0] + [9.0, 9.0, 9.0] = [17.0, 17.0, 17.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 3.  3.  3.]
 [17. 17. 17.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
>>> # input_x[1] = [3.0, 3.0, 3.0] + [9.0, 9.0, 9.0] = [12.0, 12.0, 12.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 8.  8.  8.]
 [12. 12. 12.]]
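
For intuition, the first example above matches NumPy's np.add.at with the index and update arrays flattened over the leading index dimensions; a minimal comparison sketch (NumPy is used purely as a reference):

>>> a = np.zeros((2, 3), np.float32)
>>> np.add.at(a, np.array([0, 1, 1, 1]), np.ones([4, 3], np.float32))
>>> print(a)
[[1. 1. 1.]
 [3. 3. 3.]]
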
class tinyms.primitives.ScatterAddWithAxis(axis=0)[source]

‘ops.ScatterAddWithAxis’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.TensorScatterElements’ instead.

Supported Platforms:

Deprecated

Examples

>>> op = ops.ScatterAddWithAxis(0)
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> indices = Tensor(np.array([[1, 0, 2], [0, 2, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[1, 1, 1], [1, 1, 1]]), mindspore.float32)
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 2.  3.  3.]
 [ 5.  5.  7.]
 [ 7.  9. 10.]]
>>> op = ops.ScatterAddWithAxis(1)
>>> input_x = Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.int32)
>>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
>>> updates = Tensor(np.array([[8, 8]]), mindspore.int32)
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 1  2 11  4 13]]
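
Since the operator is deprecated, the following sketch shows the suggested replacement for the second example above; it assumes ops.TensorScatterElements accepts reduction="add" to reproduce the additive behavior:

>>> op = ops.TensorScatterElements(axis=1, reduction="add")
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 1  2 11  4 13]]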
class tinyms.primitives.ScatterDiv(use_locking=False)[source]

Updates the value of the input tensor through the divide operation.

Using given values to update tensor value through the div operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do divide operation whose data type must be mstype.int32 or mstype.int64.

  • updates (Tensor) - The tensor doing the divide operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, if the dimension of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mstype.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
 [  3.   3.   3.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mstype.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0, 63.0, 63.0]
>>> # input_x[1] = [63.0, 63.0, 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
 [ 9.  9.  9.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [7.0, 7.0, 7.0] = [15.0, 15.0, 15.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[21. 21. 21.]
 [15. 15. 15.]]
class tinyms.primitives.ScatterMax(use_locking=False)[source]

Updates the value of the input tensor through the maximum operation.

Using given values to update tensor value through the max operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = max(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do max operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor that performs the maximum operation with input_x, the data type is the same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, if the dimension of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32),
...                     name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> scatter_max = ops.ScatterMax()
>>> output = scatter_max(input_x, indices, updates)
>>> print(output)
[[88. 88. 88.]
 [88. 88. 88.]]
class tinyms.primitives.ScatterMin(use_locking=False)[source]

Updates the value of the input tensor through the minimum operation.

Using given values to update tensor value through the min operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = min(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do min operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor doing the min operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, if the dimension of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 0.0]]), mindspore.float32),
...                     name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_min = ops.ScatterMin()
>>> output = scatter_min(input_x, indices, update)
>>> print(output)
[[0. 1. 1.]
 [0. 0. 0.]]
class tinyms.primitives.ScatterMul(use_locking=False)[source]

Updates the value of the input tensor through the multiply operation.

Using given values to update tensor value through the mul operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{*}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do multiply operation whose data type must be mstype.int32 or mstype.int64.

  • updates (Tensor) - The tensor doing the multiply operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mstype.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
 [4. 4. 4.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [7.0, 7.0, 7.0] = [42.0, 42.0, 42.0]
>>> # input_x[1] = [42.0, 42.0, 42.0] * [9.0, 9.0, 9.0] = [378.0, 378.0, 378.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[  1.   1.   1.]
 [378. 378. 378.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [1.0, 1.0, 1.0] = [2.0, 2.0, 2.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [7.0, 7.0, 7.0] = [14.0, 14.0, 14.0]
>>> # input_x[1] = [14.0, 14.0, 14.0] * [9.0, 9.0, 9.0] = [126.0, 126.0, 126.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[  3.   3.   3.]
 [126. 126. 126.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [7.0, 7.0, 7.0] = [7.0, 7.0, 7.0]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [9.0, 9.0, 9.0] = [54.0, 54.0, 54.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 7.  7.  7.]
 [54. 54. 54.]]
class tinyms.primitives.ScatterNd[source]

Scatters a tensor into a new tensor depending on the specified indices.

The following figure shows the calculation process of inserting two slices in the first dimension of a rank-3 tensor with two matrices of new values:

[Figure: ScatterNd calculation process (tinyms/ScatterNd.png)]

Refer to mindspore.ops.scatter_nd() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScatterNd()
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]]]), mindspore.float32)
>>> shape = (4, 4, 4)
>>> output = op(indices, updates, shape)
>>> print(output)
[[[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]
 [[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = op(indices, updates, shape)
>>> # To make the process easier to understand, the pseudo-operation is explained step by step:
>>> # Step 1: Generate an empty tensor of the specified shape:
>>> # [
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> # ]
>>> # Step 2: Modify the data at the specified locations according to the indices:
>>> # the 0th row of indices is [0, 1] and the 0th element of updates is 3.2,
>>> # so the element at row 0, column 1 is set to 3.2:
>>> # [
>>> #     [0. 3.2 0.]
>>> #     [0. 0.  0.]
>>> #     [0. 0.  0.]
>>> # ]
>>> # the 1st row of indices is [1, 1] and the 1st element of updates is 1.1,
>>> # so the element at row 1, column 1 is set to 1.1:
>>> # [
>>> #     [0. 3.2 0.]
>>> #     [0. 1.1 0.]
>>> #     [0. 0.  0.]
>>> # ]
>>> # The final result is as follows:
>>> print(output)
[[0. 3.2 0.]
 [0. 1.1 0.]
 [0. 0.  0.]]
class tinyms.primitives.ScatterNdAdd(use_locking=False)[source]

Applies sparse addition to individual values or slices in a tensor.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_add = ops.ScatterNdAdd(use_locking)
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> use_locking = False
>>> scatter_nd_add = ops.ScatterNdAdd(use_locking)
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]]
class tinyms.primitives.ScatterNdDiv(use_locking=False)[source]

Applies sparse division to individual values or slices in a tensor.

Using given values to update tensor value through the division operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.scatter_nd_div() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_div = ops.ScatterNdDiv(use_locking)
>>> output = scatter_nd_div(input_x, indices, updates)
>>> print(output)
[1.         0.25       0.5        4.         0.71428573 6.
 7.         0.8888889 ]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.float32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_div = ops.ScatterNdDiv(use_locking)
>>> output = scatter_nd_div(input_x, indices, updates)
>>> print(output)
[[[1.         1.         1.         1.        ]
  [0.5        0.5        0.5        0.5       ]
  [0.33333334 0.33333334 0.33333334 0.33333334]
  [0.25       0.25       0.25       0.25      ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]
 [[0.2        0.2        0.2        0.2       ]
  [0.16666667 0.16666667 0.16666667 0.16666667]
  [0.14285715 0.14285715 0.14285715 0.14285715]
  [0.125      0.125      0.125      0.125     ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]]
class tinyms.primitives.ScatterNdMax(use_locking=False)[source]

Applies sparse maximum to individual values or slices in a tensor.

Using given values to update parameter value through the maximum operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_max() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_max = ops.ScatterNdMax()
>>> output = scatter_nd_max(input_x, indices, updates)
>>> print(output)
[1. 8. 6. 4. 7. 6. 7. 9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> scatter_nd_max = ops.ScatterNdMax()
>>> output = scatter_nd_max(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
class tinyms.primitives.ScatterNdMin(use_locking=False)[source]

Applies sparse minimum to individual values or slices in a tensor.

Using given values to update tensor value through the minimum operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_min() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.ones(8) * 10, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_min = ops.ScatterNdMin(use_locking)
>>> output = scatter_nd_min(input_x, indices, updates)
>>> print(output)
[10.  8.  6. 10.  7. 10. 10.  9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)) * 10, mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> use_locking = False
>>> scatter_nd_min = ops.ScatterNdMin(use_locking)
>>> output = scatter_nd_min(input_x, indices, updates)
>>> print(output)
[[[ 1  1  1  1]
  [ 2  2  2  2]
  [ 3  3  3  3]
  [ 4  4  4  4]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]
 [[ 5  5  5  5]
  [ 6  6  6  6]
  [ 7  7  7  7]
  [ 8  8  8  8]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]]
class tinyms.primitives.ScatterNdMul(use_locking=False)[source]

Applies sparse multiplication to individual values or slices in a tensor.

Using given values to update parameter value through the multiplication operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.scatter_nd_mul() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_mul = ops.ScatterNdMul()
>>> output = scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[ 1. 16. 18.  4. 35.  6.  7. 72.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> scatter_nd_mul = ops.ScatterNdMul()
>>> output = scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
class tinyms.primitives.ScatterNdSub(use_locking=False)[source]

Applies sparse subtraction to individual values or slices in a tensor.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_sub = ops.ScatterNdSub(use_locking)
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[ 1. -6. -3.  4. -2.  6.  7. -1.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> use_locking = False
>>> scatter_nd_sub = ops.ScatterNdSub(use_locking)
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[[[-1 -1 -1 -1]
  [-2 -2 -2 -2]
  [-3 -3 -3 -3]
  [-4 -4 -4 -4]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]
 [[-5 -5 -5 -5]
  [-6 -6 -6 -6]
  [-7 -7 -7 -7]
  [-8 -8 -8 -8]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]]
class tinyms.primitives.ScatterNdUpdate(use_locking=True)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N, and its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor, with int32 or int64 data type.

  • updates (Tensor) - An N-D (2-D or 3-D) tensor used to update the input tensor, with the same type as input_x. The shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.ScatterNdUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[1.   0.3   3.6]
 [0.4  2.2  -3.2]]
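
As a hedged illustration of the shape rule above (constructed for this reference, not an official example): take input_x with rank P = 2 and indices with rank Q = 2 whose last dimension is N = 1; updates must then have rank Q-1+P-N = 2 and shape indices.shape[:-1] + input_x.shape[N:] = (2, 3).

>>> input_x = mindspore.Parameter(Tensor(np.zeros((3, 3)), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.ones((2, 3)), mindspore.float32)
>>> output = ops.ScatterNdUpdate()(input_x, indices, updates)
>>> print(output)
[[1. 1. 1.]
 [0. 0. 0.]
 [1. 1. 1.]]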
class tinyms.primitives.ScatterNonAliasingAdd[source]

Applies sparse addition to the input using individual values or slices.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • input_x (Parameter) - The target parameter. The data type must be float16, float32 or int32.

  • indices (Tensor) - The index to perform the addition operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the addition operation with input_x, the data type is the same as input_x, the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

Outputs:

Parameter, the updated input_x.

Raises:
  • TypeError – If dtype of indices is not int32.

  • TypeError – If dtype of input_x is not one of float16, float32, int32.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_non_aliasing_add = ops.ScatterNonAliasingAdd()
>>> output = scatter_non_aliasing_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
class tinyms.primitives.ScatterSub(use_locking=False)[source]

Updates the value of the input tensor through the subtraction operation.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{-}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do subtraction operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor doing the subtraction operation with input_x, the data type is same as input_x, the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[-1. -1. -1.]
 [-1. -1. -1.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [-3.0, -3.0, -3.0] - [7.0, 7.0, 7.0] = [-10.0, -10.0, -10.0]
>>> # input_x[1] = [-10.0, -10.0, -10.0] - [9.0, 9.0, 9.0] = [-19.0, -19.0, -19.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -1.  -1.  -1.]
 [-19. -19. -19.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
>>> # input_x[1] = [-8.0, -8.0, -8.0] - [9.0, 9.0, 9.0] = [-17.0, -17.0, -17.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -3.  -3.  -3.]
 [-17. -17. -17.]]
>>> # input_x is updated in place after the operation completes, so it must be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
>>> # input_x[1] = [-3.0, -3.0, -3.0] - [9.0, 9.0, 9.0] = [-12.0, -12.0, -12.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -8.  -8.  -8.]
 [-12. -12. -12.]]
class tinyms.primitives.ScatterUpdate(use_locking=True)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor. With int32 data type. If there are duplicates in indices, the order for updating is undefined.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape = indices.shape + input_x.shape[1:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If the data type of input_x and updates conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
>>> updates = Tensor(np_updates, mindspore.float32)
>>> op = ops.ScatterUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[2. 1.2  1.]
 [3. 1.2  1.]]
class tinyms.primitives.SeLU[source]

Activation function SeLU (Scaled exponential Linear Unit).

The activation function is defined as:

\[E_{i} = scale * \begin{cases} x_{i}, &\text{if } x_{i} \geq 0; \cr \text{alpha} * (\exp(x_i) - 1), &\text{otherwise.} \end{cases}\]

where \(alpha\) and \(scale\) are pre-defined constants (\(alpha=1.67326324\) and \(scale=1.05070098\)).

See more details in Self-Normalizing Neural Networks.

Inputs:
  • input_x (Tensor) - Tensor of any dimension. The data type is int8, int32, float16, float32, float64(only CPU, GPU).

Outputs:

Tensor, with the same type and shape as the input_x.

Raises:

TypeError – If dtype of input_x is not int8, int32, float16, float32, float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> selu = ops.SeLU()
>>> output = selu(input_x)
>>> print(output)
[[-1.1113307  4.202804  -1.7575096]
 [ 2.101402  -1.7462534  9.456309 ]]
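
The constants above can be verified directly with NumPy; this is a hedged cross-check of the definition, with values rounded:

>>> x = np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]])
>>> scale, alpha = 1.05070098, 1.67326324
>>> # scale * x for x >= 0, scale * alpha * (exp(x) - 1) otherwise
>>> ref = scale * np.where(x >= 0, x, alpha * np.expm1(x))
>>> print(np.round(ref, 4))
[[-1.1113  4.2028 -1.7575]
 [ 2.1014 -1.7463  9.4563]]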
class tinyms.primitives.SearchSorted(dtype=mindspore.int64, right=False)[source]

Returns the indices corresponding to the positions where the given numbers in values should be inserted into sorted_sequence so that the order of the sequence is maintained.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.searchsorted() for more details.

Parameters:
  • dtype (mindspore.dtype, optional) – Output data type. An optional data type of mstype.int32 and mstype.int64. Default: mstype.int64.

  • right (bool, optional) – Search Strategy. If True, return the last suitable index found; if False, return the first such index. Default: False.

Inputs:
  • sorted_sequence (Tensor) - The shape of the tensor is \((x_1, x_2, ..., x_{R-1}, x_R)\) or \((x_1)\). It must contain a monotonically increasing sequence on the innermost dimension.

  • values (Tensor) - The values to be inserted. The shape of the tensor is \((x_1, x_2, ..., x_{R-1}, x_S)\).

Outputs:

Tensor containing the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted, the order of sorted_sequence would be preserved. Its data type is determined by dtype, and its shape is the same as that of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sorted_sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
>>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
>>> output = ops.SearchSorted()(sorted_sequence, values)
>>> print(output)
[[2 4 5]
 [1 2 4]]
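
Reusing the inputs above, a hedged sketch of the right parameter: with right=True the last suitable insertion index is returned instead of the first:

>>> output = ops.SearchSorted(right=True)(sorted_sequence, values)
>>> print(output)
[[3 4 5]
 [1 3 4]]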
class tinyms.primitives.SegmentMax[source]

Computes the maximum along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i=max_j(input\_x_j)\) in which the maximum value is obtained from all elements corresponding to \(j\) that meets \(segment\_ids[j] == i\). If a given segment \(i\) contains no elements, the corresponding element in the output Tensor is set to zero: \(output[i] = 0\).

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. The size of the tensor must be equal to the first dimension of the shape of input_x. Values must be non-negative integers, sorted in ascending order, and need not cover the full range of valid values. Only constant values are allowed.

Outputs:

Tensor, with the same dtype and rank as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentMax()
>>> output = op(x, segment_ids)
>>> print(output)
[[4. 5. 6.]
 [0. 0. 0.]
 [7. 8. 9.]]
class tinyms.primitives.SegmentMean[source]

Computes the mean along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i=mean_j(input\_x_j)\) in which the mean value is obtained from all elements corresponding to \(j\) that meets \(segment\_ids[j] == i\). If a given segment \(i\) contains no elements, the corresponding element in the output Tensor is set to zero: \(output[i] = 0\).

Warning

If the dtype of input_x is complex, the gradient cannot be calculated.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number or complex number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. The size of the tensor must be equal to the first dimension of the shape of input_x. Values must be non-negative integers, sorted in ascending order, and need not cover the full range of valid values. Only constant values are allowed.

Outputs:

Tensor, with the same dtype and rank as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [1, 2, 3], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentMean()
>>> output = op(x, segment_ids)
>>> print(output)
[[1. 2. 3.]
 [0. 0. 0.]
 [7. 8. 9.]]
class tinyms.primitives.SegmentMin[source]

Computes the minimum along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i=min_j(input\_x_j)\) in which the minimum value is obtained from all elements corresponding to \(j\) that meets \(segment\_ids[j] == i\). If a given segment \(i\) contains no elements, the corresponding element in the output Tensor is set to zero: \(output[i] = 0\).

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. The size of the tensor must be equal to the first dimension of the shape of input_x. Values must be non-negative integers, sorted in ascending order, and need not cover the full range of valid values. Only constant values are allowed.

Outputs:

Tensor, with the same dtype and rank as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentMin()
>>> output = op(x, segment_ids)
>>> print(output)
[[1. 2. 3.]
 [0. 0. 0.]
 [7. 8. 9.]]
class tinyms.primitives.SegmentProd[source]

Computes the product along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i = \prod_j input\_x_j\) in which the product is taken over all elements corresponding to \(j\) that meets \(segment\_ids[j] == i\). If a given segment \(i\) contains no elements, the corresponding element in the output Tensor is set to 1: \(output[i] = 1\).

Warning

If the dtype of input_x is complex, the gradient cannot be calculated.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number or complex number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. The size of the tensor must be equal to the first dimension of the shape of input_x. Values must be non-negative integers, sorted in ascending order, and need not cover the full range of valid values. Only constant values are allowed.

Outputs:

Tensor, with the same dtype and rank as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentProd()
>>> output = op(x, segment_ids)
>>> print(output)
[[ 4. 10. 18.]
 [ 1.  1.  1.]
 [ 7.  8.  9.]]
class tinyms.primitives.SegmentSum[source]

Computes the sum along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i = \sum_j input\_x_j\) in which the sum is taken over all elements corresponding to \(j\) that meets \(segment\_ids[j] == i\). If a given segment \(i\) contains no elements, the corresponding element in the output Tensor is set to 0: \(output[i] = 0\).

Warning

If the dtype of input_x is complex, the gradient cannot be calculated.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number or complex number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. The size of the tensor must be equal to the first dimension of the shape of input_x. Values must be non-negative integers, sorted in ascending order, and need not cover the full range of valid values. Only constant values are allowed.

Outputs:

Tensor, with the same dtype and rank as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentSum()
>>> output = op(x, segment_ids)
>>> print(output)
[[5. 7. 9.]
 [0. 0. 0.]
 [7. 8. 9.]]
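
The segment semantics can be cross-checked with NumPy; this is a hedged sketch assuming sorted, non-negative ids, with the output holding segment_ids[-1] + 1 rows:

>>> x_np = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float64)
>>> ids = np.array([0, 0, 2])
>>> ref = np.zeros((ids[-1] + 1, x_np.shape[1]))
>>> np.add.at(ref, ids, x_np)  # accumulate rows of x_np into the rows of ref selected by ids
>>> print(ref)
[[5. 7. 9.]
 [0. 0. 0.]
 [7. 8. 9.]]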
class tinyms.primitives.Select[source]

Based on the value of each element in the condition tensor, the corresponding element in the output is selected from x (if True) or y (if False).

It can be defined as:

\[\begin{split}out_i = \begin{cases} x_i, & \text{if } condition_i \\ y_i, & \text{otherwise} \end{cases}\end{split}\]
Inputs:
  • condition (Tensor[bool]) - The condition tensor, decides which element is chosen. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

  • x (Tensor) - The first tensor to be selected and the shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

  • y (Tensor) - The second tensor to be selected and the shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

Outputs:

Tensor, has the same shape as condition.

Raises:
  • TypeError – If x or y is not a Tensor.

  • ValueError – If the shapes of the three inputs are different.

Supported Platforms:

Ascend GPU CPU

Examples

>>> select = ops.Select()
>>> input_cond = Tensor([True, False])
>>> input_x = Tensor([2, 3], mindspore.float32)
>>> input_y = Tensor([1, 2], mindspore.float32)
>>> output = select(input_cond, input_x, input_y)
>>> print(output)
[2. 2.]
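
For same-shaped inputs, Select behaves like NumPy's where; the comparison below is a hedged illustration, not an official equivalence:

>>> print(np.where(np.array([True, False]), np.array([2.0, 3.0]), np.array([1.0, 2.0])))
[2. 2.]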
class tinyms.primitives.Shape[source]

Returns the shape of the input tensor.

Refer to mindspore.ops.shape() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.Shape()
>>> output = shape(input_x)
>>> print(output)
(3, 2, 1)
class tinyms.primitives.Sigmoid[source]

Sigmoid activation function. Refer to mindspore.ops.sigmoid() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> sigmoid = ops.Sigmoid()
>>> output = sigmoid(input_x)
>>> print(output)
[0.7310586  0.880797   0.95257413 0.98201376 0.9933072 ]
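
As a hedged cross-check, the same values follow from the logistic function \(1 / (1 + e^{-x})\) in NumPy (values rounded):

>>> x = np.array([1, 2, 3, 4, 5], dtype=np.float32)
>>> print(np.round(1 / (1 + np.exp(-x)), 4))
[0.7311 0.8808 0.9526 0.982  0.9933]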
class tinyms.primitives.SigmoidCrossEntropyWithLogits[source]

Uses the given logits to compute sigmoid cross entropy between the logits and the label.

Measures the distribution error in discrete classification tasks where each class is independent and not mutually exclusive using cross entropy loss.

Sets input logits as \(X\), input label as \(Y\), output as \(loss\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ loss_{ij} = -[Y_{ij} * ln(p_{ij}) + (1 - Y_{ij})ln(1 - p_{ij})] \end{array}\end{split}\]
Inputs:
  • logits (Tensor) - Input logits. Tensor of shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • label (Tensor) - Ground truth label. With the same shape and type as logits.

Outputs:

Tensor, with the same shape and type as input logits.

Raises:

TypeError – If logits or label is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> labels = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> sigmoid = ops.SigmoidCrossEntropyWithLogits()
>>> output = sigmoid(logits, labels)
>>> print(output)
[[ 0.6111007   0.5032824   0.26318604]
 [ 0.58439666  0.5530153  -0.4368139 ]]
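
The loss formula above can be reproduced in NumPy; the following is a hedged cross-check with rounded values:

>>> X = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], dtype=np.float32)
>>> Y = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]], dtype=np.float32)
>>> p = 1 / (1 + np.exp(-X))  # elementwise sigmoid of the logits
>>> print(np.round(-(Y * np.log(p) + (1 - Y) * np.log(1 - p)), 4))
[[ 0.6111  0.5033  0.2632]
 [ 0.5844  0.553  -0.4368]]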
class tinyms.primitives.Sign[source]

Performs sign on the tensor element-wise.

\[sign(x) = \begin{cases} -1, &if\ x < 0 \cr 0, &if\ x = 0 \cr 1, &if\ x > 0\end{cases}\]
Inputs:
  • x (Tensor) - The input tensor. \((N, *)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[2.0, 0.0, -1.0]]), mindspore.float32)
>>> sign = ops.Sign()
>>> output = sign(x)
>>> print(output)
[[ 1.  0. -1.]]
class tinyms.primitives.Sin[source]

Computes sine of the input element-wise.

Refer to mindspore.ops.sin() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sin = ops.Sin()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sin(x)
>>> print(output)
[0.5810352 0.27635565 0.41687083 0.5810352]
class tinyms.primitives.Sinc[source]

Computes the normalized sinc of input.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.sinc() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.ops.operations.math_ops as ops
>>> from mindspore import Tensor, dtype
>>> sinc = ops.Sinc()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sinc(x)
>>> print(output)
[0.47735003 0.8759357  0.7224278  0.47735003]
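
As a hedged cross-check, NumPy's np.sinc computes the same normalized sinc, \(sin(\pi x) / (\pi x)\):

>>> print(np.round(np.sinc(np.array([0.62, 0.28, 0.43, 0.62])), 4))
[0.4774 0.8759 0.7224 0.4774]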
class tinyms.primitives.Sinh[source]

Computes hyperbolic sine of the input element-wise.

Refer to mindspore.ops.sinh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sinh = ops.Sinh()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sinh(x)
>>> print(output)
[0.6604918  0.28367308 0.44337422 0.6604918 ]
class tinyms.primitives.Size[source]

Returns a Scalar of type int representing the size of the input Tensor, that is, the total number of elements in the Tensor.

Refer to mindspore.ops.size() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> size = ops.Size()
>>> output = size(input_x)
>>> print(output)
4
class tinyms.primitives.Slice[source]

Slices a tensor in the specified shape.

Refer to mindspore.ops.slice() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
...                         [[3, 3, 3], [4, 4, 4]],
...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> slice_op = ops.Slice()
>>> output = slice_op(data, (1, 0, 0), (1, 1, 3))
>>> print(output)
[[[3 3 3]]]
>>> output = slice_op(data, (1, 0, 0), (1, 1, 2))
>>> print(output)
[[[3 3]]]
>>> output = slice_op(data, (1, 0, 0), (1, 1, 1))
>>> print(output)
[[[3]]]
>>> output = slice_op(data, (1, 1, 0), (1, 1, 3))
>>> print(output)
[[[4 4 4]]]
>>> output = slice_op(data, (1, 0, 1), (1, 1, 2))
>>> print(output)
[[[3 3]]]
class tinyms.primitives.SmoothL1Loss(beta=1.0, reduction='none')[source]

Calculates the smooth L1 loss, a loss function that is robust to outliers.

Refer to mindspore.ops.smooth_l1_loss() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> loss = ops.SmoothL1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[0.  0.  0.5]
class tinyms.primitives.SoftMarginLoss(reduction='mean')[source]

SoftMarginLoss operation.

Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).

\[\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}\]

where \(x.nelement()\) is the number of elements of x.

Parameters:

reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’ or ‘sum’. Default: “mean”.

Inputs:
  • logits (Tensor) - Predict data. Data type must be float16 or float32.

  • labels (Tensor) - Ground truth data, with the same type and shape as logits.

Outputs:

Tensor or Scalar, if reduction is “none”, its shape is the same as logits. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If logits or labels is not a Tensor.

  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • ValueError – If shape of logits is not the same as labels.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

Supported Platforms:

Ascend GPU

Examples

>>> loss = ops.SoftMarginLoss()
>>> logits = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> labels = Tensor(np.array([[-1, 1], [1, -1]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.6764238
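A minimal NumPy sketch of the formula above with the default 'mean' reduction, reusing the example data:

>>> import numpy as np
>>> x = np.array([[0.3, 0.7], [0.5, 0.5]], np.float32)
>>> y = np.array([[-1, 1], [1, -1]], np.float32)
>>> np.allclose(np.log1p(np.exp(-y * x)).mean(), output.asnumpy())   # mean of log(1 + exp(-y*x))
True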
class tinyms.primitives.SoftShrink(lambd=0.5)[source]

Applies the SoftShrink function element-wise.

Refer to mindspore.ops.softshrink() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[ 0.5297,  0.7871,  1.1754], [ 0.7836,  0.6218, -1.1542]]), mindspore.float16)
>>> softshrink = ops.SoftShrink()
>>> output = softshrink(input_x)
>>> print(output)
[[ 0.02979  0.287    0.676  ]
 [ 0.2837   0.1216  -0.6543 ]]
class tinyms.primitives.Softmax(axis=-1)[source]

Applies the Softmax operation to the input tensor on the specified axis.

Refer to mindspore.ops.softmax() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softmax = ops.Softmax()
>>> output = softmax(logits)
>>> print(output)
[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
class tinyms.primitives.SoftmaxCrossEntropyWithLogits[source]

Gets the softmax cross-entropy value between logits and labels with one-hot encoding.

The updating formulas of SoftmaxCrossEntropyWithLogits algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)} \\ loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})} \end{array}\end{split}\]

where \(X\) represents logits. \(Y\) represents label. \(loss\) represents output.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N, C)\), has the same data type with logits.

Outputs:

Tuple of 2 tensors (loss, dlogits): the loss shape is \((N,)\), and dlogits has the same shape as logits.

Raises:
  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • TypeError – If logits or labels is not a Tensor.

  • ValueError – If shape of logits is not the same as labels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
>>> softmax_cross = ops.SoftmaxCrossEntropyWithLogits()
>>> loss, dlogits = softmax_cross(logits, labels)
>>> print(loss)
[0.5899297  0.52374405]
>>> print(dlogits)
[[ 0.02760027  0.20393994  0.01015357  0.20393994 -0.44563377]
 [ 0.08015892  0.02948882  0.08015892 -0.4077012   0.21789455]]
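The same numbers can be reproduced with NumPy from the formula above; dlogits is the usual softmax cross-entropy gradient \(p - Y\) (a sketch, not the operator's implementation):

>>> import numpy as np
>>> X = np.array([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], np.float32)
>>> Y = np.array([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], np.float32)
>>> e = np.exp(X - X.max(axis=1, keepdims=True))      # numerically stabilized softmax
>>> p = e / e.sum(axis=1, keepdims=True)
>>> np.allclose(-(Y * np.log(p)).sum(axis=1), loss.asnumpy(), atol=1e-6)
True
>>> np.allclose(p - Y, dlogits.asnumpy(), atol=1e-6)
True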
class tinyms.primitives.Softplus[source]

Softplus activation function.

Softplus is a smooth approximation to the ReLU function. It can be used to constrain the output of a machine to always be positive. The function is shown as follows:

\[\text{output} = \log(1 + \exp(\text{x}))\]
Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If the dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softplus = ops.Softplus()
>>> output = softplus(input_x)
>>> print(output)
[1.3132615 2.126928  3.0485873 4.01815   5.0067153]
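Equivalently, in NumPy the formula is just log1p(exp(x)); a quick check against the output above:

>>> np.allclose(output.asnumpy(), np.log1p(np.exp(input_x.asnumpy())))
True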
class tinyms.primitives.Softsign[source]

Softsign activation function.

Refer to mindspore.ops.softsign() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> softsign = ops.Softsign()
>>> output = softsign(input_x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
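Softsign computes \(x / (1 + |x|)\); a one-line NumPy cross-check of the example output:

>>> a = input_x.asnumpy()
>>> np.allclose(output.asnumpy(), a / (1 + np.abs(a)))
True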
class tinyms.primitives.Sort(axis=-1, descending=False)[source]

Sorts the elements of the input tensor along the given dimension in the specified order.

Warning

Currently, only the float16 data type is well supported. Using float32 may cause loss of accuracy.

Parameters:
  • axis (int) – The dimension to sort along. Default: -1.

  • descending (bool) – Controls the sort order. If descending is True then the elements are sorted in descending order by value. Default: False.

Inputs:
  • x (Tensor) - The input tensor of any dimension, with a type of float16 or float32.

Outputs:
  • y1 (Tensor) - A tensor whose values are the sorted values, with the same shape and data type as input.

  • y2 (Tensor) - the indices of the elements in the original input tensor. Data type is int32.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If descending is not a bool.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is not in range of [-len(x.shape), len(x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> sort = ops.Sort()
>>> output = sort(x)
>>> # The output below is based on the Ascend platform.
>>> print(output)
(Tensor(shape=[3, 3], dtype=Float16, value=
[[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
 [ 3.0000e+00,  5.0000e+00,  9.0000e+00],
 [ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
[[2, 1, 0],
 [2, 0, 1],
 [0, 1, 2]]))
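For inputs with distinct values, the result pair corresponds to numpy.sort and numpy.argsort along the same axis (a sanity-check sketch):

>>> a = x.asnumpy()
>>> np.allclose(np.sort(a, axis=-1), output[0].asnumpy())
True
>>> (np.argsort(a, axis=-1) == output[1].asnumpy()).all()
True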
class tinyms.primitives.SpaceToBatch(block_size, paddings)[source]

SpaceToBatch is deprecated. Please use mindspore.ops.SpaceToBatchND instead. Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide spatial dimensions (H, W) into blocks with block_size, the output tensor’s H and W dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and the square of block_size. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters:
  • block_size (int) – The block size of dividing blocks with value greater than or equal to 2.

  • paddings (Union[tuple, list]) – The padding values for the H and W dimensions, containing 2 sublists. Each sublist contains 2 integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_size.

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor. The data type is Number.

Outputs:

Tensor, the output tensor with the same data type as input. Assume input shape is \((n, c, h, w)\) with \(block\_size\) and \(paddings\). The shape of the output tensor will be \((n', c', h', w')\), where

\(n' = n*(block\_size*block\_size)\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_size\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_size\)

Supported Platforms:

Deprecated

Examples

>>> block_size = 2
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch = ops.SpaceToBatch(block_size, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
class tinyms.primitives.SpaceToBatchND(block_shape, paddings)[source]

Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide spatial dimensions into blocks with block_shape, and then the output tensor’s spatial dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and all elements in block_shape. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters:
  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing block with all elements greater than or equal to 1. If block_shape is a list or tuple, the length of block_shape is the number of spatial dimensions, called M later. If block_shape is an int, the block size of all M dimensions is the same, equal to block_shape. On Ascend, M must be 2.

  • paddings (Union[tuple, list]) – The padding values for spatial dimensions, containing M sublists. Each sublist contains 2 integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i + offset, where offset = N-M and N is the number of input dimensions. For each i, input_shape[i + offset]+paddings[i][0]+paddings[i][1] should be divisible by block_shape[i].

Inputs:
  • input_x (Tensor) - The input tensor. The input tensor must be a 4-D tensor on Ascend.

Outputs:

Tensor, the output tensor with the same data type as the input. Assume the input shape is \((n, c_1, ... c_k, w_1, ..., w_M)\) with \(block\_shape\) and \(paddings\). The shape of the output tensor will be \((n', c_1, ... c_k, w'_1, ..., w'_M)\), where

\[\begin{split}\begin{array}{ll} \\ n' = n*(block\_shape[0]*...*block\_shape[M-1]) \\ w'_i = (w_i+paddings[i-1][0]+paddings[i-1][1])//block\_shape[i-1] \end{array}\end{split}\]
Raises:
  • TypeError – If block_shape is not one of list, tuple, int.

  • TypeError – If paddings is neither list nor tuple.

  • ValueError – If block_shape is not one dimensional when block_shape is a list or tuple.

  • ValueError – If the length of block_shape is not 2 on Ascend.

  • ValueError – If shape of paddings is not (M, 2), where M is the length of block_shape.

  • ValueError – If the element of block_shape is not an integer larger than or equal to 1.

  • ValueError – If the element of paddings is not an integer larger than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_shape = [2, 2]
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch_nd = ops.SpaceToBatchND(block_shape, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch_nd(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
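The shape arithmetic above can be checked with a few lines of plain Python (a sketch for the 4-D case of the example, reusing output):

>>> n, c, h, w = 1, 1, 2, 2                            # input shape from the example
>>> block_shape, paddings = [2, 2], [[0, 0], [0, 0]]
>>> n_out = n * block_shape[0] * block_shape[1]        # n' = n * prod(block_shape)
>>> h_out = (h + sum(paddings[0])) // block_shape[0]   # w'_i = (w_i + pads) // block
>>> w_out = (w + sum(paddings[1])) // block_shape[1]
>>> print((n_out, c, h_out, w_out), output.shape)
(4, 1, 1, 1) (4, 1, 1, 1)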
class tinyms.primitives.SpaceToDepth(block_size)[source]

Rearrange blocks of spatial data into depth.

The output tensor’s height dimension is \(height / block\_size\).

The output tensor’s width dimension is \(width / block\_size\).

The depth of output tensor is \(block\_size * block\_size * input\_depth\).

The input tensor’s height and width must be divisible by block_size. The data format is “NCHW”.

Parameters:

block_size (int) – The block size used to divide spatial data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. The data type is Number. It must be a 4-D tensor.

Outputs:

Tensor, the same data type as x. It must be a 4-D tensor. Tensor of shape \((N, C_{in} * \text{block_size}^2, H_{in} / \text{block_size}, W_{in} / \text{block_size})\).

Raises:
  • TypeError – If block_size is not an int.

  • ValueError – If block_size is less than 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(1,3,2,2), mindspore.float32)
>>> block_size = 2
>>> space_to_depth = ops.SpaceToDepth(block_size)
>>> output = space_to_depth(x)
>>> print(output.shape)
(1, 12, 1, 1)
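Shape-wise, the operator matches the classic reshape/transpose pattern below (a NumPy sketch; the exact channel ordering inside the depth dimension is an implementation detail and may differ from this sketch):

>>> a = x.asnumpy()                                    # (N, C, H, W) = (1, 3, 2, 2)
>>> n, c, h, w = a.shape
>>> b = block_size                                     # 2
>>> out = a.reshape(n, c, h // b, b, w // b, b).transpose(0, 3, 5, 1, 2, 4)
>>> print(out.reshape(n, c * b * b, h // b, w // b).shape)
(1, 12, 1, 1)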
class tinyms.primitives.SparseApplyAdadelta(epsilon, use_locking=False)[source]

Updates relevant entries according to the adadelta scheme.

\[\begin{split}\begin{array}{ll} \\ accum = \rho * accum + (1 - \rho) * grad^2 \\ \text{update} = \sqrt{\text{accum_update} + \epsilon} * \frac{grad}{\sqrt{accum + \epsilon}} \\ var = var - update * lr \\ \text{accum_update} = \rho * \text{accum_update} + (1 - \rho) * update^2 \\ \end{array}\end{split}\]

Inputs of var, accum, accum_update and grad comply with the implicit type conversion rules to make the data types consistent. Besides, the inputs lr and rho also support implicit type conversion. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError exception will be thrown when the data type conversion of a Parameter is required.

Note

If there are negative values or values greater than or equal to var.shape[0] in indices, the behavior is undefined. Besides, this operator doesn’t support duplicates in indices.

Parameters:
  • epsilon (float) – A small value added for numerical stability. Its value must be greater or equal to 0.

  • use_locking (bool) – If True, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated. With float32 or float16 data type.

  • accum (Parameter) - Accumulation to be updated. Must have the same shape and dtype as var. With float32 or float16 data type.

  • accum_update (Parameter) - Accum_update to be updated. Must have the same shape and dtype as var. With float32 or float16 data type.

  • lr (Union[float, Tensor]) - Learning rate, must be a scalar. With float32 or float16 data type.

  • rho (Union[float, Tensor]) - Decay rate, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. Must be one of the following types: int32, int64 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

  • accum_update (Tensor) - The same shape and data type as accum_update.

Raises:
  • TypeError – If epsilon is not a float.

  • TypeError – If use_locking is not a bool.

  • TypeError – If var, accum or accum_update is not a Parameter.

  • TypeError – If the dtype of accum, accum_update or grad is not the same as var.

  • TypeError – If dtype of var, accum, accum_update, lr, rho or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If epsilon is less than 0.

  • ValueError – If the shape of accum, accum_update or grad is not the same as var.

  • ValueError – If the rank of indices is not equal to 1.

  • ValueError – If shape of indices is not same as shape of first dimension of grad.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self,epsilon,use_locking = False):
...         super(Net, self).__init__()
...         self.sparse_apply_adadelta = P.SparseApplyAdadelta(epsilon,use_locking)
...         self.var = Parameter(Tensor(np.array([[1.0,2.0],[2.0,3.0]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[1.5,2.5],[3.5,4.5]]).astype(np.float32)), name="accum")
...         self.accum_update = Parameter(Tensor(np.array([[1.2,2.4],[1.8,0.6]]).astype(np.float32)),
...                name="accum_update")
...     def construct(self, lr, rho, grad, indices):
...         out = self.sparse_apply_adadelta(self.var, self.accum, self.accum_update, lr, rho, grad, indices)
...         return out
...
>>> epsilon = 1e-6
>>> net = Net(epsilon)
>>> lr = 0.01
>>> rho = 0.2
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, rho, grad, Tensor(np.array([0,1],dtype=np.int32)))
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.94611859e-01,  1.98851788e+00],
 [ 1.99840558e+00,  2.99478507e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.72000009e-01,  8.91999960e-01],
 [ 7.08000004e-01,  1.41200006e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.72257614e-01,  1.53470778e+00],
 [ 3.80338937e-01,  3.37563992e-01]]))
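The update of one indexed element can be replayed in NumPy straight from the formulas above (a sketch for position [0][0] of the example, with the same epsilon):

>>> import numpy as np
>>> v, acc, acc_up = 1.0, 1.5, 1.2                     # var, accum, accum_update at [0][0]
>>> g, lr, rho, eps = 0.3, 0.01, 0.2, 1e-6
>>> acc = rho * acc + (1 - rho) * g ** 2               # -> 0.372
>>> update = np.sqrt(acc_up + eps) * g / np.sqrt(acc + eps)
>>> v = v - lr * update                                # -> ~0.994612
>>> acc_up = rho * acc_up + (1 - rho) * update ** 2    # -> ~0.472258
>>> print(round(v, 5), round(acc, 5), round(acc_up, 5))
0.99461 0.372 0.47226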
class tinyms.primitives.SparseApplyAdagrad(lr, update_slots=True, use_locking=False)[source]

Deprecated

class tinyms.primitives.SparseApplyAdagradV2(lr, epsilon, use_locking=False, update_slots=True)[source]

Updates relevant entries according to the adagrad scheme, one more epsilon attribute than SparseApplyAdagrad.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon} \end{array}\end{split}\]

where \(\epsilon\) represents epsilon.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • lr (float) – Learning rate.

  • epsilon (float) – A small value added for numerical stability.

  • use_locking (bool) – If True, the var and accum tensors will be protected from being updated. Default: False.

  • update_slots (bool) – If True, accum is updated as well; if False, the accumulation update is skipped and only var is updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • grad (Tensor) - Gradients has the same data type as var and \(grad.shape[1:] = var.shape[1:]\) if var.shape > 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and \(indices.shape[0] = grad.shape[0]\).

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If lr or epsilon is not a float.

  • TypeError – If update_slots or use_locking is not a bool.

  • TypeError – If dtype of var, accum or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is not int32.

  • RuntimeError – If the implicit type conversion between var, accum and grad would require converting a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adagrad_v2 = ops.SparseApplyAdagradV2(lr=1e-8, epsilon=1e-6)
...         self.var = Parameter(Tensor(np.array([[0.2]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.1]]).astype(np.float32)), name="accum")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_adagrad_v2(self.var, self.accum, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.7]]).astype(np.float32))
>>> indices = Tensor(np.array([0]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[ 1.99999988e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[ 5.89999974e-01]]))
class tinyms.primitives.SparseApplyFtrl(lr, l1, l2, lr_power, use_locking=False)[source]

Updates relevant entries according to the FTRL-proximal scheme. For more details, please refer to mindspore.nn.FTRL.

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool, optional) – Use locks for the updating operation if true. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same data type and shape as var.

  • linear (Parameter) - The linear coefficient to be updated, must be the same data type and shape as var.

  • grad (Tensor) - A tensor of the same type as var and \(grad.shape[1:] = var.shape[1:]\) if var.shape > 1.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. The type must be int32 or int64 and \(indices.shape[0] = grad.shape[0]\).

Outputs:
  • var (Tensor) - Tensor, has the same shape and data type as var.

  • accum (Tensor) - Tensor, has the same shape and data type as accum.

  • linear (Tensor) - Tensor, has the same shape and data type as linear.

Raises:
  • TypeError – If lr, l1, l2 or lr_power is not a float.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, linear or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • RuntimeError – If the implicit type conversion between the inputs (except indices) would require converting a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.SparseApplyFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.array([[0.2]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.1]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.6]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.array([[0.7]]).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[2.00000003e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[1.00000001e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[6.00000024e-01]]))
class tinyms.primitives.SparseApplyFtrlV2(lr, l1, l2, l2_shrinkage, lr_power, use_locking=False)[source]

The SparseApplyFtrlV2 interface is deprecated, please use the mindspore.ops.SparseApplyFtrl instead.

Supported Platforms:

Deprecated

class tinyms.primitives.SparseApplyProximalAdagrad(use_locking=False)[source]

Updates relevant entries according to the proximal adagrad algorithm. Compared with mindspore.ops.ApplyProximalAdagrad, an additional index tensor is input.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If true, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Variable tensor to be updated, has the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type. It must be positive.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type. It must be non-negative.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type. It must be non-negative.

  • grad (Tensor) - A tensor of the same type as var and \(grad.shape[1:] = var.shape[1:]\) if var.shape > 1.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64 and \(indices.shape[0] = grad.shape[0]\).

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, lr, l1, l2 or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If lr <= 0 or l1 < 0 or l2 < 0.

  • RuntimeError – If the implicit type conversion between var, accum and grad would require converting a Parameter, which is not supported.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.SparseApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.array([[4.1, 7.2], [1.1, 3.0]], np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0, 0], [0, 0]], np.float32)), name="accum")
...         self.lr = 1.0
...         self.l1 = 1.0
...         self.l2 = 0.0
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[1, 1], [1, 1]], np.float32))
>>> indices = Tensor(np.array([0, 1], np.int32))
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.09999990e+00,  5.19999981e+00],
 [ 0.00000000e+00,  1.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
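The example output follows from the three formulas; a NumPy replay with the same values (a sketch, compared against the operator's result):

>>> import numpy as np
>>> var = np.array([[4.1, 7.2], [1.1, 3.0]], np.float32)
>>> accum = np.zeros((2, 2), np.float32)
>>> grad = np.ones((2, 2), np.float32)
>>> lr, l1, l2 = 1.0, 1.0, 0.0
>>> accum += grad * grad
>>> prox_v = var - lr * grad / np.sqrt(accum)
>>> var = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0)
>>> np.allclose(var, output[0].asnumpy())
True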
class tinyms.primitives.SparseApplyRMSProp(rho, momentum, epsilon, use_locking=False)[source]

Update relevant entries according to the rmsprop algorithm.

\[\begin{split}\begin{array}{ll} \\ ms = rho * ms_{t-1} + (1 - rho) * grad * grad \\ mom = momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) \\ var = var - mom \end{array}\end{split}\]

Inputs of var, ms, mom and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • rho (float) – Decay rate. The value should be between 0 and 1, otherwise the behavior is undefined.

  • momentum (float) – Momentum. The value should be greater or equal to 0, otherwise the behavior is undefined.

  • epsilon (float) – A small value added for numerical stability. The value should be greater than 0, otherwise the behavior is undefined.

  • use_locking (bool) – If True, updating of the var, ms, and mom tensors are protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • ms (Parameter) - The mean-square tensor to be updated. Must have the same shape and dtype as var.

  • mom (Parameter) - The momentum tensor to be updated. Must have the same shape and dtype as var.

  • lr ([Number, Tensor]) - Learning rate. Must be a scalar. With float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • indices (Tensor) - A tensor of indices in the first dimension of var, ms and mom. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64 and indices.shape[0] = var.shape[0].

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • ms (Tensor) - The same shape and data type as ms.

  • mom (Tensor) - The same shape and data type as mom.

Raises:
  • TypeError – If var, ms or mom is not a Parameter.

  • TypeError – If grad or indices is not a Tensor.

  • TypeError – If dtype of var, ms, mom, lr, grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • TypeError – If lr is neither a Number nor a Tensor.

  • TypeError – If use_locking is not a bool.

  • TypeError – If epsilon, rho or momentum is not a float.

  • ValueError – If shape of ms, mom, grad is not same as var.

  • ValueError – If the shape size of lr is not 0.

  • ValueError – If shape of indices is not same as shape of first dimension of var.

  • ValueError – If epsilon is less than or equal to 0.

  • ValueError – If momentum is less than 0.

  • ValueError – If rho is less than 0 or greater than 1.

  • ValueError – If dimension of var is less than 1.

  • RuntimeError – If the implicit type conversion between var, ms, mom and grad would require converting a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SparseApplyRMSPropNet(nn.Cell):
...     def __init__(self, rho, momentum, epsilon, use_locking=False):
...         super(SparseApplyRMSPropNet, self).__init__()
...         self.sparse_apply_r_m_s_prop = P.SparseApplyRMSProp(rho, momentum, epsilon, use_locking)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.3], [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.ms = Parameter(Tensor(np.array([[0.2, 0.4], [0.1, 0.3]]).astype(np.float32)), name="ms")
...         self.mom = Parameter(Tensor(np.array([[0.3, 0.1], [0.3, 0.6]]).astype(np.float32)), name="mom")
...     def construct(self, lr, grad, indices):
...         out = self.sparse_apply_r_m_s_prop(self.var, self.ms, self.mom, lr, grad, indices)
...         return out
...
>>> rho = 0.2
>>> momentum = 0.01
>>> epsilon = 1e-6
>>> net = SparseApplyRMSPropNet(rho, momentum, epsilon)
>>> lr = 0.01
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1], dtype=np.int32))
>>> out = net(lr, grad, indices)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.88035822e-01,  2.88811117e-01],
 [ 9.10239667e-02,  4.83422279e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.12000003e-01,  4.72000003e-01],
 [ 2.80000009e-02,  5.72000027e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.19641740e-02,  1.11888833e-02],
 [ 8.97603668e-03,  1.65777095e-02]]))
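As with the other sparse optimizers, the printed numbers can be replayed from the formulas; a NumPy sketch for position [0][0], reusing rho, momentum, epsilon and lr from the example:

>>> import numpy as np
>>> v, ms0, mom0 = 0.6, 0.2, 0.3                       # var, ms, mom at [0][0]
>>> g = 0.3
>>> ms0 = rho * ms0 + (1 - rho) * g * g                # -> 0.112
>>> mom0 = momentum * mom0 + lr * g / np.sqrt(ms0 + epsilon)
>>> print(round(ms0, 6), round(v - mom0, 6))           # updated ms and var
0.112 0.588036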
class tinyms.primitives.SparseGatherV2[source]

Returns a slice of input tensor based on the specified indices and axis.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor, must be in the range [0, input_params.shape[axis]).

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Raises:

TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> out = ops.SparseGatherV2()(input_params, input_indices, axis)
>>> print(out)
[[2. 7.]
 [4. 54.]
 [2. 55.]]
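The gather itself is equivalent to numpy.take along the given axis (a quick cross-check):

>>> np.allclose(out.asnumpy(), np.take(input_params.asnumpy(), input_indices.asnumpy(), axis=1))
True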
class tinyms.primitives.SparseSlice[source]

Slices a SparseTensor based on the start and size.

Inputs:
  • indices (Tensor) - A 2-D Tensor of shape \((N, R)\), the indices of the SparseTensor. Support int64; each element value should be a non-negative int number.

  • values (Tensor) - A 1D Tensor, represents the value corresponding to the position in the indices. The shape should be \((N,)\).

  • shape (Tensor) - A 1D Tensor of type int64 which specifies the shape of sparsetensor, represent sparse tensor shape. The shape should be \((R,)\).

  • start (Tensor) - A 1D Tensor of type int64, represents the start of the slice. The shape should be \((R,)\).

  • size (Tensor) - A 1D Tensor of type int64, represents the size of the slice. The shape should be \((R,)\).

Outputs:

A SparseTensor object resulting from the slice.

  • y_indices (Tensor) - A Tensor of type int64.

  • y_values (Tensor) - A Tensor. Has the same type as values.

  • y_shape (Tensor) - A Tensor of type int64. Has the same size as size.

Raises:
  • TypeError – If the dtype of indices, shape, start or size is not int64.

  • ValueError – If indices is not a 2-D tensor.

  • ValueError – If values, start, shape or size is not a 1-D tensor.

  • ValueError – If the number of indices does not correspond to the number of values.

  • ValueError – If indices.shape[1] does not correspond to shape.

  • ValueError – If the shape of shape does not correspond to start.

  • ValueError – If the shape of shape does not correspond to size.

Supported Platforms:

Examples

>>> indices = Tensor(np.array([[0, 1], [1, 2], [1, 3], [2, 2]]).astype(np.int64))
>>> values = Tensor(np.array([1, 2, 3, 4]).astype(np.int64))
>>> shape = Tensor(np.array([3, 4]).astype(np.int64))
>>> start = Tensor(np.array([0, 1]).astype(np.int64))
>>> size = Tensor(np.array([2, 3]).astype(np.int64))
>>> sparseslice = ops.SparseSlice()
>>> output = sparseslice(indices, values, shape, start, size)
>>> print(output[0])
[[0 0]
 [1 1]
 [1 2]]
>>> print(output[1])
[1 2 3]
>>> print(output[2])
[2 3]
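The slicing logic can be sketched in NumPy: keep every entry whose index lies inside the [start, start + size) window, then re-base the kept indices:

>>> import numpy as np
>>> idx = np.array([[0, 1], [1, 2], [1, 3], [2, 2]])
>>> vals = np.array([1, 2, 3, 4])
>>> start_, size_ = np.array([0, 1]), np.array([2, 3])
>>> keep = ((idx >= start_) & (idx < start_ + size_)).all(axis=1)
>>> print(idx[keep] - start_)
[[0 0]
 [1 1]
 [1 2]]
>>> print(vals[keep])
[1 2 3]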
class tinyms.primitives.SparseSoftmaxCrossEntropyWithLogits(is_grad=False)[source]

Computes the softmax cross-entropy value between logits and sparse encoding labels.

Sets input logits as X, input label as Y, output as loss. Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)} \\ loss_{ij} = \begin{cases} -ln(p_{ij}), &j = y_i \cr 0, & j \neq y_i \end{cases} \\ loss = \sum_{ij} loss_{ij} \end{array}\end{split}\]
Parameters:

is_grad (bool) – If true, this operation returns the computed gradient. Default: False.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N)\). Data type must be int32 or int64.

Outputs:

Tensor, if is_grad is False, the output tensor is the value of loss which is a scalar tensor; if is_grad is True, the output tensor is the gradient of input with the same shape as logits.

Raises:
  • TypeError – If is_grad is not a bool.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • TypeError – If dtype of labels is neither int32 nor int64.

  • ValueError – If \(logits.shape[0] != labels.shape[0]\).

Supported Platforms:

GPU CPU

Examples

>>> logits = Tensor([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([0, 1], mindspore.int32)
>>> sparse_softmax_cross = ops.SparseSoftmaxCrossEntropyWithLogits()
>>> loss = sparse_softmax_cross(logits, labels)
>>> print(loss)
3.4878292
>>> sparse_softmax_cross_grad = ops.SparseSoftmaxCrossEntropyWithLogits(is_grad=True)
>>> loss_grad = sparse_softmax_cross_grad(logits, labels)
>>> print(loss_grad)
[[-0.48415753  0.04306427  0.00582811  0.11706084  0.3182043 ]
 [ 0.04007946 -0.4852556   0.04007946  0.2961494   0.10894729]]
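A NumPy cross-check; note that the printed loss is the batch mean of the per-sample losses, and the returned gradient corresponds to \((p - onehot(labels)) / N\):

>>> import numpy as np
>>> X = np.array([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], np.float32)
>>> lab = np.array([0, 1])
>>> e = np.exp(X - X.max(axis=1, keepdims=True))
>>> p = e / e.sum(axis=1, keepdims=True)
>>> np.allclose(-np.log(p[np.arange(2), lab]).mean(), loss.asnumpy(), atol=1e-6)
True
>>> np.allclose((p - np.eye(5, dtype=np.float32)[lab]) / 2, loss_grad.asnumpy(), atol=1e-6)
True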
class tinyms.primitives.SparseTensorDenseAdd[source]

Add a sparse tensor and a dense tensor to get a dense tensor.

Inputs:
  • x1_indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, ndim)\).

  • x1_values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. The shape should be \((n,)\).

  • x1_shape (Tensor) - A 1-D Tensor with int32 or int64 data type which specifies the shape of the sparse tensor; it should have ndim elements, so its shape is \((ndim,)\).

  • x2 (Tensor) - A dense Tensor, the dtype is same as values.

Outputs:

Tensor, add result of sparse tensor and dense tensor. The dtype is same as values, and the shape is x1_shape.

Raises:
  • TypeError – If the dtype of x1_indices or x1_shape is neither int32 nor int64.

  • ValueError – If x1_shape, the shape of x1_indices, the shape of x1_values and the shape of x2 don’t meet the parameter description.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> from mindspore.common import dtype as mstype
>>> x1_indices = Tensor([[0, 0], [0, 1]], dtype=mstype.int64)
>>> x1_values = Tensor([1, 1], dtype=mstype.float32)
>>> x1_shape = Tensor([3, 3], dtype=mstype.int64)
>>> x2= Tensor([[1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=mstype.float32)
>>> sparse_tensor_dense_add = ops.SparseTensorDenseAdd()
>>> out = sparse_tensor_dense_add(x1_indices, x1_values, x1_shape, x2)
>>> print(out)
[[2. 2. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
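Conceptually this is a scatter-add of the sparse entries into the dense tensor; a NumPy sketch with the same data:

>>> import numpy as np
>>> idx = np.array([[0, 0], [0, 1]])
>>> vals = np.array([1.0, 1.0], np.float32)
>>> dense = np.ones((3, 3), np.float32)
>>> for (i, j), v in zip(idx, vals):
...     dense[i, j] += v                               # add each sparse entry in place
...
>>> print(dense)
[[2. 2. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]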
class tinyms.primitives.SparseTensorDenseMatmul(adjoint_st=False, adjoint_dt=False)[source]

Multiplies sparse matrix A by dense matrix B. The rank of sparse matrix and dense matrix must be equal to 2.

Parameters:
  • adjoint_st (bool) – If true, sparse tensor is transposed before multiplication. Default: False.

  • adjoint_dt (bool) – If true, dense tensor is transposed before multiplication. Default: False.

Inputs:
  • indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, 2)\).

  • values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. Support float16, float32, float64, int32, int64, complex64, complex128. The shape should be \((n,)\).

  • sparse_shape (Union[tuple(int), Tensor]) - A positive int tuple or tensor which specifies the shape of the sparse tensor; only a constant value is allowed when sparse_shape is a tensor. It should have 2 elements, representing the sparse tensor shape \((N, C)\).

  • dense (Tensor) - A 2-D Tensor, the dtype is same as values. If adjoint_st is False and adjoint_dt is False, the shape must be \((C, M)\). If adjoint_st is False and adjoint_dt is True, the shape must be \((M, C)\). If adjoint_st is True and adjoint_dt is False, the shape must be \((N, M)\). If adjoint_st is True and adjoint_dt is True, the shape must be \((M, N)\).

Outputs:

Tensor, the dtype is the same as values. If adjoint_st is False, the shape is \((N, M)\). If adjoint_st is True, the shape is \((C, M)\).

Raises:
  • TypeError – If the type of adjoint_st or adjoint_dt is not bool, or the dtype of indices, dtype of values and dtype of dense don’t meet the parameter description.

  • ValueError – If sparse_shape, shape of indices, shape of values, and shape of dense don’t meet the parameter description.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> from mindspore.common import dtype as mstype
>>> indices = Tensor([[0, 1], [1, 2]], dtype=mindspore.int32)
>>> values = Tensor([1, 2], dtype=mindspore.float32)
>>> sparse_shape = (3, 4)
>>> dense = Tensor([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=mindspore.float32)
>>> sparse_dense_matmul = ops.SparseTensorDenseMatmul()
>>> out = sparse_dense_matmul(indices, values, sparse_shape, dense)
>>> print(out)
[[2. 2.]
 [6. 6.]
 [0. 0.]]
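The dense equivalent (a NumPy sketch): materialize the sparse matrix from indices/values, then perform an ordinary matmul.

>>> import numpy as np
>>> A = np.zeros((3, 4), np.float32)                   # sparse_shape
>>> A[0, 1], A[1, 2] = 1.0, 2.0                        # scatter the (indices, values) pairs
>>> B = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], np.float32)
>>> print(A @ B)
[[2. 2.]
 [6. 6.]
 [0. 0.]]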
class tinyms.primitives.SparseToDense[source]

Converts a sparse representation into a dense tensor.

Inputs:
  • indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, 2)\).

  • values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. The shape should be \((n,)\).

  • sparse_shape (tuple(int)) - A positive int tuple which specifies the shape of the sparse tensor; it should have 2 elements, representing the sparse tensor shape \((N, C)\).

Outputs:

Tensor, converted from sparse tensor. The dtype is same as values, and the shape is sparse_shape.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If sparse_shape, shape of indices and shape of values don’t meet the parameter description.

Supported Platforms:

GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=mindspore.float32)
>>> sparse_shape = (3, 4)
>>> sparse_to_dense = ops.SparseToDense()
>>> out = sparse_to_dense(indices, values, sparse_shape)
>>> print(out)
[[0. 1. 0. 0.]
 [0. 0. 2. 0.]
 [0. 0. 0. 0.]]
class tinyms.primitives.Split(axis=0, output_num=1)[source]

Splits the input tensor into output_num tensors along the given axis.

Refer to mindspore.ops.split() for more details.

Parameters:
  • axis (int) – Index of the split position. Default: 0.

  • output_num (int) – The number of output tensors. Must be positive int. Default: 1.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], the shape of each output tensor is the same, which is \((y_1, y_2, ..., y_S)\). And the data type is the same with input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> split = ops.Split(1, 2)
>>> x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)
>>> print(x)
[[1 1 1 1]
 [2 2 2 2]]
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]), Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]))
>>> split = ops.Split(1, 4)
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]))
class tinyms.primitives.SplitV(size_splits, split_dim, num_split)[source]

Splits the input tensor into num_split tensors along the given dimension.

The input_x tensor will be split into sub-tensors with individual shapes given by size_splits along the split dimension. This requires that input_x.shape[split_dim] is equal to the sum of size_splits.

The shape of input_x is \((x_1, x_2, ..., x_M, ..., x_R)\) whose rank is R. Set the given split_dim as M, and \(-R \le M < R\). Set the given num_split as N, the given size_splits as \((x_{m_1}, x_{m_2}, ..., x_{m_N})\), \(x_M=\sum_{i=1}^Nx_{m_i}\). The output is a list of tensor objects, for the \(i\)-th tensor, it has the shape of \((x_1, x_2, ..., x_{m_i}, ..., x_R)\). \(x_{m_i}\) is the \(M\)-th dimension of the \(i\)-th tensor. Then, the shape of the output tensor is

\[((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))\]
Parameters:
  • size_splits (Union[tuple, list]) – A tuple or list of sizes of each output tensor along the split dimension, and the sum of these sizes should equal to the dimension of the input tensor along split_dim. The list may also contain a single instance of the value -1, which indicates that the size of that dimension should be inferred.

  • split_dim (int) – An int indicates the dimension along which to split. Must be in the range [-len(input_x.shape), len(input_x.shape)).

  • num_split (int) – The number of output tensors. Must be positive int.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ...,x_M ..., x_R)\).

Outputs:

tuple[Tensor], a tuple of num_split Tensor objects with the shape \(((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))\), \(x_M=\sum_{i=1}^Nx_{m_i}\). The data type is the same as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If size_splits is not a tuple or a list.

  • TypeError – If element of size_splits is not an int.

  • TypeError – If split_dim or num_split is not an int.

  • ValueError – If rank of the size_splits is not equal to num_split.

  • ValueError – If sum of the size_splits is not equal to the dimension of value along split_dim.

  • ValueError – If split_dim is out of the range [-len(input_x.shape), len(input_x.shape)).

  • ValueError – If the num_split is less than or equal to 0.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
>>> op = ops.SplitV(size_splits=[1, -1], split_dim=1, num_split=2)
>>> output = op(input_x)
>>> print(output)
(Tensor(shape=[3, 1], dtype=Int32, value=
[[1],
 [4],
 [7]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[2, 3],
 [5, 6],
 [8, 9]]))
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
>>> op = ops.SplitV(size_splits=[2, 1], split_dim=0, num_split=2)
>>> output = op(input_x)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Int32, value=
[[1, 2, 3],
 [4, 5, 6]]), Tensor(shape=[1, 3], dtype=Int32, value=
[[7, 8, 9]]))
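The size_splits semantics can be mirrored with numpy.split by passing cumulative split points (a sketch; a -1 entry would first need to be resolved to the remaining size):

>>> import numpy as np
>>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], np.int32)
>>> size_splits = [2, 1]
>>> parts = np.split(a, np.cumsum(size_splits)[:-1], axis=0)
>>> [p.shape for p in parts]
[(2, 3), (1, 3)]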
class tinyms.primitives.Sqrt[source]

Returns square root of a tensor element-wise.

Note

If an element of the input is negative, the corresponding element of the output will be NaN.

\[out_{i} = \sqrt{x_{i}}\]
Inputs:
  • x (Tensor) - The input tensor with a dtype of Number, the shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and data type as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> sqrt = ops.Sqrt()
>>> output = sqrt(x)
>>> print(output)
[1. 2. 3.]
class tinyms.primitives.Square[source]

Returns square of a tensor element-wise.

\[out_{i} = (x_{i})^2\]
Inputs:
  • x (Tensor) - The input tensor with a dtype of Number, its rank must be in [0, 7] inclusive.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> square = ops.Square()
>>> output = square(x)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.SquareSumAll[source]

Returns the square sum of a tensor element-wise.

\[\begin{split}\left\{\begin{matrix}out_{x} = {\textstyle \sum_{0}^{N}} (x_{i})^2 \\out_{y} = {\textstyle \sum_{0}^{N}} (y_{i})^2 \end{matrix}\right.\end{split}\]

Note

SquareSumAll only supports float16 and float32 data type.

Inputs:
  • x (Tensor) - The input tensor. The data type must be float16 or float32. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) - The input tensor has the same type and shape as the x.

Outputs:
  • output_x (Tensor) - The same type as the x.

  • output_y (Tensor) - The same type as the x.

Raises:
  • TypeError – If x or y is not a Tensor.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0, 0, 2, 0]), mindspore.float32)
>>> y = Tensor(np.array([0, 0, 2, 4]), mindspore.float32)
>>> square_sum_all = ops.SquareSumAll()
>>> output = square_sum_all(x, y)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4),
 Tensor(shape=[], dtype=Float32, value= 20))
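That is, each output is simply the sum of squares of the corresponding input; in NumPy:

>>> print((x.asnumpy() ** 2).sum(), (y.asnumpy() ** 2).sum())
4.0 20.0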
class tinyms.primitives.SquaredDifference[source]

Subtracts the second input tensor from the first input tensor element-wise and returns square of it.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

\[out_{i} = (x_{i} - y_{i}) * (x_{i} - y_{i}) = (x_{i} - y_{i})^2\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If x or y is not a Number, a bool or a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 6.0]), mindspore.float32)
>>> squared_difference = ops.SquaredDifference()
>>> output = squared_difference(x, y)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.Squeeze(axis=())[source]

Return the Tensor after deleting the dimension of size 1 in the specified axis.

Refer to mindspore.ops.squeeze() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> squeeze = ops.Squeeze(2)
>>> output = squeeze(input_x)
>>> print(output)
[[1. 1.]
 [1. 1.]
 [1. 1.]]
class tinyms.primitives.Stack(axis=0)[source]

Stacks a list of tensors in specified axis.

Refer to mindspore.ops.stack() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> stack = ops.Stack()
>>> output = stack([data1, data2])
>>> print(output)
[[0. 1.]
 [2. 3.]]
class tinyms.primitives.StandardLaplace(seed=0, seed2=0)[source]

Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). It is defined as:

\[\text{f}(x) = \frac{1}{2}\exp(-|x|)\]
Parameters:
  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • shape (Union[tuple, Tensor]) - The shape of random tensor to be generated. Only constant value is allowed when the input type is tuple. And the operator supports dynamic shape only when the input type is Tensor.

Outputs:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If seed or seed2 is not a non-negative int.

  • ValueError – If shape is a tuple containing non-positive items.

  • ValueError – If shape is a Tensor, and the rank of the Tensor is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (4, 16)
>>> stdlaplace = ops.StandardLaplace(seed=2)
>>> output = stdlaplace(shape)
>>> result = output.shape
>>> print(result)
(4, 16)
class tinyms.primitives.StandardNormal(seed=0, seed2=0)[source]

Generates random numbers according to the standard Normal (or Gaussian) random number distribution.

Refer to mindspore.ops.standard_normal() for more details.

Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. A second seed to avoid seed collision. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape is the same as the input shape. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> shape = (3, 4)
>>> stdnormal = ops.StandardNormal(seed=2)
>>> output = stdnormal(shape)
>>> print(output)
[[-1.3031056   0.64198005 -0.65207404 -1.767485  ]
 [-0.91792876  0.6508565  -0.9098478  -0.14092612]
 [ 0.7806437   1.1585592   1.9676613  -0.00440959]]
class tinyms.primitives.StopGradient[source]

StopGradient is used for eliminating the effect of a value on the gradient, such as truncating the gradient propagation from an output of a function.

Refer to mindspore.ops.stop_gradient() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> def net(x, y):
...     out1 = ops.MatMul()(x, y)
...     out2 = ops.MatMul()(x, y)
...     out2 = ops.StopGradient()(out2)
...     return out1, out2
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> grad_fn = ops.grad(net)
>>> output = grad_fn(x, y)
>>> print(output)
[[1.4100001 1.6       6.5999994]
 [1.4100001 1.6       6.5999994]]
class tinyms.primitives.StridedSlice(begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0)[source]

Extracts a strided slice of a tensor.

Refer to mindspore.ops.strided_slice() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> #         [[[1. 1. 1.]
>>> #           [2. 2. 2.]]
>>> #
>>> #          [[3. 3. 3.]
>>> #           [4. 4. 4.]]
>>> #
>>> #          [[5. 5. 5.]
>>> #           [6. 6. 6.]]]
>>> # In order to visually view the multi-dimensional array, write the above as follows:
>>> #         [
>>> #             [
>>> #                 [1,1,1]
>>> #                 [2,2,2]
>>> #             ]
>>> #             [
>>> #                 [3,3,3]
>>> #                 [4,4,4]
>>> #             ]
>>> #             [
>>> #                 [5,5,5]
>>> #                 [6,6,6]
>>> #             ]
>>> #         ]
>>> strided_slice = ops.StridedSlice()
>>> output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
>>> # Take this " output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1)) " as an example,
>>> # start = [1, 0, 2] , end = [3, 1, 3], stride = [1, 1, 1], Find a segment of (start, end),
>>> # note that end is an open interval
>>> # To facilitate understanding, this operator can be divided into three steps:
>>> # Step 1: Calculation of the first dimension:
>>> # start = 1, end = 3, stride = 1, so rows 1 and 2 are taken, giving the intermediate result:
>>> # output_1st =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #         [4,4,4]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #         [6,6,6]
>>> #     ]
>>> # ]
>>> # Step 2: Calculation of the second dimension:
>>> # start = 0, end = 1, stride = 1, so only row 0 of each block is taken, giving the intermediate output:
>>> # output_2nd =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #     ]
>>> # ]
>>> # Step 3: Calculation of the third dimension:
>>> # start = 2, end = 3, stride = 1, so only column 2 is taken,
>>> # giving the final output:
>>> # output_3rd =
>>> # [
>>> #     [
>>> #         [3]
>>> #     ]
>>> #     [
>>> #         [5]
>>> #     ]
>>> # ]
>>> # The final output after finishing is:
>>> print(output)
[[[3.]]
 [[5.]]]
>>> # Another example:
>>> output = strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> print(output)
[[[3. 3. 3.]]]
class tinyms.primitives.Sub[source]

Subtracts the second input tensor from the first input tensor element-wise.

Refer to mindspore.ops.sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> sub = ops.Sub()
>>> output = sub(x, y)
>>> print(output)
[-3 -3 -3]
class tinyms.primitives.SubAndFilter[source]

Dynamic kernel that subtracts an offset from input_x and returns the elements that fall in the range [0, max_num).

Inputs:
  • input_x (Tensor) - Input tensor.

  • max_num (int) - The upper bound (exclusive) for elements after the offset has been subtracted.

  • offset (int) - Specifies the offset value of this input_x.

Outputs:

tuple(Tensor), a tuple of 2 tensors, filter_res and filter_idx.

  • filter_res (Tensor) - The result of input_x minus offset, keeping only the elements that fall in the range [0, max_num).

  • filter_idx (Tensor) - A tensor containing the indices of elements in the input corresponding to the output tensor.

Supported Platforms:

CPU

Examples

>>> x = Tensor(np.array([1, 3, 5, 8, 9, 16]), mindspore.int32)
>>> max_num = 10
>>> offset = 5
>>> output = ops.SubAndFilter()(x, max_num, offset)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [0, 3, 4]),
 Tensor(shape=[3], dtype=Int32, value= [2, 3, 4]))
class tinyms.primitives.Svd(full_matrices=False, compute_uv=True)[source]

Computes the singular value decompositions of one or more matrices.

Refer to mindspore.ops.svd() for more details.

Parameters:
  • full_matrices (bool, optional) – If true, compute full-sized \(U\) and \(V\). If false, compute only the leading P singular vectors, where P is the minimum of M and N. Default: False.

  • compute_uv (bool, optional) – If true, compute the left and right singular vectors. If false, compute only the singular values. Default: True.

Inputs:
  • input (Tensor) - Tensor of the matrices to be decomposed. The shape should be \((*, M, N)\), the supported dtype are float32 and float64.

Outputs:
  • s (Tensor) - Singular values. The shape is \((*, P)\).

  • u (Tensor) - Left singular vectors. If compute_uv is False, u will be zero value. The shape is \((*, M, P)\). If full_matrices is True, the shape will be \((*, M, M)\).

  • v (Tensor) - Right singular vectors. If compute_uv is False, v will be zero value. The shape is \((*, N, P)\). If full_matrices is True, the shape will be \((*, N, N)\).

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, set_context
>>> from mindspore import ops
>>> set_context(device_target="CPU")
>>> svd = ops.Svd(full_matrices=True, compute_uv=True)
>>> a = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = svd(a)
>>> print(s)
[7.0652843 1.040081 ]
>>> print(u)
[[ 0.30821905 -0.48819482 0.81649697]
 [-0.90613353  0.11070572 0.40824813]
 [ 0.2896955   0.8656849  0.4082479 ]]
>>> print(v)
[[ 0.63863593 0.769509  ]
 [ 0.769509  -0.63863593]]
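
As a hedged sanity check (not part of the operator's API), the factors can be recomposed with NumPy; since full_matrices=True, only the leading P columns of u enter the product:

>>> print(np.allclose(u.asnumpy()[:, :2] @ np.diag(s.asnumpy()) @ v.asnumpy().T,
...                   a.asnumpy(), atol=1e-5))
True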
class tinyms.primitives.Tan[source]

Computes tangent of x element-wise.

Refer to mindspore.ops.tan() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tan = ops.Tan()
>>> x = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)
>>> output = tan(x)
>>> print(output)
[-1.5574081 0. 1.5574081]
class tinyms.primitives.Tanh[source]

Computes hyperbolic tangent of input element-wise.

Refer to mindspore.ops.tanh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> tanh = ops.Tanh()
>>> output = tanh(input_x)
>>> print(output)
[0.7615941 0.9640276 0.9950547 0.9993293 0.9999092]
class tinyms.primitives.TensorAdd[source]

Same as operator Add. TensorAdd will be deprecated in the future. Please use Add instead.
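
A minimal migration sketch using the recommended Add operator (assuming the usual imports from the surrounding examples):

>>> add = ops.Add()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> print(add(x, y))
[5. 7. 9.]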

class tinyms.primitives.TensorScatterAdd[source]

Creates a new tensor by adding the values from the positions in input_x indicated by indices, with values from updates. When multiple values are given for the same index, the updated result will be the sum of all values. This operation is almost equivalent to using mindspore.ops.ScatterNdAdd, except that the updates are applied on output Tensor instead of input Parameter.

Refer to mindspore.ops.tensor_scatter_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterAdd()
>>> # 5, Perform the addition operation for the first time:
>>> #      first_input_x = input_x[0][0] + updates[0] = [[0.9, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the addition operation for the second time:
>>> #      second_input_x = input_x[0][0] + updates[1] = [[3.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 3.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
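
Note the distinction from ScatterNdAdd stated above: the source tensor is not modified in place (a hedged illustration reusing input_x from this example):

>>> print(input_x)
[[-0.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]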
class tinyms.primitives.TensorScatterDiv[source]

Creates a new tensor by dividing the values from the positions in input_x indicated by indices by the values from updates. When multiple values are provided for the same index, the result of the update is to divide by these values successively. The updates are applied on the output Tensor instead of the input Parameter.

Refer to mindspore.ops.tensor_scatter_div() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.0]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterDiv()
>>> # 5, Perform the division operation for the first time:
>>> #      first_input_x = input_x[0][0] / updates[0] = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the division operation for the second time:
>>> #      second_input_x = input_x[0][0] / updates[1] = [[-0.05, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-0.05  0.3  3.6  ]
 [ 0.4   0.5  -3.2 ]]
class tinyms.primitives.TensorScatterElements(axis=0, reduction='none')[source]

Updates the value of the input Tensor through the specified reduction operation.

Refer to mindspore.ops.tensor_scatter_elements() for more details.

Warning

If there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.TensorScatterElements(0, "none")
>>> data = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> indices = Tensor(np.array([[1, 0, 2], [0, 2, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[0, 0, 0], [0, 0, 0]]), mindspore.float32)
>>> output = op(data, indices, updates)
>>> print(output)
[[ 0.0  0.0  3.0]
 [ 0.0  5.0  0.0]
 [ 7.0  0.0  0.0]]
>>> op = ops.TensorScatterElements(1, "add")
>>> data = Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.float32)
>>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
>>> updates = Tensor(np.array([[8, 8]]), mindspore.float32)
>>> output = op(data, indices, updates)
>>> print(output)
[[ 1.  2. 11.  4. 13.]]
class tinyms.primitives.TensorScatterMax[source]

Creates a new tensor by comparing the values at the positions indicated by indices in x with the values in updates; the value at each index in the result is the larger of the two.

Refer to mindspore.ops.tensor_scatter_max() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMax()
>>> # 5, Perform the max operation for the first time:
>>> #      first_input_x = Max(input_x[0][0], updates[0]) = [[1.0, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the max operation for the second time:
>>> #      second_input_x = Max(input_x[0][0], updates[1]) = [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 2.2  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterMin[source]

Creates a new tensor by comparing the values at the positions indicated by indices in input_x with the values in updates; the value at each index in the result is the smaller of the two.

Refer to mindspore.ops.tensor_scatter_min() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMin()
>>> # 5, Perform the min operation for the first time:
>>> #      first_input_x = Min(input_x[0][0], updates[0]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the min operation for the second time:
>>> #      second_input_x = Min(input_x[0][0], updates[1]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ -0.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterMul[source]

Creates a new tensor by multiplying the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the result of the update is to multiply by these values successively. The updates are applied on the output Tensor instead of the input Parameter.

Refer to mindspore.ops.tensor_scatter_mul() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMul()
>>> # 5, Perform the multiply operation for the first time:
>>> #      first_input_x = input_x[0][0] * updates[0] = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the multiply operation for the second time:
>>> #      second_input_x = input_x[0][0] * updates[1] = [[-0.22, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-0.22  0.3   3.6  ]
 [ 0.4   0.5   -3.2 ]]
class tinyms.primitives.TensorScatterSub[source]

Creates a new tensor by subtracting the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the result of the update is to subtract these values successively. This operation is almost equivalent to using mindspore.ops.ScatterNdSub, except that the updates are applied on the output Tensor instead of the input Parameter. Refer to mindspore.ops.tensor_scatter_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterSub()
>>> # 5, Perform the subtract operation for the first time:
>>> #      first_input_x = input_x[0][0] - updates[0] = [[-1.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the subtract operation for the second time:
>>> #      second_input_x = input_x[0][0] - updates[1] = [[-3.3, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-3.3000002  0.3        3.6      ]
 [ 0.4        0.5       -3.2      ]]
class tinyms.primitives.TensorScatterUpdate[source]

Creates a new tensor by updating the positions in input_x indicated by indices, with values from update. This operation is almost equivalent to using mindspore.ops.ScatterNdUpdate , except that the updates are applied on input_x instead of a zero tensor.

indices must have rank at least 2, the last axis is the depth of each index vectors. For each index vector, there must be a corresponding value in update. If the depth of each index tensor matches the rank of input_x, then each index vector corresponds to a scalar in input_x and each update updates a scalar. If the depth of each index tensor is less than the rank of input_x, then each index vector corresponds to a slice in input_x, and each update updates a slice.

The order in which updates are applied is nondeterministic, meaning that if there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1]. The shape is \((N, *)\) where \(*\) means any number of additional dimensions. The data type is Number.

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • update (Tensor) - The tensor to update the input tensor, has the same type as input, and \(update.shape = indices.shape[:-1]+input_x.shape[indices.shape[-1]:]\)

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • ValueError – If the shape of update does not match the shape required by input_x and indices.

  • RuntimeError – If a value of indices is out of the valid index range of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.TensorScatterUpdate()
>>> output = op(input_x, indices, update)
>>> print(output)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]
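
The slice-update case described above (index depth smaller than the rank of input_x) can be sketched as follows; here each index vector of depth 1 selects a whole row, so update.shape = (1,) + (3,) = (1, 3):

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0]]), mindspore.int32)
>>> update = Tensor(np.array([[1.0, 2.0, 3.0]]), mindspore.float32)
>>> output = ops.TensorScatterUpdate()(input_x, indices, update)
>>> print(output)
[[ 1.   2.   3. ]
 [ 0.4  0.5 -3.2]]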
class tinyms.primitives.TensorShape[source]

Returns the shape of the input tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.TensorShape()
>>> output = shape(input_x)
>>> print(output)
[3 2 1]
class tinyms.primitives.TensorSummary[source]

This operator writes a tensor to a summary file in protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight; see the MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_context
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.TensorSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
>>> set_context(mode=mindspore.GRAPH_MODE)
>>> summary = SummaryDemo()(Tensor([[1]]), Tensor([[2]]))
>>> print(summary)
[[3]]
class tinyms.primitives.Tile[source]

Replicates an input tensor along each dimension the number of times given by multiples.

Refer to mindspore.ops.tile() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tile = ops.Tile()
>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> output = tile(input_x, multiples)
>>> print(output)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
>>> multiples = (2, 3, 2)
>>> output = tile(input_x, multiples)
>>> print(output)
[[[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]
 [[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]]
class tinyms.primitives.TopK(sorted=True)[source]

Finds values and indices of the k largest entries along the last dimension.

Warning

  • If sorted is set to False, it will use the AICPU operator, and performance may be reduced. In addition, due to different memory layouts and traversal methods on different platforms, the display order of calculation results may be inconsistent when sorted is False.

If the input_x is a one-dimensional Tensor, finds the k largest entries in the Tensor, and outputs its value and index as a Tensor. values[k] is the k-th largest item in input_x, and its index is indices[k].

For a multi-dimensional matrix, calculates the first k entries in each row (corresponding vector along the last dimension), therefore:

\[values.shape = indices.shape = input.shape[:-1] + [k].\]

If the two compared elements are the same, the one with the smaller index value is returned first.

Parameters:

sorted (bool, optional) – If True, the obtained elements will be sorted by the values in descending order. If False, the obtained elements will not be sorted. Default: True.

Inputs:
  • input_x (Tensor) - Input to be computed, data type must be float16, float32 or int32 on CPU, and float16 or float32 on GPU.

  • k (int) - The number of top elements to be computed along the last dimension, constant input is needed.

Outputs:

A tuple consisting of values and indexes.

  • values (Tensor) - The k largest elements in each slice of the last dimension.

  • indices (Tensor) - The indices of values within the last dimension of input.

Raises:
  • TypeError – If sorted is not a bool.

  • TypeError – If input_x is not a Tensor.

  • TypeError – If k is not an int.

  • TypeError – If dtype of input_x is not one of the following: float16, float32 or int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import mindspore
>>> input_x = Tensor([1, 2, 3, 4, 5], mindspore.float16)
>>> k = 3
>>> values, indices = ops.TopK(sorted=True)(input_x, k)
>>> print((values, indices))
(Tensor(shape=[3], dtype=Float16, value= [ 5.0000e+00,  4.0000e+00,  3.0000e+00]), Tensor(shape=[3],
  dtype=Int32, value= [4, 3, 2]))
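
For a multi-dimensional input, the shape rule above can be verified directly (a small sketch; only shapes are checked here):

>>> input_x = Tensor([[1, 3, 2], [5, 4, 6]], mindspore.float16)
>>> values, indices = ops.TopK(sorted=True)(input_x, 2)
>>> print(values.shape, indices.shape)
(2, 2) (2, 2)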
class tinyms.primitives.Trace[source]

Returns the sum of the elements along the main diagonal of the input matrix (its trace).

Note

Input must be a matrix; complex numbers are not supported at present.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - A matrix to be calculated. The matrix must be two dimensional.

Outputs:

Tensor, 0D Tensor with 1 element, it has the same data type as input x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> trace = ops.Trace()
>>> output = trace(x)
>>> print(output)
15.0
>>> x = Tensor(np.arange(1, 13).reshape(3, 4), mindspore.float32)
>>> trace = ops.Trace()
>>> output = trace(x)
>>> print(output)
18.0
>>> x = Tensor(np.arange(12, 0, -1).reshape(4, 3), mindspore.float32)
>>> trace = ops.Trace()
>>> output = trace(x)
>>> print(output)
24.0
class tinyms.primitives.Transpose[source]

Permutes the dimensions of the input tensor according to input permutation.

Refer to mindspore.ops.transpose() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> transpose = ops.Transpose()
>>> output = transpose(input_x, input_perm)
>>> print(output)
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
class tinyms.primitives.TridiagonalMatMul[source]

Returns the product of two matrices, where the left one is a tridiagonal matrix.

Inputs:
  • superdiag (Tensor) - Superdiagonals of Tridiagonal Matrices to the left of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., 1, M)\). Last element is ignored.

  • maindiag (Tensor) - Maindiagonals of Tridiagonal Matrices to the left of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., 1, M)\).

  • subdiag (Tensor) - Subdiagonals of Tridiagonal Matrices to the left of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., 1, M)\). First element is ignored.

  • rhs (Tensor) - MxN Matrices to the right of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., M, N)\).

Outputs:

Tensor, with the same shape and data type as the rhs.

Raises:
  • TypeError – If dtypes of superdiag, maindiag, subdiag and rhs are not float16, float32, double, complex64, complex128.

  • ValueError – If the col of input superdiag, the col of input maindiag, the col of input subdiag and the row of input rhs are not equal.

  • ValueError – If the row of input superdiag, the row of input maindiag and the row of input subdiag are not 1.

  • ValueError – If the rank of input superdiag, the rank of input maindiag, the rank of input subdiag and the rank of input rhs are not equal to or greater than 2.

  • ValueError – If the shape of input superdiag, the shape of input maindiag and the shape of input subdiag are not same.

  • ValueError – If the shape of input superdiag ignoring the last two elements, the shape of input maindiag ignoring the last two elements, the shape of input subdiag ignoring the last two elements and the shape of input rhs ignoring the last two elements are not same.

Supported Platforms:

CPU

Examples

>>> tridiagonalmatmul = ops.TridiagonalMatMul()
>>> superdiag = Tensor(np.array([[1, 2, 3]]).astype(np.float32))
>>> maindiag = Tensor(np.array([[1, 2, 3]]).astype(np.float32))
>>> subdiag = Tensor(np.array([[1, 2, 3]]).astype(np.float32))
>>> rhs = Tensor(np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]).astype(np.float32))
>>> output = tridiagonalmatmul(superdiag,maindiag,subdiag,rhs)
>>> print(output)
[[ 2.  2.  2. ]
 [ 6.  6.  6.]
 [ 6.  6.  6.]]
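
For reference, the example corresponds to multiplying rhs by the dense tridiagonal matrix assembled from the three diagonals (the last superdiagonal element and the first subdiagonal element are ignored); a NumPy sketch:

>>> dense = np.diag([1., 2., 3.]) + np.diag([1., 2.], k=1) + np.diag([2., 3.], k=-1)
>>> print(dense @ np.ones((3, 3)))
[[2. 2. 2.]
 [6. 6. 6.]
 [6. 6. 6.]]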
class tinyms.primitives.Tril(diagonal=0)[source]

Returns the lower triangular portion of the 2-D matrix or the set of matrices in a batch. The remaining elements of the resulting Tensor are assigned a value of 0. The lower triangular section of the matrix comprises the elements present on and below the main diagonal.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

diagonal (int, optional) – An optional attribute indicating the diagonal to consider. Default: 0, indicating the main diagonal.

Inputs:
  • x (Tensor) - A Tensor with shape \((x_1, x_2, ..., x_R)\). The rank must be at least 2. Supporting all number types including bool.

Outputs:

Tensor, the same shape and data type as the input x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If diagonal is not an int.

  • TypeError – If the type of x is neither number nor bool.

  • ValueError – If the rank of x is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = ops.Tril()
>>> result = tril(x)
>>> print(result)
[[ 1  0  0  0]
 [ 5  6  0  0]
 [10 11 12  0]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = ops.Tril(diagonal=1)
>>> result = tril(x)
>>> print(result)
[[ 1  2  0  0]
 [ 5  6  7  0]
 [10 11 12 13]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = ops.Tril(diagonal=-1)
>>> result = tril(x)
>>> print(result)
[[ 0  0  0  0]
 [ 5  0  0  0]
 [10 11  0  0]
 [14 15 16  0]]
class tinyms.primitives.TrilIndices(row, col, offset=0, dtype=mindspore.int32)[source]

Calculates the indices of the lower triangular elements in a row * col matrix and returns them as a 2-by-N Tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.tril_indices() for more details.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

  • dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mstype.int32 and mstype.int64. Default: mstype.int32.

Outputs:
  • y (Tensor) - indices of the elements in lower triangular part of matrix. The type specified by dtype. The shape of output is \((2, tril\_size)\), where \(tril\_size\) is the number of elements in the lower triangular matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = ops.TrilIndices(4, 3, -1, mstype.int64)
>>> output = net()
>>> print(output)
[[1 2 2 3 3 3]
 [0 0 1 0 1 2]]
>>> print(output.dtype)
Int64
class tinyms.primitives.TripletMarginLoss(p=2, swap=False, eps=1e-06, reduction='mean')[source]

TripletMarginLoss operation.

Creates a criterion that measures the triplet loss given input tensors \(x1\), \(x2\), \(x3\) and a margin with a value greater than \(0\). This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n (i.e., anchor, positive example and negative example respectively). The shapes of all input tensors should be \((N, D)\).

The distance swap is described in detail in the paper Learning local feature descriptors with triplets and shallow convolutional neural networks by V. Balntas, E. Riba et al.

The loss function for each sample in the mini-batch is:

\[L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}\]

where

\[d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p\]
Parameters:
  • p (int, optional) – The norm degree for pairwise distance. Default: 2.

  • eps (float, optional) – A small value added for numerical stability. Default: 1e-06.

  • swap (bool, optional) – Whether to use the distance swap described in the paper above. Default: False.

  • reduction (str, optional) – Apply specific reduction method to the output: “none”, “mean”, “sum”. Default: “mean”.

Inputs:
  • x (Tensor) - A sample randomly selected from the training set. Data type must be BasicType.

  • positive (Tensor) - A sample belonging to the same category as x, with the same type and shape as x.

  • negative (Tensor) - A sample belonging to the different class from x, with the same type and shape as x.

  • margin (Tensor) - Make a margin between the positive pair and the negative pair.

Outputs:

Union[Tensor, Scalar], if reduction is “none”, its shape is \((N)\). Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If x or positive or negative or margin is not a Tensor.

  • TypeError – If dtype of x or positive or negative is not BasicType.

  • TypeError – If dtype of x, positive and negative is not the same.

  • TypeError – If margin is not float32.

  • TypeError – If p is not an int.

  • TypeError – If eps is not a float.

  • TypeError – If swap is not a bool.

  • ValueError – If dimensions of input x, positive and negative are less than or equal to 1 at the same time.

  • ValueError – If the dimension of input x or positive or negative is bigger than or equal to 8.

  • ValueError – If length of shape of margin is not 0.

  • ValueError – If shape of x, positive and negative cannot broadcast.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

GPU

Examples

>>> loss = ops.TripletMarginLoss()
>>> x = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> positive = Tensor(np.array([[0.4, 0.6], [0.4, 0.6]]), mindspore.float32)
>>> negative = Tensor(np.array([[0.2, 0.9], [0.3, 0.7]]), mindspore.float32)
>>> margin = Tensor(1.0, mindspore.float32)
>>> output = loss(x, positive, negative, margin)
>>> print(output)
0.8881968
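
With reduction='none', the per-sample losses are returned instead of their mean (a hedged sketch reusing the inputs above; only the shape is checked):

>>> loss_none = ops.TripletMarginLoss(reduction='none')
>>> output = loss_none(x, positive, negative, margin)
>>> print(output.shape)
(2,)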
class tinyms.primitives.Triu(diagonal=0)[source]

Returns the upper triangular portion of the 2-D matrix or the set of matrices in a batch. The remaining elements of the resulting Tensor are assigned a value of 0. The upper triangular section of the matrix comprises the elements present on and above the main diagonal.

Parameters:

diagonal (int, optional) – The index of diagonal. Default: 0, indicating the main diagonal.

Inputs:
  • x (Tensor) - The input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions. The data type is Number.

Outputs:
  • y (Tensor) - A tensor has the same shape and data type as input.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = ops.Triu()
>>> result = triu(x)
>>> print(result)
[[ 1  2  3  4]
 [ 0  6  7  8]
 [ 0  0 12 13]
 [ 0  0  0 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = ops.Triu(diagonal=1)
>>> result = triu(x)
>>> print(result)
[[ 0  2  3  4]
 [ 0  0  7  8]
 [ 0  0  0 13]
 [ 0  0  0  0]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = ops.Triu(diagonal=-1)
>>> result = triu(x)
>>> print(result)
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 0 11 12 13]
 [ 0  0 16 17]]
class tinyms.primitives.TriuIndices(row, col, offset=0, dtype=mindspore.int32)[source]

Calculates the indices of the upper triangular elements in a row * col matrix and returns them as a 2-by-N Tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.triu_indices() for more details.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

  • dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mstype.int32 and mstype.int64. Default: mstype.int32.

Outputs:
  • y (Tensor) - indices of the elements in the upper triangular part of the matrix. The type is specified by dtype. The shape of output is \((2, triu\_size)\), where \(triu\_size\) is the number of elements in the upper triangular matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = ops.TriuIndices(5, 4, 2, mstype.int64)
>>> output = net()
>>> print(output)
[[0 0 1]
 [2 3 3]]
>>> print(output.dtype)
Int64
class tinyms.primitives.Trunc[source]

Returns a new tensor with the truncated integer values of the elements of input.

Refer to mindspore.ops.trunc() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]), mindspore.float32)
>>> output = ops.Trunc()(x)
>>> print(output)
[ 3.  0. -0. -3.]
class tinyms.primitives.TruncateDiv[source]

Divides the first input tensor by the second input tensor element-wise and rounds the results of division towards zero. Equivalent to C-style integer division.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

Broadcasting is supported.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_div = ops.TruncateDiv()
>>> output = truncate_div(x, y)
>>> print(output)
[0 1 0]
class tinyms.primitives.TruncateMod[source]

Returns the remainder of division element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • The input data does not support 0.

  • When the elements of the input exceed 2048, the accuracy of the operator cannot be guaranteed to within a relative error of two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If shape is expressed as \((D_1, D_2, ..., D_n)\), then \(D_1 \cdot D_2 \cdot ... \cdot D_n \le 1000000\) and \(n \le 8\).

Inputs:
  • x (Union[Tensor, numbers.Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, numbers.Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision among the two inputs.

Raises:
  • TypeError – If neither x nor y is one of the following: Tensor, number, bool.

  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If the shape x and y cannot be broadcasted to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_mod = ops.TruncateMod()
>>> output = truncate_mod(x, y)
>>> print(output)
[ 2  1 -1]
class tinyms.primitives.TruncatedNormal(dtype=mindspore.float32, seed=0, seed2=0)[source]

Returns a Tensor of the specified shape filled with truncated normal values.

The generated values conform to a Gaussian distribution.

Note

  • The values of shape must be greater than zero. The output length cannot exceed 1000000.

  • When seed or seed2 is assigned a non-zero value, that value will be used as the seed. Otherwise, a random seed will be used instead.

Parameters:
  • seed (int, optional) – Random number seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

  • dtype (mindspore.dtype, optional) – Specified output data type. Must be one of the following types: mindspore.float16, mindspore.float32 and mindspore.float64. Default: mindspore.float32.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. Its type must be one of the following types: mindspore.int32 and mindspore.int64.

Outputs:

Tensor. Its shape is specified by the input shape. Its type is specified by dtype. Its values are in [-2,2].

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If data type of dtype and shape are not allowed.

  • TypeError – If seed is not an integer.

  • ValueError – If shape elements are not positive.

  • ValueError – If shape is not a 1-D tensor.

  • ValueError – If the number of elements of output is more than 1000000.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = Tensor(np.array([2, 2]), mstype.int32)
>>> seed = 0
>>> seed2 = 0
>>> truncated_normal = ops.TruncatedNormal(seed=seed, seed2=seed2)
>>> output = truncated_normal(shape)
>>> print(output)
[[ -1.303105  0.641905 ]
 [ -0.917926  0.650655 ]]
class tinyms.primitives.TupleToArray[source]

Converts a tuple to a tensor.

Refer to mindspore.ops.tuple_to_array() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = (1,2,3)
>>> print(type(input_x))
<class 'tuple'>
>>> output = ops.TupleToArray()(input_x)
>>> print(type(output))
<class 'mindspore.common.tensor.Tensor'>
>>> print(output)
[1 2 3]
class tinyms.primitives.UniformCandidateSampler(num_true, num_sampled, unique, range_max, seed=0, remove_accidental_hits=False)[source]

Uniform candidate sampler.

This function samples a set of classes (sampled_candidates) from [0, range_max-1] based on a uniform distribution.

Refer to mindspore.ops.uniform_candidate_sampler() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sampler = ops.UniformCandidateSampler(1, 3, False, 4, 1)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1], [3], [4], [6], [3]], dtype=np.int64)))
>>> print(output1.shape)
(3,)
>>> print(output2.shape)
(5, 1)
>>> print(output3.shape)
(3,)
class tinyms.primitives.UniformInt(seed=0, seed2=0)[source]

Produces random integer values i, uniformly distributed on the half-open interval [minval, maxval), that is, distributed according to the discrete probability function:

\[\text{P}(i|a,b) = \frac{1}{b-a},\]

where \(a\) indicates the min distribution parameter and \(b\) indicates the max distribution parameter.

Note

  • The number in tensor minval must be strictly less than maxval at any position after broadcasting.

  • If neither seed nor seed2 is assigned a non-zero value, a randomly generated seed is used instead.

Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. A second seed to avoid seed collision. Default: 0.

Inputs:
  • shape (Union[tuple, Tensor]) - The shape of random tensor to be generated. Only constant value is allowed.

  • minval (Tensor) - The distribution parameter, \(a\). It defines the minimum possibly generated value, with int32 data type. Only one number is supported.

  • maxval (Tensor) - The distribution parameter, \(b\). It defines the maximum possibly generated value, with int32 data type. Only one number is supported.

Outputs:

Tensor. The shape is the same as the input ‘shape’, and the data type is int32.

Raises:
  • TypeError – If neither seed nor seed2 is an int.

  • TypeError – If shape is neither a tuple nor a Tensor.

  • TypeError – If neither minval nor maxval is a Tensor.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 4)
>>> minval = Tensor(1, mstype.int32)
>>> maxval = Tensor(5, mstype.int32)
>>> uniform_int = ops.UniformInt(seed=10)
>>> output = uniform_int(shape, minval, maxval)
>>> result = output.shape
>>> print(result)
(2, 4)
class tinyms.primitives.UniformReal(seed=0, seed2=0)[source]

Produces random floating-point values, uniformly distributed to the interval [0, 1).

Parameters:
  • seed (int) – The operator-level random seed, used to generate random numbers, must be non-negative. Default: 0.

  • seed2 (int) – The global random seed, which combines with the operator-level random seed to determine the final generated random number; must be non-negative. Default: 0.

Note

  • If neither the global random seed nor the operator-level random seed is set: a randomly generated seed is used.

  • If the global random seed is set but the operator-level random seed is not: the global random seed is combined with a randomly generated seed.

  • If the global random seed is not set but the operator-level random seed is set: the default global random seed is combined with the operator-level random seed.

  • If both the global random seed and the operator-level random seed are set: they are combined together.

Inputs:
  • shape (Union[tuple, Tensor]) - The shape of tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 2)
>>> uniformreal = ops.UniformReal(seed=2)
>>> output = uniformreal(shape)
>>> result = output.shape
>>> print(result)
(2, 2)
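
A sketch of combining the global seed with the operator-level seed, per the notes above (mindspore.set_seed sets the global seed):

>>> from mindspore import set_seed
>>> set_seed(5)
>>> uniformreal = ops.UniformReal(seed=2)
>>> output = uniformreal((2, 2))
>>> print(output.shape)
(2, 2)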
class tinyms.primitives.Unique[source]

Returns the unique elements of input tensor and also return a tensor containing the index of each value of input tensor corresponding to the output unique tensor.

The output contains Tensor y and Tensor idx in the format (y, idx). The shape of Tensor y and Tensor idx is different in most cases, because Tensor y is deduplicated, while the shape of Tensor idx is consistent with the input.

To get the same shape between idx and y, please refer to mindspore.ops.UniqueWithPad.

Inputs:
  • input_x (Tensor) - The input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tuple, containing Tensor objects (y, idx), y is a tensor with the same type as input_x, and contains the unique elements in x. idx is a tensor containing indices of elements in the input corresponding to the output tensor.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> output = ops.Unique()(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
>>> y = output[0]
>>> print(y)
[1 2 5]
>>> idx = output[1]
>>> print(idx)
[0 1 2 1]
>>> # As can be seen from the above, y and idx usually have different shapes.
>>> # note that for GPU, this operator must be wrapped inside a model, and executed in graph mode.
>>> class UniqueNet(nn.Cell):
...     def __init__(self):
...         super(UniqueNet, self).__init__()
...         self.unique_op = ops.Unique()
...
...     def construct(self, x):
...         output, indices = self.unique_op(x)
...         return output, indices
...
>>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> net = UniqueNet()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
class tinyms.primitives.UniqueConsecutive(return_idx=False, return_counts=False, axis=None)[source]

Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.unique_consecutive() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 1, 1, 2]), mstype.int32)
>>> unique_consecutive = ops.UniqueConsecutive(True, True, None)
>>> output, idx, counts = unique_consecutive(x)
>>> print(output)
[1 2 3 1 2]
>>> print(idx)
[0 0 1 1 2 3 3 4]
>>> print(counts)
[2 2 1 2 1]
class tinyms.primitives.UniqueWithPad[source]

Returns unique elements and relative indexes in 1-D tensor, filled with padding num.

The basic function is the same as the Unique operator, but UniqueWithPad adds a pad function. After the input Tensor x is processed by the unique operator, a tuple (y, idx) is returned, in which the shapes of y and idx are usually not equal. Therefore, the UniqueWithPad operator fills the y Tensor with the pad_num specified by the user to give it the same shape as the Tensor idx.

Refer to mindspore.ops.unique_with_pad() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 3, 4, 5]), mindspore.int32)
>>> pad_num = 8
>>> output = ops.UniqueWithPad()(x, pad_num)
>>> print(output)
(Tensor(shape=[8], dtype=Int32, value= [1, 2, 3, 4, 5, 8, 8, 8]),
 Tensor(shape=[8], dtype=Int32, value= [0, 0, 1, 1, 2, 2, 3, 4]))
class tinyms.primitives.Unpack(axis=0)[source]

Same as operator Unstack. Unpack will be deprecated in the future. Please use Unstack instead.

class tinyms.primitives.UnravelIndex[source]

Transforms an array consisting of flattened indices into a tuple that contains coordinate arrays.

Inputs:
  • indices (Tensor) - The input Tensor, containing flattened indices to be converted into coordinates for an array whose dimensions are specified by dims. The dimension of indices must be 0-D or 1-D. Must be one of the following types: int32, int64.

  • dims (Tensor) - The shape of the array to use for unraveling indices. The dimension of dims must be 1-D. Must have the same type as indices.

Outputs:
  • y (Tensor) - Tensor, it should be 2-D or 1-D (if indices is 0-D) and has the same type as indices.

Raises:
  • TypeError – If the data type of indices and dims are different.

  • TypeError – If the data type of indices and dims is not int32 or int64.

  • ValueError – If the dimension of dims is not 1 or dimension of indices is not 1 or 0.

  • ValueError – If indices contains negative elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([2, 5]), mindspore.int32)
>>> dims = Tensor(np.array([3, 3]), mindspore.int32)
>>> output = ops.UnravelIndex()(indices, dims)
>>> print(output)
[[0 2]
 [1 2]]
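
For comparison, numpy.unravel_index returns the same coordinates, but grouped per dimension rather than per index:

>>> print(np.unravel_index(np.array([2, 5]), (3, 3)))
(array([0, 1]), array([2, 2]))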
class tinyms.primitives.UnsortedSegmentMax[source]

Computes the maximum along segments of a tensor.

Refer to mindspore.ops.unsorted_segment_max() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: There are only two segments, 0 and 1, and segment_ids=[0, 1, 1].
>>> # num_segments = 2 indicates that there are two types of segment_id,
>>> # the first number '0' in [0, 1, 1] indicates input_x[0],
>>> # the second number '1' in [0, 1, 1] indicates input_x[1],
>>> # the third number '1' in [0, 1, 1] indicates input_x[2],
>>> # input_x[0], which is [1, 2, 3] will not be compared to other segment_id.
>>> # Only the same segment_id will be compared.
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 5. 6.]]
>>>
>>> # case 2: The segment_ids=[0, 0, 1, 1].
>>> # [1, 2, 3] will compare with [4, 2, 0],
>>> # and [4, 5, 6] will compare with [4, 2, 1].
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(input_x.shape)
(4, 3)
>>> print(output)
[[4. 2. 3.]
 [4. 5. 6.]]
>>> # case 3: If the input_x have three dimensions even more, what will happen?
>>> # The shape of input_x is (2, 4, 3),
>>> # and the length of segment_ids should be the same as the first dimension of input_x.
>>> # Because the segment_ids are different, input_x[0] will not be compared to input_x[1].
>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]],
...                            [[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(input_x.shape)
(2, 4, 3)
>>> print(output)
[[[1. 2. 3.]
  [4. 2. 0.]
  [4. 5. 6.]
  [4. 2. 1.]]
 [[1. 2. 3.]
  [4. 2. 0.]
  [4. 5. 6.]
  [4. 2. 1.]]]
>>> # case 4: It has the same input as the 3rd case.
>>> # Because num_segments is equal to 2, there are two possible segment_ids, but only 0 is used here.
>>> # If a segment_id i is absent from segment_ids, then output[i] will be filled with
>>> # the smallest possible value of input_x's dtype.
>>> segment_ids = Tensor(np.array([0, 0]).astype(np.int32))
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[[ 1.0000000e+00  2.0000000e+00  3.0000000e+00]
  [ 4.0000000e+00  2.0000000e+00  0.0000000e+00]
  [ 4.0000000e+00  5.0000000e+00  6.0000000e+00]
  [ 4.0000000e+00  2.0000000e+00  1.0000000e+00]]
 [[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]]]
class tinyms.primitives.UnsortedSegmentMin[source]

Computes the minimum of a tensor along segments.

Refer to mindspore.ops.unsorted_segment_min() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_min = ops.UnsortedSegmentMin()
>>> output = unsorted_segment_min(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 2. 1.]]
class tinyms.primitives.UnsortedSegmentProd[source]

Computes the product of a tensor along segments.

Refer to mindspore.ops.unsorted_segment_prod() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_prod = ops.UnsortedSegmentProd()
>>> output = unsorted_segment_prod(input_x, segment_ids, num_segments)
>>> print(output)
[[4. 4. 3.]
 [4. 5. 6.]]
class tinyms.primitives.UnsortedSegmentSum[source]

Computes the sum of a tensor along segments.

Refer to mindspore.ops.unsorted_segment_sum() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import mindspore
>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 0.]
>>> input_x = Tensor([1, 2, 3, 4, 2, 5], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2, 3, 4], mindspore.int32)
>>> num_segments = 6
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 2. 5. 0.]
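
The same per-segment reduction applies row-wise to multi-dimensional inputs (a small sketch mirroring the UnsortedSegmentProd example above):

>>> input_x = Tensor([[1, 2, 3], [4, 5, 6], [4, 2, 1]], mindspore.float32)
>>> segment_ids = Tensor([0, 1, 0], mindspore.int32)
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, 2)
>>> print(output)
[[5. 4. 4.]
 [4. 5. 6.]]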
class tinyms.primitives.Unstack(axis=0, num=None)[source]

Unstacks a tensor along the specified axis.

Unstacks a tensor of rank R along axis dimension, output tensors will have rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\). If \(0 \le axis\), the shape of tensor in output is \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of Stack.

Parameters:
  • axis (int) – Dimension along which to unpack. Default: 0. Negative values wrap around. The range is [-R, R).

  • num (Union[None, int]) – The number of output tensors. Automatically inferred by input_x and axis if None. Default: None.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

Outputs:

A tuple of tensors, each with the same shape.

Raises:

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> unstack = ops.Unstack()
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = unstack(input_x)
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))
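
Unstacking the same (2, 4) input along axis 1 instead yields four tensors of shape (2,) (a brief sketch; only the count and shape are checked):

>>> output = ops.Unstack(axis=1)(input_x)
>>> print(len(output), output[0].shape)
4 (2,)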
class tinyms.primitives.UpdateState[source]

UpdateState is used to update the side-effect state.

Inputs:
  • value (State) - the state value to be updated.

  • expr (Expression) - the expression to evaluate before state changes.

Outputs:

State, the updated state value.

class tinyms.primitives.UpperBound(out_type=mindspore.int32)[source]

Returns a tensor containing, for each element of values, the index of its upper bound within the corresponding row of the input sorted_x.

Parameters:

out_type (mindspore.dtype, optional) – Specified output type. Supported types: mindspore.dtype.int32 and mindspore.dtype.int64. Default: mindspore.dtype.int32.

Inputs:
  • sorted_x (Tensor) - The input tensor whose dtype is real number. The rank must be 2. Each row of the sorted_x needs to be sorted in ascending order.

  • values (Tensor) - The input tensor whose dtype is the same as sorted_x. The rank must be 2. The shape[0] of the two inputs must be consistent.

Outputs:

Tensor, whose dtype is determined by out_type and whose shape is consistent with values.

Raises:
  • TypeError – If sorted_x is not a Tensor.

  • TypeError – If values is not a Tensor.

  • TypeError – If the type of sorted_x is not the same as that of values.

  • ValueError – If rank of the sorted_x is not equal to 2.

  • ValueError – If rank of the values is not equal to 2.

  • ValueError – If the number of rows of sorted_x is not consistent with that of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> upperbound = ops.UpperBound(out_type = mindspore.int32)
>>> sorted_x = Tensor(np.arange(12).reshape(3, 4).astype(np.int8))
>>> values = Tensor(np.array([[3], [6], [9]]).astype(np.int8))
>>> output = upperbound(sorted_x, values)
>>> print(output)
[[4]
 [3]
 [2]]
class tinyms.primitives.UpsampleNearest3D(output_size=None, scales=None)[source]

Performs nearest neighbor upsampling operation.

This operator scales up the volumetric input with the specified output_size or scales factors, using the nearest neighbor algorithm.

One of output_size or scales must be given, but not both.

Parameters:
  • output_size (Union[tuple[int], list[int]], optional) – A tuple or list of int specifying the output volumetric size. Default: None.

  • scales (Union[tuple[float], list[float]], optional) – A tuple or list of float specifying the upsampling factors. Default: None.

Inputs:
  • x (Tensor) - 5D tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Must be one of the following types: [float16, float32, float64].

Outputs:
  • y (Tensor) - Upsampled output with the same data type as x. Tensor of shape \((N, C, D_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – When output_size is not None and output_size is not list[int] or tuple[int].

  • TypeError – When scales is not None and scales is not list[float] or tuple[float].

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • ValueError – If any value of output_size is negative or zero when output_size is not empty.

  • ValueError – If any value of scales is negative or zero when scales is not empty.

  • ValueError – If shape of x is not 5D.

  • ValueError – If none of scales and output_size is specified or both specified.

  • ValueError – If size of scales is not equal 3 when scales is specified.

  • ValueError – If size of output_size is not equal 3 when output_size is specified.

Supported Platforms:

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...       .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> output_size = [3, 4, 5]
>>> net = ops.UpsampleNearest3D(output_size = output_size)
>>> output = net(x)
>>> print(output)
[[[[[ 1.  1.  2.  3.  4.]
    [ 1.  1.  2.  3.  4.]
    [ 5.  5.  6.  7.  8.]
    [ 5.  5.  6.  7.  8.]]
   [[ 1.  1.  2.  3.  4.]
    [ 1.  1.  2.  3.  4.]
    [ 5.  5.  6.  7.  8.]
    [ 5.  5.  6.  7.  8.]]
   [[ 9.  9. 10. 11. 12.]
    [ 9.  9. 10. 11. 12.]
    [13. 13. 14. 15. 16.]
    [13. 13. 14. 15. 16.]]]]]
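Nearest neighbor selection picks source index floor(i * in_size / out_size) along each spatial axis, which explains the duplicated rows and columns above; a NumPy sketch of that index mapping (an illustration, not the operator's implementation):

>>> import numpy as np
>>> x = np.arange(1, 17, dtype=np.float32).reshape(1, 1, 2, 2, 4)
>>> d = np.arange(3) * 2 // 3   # depth indices  [0 0 1]
>>> h = np.arange(4) * 2 // 4   # height indices [0 0 1 1]
>>> w = np.arange(5) * 4 // 5   # width indices  [0 0 1 2 3]
>>> y = x[:, :, d][:, :, :, h][:, :, :, :, w]
>>> print(y.shape)
(1, 1, 3, 4, 5)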
class tinyms.primitives.UpsampleTrilinear3D(output_size=None, scales=None, align_corners=False)[source]

Performs upsampling with trilinear interpolation across 3 dimensions for a 5-dimensional input Tensor.

This operator scales up the volumetric input with the specified output_size or scales factors, using the trilinear upscaling algorithm.

Note

One of scales and output_size MUST be specified and it is an error if both are specified.

Parameters:
  • output_size (Union[tuple[int], list[int]], optional) – A tuple or list of 3 int elements \((output\_depth, output\_height, output\_width)\). Defaults to None. Only one of scales and output_size can be specified.

  • scales (Union[tuple[float], list[float]], optional) – A tuple or list of 3 float elements \((scale\_depth, scale\_height, scale\_width)\). Defaults to None.

  • align_corners (bool, optional) – An optional bool. Default: False. If True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values.

Inputs:
  • x (Tensor) - A 5-D input tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Must be one of the following types: float16, float32, float64.

Outputs:
  • y (Tensor) - Upsampled output with the same data type as x. Tensor of shape \((N, C, D_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – When output_size is not None and output_size is not list[int] or tuple[int].

  • TypeError – When scales is not None and scales is not list[float] or tuple[float].

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • TypeError – If type of align_corners is not bool.

  • ValueError – If any value of output_size is negative or zero when output_size is not empty.

  • ValueError – If any value of scales is negative or zero when scales is not empty.

  • ValueError – If shape of x is not 5D.

  • ValueError – If none of scales and output_size is specified or both specified.

  • ValueError – If size of scales is not equal 3 when scales is specified.

  • ValueError – If size of output_size is not equal 3 when output_size is specified.

Supported Platforms:

Examples

>>> net = ops.UpsampleTrilinear3D(output_size=[4, 64, 48])
>>> in_x = Tensor(input_data=np.random.randn(2, 3, 4, 512, 256))
>>> out = net(in_x)
>>> print(out.shape)
(2, 3, 4, 64, 48)
>>>
>>> net = ops.UpsampleTrilinear3D(output_size=[2, 4, 4])
>>> in_x = Tensor(np.arange(1, 5, dtype=np.float32).reshape((1, 1, 1, 2, 2)))
>>> out = net(in_x)
>>> print(out)
[[[[[1.   1.25 1.75 2.  ]
    [1.5  1.75 2.25 2.5 ]
    [2.5  2.75 3.25 3.5 ]
    [3.   3.25 3.75 4.  ]]
   [[1.   1.25 1.75 2.  ]
    [1.5  1.75 2.25 2.5 ]
    [2.5  2.75 3.25 3.5 ]
    [3.   3.25 3.75 4.  ]]]]]

class tinyms.primitives.Xdivy[source]

Divides the first input tensor by the second input tensor element-wise. Returns zero when x is zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is float16, float32, float64, complex64, complex128 or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is float16, float32, float64, complex64, complex128 or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • TypeError – If dtype of x and y is not in [float16, float32, float64, complex64, complex128, bool].

  • ValueError – If x could not be broadcast to a tensor with shape of y.

  • RuntimeError – If the data type of x, y conversion of Parameter is given but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xdivy = ops.Xdivy()
>>> output = xdivy(x, y)
>>> print(output)
[ 1.   2.  -0.5]
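The zero-numerator guard is what distinguishes Xdivy from plain division: positions where x is 0 yield 0. A minimal NumPy sketch of the semantics (an illustration only, not the operator's implementation):

>>> import numpy as np
>>> x = np.array([0., 4., -1.], dtype=np.float32)
>>> y = np.array([2., 2., 2.], dtype=np.float32)
>>> # Zero numerators are forced to zero before the division result is used.
>>> out = np.where(x == 0, 0., x / y)
>>> print(out)
[ 0.   2.  -0.5]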
infer_dtype(x_dtype, y_dtype)[source]

Infer the output dtype of Xdivy from the input dtypes x_dtype and y_dtype.

infer_shape(x_shape, y_shape)[source]

Infer the output shape of Xdivy from the input shapes x_shape and y_shape.

infer_value(x, y)[source]

Infer the output value of Xdivy from the constant inputs x and y, for constant folding.

class tinyms.primitives.Xlogy[source]

Computes the first input tensor multiplied by the logarithm of the second input tensor element-wise. Returns zero when x is zero.

Refer to mindspore.ops.xlogy() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-5, 0, 4]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xlogy = ops.Xlogy()
>>> output = xlogy(x, y)
>>> print(output)
[-3.465736   0.        2.7725887]
class tinyms.primitives.Zeros[source]

Zeros will be deprecated in the future. Please use class mindspore.ops.zeros instead.

Creates a tensor filled with value zeros.

Creates a tensor with shape described by the first argument and fills it with value zeros in type of the second argument.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor.

  • type (mindspore.dtype) - The specified type of output tensor.

Outputs:

Tensor, with the shape specified by shape and filled with zeros of the type specified by type.

Raises:
  • TypeError – If shape is neither int nor tuple.

  • TypeError – If shape is a tuple whose elements are not all int.

Supported Platforms:

Deprecated

Examples

>>> zeros = ops.Zeros()
>>> output = zeros((2, 2), mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
class tinyms.primitives.ZerosLike[source]

Returns a Tensor with a value of 0 and its shape and data type is the same as the input.

Inputs:
  • input_x (Tensor) - Input Tensor of any dimension. The data type is Number.

Outputs:

Tensor, has the same shape and data type as input_x but filled with zeros.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> zeroslike = ops.ZerosLike()
>>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = zeroslike(input_x)
>>> print(output)
[[0. 0.]
 [0. 0.]]
class tinyms.primitives.Zeta[source]

Compute the Hurwitz zeta function ζ(x,q) of input Tensor.

Warning

This is an experimental API that is subject to change or deletion.

\[\zeta \left ( x,q \right )= \textstyle \sum_{n=0} ^ {\infty} \left ( q+n\right )^{-x}\]
Inputs:
  • x (Tensor) - A Tensor, types: float32, float64.

  • q (Tensor) - A Tensor, must have the same shape and type as x.

Outputs:

Tensor, has the same dtype and shape as the x.

Raises:
  • TypeError – If either of x and q is not a Tensor.

  • TypeError – If dtype of x is neither float32 nor float64.

  • TypeError – If dtype of q is neither float32 nor float64.

  • ValueError – If the shape of x is not the same as that of q.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([10.]), mindspore.float32)
>>> q = Tensor(np.array([1.]), mindspore.float32)
>>> zeta = ops.Zeta()
>>> z = zeta(x, q)
>>> print(z)
[1.0009946]
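Since \(\zeta(x, 1)\) reduces to the Riemann zeta function, the result above can be cross-checked against SciPy's Hurwitz zeta (this assumes SciPy is installed; it is not part of this API):

>>> from scipy.special import zeta as hurwitz_zeta
>>> print(round(float(hurwitz_zeta(10.0, 1.0)), 7))
1.0009946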
tinyms.primitives.kernel(fn=None, reg_info=None, compile_attrs=None)[source]

The decorator of the Hybrid DSL function for the Custom Op. When a function written in the Hybrid DSL is decorated by kernel, it can run as a usual Python function. It can also be used to create a mindspore.ops.Custom operator, with func_type “hybrid” or “pyfunc”. Creating mindspore.ops.Custom with mode “hybrid” from a Hybrid DSL function provides automatic dtype/shape inference for free.

Parameters:
  • fn (Function) – The Python function that will be run as a custom operator. Default: None.

  • reg_info (tuple[str, dict]) – Each item represents registration information in json format. Default: None.

  • compile_attrs (Dict) – The Python object is used to distinguish the compiled function. Default: None.

Returns:

Function. If fn is not None, returns a callable function that will execute the Hybrid DSL function; if fn is None, returns a decorator that, when invoked with a single fn argument, yields the same callable function as when fn is not None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import ops, Tensor
>>> from mindspore.ops import kernel, DataType, CustomRegOp
...
>>> # Create a dict for the compile flags.
>>> attrs = {
...     "test1": True,
...     "test2": "good",
...     "test3": 12,
... }
>>> # Create the reg info json string.
>>> op_gpu_info = CustomRegOp() \
...     .input(0, "a") \
...     .input(0, "b") \
...     .output(0, "y") \
...     .dtype_format(DataType.F32_None, DataType.F32_None, DataType.F32_None) \
...     .target("GPU") \
...     .get_op_info()
>>>
>>> # Create inputs for the custom op.
>>> input_x = np.ones([4, 4]).astype(np.float32)
>>> input_y = np.ones([4, 4]).astype(np.float32)
...
>>> # Write a Hybrid DSL function through the decorator @kernel.
>>> # We can also pass the compile attrs and the reg info through the decorator.
>>> @kernel(reg_info=op_gpu_info, compile_attrs=attrs)
... def outer_product(a, b):
...     c = output_tensor(a.shape, a.dtype)
...
...     with block_realize(c):
...         for i0 in range(a.shape[0]):
...             for i1 in range(b.shape[1]):
...                 c[i0, i1] = 0.0
...                 for i2 in range(a.shape[1]):
...                     c[i0, i1] = c[i0, i1] + (a[i0, i2] * b[i2, i1])
...     return c
...
>>> # We can use the function directly as a python function.
>>> # In this case, the inputs should be numpy arrays.
>>> result = outer_product(input_x, input_y)
...
>>> # Create a custom op with mode "hybrid" (default value) by the Hybrid DSL function.
>>> # In this case, we will enjoy the automatic dtype/shape infer for free.
>>> # The inputs should be mindspore tensors.
>>> test_op_hybrid = ops.Custom(outer_product)
>>> output = test_op_hybrid(Tensor(input_x), Tensor(input_y))
tinyms.primitives.ms_kernel(fn=None, reg_info=None, compile_attrs=None)[source]

Same as the decorator kernel. ms_kernel will be deprecated in the future. Please use kernel instead.

Supported Platforms:

Deprecated

class tinyms.primitives.AdaptiveMaxPool2D(output_size)[source]

Performs 2D adaptive max pooling on a multi-plane input signal.

Refer to mindspore.ops.adaptive_max_pool2d() for more details.

Parameters:

output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If it is None, it means the output size is the same as the input size.

Inputs:
  • input_x (Tensor) - The input of AdaptiveMaxPool2D, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

Outputs:

Tensor, with the same type as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((None, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D(2)
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((1, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
class tinyms.primitives.Median(global_median=False, axis=0, keep_dims=False)[source]

Computes the median and its corresponding indices of input tensor in the axis dimension. If global_median is True, computes the median of all elements of tensor.

Warning

When attr global_median is True, the value of the second output tensor indices is meaningless.

Parameters:
  • global_median (bool, optional) – Whether the output tensor is the median of all input tensor elements or not. Default: False.

  • axis (int, optional) – The specified dimension to compute median. Default: 0.

  • keep_dims (bool, optional) – Whether the output tensor need to retain axis dimension or not. Default: False.

Inputs:
  • x (Tensor) - A Tensor to calculate the median with. Supported dtypes: int16, int32, int64, float32, float64.

Outputs:
  • y (Tensor) - Median, has the same dtype as the x.

    • If global_median is True, the y has only one element.

    • If keep_dims is True, the y has the same shape as the x except the size of y in dimension axis is 1.

    • Otherwise, y has one dimension fewer than x (the axis dimension is removed).

  • indices (Tensor) - Indices, has the same shape as y, with dtype int64.

Raises:
  • TypeError – If dtype of x is not one of the following: int16, int32, int64, float32, float64.

  • TypeError – If input x is not a Tensor.

  • TypeError – If global_median or keep_dims is assigned a nonboolean value.

  • TypeError – If axis is not int.

  • ValueError – If axis is not in range of [-x.dim, x.dim-1].

Supported Platforms:

GPU CPU

Examples

>>> # case 1 : common median compute
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[5, 1, 2],[3, 5, 7], [1, 6, 4]]).astype(np.int64))
>>> median = ops.Median(global_median=False, axis=0, keep_dims=False)
>>> y = median(x)
>>> print(y)
(Tensor(shape=[3], dtype=Int64, value= [3, 5, 4]), Tensor(shape=[3], dtype=Int64, value= [1, 1, 2]))
>>> # case 2 : global median compute
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 7, 6],[5, 1, 3],[9, 17, 1]]).astype(np.int32))
>>> median = ops.Median(global_median=True)
>>> y = median(x)
>>> print(y)
(Tensor(shape=[], dtype=Int32, value= 5), Tensor(shape=[], dtype=Int64, value= 0))
class tinyms.primitives.Roll(shift, axis)[source]

Rolls the elements of a tensor along an axis.

Refer to mindspore.ops.roll() for more details.

Parameters:
  • shift (Union[list(int), tuple(int), int]) – Specifies the number of places by which elements are shifted positively (towards larger indices) along the specified dimension. Negative shifts will roll the elements in the opposite direction.

  • axis (Union[list(int), tuple(int), int]) – Specifies the dimension indexes of shape to be rolled.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([0, 1, 2, 3, 4]).astype(np.float32))
>>> op = ops.Roll(shift=2, axis=0)
>>> output = op(input_x)
>>> print(output)
[3. 4. 0. 1. 2.]
>>> input_x = Tensor(np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]).astype(np.float32))
>>> op = ops.Roll(shift=-1, axis=0)
>>> output = op(input_x)
>>> print(output)
[[5. 6. 7. 8. 9.]
 [0. 1. 2. 3. 4.]]
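The shift and axis conventions match NumPy's roll, so the first example can be reproduced directly (a NumPy cross-check, for illustration only):

>>> import numpy as np
>>> print(np.roll(np.arange(5, dtype=np.float32), 2, axis=0))
[3. 4. 0. 1. 2.]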
class tinyms.primitives.UniqueConsecutive(return_idx=False, return_counts=False, axis=None)[source]

Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.unique_consecutive() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 1, 1, 2]), mstype.int32)
>>> unique_consecutive = ops.UniqueConsecutive(True, True, None)
>>> output, idx, counts = unique_consecutive(x)
>>> print(output)
[1 2 3 1 2]
>>> print(idx)
[0 0 1 1 2 3 3 4]
>>> print(counts)
[2 2 1 2 1]
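Conceptually this is run-length grouping: each run of consecutive equal elements collapses to one representative. A plain-Python sketch with itertools.groupby reproduces the output and counts above:

>>> from itertools import groupby
>>> x = [1, 1, 2, 2, 3, 1, 1, 2]
>>> # One (value, run length) pair per consecutive group.
>>> vals, counts = zip(*[(k, len(list(g))) for k, g in groupby(x)])
>>> print(list(vals), list(counts))
[1, 2, 3, 1, 2] [2, 2, 1, 2, 1]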
tinyms.primitives.abs(input)[source]

Returns absolute value of a tensor element-wise.

\[out_i = |input_i|\]
Parameters:

input (Tensor) – The input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> output = ops.abs(input)
>>> print(output)
[1. 1. 0.]
tinyms.primitives.absolute(input)[source]

Alias for mindspore.ops.abs() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.accumulate_n(x)[source]

Computes accumulation of all input tensors element-wise.

mindspore.ops.accumulate_n() is similar to mindspore.ops.addn(), but there is a significant difference between them: accumulate_n will not wait for all of its inputs to be ready before summing. That is to say, accumulate_n is able to save memory when inputs are ready at different times, since the minimum temporary storage is proportional to the output size rather than the input size.

Parameters:

x (Union(tuple[Tensor], list[Tensor])) – The input tuple or list is made up of multiple tensors whose dtype is number to be added together. Each element of tuple or list should have the same shape.

Returns:

Tensor, has the same shape and dtype as each entry of x.

Raises:
  • TypeError – If x is neither tuple nor list.

  • ValueError – If there is an input element with a different shape.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = ops.accumulate_n([x, y, x, y])
>>> print(output)
[10. 14. 18.]
tinyms.primitives.acos(input)[source]

Computes arccosine of input tensors element-wise.

\[out_i = \cos^{-1}(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = ops.acos(input)
>>> print(output)
[0.737726  1.5307857 1.2661036 0.9764105]
tinyms.primitives.acosh(input)[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]

Warning

Given an input tensor input, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf].

Parameters:

input (Tensor) – The input tensor of inverse hyperbolic cosine function.

Returns:

Tensor, has the same shape and type as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = ops.acosh(x)
>>> print(output)
[0.        0.9624237 1.7627472 5.298292 ]
tinyms.primitives.adaptive_avg_pool1d(input, output_size)[source]

Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically, the input is of shape \((N, C, L_{in})\), adaptive_avg_pool1d outputs regional average in the \(L_{in}\)-dimension. The output is of shape \((N, C, L_{out})\), where \(L_{out}\) is defined by output_size.

Note

\(L_{in}\) must be divisible by output_size.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C, L_{in})\), with float16 or float32 data type.

  • output_size (int) – the target output size \(L_{out}\).

Returns:

Tensor of shape \((N, C, L_{out})\), has the same type as input.

Raises:
  • TypeError – If output_size is not an int.

  • TypeError – If input is neither float16 nor float32.

  • ValueError – If output_size is less than 1.

  • ValueError – If length of shape of input is not equal to 3.

  • ValueError – If the last dimension of input is smaller than output_size.

  • ValueError – If the last dimension of input is not divisible by output_size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_avg_pool1d(input, output_size=2)
>>> print(output.shape)
(1, 3, 2)
tinyms.primitives.adaptive_avg_pool2d(input, output_size)[source]

Performs 2D adaptive average pooling on a multi-plane input signal. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input features.

The input and output data format can be “NCHW” and “CHW”. N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

For adaptive average pooling for 2D:

\[\begin{split}\begin{align} h_{start} &= floor(i * H_{in} / H_{out})\\ h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\ w_{start} &= floor(j * W_{in} / W_{out})\\ w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\ Output(i,j) &= \frac{\sum Input[h_{start}:h_{end}, w_{start}:w_{end}]}{(h_{end}- h_{start}) * (w_{end}- w_{start})} \end{align}\end{split}\]
Parameters:
  • input (Tensor) – The input of adaptive_avg_pool2d, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If it is None, it means the output size is the same as the input size.

Returns:

Tensor, with the same type as the input.

Shape of the output is input_shape[:len(input_shape) - len(out_shape)] + out_shape.

\[\begin{split}out\_shape = \begin{cases} input\_x\_shape[-2] + output\_size[1], & \text{if output_size is (None, w);}\\ output\_size[0] + input\_x\_shape[-1], & \text{if output_size is (h, None);}\\ input\_x\_shape[-2:], & \text{if output_size is (None, None);}\\ (h, h), & \text{if output_size is h;}\\ (h, w), & \text{if output_size is (h, w)} \end{cases}\end{split}\]
Raises:
  • ValueError – If output_size is a tuple and the length of output_size is not 2.

  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If the dimension of input is less than or equal to the dimension of output_size.

Supported Platforms:

GPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> output = ops.adaptive_avg_pool2d(input, (None, 2))
>>> print(output)
[[[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]]
>>> # case 2: output_size=2
>>> output = ops.adaptive_avg_pool2d(input, 2)
>>> print(output)
[[[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]]
>>> # case 3: output_size=(1, 2)
>>> output = ops.adaptive_avg_pool2d(input, (1, 2))
>>> print(output)
[[[4.5 5.5]]
 [[4.5 5.5]]
 [[4.5 5.5]]]
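The window formula above translates directly into code: each output cell averages the input region between its floor/ceil boundaries. The following NumPy sketch (the helper adaptive_avg_2d is hypothetical, written only to illustrate the formula) reproduces case 2:

>>> import math
>>> import numpy as np
>>> def adaptive_avg_2d(x, out_h, out_w):
...     # Direct translation of the h_start/h_end, w_start/w_end formula.
...     h_in, w_in = x.shape[-2:]
...     out = np.empty(x.shape[:-2] + (out_h, out_w), dtype=x.dtype)
...     for i in range(out_h):
...         hs, he = (i * h_in) // out_h, math.ceil((i + 1) * h_in / out_h)
...         for j in range(out_w):
...             ws, we = (j * w_in) // out_w, math.ceil((j + 1) * w_in / out_w)
...             out[..., i, j] = x[..., hs:he, ws:we].mean(axis=(-2, -1))
...     return out
>>> x = np.arange(1, 10, dtype=np.float32).reshape(1, 3, 3)
>>> print(adaptive_avg_2d(x, 2, 2))
[[[3. 4.]
  [6. 7.]]]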
tinyms.primitives.adaptive_avg_pool3d(input, output_size)[source]

Performs 3D adaptive average pooling on a multi-plane input signal. That is, for any input size, the size of the specified output is \((D, H, W)\). The number of output features is equal to the number of input planes.

Suppose the last 3 dimension size of x is \((inD, inH, inW)\), the last 3 dimension size of output is \((outD, outH, outW)\).

\[\begin{split}\begin{array}{ll} \\ \forall \quad od \in [0,outD-1], oh \in [0,outH-1], ow \in [0,outW-1]\\ output[od,oh,ow] = \\ \qquad mean(x[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\ where,\\ \qquad istartD= \left\lceil \frac{od * inD}{outD} \right\rceil \\ \qquad iendD=\left\lfloor \frac{(od+1)* inD}{outD} \right\rfloor \\ \qquad istartH=\left\lceil \frac{oh * inH}{outH} \right\rceil \\ \qquad iendH=\left\lfloor \frac{(oh+1) * inH}{outH} \right\rfloor \\ \qquad istartW=\left\lceil \frac{ow * inW}{outW} \right\rceil \\ \qquad iendW=\left\lfloor \frac{(ow+1) * inW}{outW} \right\rfloor \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input of adaptive_avg_pool3d, which is a 5D or 4D Tensor.

  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((D, H, W)\), or an int D for \((D, D, D)\). \(D\), \(H\) and \(W\) can be int or None which means the output size is the same as that of the input.

Returns:

Tensor, with the same type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If the dimension of input is not 4D or 5D.

  • ValueError – If output_size value is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(3, 3, 4)
>>> output_size=(3, 3, 4)
>>> input_val = np.random.randn(4, 3, 5, 6, 7)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(4, 3, 3, 3, 4)
>>> # case 2: output_size=5
>>> output_size=5
>>> input_val = np.random.randn(2, 3, 8, 6, 12)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(2, 3, 5, 5, 5)
>>> # case 3: output_size=(None, 4, 5)
>>> output_size=(None, 4, 5)
>>> input_val = np.random.randn(4, 1, 9, 10, 8)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(4, 1, 9, 4, 5)
tinyms.primitives.adaptive_max_pool1d(input, output_size)[source]

Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically, the input is of shape \((N, C, L_{in})\), adaptive_max_pool1d outputs regional maximum in the \(L_{in}\)-dimension. The output is of shape \((N, C, L_{out})\), where \(L_{out}\) is defined by output_size.

Note

\(L_{in}\) must be divisible by output_size.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C, L_{in})\), with float16 or float32 data type.

  • output_size (int) – the target output size \(L_{out}\).

Returns:

Tensor of shape \((N, C, L_{out})\), has the same type as input.

Raises:
  • TypeError – If input is neither float16 nor float32.

  • TypeError – If output_size is not an int.

  • ValueError – If output_size is less than 1.

  • ValueError – If the last dimension of input is smaller than output_size.

  • ValueError – If the last dimension of input is not divisible by output_size.

  • ValueError – If length of shape of input is not equal to 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_max_pool1d(input, output_size=2)
>>> print(output.shape)
(1, 3, 2)
tinyms.primitives.adaptive_max_pool2d(input, output_size, return_indices=False)[source]

This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes.

The input and output data format can be “NCHW” and “CHW”. N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

\[\begin{split}\begin{align} h_{start} &= floor(i * H_{in} / H_{out})\\ h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\ w_{start} &= floor(j * W_{in} / W_{out})\\ w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\ Output(i,j) &= {\max Input[h_{start}:h_{end}, w_{start}:w_{end}]} \end{align}\end{split}\]

Note

Ascend platform only supports float16 type for input.

Parameters:
  • input (Tensor) – A 3D or 4D tensor, with float16, float32 or float64 data type.

  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If it is None, it means the output size is the same as the input size.

  • return_indices (bool) – If return_indices is True, the indices of the max values will be output. Default: False.

Returns:

Tensor, with the same dtype as the input, and whose spatial size is determined by output_size. If return_indices is True, the indices of the max values are also returned.

Raises:
  • TypeError – If output_size is not int or tuple.

  • TypeError – If input is not a tensor.

  • TypeError – If return_indices is not a bool.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If output_size is a tuple and the length of output_size is not 2.

  • ValueError – If the data format of input is not “NCHW” or “CHW”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> output = ops.adaptive_max_pool2d(input, (None, 2))
>>> print(output)
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> output = ops.adaptive_max_pool2d(input, 2)
>>> print(output)
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> output = ops.adaptive_max_pool2d(input, (1, 2))
>>> print(output)
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
tinyms.primitives.adaptive_max_pool3d(input, output_size, return_indices=False)[source]

Calculates the 3D adaptive max pooling for an input Tensor.

Parameters:
  • input (Tensor) – Tensor, with shape \((C, D, H, W)\) or \((N, C, D, H, W)\).

  • output_size (Union[int, tuple]) – The specified output size, which can be an int used for depth, height and width, or a tuple of three ints representing depth, height and width respectively. The value must be a positive integer. If it is None, the output size of the corresponding dimension is the same as the input size.

  • return_indices (bool, optional) – If return_indices is True, the indices of the max values will be output; otherwise, they will not. Default: False.

Returns:

  • y (Tensor) - Tensor, with the same number of dims and data type as the input.

  • argmax (Tensor) - Tensor, the indices of the max values, which has the same shape as y and whose data type is int32. It is output only when return_indices is True.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the dimensions number of input is not 4 or 5.

  • TypeError – If dtype of input is not int or float.

  • ValueError – If output_size is neither an int nor a tuple with shape (3,).

Supported Platforms:

GPU CPU

Examples

>>> input = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> output_size = (1, 1, 2)
>>> output = ops.adaptive_max_pool3d(input, output_size, True)
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
[[[[33 35]]]]
tinyms.primitives.add(input, other)[source]

Adds other value to input Tensor.

\[out_{i} = input_{i} + other_{i}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one of the input input , other after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If input and other is not one of the following: Tensor, number.Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: x and y are both Tensor.
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = ops.add(x, y)
>>> print(output)
[5. 7. 9.]
>>> # case 2: x is a scalar and y is a Tensor
>>> x = Tensor(1, mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = ops.add(x, y)
>>> print(output)
[5. 6. 7.]
>>> # the data type of x is int32, the data type of y is float32,
>>> # and the output is the data format of higher precision float32.
>>> print(output.dtype)
Float32
tinyms.primitives.addbmm(input, batch1, batch2, *, beta=1, alpha=1)[source]

Applies batch matrix multiplication to batch1 and batch2, with a reduced add step and add input to the result.

The optional values alpha and beta are the scale factors for the matrix-matrix product of batch1 and batch2 and for the added tensor input, respectively. If beta is 0, then input will be ignored.

\[output = \beta input + \alpha (\sum_{i=0}^{b-1} {batch1_i @ batch2_i})\]
Parameters:
  • input (Tensor) – Tensor to be added.

  • batch1 (Tensor) – The first batch of tensor to be multiplied.

  • batch2 (Tensor) – The second batch of tensor to be multiplied.

Keyword Arguments:
  • beta (Union[int, float], optional) – Multiplier for input. Default: 1.

  • alpha (Union[int, float], optional) – Multiplier for batch1 @ batch2. Default: 1.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If alpha or beta is not an int or float.

  • ValueError – If batch1, batch2 cannot apply batch matrix multiplication.

Supported Platforms:

Ascend GPU CPU

Examples

>>> m = np.ones((3, 3)).astype(np.float32)
>>> arr1 = np.arange(24).astype(np.float32).reshape((2, 3, 4))
>>> arr2 = np.arange(24).astype(np.float32).reshape((2, 4, 3))
>>> a = Tensor(arr1)
>>> b = Tensor(arr2)
>>> c = Tensor(m)
>>> output = ops.addbmm(c, a, b)
>>> print(output)
[[ 949. 1009. 1069.]
 [1285. 1377. 1469.]
 [1621. 1745. 1869.]]
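The result can be cross-checked against the defining formula with NumPy: a batched matrix product summed over the batch axis, plus beta times the added tensor (a sketch for beta=1, alpha=1, for illustration only):

>>> import numpy as np
>>> c = np.ones((3, 3), dtype=np.float32)
>>> b1 = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
>>> b2 = np.arange(24, dtype=np.float32).reshape(2, 4, 3)
>>> # beta * input + alpha * sum_i(batch1_i @ batch2_i)
>>> ref = 1 * c + 1 * np.einsum('bij,bjk->ik', b1, b2)
>>> print(ref)
[[ 949. 1009. 1069.]
 [1285. 1377. 1469.]
 [1621. 1745. 1869.]]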
tinyms.primitives.addcdiv(input, tensor1, tensor2, value=1)[source]

Performs the element-wise division of tensor tensor1 by tensor tensor2, multiplies the result by the scalar value and adds it to input.

\[y[i] = input[i] + value[i] * (tensor1[i] / tensor2[i])\]
Parameters:
  • input (Tensor) – The tensor to be added.

  • tensor1 (Tensor) – The numerator tensor.

  • tensor2 (Tensor) – The denominator tensor.

  • value (Union[Tensor, Number]) – The multiplier for tensor1/tensor2. Default: 1.

Returns:

Tensor, has the same shape and dtype as tensor1/tensor2.

Raises:
  • TypeError – If tensor1, tensor2 or input is not a Tensor.

  • ValueError – If tensor1 could not be broadcast to a tensor with shape of tensor2.

  • ValueError – If value could not be broadcast to tensors with shapes of tensor1/tensor2.

  • ValueError – If input could not be broadcast to tensors with shapes of value*(tensor1/tensor2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_data = Tensor(np.array([1, 1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([1, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.array([4, 3, 2, 1]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> y = ops.addcdiv(input_data, x1, x2, value)
>>> print(y)
[1.25      1.6666667 2.5       5.       ]
tinyms.primitives.addcmul(input, tensor1, tensor2, value=1)[source]

Performs the element-wise product of tensor tensor1 and tensor tensor2, multiplies the result by the scalar value and adds it to input.

\[output[i] = input[i] + value[i] * (tensor1[i] * tensor2[i])\]
Parameters:
  • input (Tensor) – The tensor to be added.

  • tensor1 (Tensor) – The tensor to be multiplied.

  • tensor2 (Tensor) – The tensor to be multiplied.

  • value (Union[Tensor, Number]) – The multiplier for tensor1*tensor2. Default: 1.

Returns:

Tensor, has the same shape and dtype as tensor1*tensor2.

Raises:
  • TypeError – If tensor1, tensor2 or input is not a Tensor.

  • TypeError – If dtype of input is not one of: float32, float16, int32.

  • TypeError – If dtype of tensor1 or tensor2 is not one of: float32, float16, int32.

  • TypeError – If dtype of value is not one of: float32, float16, int32.

  • ValueError – If tensor1 could not be broadcast to a tensor with shape of tensor2.

  • ValueError – If value could not be broadcast to tensors with shapes of tensor1 * tensor2.

  • ValueError – If input could not be broadcast to tensors with shapes of value*(tensor1*tensor2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_data = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> x2 = Tensor(np.array([[1, 2, 3]]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> y = ops.addcmul(input_data, x1, x2, value)
>>> print(y)
[[ 2.  3.  4.]
 [ 3.  5.  7.]
 [ 4.  7. 10.]]
tinyms.primitives.addmm(input, mat1, mat2, *, beta=1, alpha=1)[source]

Multiplies matrix mat1 and matrix mat2. The matrix input is added to the final result.

\[output = \beta input + \alpha (mat1 @ mat2)\]
Parameters:
  • input (Tensor) – Tensor to be added.

  • mat1 (Tensor) – The first tensor to be multiplied.

  • mat2 (Tensor) – The second tensor to be multiplied.

Keyword Arguments:
  • beta (Union[int, float], optional) – Multiplier for input. Default: 1.

  • alpha (Union[int, float], optional) – Multiplier for mat1 @ mat2. Default: 1.

Returns:

Tensor, has the same dtype as input.

Raises:

ValueError – If mat1, mat2 cannot apply matrix multiplication.

Supported Platforms:

Ascend GPU CPU

Examples

>>> m = np.ones((3, 3)).astype(np.float32)
>>> arr1 = np.arange(12).astype(np.float32).reshape((3, 4))
>>> arr2 = np.arange(12).astype(np.float32).reshape((4, 3))
>>> a = Tensor(arr1)
>>> b = Tensor(arr2)
>>> c = Tensor(m)
>>> output = ops.addmm(c, a, b)
>>> print(output)
[[ 43.  49.  55.]
 [115. 137. 159.]
 [187. 225. 263.]]
tinyms.primitives.addmv(x, mat, vec, *, beta=1, alpha=1)[source]

Multiplies matrix mat and vector vec. The vector x is added to the final result.

If mat is a \((N, M)\) tensor and vec is a 1-D tensor of size \(M\), then x must be broadcastable with a 1-D tensor of size \(N\). In this case, out will be a 1-D tensor of size \(N\).

The optional values beta and alpha are the scale factors for the added Tensor x and for the matrix-vector product between mat and vec, respectively. If beta is 0, then x will be ignored.

\[output = β x + α (mat @ vec)\]
Parameters:
  • x (Tensor) – Vector to be added. The shape of the tensor is \((N,)\).

  • mat (Tensor) – The first tensor to be multiplied. The shape of the tensor is \((N, M)\).

  • vec (Tensor) – The second tensor to be multiplied. The shape of the tensor is \((M,)\).

Keyword Arguments:
  • beta (scalar[int, float, bool], optional) – Multiplier for x (β). The beta must be int or float or bool. Default: 1.

  • alpha (scalar[int, float, bool], optional) – Multiplier for mat @ vec (α). The alpha must be int or float or bool. Default: 1.

Returns:

Tensor, the shape of the output tensor is \((N,)\), has the same dtype as x.

Raises:
  • TypeError – If mat, vec, x is not a Tensor.

  • TypeError – If inputs mat, vec are not of the same dtype.

  • ValueError – If mat is not a 2-D Tensor.

  • ValueError – If vec is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2., 3.]).astype(np.float32))
>>> mat = Tensor(np.array([[2., 5., 3.], [4., 2., 2.]]).astype(np.float32))
>>> vec = Tensor(np.array([3., 2., 4.]).astype(np.float32))
>>> output = ops.addmv(x, mat, vec)
>>> print(output)
[30. 27.]
tinyms.primitives.addn(x)[source]

Computes addition of all input tensors element-wise.

All input tensors must have the same shape.

Parameters:

x (Union(tuple[Tensor], list[Tensor])) – A tuple or list composed of Tensor.

Returns:

Tensor, has the same shape and dtype as each Tensor of x.

Raises:
  • TypeError – If x is neither tuple nor list.

  • ValueError – If there are Tensors with different shapes in x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = ops.addn([x, y, x, y])
>>> print(output)
[10. 14. 18.]
tinyms.primitives.addr(x, vec1, vec2, *, beta=1, alpha=1)[source]

Computes the outer product of two vectors vec1 and vec2, and adds the resulting matrix to x.

Given vec1 and vec2 of sizes \(N\) and \(M\), x must be able to broadcast to a matrix of shape \((N, M)\).

beta and alpha are optional scaling factors for the outer product of vec1 and vec2, and the matrix x respectively. Setting beta to 0 will exclude x from the computation.

\[output = β x + α (vec1 ⊗ vec2)\]
Parameters:
  • x (Tensor) – The tensor to be added. The shape of the tensor is \((N, M)\).

  • vec1 (Tensor) – The first tensor to be multiplied. The shape of the tensor is \((N,)\).

  • vec2 (Tensor) – The second tensor to be multiplied. The shape of the tensor is \((M,)\).

Keyword Arguments:
  • beta (scalar[int, float, bool], optional) – Multiplier for x (β). The beta must be int or float or bool. Default: 1.

  • alpha (scalar[int, float, bool], optional) – Multiplier for vec1 ⊗ vec2 (α). The alpha must be int or float or bool. Default: 1.

Returns:

Tensor, the shape of the output tensor is \((N, M)\), has the same dtype as x.

Raises:
  • TypeError – If x, vec1, vec2 is not a Tensor.

  • TypeError – If inputs vec1, vec2 are not the same dtype.

  • ValueError – If vec1, vec2 is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[2., 2.], [3., 2.], [3., 4.]], np.float32))
>>> vec1 = Tensor(np.array([2., 3., 2.], np.float32))
>>> vec2 = Tensor(np.array([3, 4], np.float32))
>>> output = ops.addr(x, vec1, vec2)
>>> print(output)
[[ 8. 10.]
 [12. 14.]
 [ 9. 12.]]
tinyms.primitives.adjoint(x)[source]

Calculates the conjugation of Tensor element by element, and transposes the last two dimensions.

Parameters:

x (Tensor) – Input Tensor.

Returns:

Tensor, the calculated result.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([[0. + 0.j, 1. + 1.j], [2. + 2.j, 3. + 3.j]]), mindspore.complex128)
>>> output = ops.adjoint(a)
>>> print(output)
[[0.-0.j 2.-2.j]
 [1.-1.j 3.-3.j]]
tinyms.primitives.affine_grid(theta, size, align_corners=False)[source]

Returns a 2D or 3D flow field (sampling grid) based on theta, a batch of affine matrices.

Parameters:
  • theta (Tensor) – The input tensor of flow field whose dtype is float16, float32. Input batch of affine matrices with shape \((N, 2, 3)\) for 2D grid or \((N, 3, 4)\) for 3D grid.

  • size (tuple[int]) – The target output image size. The value of target output with format \((N, C, H, W)\) for 2D grid or \((N, C, D, H, W)\) for 3D grid.

  • align_corners (bool, optional) – Geometrically, each pixel of the input is viewed as a square instead of a dot. If True, the extrema -1 and 1 refer to the centers of the corner pixels, preserving the values at those pixels. If False, the extrema -1 and 1 refer to the corners of the corner pixels, so that sampling is independent of the image resolution. Default: False.

Returns:

Tensor, a tensor whose data type is same as ‘theta’, and the shape is \((N, H, W, 2)\) for 2D grid or \((N, D, H, W, 3)\) for 3D grid.

Raises:
  • TypeError – If theta is not a Tensor or size is not a tuple.

  • ValueError – If the shape of theta is not \((N, 2, 3)\) or \((N, 3, 4)\).

  • ValueError – If the size of size is not 4 or 5.

  • ValueError – If the shape of theta is \((N, 2, 3)\) but the size of size is not 4, or the shape of theta is \((N, 3, 4)\) but the size of size is not 5.

  • ValueError – If the size[0] is not equal to the shape[0] of theta.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> theta = Tensor([[[0.8, 0.5, 0],[-0.5, 0.8, 0]]], mindspore.float32)
>>> out_size = (1, 3, 2, 3)
>>> output = ops.affine_grid(theta, out_size, False)
>>> print(output)
[[[[-0.78333336 -0.06666666]
   [-0.25       -0.4       ]
   [ 0.28333336 -0.73333335]]
  [[-0.28333336  0.73333335]
   [ 0.25        0.4       ]
   [ 0.78333336  0.06666666]]]]
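Each grid entry is theta applied to the homogeneous normalized coordinate \((x, y, 1)\); with align_corners=False the samples sit at pixel centers. The helper grid_2d below is a hypothetical NumPy sketch of that mapping, reproducing the first entry of the output above:

>>> import numpy as np
>>> def grid_2d(theta, h, w):
...     xs = (np.arange(w) + 0.5) / w * 2 - 1  # pixel-center x coords in [-1, 1]
...     ys = (np.arange(h) + 0.5) / h * 2 - 1  # pixel-center y coords in [-1, 1]
...     gy, gx = np.meshgrid(ys, xs, indexing='ij')
...     base = np.stack([gx, gy, np.ones_like(gx)], axis=-1)  # (h, w, 3)
...     return base @ np.asarray(theta).T                     # (h, w, 2)
>>> print(grid_2d([[0.8, 0.5, 0.0], [-0.5, 0.8, 0.0]], 2, 3)[0, 0].round(5))
[-0.78333 -0.06667]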
tinyms.primitives.all(input, axis=None, keep_dims=False)[source]

Reduces a dimension of input by the “logical AND” of all elements in the dimension, by default. And also can reduce a dimension of input along the axis. Determine whether the dimensions of the output and input are the same by controlling keep_dims.

Parameters:
  • input (Tensor[bool]) – The input Tensor. The dtype of the Tensor is bool. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Suppose the rank of input is r, axis must be in the range [-rank(input), rank(input)). Default: None, all dimensions are reduced.

  • keep_dims (bool, optional) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Returns:

Tensor, the dtype is bool.

  • If axis is None, and keep_dims is False, the output is a 0-D Tensor representing the “logical AND” of all elements in the input Tensor.

  • If axis is int, such as 2, and keep_dims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int), such as (2, 3), and keep_dims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> # case 1: Reduces a dimension by the "logicalAND" of all elements in the dimension.
>>> output = ops.all(x, keep_dims=True)
>>> print(output)
[[False]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.all(x, axis=0)
>>> print(output)
[True False]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.all(x, axis=1)
>>> print(output)
[False True]
tinyms.primitives.amax(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

Reduces all dimensions of a tensor by returning the maximum value in input, by default. And also can reduce a dimension of input along specified axis. keepdims determines whether the dimensions of output and input are the same.

Parameters:
  • input (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of x is r, and the value range is [-r,r).

  • keepdims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the primitive value in input with the value in initial. If True, do not replace, otherwise replace. For the index of True in where, the corresponding value in initial must be assigned. Default: None, which indicates True by default.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keepdims is False, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 1, and keepdims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keepdims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.amax(x, 1, keepdims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the maximum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = ops.amax(x)
>>> print(output)
9.0
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.amax(x, 0, True)
>>> print(output)
[[[7. 7. 7. 7. 7. 7.]
  [8. 8. 8. 8. 8. 8.]
  [9. 9. 9. 9. 9. 9.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.amax(x, 1, True)
>>> print(output)
[[[3. 3. 3. 3. 3. 3.]]
 [[6. 6. 6. 6. 6. 6.]]
 [[9. 9. 9. 9. 9. 9.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = ops.amax(x, 2, True)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
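The initial and where arguments follow the same semantics as NumPy's amax: masked-out positions fall back to initial, which also acts as the identity on empty slices (a NumPy sketch, for illustration only):

>>> import numpy as np
>>> x = np.array([[1., 9., 3.], [4., 5., 6.]], dtype=np.float32)
>>> mask = np.array([[True, False, True], [True, True, True]])
>>> # Row 0 ignores the 9 because its mask entry is False.
>>> print(np.amax(x, axis=1, initial=0.0, where=mask))
[3. 6.]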
tinyms.primitives.amin(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

Reduces all dimensions of a tensor by returning the minimum value in input, by default. And also can reduce a dimension of input along specified axis. keepdims determines whether the dimensions of output and input are the same.

Parameters:
  • input (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of x is r, and the value range is [-r,r).

  • keepdims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The maximum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the primitive value in input with the value in initial. If True, do not replace, otherwise replace. For the index of True in where, the corresponding value in initial must be assigned. Default: None, which indicates True by default.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keepdims is False, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 1, and keepdims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keepdims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.amin(x, 1, keepdims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the minimum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = ops.amin(x)
>>> print(output)
1.0
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.amin(x, 0, True)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]
  [2. 2. 2. 2. 2. 2.]
  [3. 3. 3. 3. 3. 3.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.amin(x, 1, True)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]]
 [[4. 4. 4. 4. 4. 4.]]
 [[7. 7. 7. 7. 7. 7.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = ops.amin(x, 2, True)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
tinyms.primitives.aminmax(input, *, axis=0, keepdims=False)[source]

It returns the minimum and maximum value along the given axis of input tensor.

Parameters:

input (Tensor) – The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\) .

Keyword Arguments:
  • axis (int, optional) – The dimension to reduce. The value range of axis is [-rank, rank), where “rank” is the dimension of input. Default: 0.

  • keepdims (bool, optional) – Whether to maintain dimension. When set to True, the output keeps the same number of dimensions as the input, with the dimension specified by axis having size 1; when set to False, that dimension is removed. Default: False.

Returns:

tuple (Tensor), containing the minimum value and maximum value of the input tensor.

  • If keepdims is True, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\).

  • If keepdims is False, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output0, output1 = ops.aminmax(x)
>>> print(output0, output1)
0.0 0.7
>>> output2, output3 = ops.aminmax(x, axis=-1, keepdims=True)
>>> print(output2, output3)
[0.] [0.7]
tinyms.primitives.angle(input)[source]

Returns the element-wise argument of a complex tensor. The elements in input are considered to be complex numbers of the form a+bj, where a is the real part and b is the imaginary part. The argument returned by this function is of the form \(atan2(b, a)\).

Parameters:

input (Tensor) – The input tensor. types: complex64, complex128.

Returns:

Tensor, has the float32 or float64 type and the same shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the dtype of input is not one of: complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([-1.5 + 7.8j, 3 + 5.75j], mindspore.complex64)
>>> output = ops.angle(input)
>>> print(output)
[1.7607845 1.0899091]
tinyms.primitives.any(input, axis=None, keep_dims=False)[source]

Reduces the dimensions of input by the “logical OR” of all elements in the dimension, across all dimensions by default, or along the given axis. Whether the output keeps the same number of dimensions as the input is controlled by keep_dims.

Parameters:
  • input (Tensor[bool]) – The input Tensor. The dtype of the Tensor is bool. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Suppose the rank of input is r, axis must be in the range [-rank(input), rank(input)). Default: None, all dimensions are reduced.

  • keep_dims (bool, optional) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Returns:

Tensor, the dtype is bool.

  • If axis is None, and keep_dims is False, the output is a 0-D Tensor representing the “logical OR” of all elements in the input Tensor.

  • If axis is int, such as 2, and keep_dims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int), such as (2, 3), and keep_dims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> # case 1: Reduces a dimension by the "logical OR" of all elements in the dimension.
>>> output = ops.any(x, keep_dims=True)
>>> print(output)
[[ True]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.any(x, axis=0)
>>> print(output)
[True True]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.any(x, axis=1)
>>> print(output)
[True True]
tinyms.primitives.approximate_equal(x, y, tolerance=1e-05)[source]

Returns True if abs(x-y) is smaller than tolerance element-wise, otherwise False.

\[\begin{split}out_i = \begin{cases} & \text{ if } \left | x_{i} - y_{i} \right | < \text{tolerance},\ \ True \\ & \text{ if } \left | x_{i} - y_{i} \right | \ge \text{tolerance},\ \ False \end{cases}\end{split}\]

where tolerance indicates the acceptable maximum deviation.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower precision data type will be converted to the relatively highest precision data type.

Parameters:
  • x (Tensor) – A tensor. Must be one of the following types: float32, float16. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) – A tensor of the same type and shape as x.

  • tolerance (float) – The maximum deviation that two elements can be considered equal. Default: 1e-05.

Returns:

Tensor, the shape is the same as the shape of x, and the data type is bool.

Raises:
  • TypeError – If tolerance is not a float.

  • RuntimeError – If implicit conversion between the data types of x and y requires converting the data type of a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tol = 1.5
>>> x = Tensor(np.array([1, 2, 3]), mstype.float32)
>>> y = Tensor(np.array([2, 4, 6]), mstype.float32)
>>> output = ops.approximate_equal(x, y, tol)
>>> print(output)
[ True  False  False]
tinyms.primitives.arange(start=0, end=None, step=1, *, dtype=None)[source]

Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end.

Parameters:
  • start (Union[float, int, Tensor], optional) – The start of the interval. If Tensor, the shape must be (). Default: 0.

  • end (Union[float, int, Tensor], optional) – The end of the interval, exclusive. If Tensor, the shape must be (). Default: None. If None, the given start is treated as end, and 0 is used as the starting value.

  • step (Union[float, int, Tensor], optional) – Number that increments start. If Tensor, the shape must be (). Default: 1.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The required data type of returned Tensor. Default: None. If the value is not specified or is None, the type with the highest precision in the start, end, and step parameters is inferred.

Returns:

A 1-D Tensor, with the same type as the inputs.

Raises:
  • TypeError – If start, end or step is not an int, a float, or a scalar Tensor (a special Tensor with shape ()) of a valid dtype.

  • ValueError – If step = 0.

  • ValueError – If start >= end when step > 0.

  • ValueError – If start <= end when step < 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> output = ops.arange(1, 6)
>>> print(output)
[1 2 3 4 5]
>>> print(output.dtype)
Int64
>>> output = ops.arange(0, 3, 1.2)
>>> print(output)
[0.  1.2 2.4]
>>> print(output.dtype)
Float32
>>> output = ops.arange(7, 1, -2)
>>> print(output)
[7 5 3]
>>> print(output.dtype)
Int64
>>> output = ops.arange(ms.Tensor(12.0, dtype=ms.float64), 2, ms.Tensor(-1.0, dtype=ms.float32))
>>> print(output)
[12. 11. 10.  9.  8.  7.  6.  5.  4.  3.]
>>> print(output.dtype)
Float64
tinyms.primitives.arccos(input)[source]

Alias for mindspore.ops.acos().

Supported Platforms:

Ascend GPU CPU
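
Examples

Since arccos is an alias of mindspore.ops.acos(), a minimal usage sketch (assuming the same imports as the surrounding examples; \(\cos^{-1}(0) = \pi/2\)):

>>> x = Tensor(np.array([0.0, 1.0]), mindspore.float32)
>>> output = ops.arccos(x)
>>> print(output)
[1.5707964 0.       ]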

tinyms.primitives.arccosh(input)[source]

For details, please refer to mindspore.ops.acosh().

tinyms.primitives.arcsin(x)[source]

Alias for mindspore.ops.asin().

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.arcsinh(input)[source]

Alias for mindspore.ops.asinh().

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.arctan(input)[source]

For details, please refer to mindspore.ops.atan().

tinyms.primitives.arctan2(input, other)[source]

For details, please refer to mindspore.ops.atan2().
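
Examples

Since arctan2 is an alias of mindspore.ops.atan2(), the sketch below mirrors the atan2 example later in this document (same imports assumed):

>>> input = Tensor(np.array([0, 1]), mindspore.float32)
>>> other = Tensor(np.array([1, 1]), mindspore.float32)
>>> output = ops.arctan2(input, other)
>>> print(output)
[0.        0.7853982]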

tinyms.primitives.arctanh(input)[source]

Alias for mindspore.ops.atanh().

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.argmax(input, dim=None, keepdim=False)[source]

Returns the indices of the maximum values of a tensor across a dimension.

Parameters:
  • input (Tensor) – Input tensor.

  • dim (Union[int, None], optional) – The dimension to reduce. If dim is None, the indices of the maximum value within the flattened input will be returned. Default: None.

  • keepdim (bool, optional) – Whether the output tensor retains the specified dimension. Ignored if dim is None. Default: False.

Returns:

Tensor, indices of the maximum values across a dimension.

Raises:

ValueError – If dim is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
>>> output = ops.argmax(x, dim=-1)
>>> print(output)
[1 0 0]
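With keepdim=True the reduced dimension is retained with length 1; a sketch continuing the example above (the indices are the same as in the previous output):
>>> output = ops.argmax(x, dim=-1, keepdim=True)
>>> print(output)
[[1]
 [0]
 [0]]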
tinyms.primitives.argmin(input, axis=None, keepdims=False)[source]

Returns the indices of the minimum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters:
  • input (Tensor) – Input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, None], optional) – Axis where the Argmin operation applies to. Default: None.

  • keepdims (bool, optional) – Whether the output tensor retains the specified dimension. Ignored if axis is None. Default: False.

Returns:

Tensor, indices of the min value of input tensor across the axis.

Raises:

TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = ops.argmin(input_x)
>>> print(index)
2
tinyms.primitives.argsort(input, axis=-1, descending=False)[source]

Sorts the input tensor along the given dimension in the specified order and returns the sorted indices.

Parameters:
  • input (Tensor) – The input tensor to sort.

  • axis (int) – The axis to sort along. Default: -1, which means the last axis.

  • descending (bool) – The sort order. If descending is True then the elements are sorted in descending order by value. Otherwise sort in ascending order. Default: False.

Returns:

Tensor, the indices of sorted input tensor. Data type is int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> sort = ops.argsort(x)
>>> print(sort)
[[2 1 0]
 [2 0 1]
 [0 1 2]]
tinyms.primitives.argwhere(input)[source]

Returns a Tensor of the positions of all non-zero values.

Parameters:

input (Tensor) – The input tensor. The data type is Number or Bool.

Returns:

Tensor, a 2-D Tensor whose data type is int64, containing the positions of all non-zero values of the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[[1,  0], [-5, 0]]]), mindspore.int32)
>>> output = ops.argwhere(x)
>>> print(output)
[[0 0 0]
 [0 1 0]]
tinyms.primitives.asin(input)[source]

Computes arcsine of input tensors element-wise.

\[out_i = \sin^{-1}(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32, float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = ops.asin(x)
>>> print(output)
[0.8330704  0.04001067 0.30469266 0.5943858 ]
tinyms.primitives.asinh(x)[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(x_i)\]
Parameters:

x (Tensor) – The input tensor of inverse hyperbolic sine function.

Returns:

Tensor, has the same shape and type as x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = ops.asinh(x)
>>> print(output)
[-2.3124382  1.1947632  1.8184465  5.298342 ]
tinyms.primitives.assign(variable, value)[source]

Assigns a value to a Parameter.

Args of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • variable (Parameter) – The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • value (Tensor) – The value to be assigned, has the same shape with variable.

Returns:

Tensor, has the same data type and shape as original variable.

Raises:
  • TypeError – If variable is not a Parameter.

  • TypeError – If value is not a Tensor.

  • RuntimeError – If implicit conversion between the data types of variable and value requires converting the data type of a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> value = Tensor([2.0], mindspore.float32)
>>> variable = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="variable")
>>> ops.assign(variable, value)
>>> print(variable.asnumpy())
[2.]
tinyms.primitives.assign_add(variable, value)[source]

Updates a Parameter by adding a value to it.

Args of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation.

Note

Since variable is a Parameter, its data type cannot be changed; only the type of value is allowed to be promoted to the type of variable. Because the supported conversions differ across devices, it is recommended to use the same data type for both when using this operator.

Parameters:
  • variable (Parameter) – The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • value (Tensor) – The value to be added to the variable. It must have the same shape as variable. It is recommended to use the same data type when using this operator.

Returns:

Tensor, has the same data type and shape as original variable.

Raises:
  • TypeError – If value is neither Number nor Tensor.

  • RuntimeError – If implicit conversion between the data types of variable and value requires converting the data type of a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
>>> value = Tensor(np.ones([1]).astype(np.int32) * 100)
>>> ops.assign_add(variable, value)
>>> print(variable.asnumpy())
[101]
tinyms.primitives.assign_sub(variable, value)[source]

Updates a Parameter by subtracting a value from it.

Args of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation.

Note

Since variable is a Parameter, its data type cannot be changed; only the type of value is allowed to be promoted to the type of variable. Because the supported conversions differ across devices, it is recommended to use the same data type for both when using this operator.

Parameters:
  • variable (Parameter) – The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • value (Tensor) – The value to be subtracted from the variable. It must have the same shape as variable. It is recommended to use the same data type when using this operator.

Returns:

Tensor, has the same data type and shape as original variable.

Raises:
  • TypeError – If value is neither Number nor Tensor.

  • RuntimeError – If implicit conversion between the data types of variable and value requires converting the data type of a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
>>> value = Tensor(np.ones([1]).astype(np.int32) * 100)
>>> ops.assign_sub(variable, value)
>>> print(variable.asnumpy())
[-99]
tinyms.primitives.atan(input)[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = \tan^{-1}(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32.

Returns:

A Tensor, has the same type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> output = ops.atan(x)
>>> print(output)
[0.7853982 0.       ]
tinyms.primitives.atan2(input, other)[source]

Returns arctangent of input/other element-wise.

It returns \(\theta\ \in\ [-\pi, \pi]\) such that \(input = r*\sin(\theta), other = r*\cos(\theta)\), where \(r = \sqrt{input^2 + other^2}\).

Note

  • Args input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower precision data type will be converted to the relatively highest precision data type.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Tensor, Number.number) – The input tensor or scalar. \((N,*)\) where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32, float64

  • other (Tensor, Number.number) – The input tensor or scalar. It has the same shape with input.

Note

At least one of the input args should be Tensor.

Returns:

Tensor or scalar, the shape is the same as the shape after broadcasting, and the data type is the same as input.

Raises:
  • TypeError – If input or other is not a Tensor or scalar.

  • RuntimeError – If implicit conversion between the data types of input and other requires converting the data type of a Parameter, which is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 1]), mindspore.float32)
>>> other = Tensor(np.array([1, 1]), mindspore.float32)
>>> output = ops.atan2(input, other)
>>> print(output)
[0.        0.7853982]
tinyms.primitives.atanh(x)[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = \tanh^{-1}(x_{i})\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

x (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32.

Returns:

A Tensor, has the same type as the input.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -0.5]), mindspore.float32)
>>> output = ops.atanh(x)
>>> print(output)
[ 0.         -0.54930615]
tinyms.primitives.atleast_1d(inputs)[source]

Reshapes the Tensors in inputs so that every Tensor has at least one dimension after this operation.

A scalar is converted to a 1-D Tensor; an input tensor with one or more dimensions is returned as is.

Parameters:

inputs (Union[Tensor, list[Tensor]]) – One or more input tensors.

Returns:

Tensor or list[Tensor]. If a list is returned, every element a in it satisfies a.ndim >= 1.

Raises:

TypeError – If the input is not a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.ones((2, 3)))
>>> x2 = Tensor(np.ones(()))
>>> x3 = Tensor(np.ones(5))
>>> out = ops.atleast_1d([x1, x2, x3])
>>> print(out[0].asnumpy())
[[1. 1. 1.]
 [1. 1. 1.]]
>>> print(out[1].asnumpy())
[1.]
>>> print(out[2].asnumpy())
[1. 1. 1. 1. 1.]
tinyms.primitives.atleast_2d(inputs)[source]

Reshapes the Tensors in inputs so that every Tensor has at least 2 dimensions after this operation.

A scalar or 1-D Tensor is converted to a 2-D Tensor; a tensor with more dimensions is returned as is.

Parameters:

inputs (Union[Tensor, list[Tensor]]) – One or more input tensors.

Returns:

Tensor or list[Tensor]. If a list is returned, every element a in it satisfies a.ndim >= 2.

Raises:

TypeError – If the input is not a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import ops
>>> x1 = np.ones((2, 3))
>>> x2 = np.ones(())
>>> x3 = np.ones(5)
>>> out = ops.atleast_2d([x1, x2, x3])
>>> print(out)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[ 1.00000000e+00]]), Tensor(shape=[1, 5], dtype=Float32, value=
[[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]))
tinyms.primitives.atleast_3d(inputs)[source]

Reshapes the Tensors in inputs so that every Tensor has at least 3 dimensions after this operation.

A scalar, 1-D or 2-D Tensor is converted to a 3-D Tensor; a tensor with more dimensions is returned as is.

Parameters:

inputs (Union[Tensor, list[Tensor]]) – One or more input tensors.

Returns:

Tensor or list[Tensor]. If a list is returned, every element a in it satisfies a.ndim >= 3. For example, a 1-D Tensor of shape \((N,)\) becomes a Tensor of shape \((1, N, 1)\), and a 2-D Tensor of shape \((M, N)\) becomes a tensor of shape \((M, N, 1)\).

Raises:

TypeError – If the input is not a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.ones((2, 3)))
>>> x2 = Tensor(np.ones(()))
>>> x3 = Tensor(np.ones(5))
>>> out = ops.atleast_3d([x1, x2, x3])
>>> print(out[0].asnumpy())
[[[1.]
  [1.]
  [1.]]

 [[1.]
  [1.]
  [1.]]]
>>> print(out[1].asnumpy())
[[[1.]]]
>>> print(out[2].asnumpy())
[[[1.]
  [1.]
  [1.]
  [1.]
  [1.]]]
tinyms.primitives.avg_pool1d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True)[source]

Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically the input is of shape \((N_{in}, C_{in}, L_{in})\), avg_pool1d outputs regional average in the \((L_{in})\)-dimension. Given kernel size \(ks = l_{ker}\) and stride \(s = s_0\), the operation is as follows.

\[\text{output}(N_i, C_j, l) = \frac{1}{l_{ker}} \sum_{n=0}^{l_{ker}-1} \text{input}(N_i, C_j, s_0 \times l + n)\]

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C_{in}, L_{in})\).

  • kernel_size (int) – The size of kernel window used to take the average value. Default: 1.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number or a tuple of one int number that represents the movement along the length dimension. Default: 1.

  • padding (Union(int, tuple[int])) – The pad value to be filled. If padding is an integer, the paddings of left and right are the same, equal to padding. If padding is a tuple of 2 integers, the paddings of left and right equal padding[0] and padding[1] correspondingly. Default: 0.

  • ceil_mode (bool) – If True, apply ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, include the zero-padding in the averaging calculation. Default: True.

Returns:

Tensor of shape \((N, C_{out}, L_{out})\).

Raises:
  • TypeError – If input_x is not an Tensor.

  • TypeError – If kernel_size or stride is not an int.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • ValueError – If length of shape of input_x is not equal to 3.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If padding is not int nor a tuple whose length is equal to 2.

  • ValueError – If value(s) of padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.avg_pool1d(input_x, kernel_size=6, stride=1)
>>> print(output.shape)
(1, 3, 1)
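With padding=0 and ceil_mode=False the output length follows the usual pooling formula \(L_{out} = \lfloor (L_{in} - l_{ker}) / s_0 \rfloor + 1\) (a clarifying note, not from the original text); e.g. kernel_size=3 and stride=2 on \(L_{in} = 6\) gives \(L_{out} = 2\):
>>> output = ops.avg_pool1d(input_x, kernel_size=3, stride=2)
>>> print(output.shape)
(1, 3, 2)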
tinyms.primitives.avg_pool2d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=0)[source]

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), and avg_pool2d outputs regional average in the \((H_{in}, W_{in})\)-dimension. Given kernel size \((k_{h}, k_{w})\) and stride \((stride[0], stride[1])\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \frac{1}{k_{h} * k_{w}} \sum_{m=0}^{k_{h}-1} \sum_{n=0}^{k_{w}-1} \text{input}(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)\]

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value. It is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • padding (Union(int, tuple[int])) – The pad value to be filled. If padding is an integer, the paddings of top, bottom, left and right are the same, equal to padding. If padding is a tuple of 4 integers, the paddings of top, bottom, left and right equal padding[0], padding[1], padding[2] and padding[3] correspondingly. Default: 0.

  • ceil_mode (bool) – If True, apply ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, include the zero-padding in the averaging calculation. Default: True.

  • divisor_override (int) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

Returns:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If input_x is not an Tensor.

  • TypeError – If kernel_size or stride is neither int nor tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If divisor_override is not an int.

  • ValueError – If length of shape of input_x is not equal to 4.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 2.

  • ValueError – If padding is not int nor a tuple whose length is equal to 4.

  • ValueError – If value(s) of padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> output = ops.avg_pool2d(x, kernel_size=2, stride=1)
>>> print(output)
[[[[ 2.5   3.5   4.5]
   [ 6.5   7.5   8.5]]
  [[14.5  15.5  16.5]
   [18.5  19.5  20.5]]
  [[26.5  27.5  28.5]
   [30.5  31.5  32.5]]]]
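For the example above, the same pooling shape formula gives \(H_{out} = \lfloor (3 - 2) / 1 \rfloor + 1 = 2\) and \(W_{out} = \lfloor (4 - 2) / 1 \rfloor + 1 = 3\) (assuming padding=0 and ceil_mode=False), which can be confirmed directly:
>>> print(output.shape)
(1, 3, 2, 3)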
tinyms.primitives.avg_pool3d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=0)[source]

Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), avg_pool3d outputs regional average in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

\[ \begin{align}\begin{aligned}\text{output}(N_i, C_j, d, h, w) = \frac{1}{d_{ker} * h_{ker} * w_{ker}} \sum_{l=0}^{d_{ker}-1} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1}\\\text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\end{aligned}\end{align} \]

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Currently support float16 and float32 data type.

  • kernel_size (Union[int, tuple[int]], optional) – The size of kernel used to take the average value, an int number that represents the depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent them respectively. Default: 1.

  • padding (Union(int, tuple[int]), optional) – The pad value to be filled. If padding is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to padding. If padding is a tuple of six integers, the paddings of head, tail, top, bottom, left and right equal padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] correspondingly. Default: 0.

  • ceil_mode (bool, optional) – If True, apply ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool, optional) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int, optional) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

Returns:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the same data type with input_x.

Raises:
  • TypeError – If input_x is not an Tensor.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If divisor_override is not an int.

  • ValueError – If length of shape of input_x is not equal to 5.

  • ValueError – If numbers in kernel_size or stride are not positive.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 3.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If element of padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float16)
>>> output = ops.avg_pool3d(input_x, kernel_size=2, stride=1)
>>> print(output)
[[[[[ 5.  6.]]]
  [[[17. 18.]]]]]
tinyms.primitives.baddbmm(input, batch1, batch2, beta=1, alpha=1)[source]

The result is the sum of the input and a batch matrix-matrix product of matrices in batch1 and batch2. The formula is defined as follows:

\[\text{out}_{i} = \beta \text{input}_{i} + \alpha (\text{batch1}_{i} \mathbin{@} \text{batch2}_{i})\]
Parameters:
  • input (Tensor) – The input Tensor. When batch1 is a \((C, W, T)\) Tensor and batch2 is a \((C, T, H)\) Tensor, input must be broadcastable with \((C, W, H)\) Tensor.

  • batch1 (Tensor) – \(batch1\) in the above formula. Must be 3-D Tensor, dtype is same as input.

  • batch2 (Tensor) – \(batch2\) in the above formula. Must be 3-D Tensor, dtype is same as input.

  • beta (Union[float, int], optional) – multiplier for input. The default is 1.

  • alpha (Union[float, int], optional) – multiplier for \(batch1 @ batch2\). The default is 1. Arguments beta and alpha must be integers when the input tensors are not of float type; otherwise they can be real numbers.

Returns:

Tensor, has the same dtype as input, shape will be \((C, W, H)\).

Raises:
  • TypeError – If the type of input, batch1 or batch2 is not Tensor.

  • TypeError – If the types of input, batch1 and batch2 are different.

  • TypeError – If, for inputs of type FloatTensor or DoubleTensor, arguments beta and alpha are not real numbers, or, for other input types, they are not integers.

  • TypeError – If, for Baddbmm, attributes alpha and beta are not real numbers.

  • ValueError – If batch1 and batch2 are not 3-D tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([1, 3, 3]).astype(np.float32))
>>> batch1 = Tensor(np.ones([1, 3, 4]).astype(np.float32))
>>> batch2 = Tensor(np.ones([1, 4, 3]).astype(np.float32))
>>> output = ops.baddbmm(input, batch1, batch2)
>>> print(output)
[[[5. 5. 5.]
  [5. 5. 5.]
  [5. 5. 5.]]]
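A sketch showing beta and alpha with the same all-ones inputs: every entry of \(batch1 @ batch2\) is 4 here, so each output element is \(\beta \cdot 1 + \alpha \cdot 4\):
>>> output = ops.baddbmm(input, batch1, batch2, beta=2.0, alpha=0.5)
>>> print(output)
[[[4. 4. 4.]
  [4. 4. 4.]
  [4. 4. 4.]]]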
tinyms.primitives.bartlett_window(window_length, periodic=True, *, dtype=None)[source]

Bartlett window function is a triangular-shaped weighting function used for smoothing or frequency analysis of signals in digital signal processing.

The window_length is an input Tensor which determines the returned window size, and its value should be an integer. In particular, if window_length is equal to 1, the returned window contains only a single value, 1.

The attribute periodic determines whether the returned window removes the last duplicate value of the symmetric window, preparing it for use as a periodic window. Therefore, if periodic is True, the \(N\) in the formula is \(window\_length + 1\).

\[\begin{split}w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\ 2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \\ \end{cases},\end{split}\]

where N is the full window size.

Parameters:
  • window_length (Tensor) – The size of returned window, with data type int32, int64. The input data should be an integer with a value of [0, 1000000].

  • periodic (bool, optional) – Indicates whether to return a window to be used as a periodic function or a symmetric window. Default: True.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The datatype of returned tensor. Only float16, float32 and float64 are allowed. Default: None.

Returns:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype. If dtype is None, output datatype is float32.

Raises:
  • TypeError – If window_length is not a Tensor.

  • TypeError – If the type of window_length is not one of: int32, int64.

  • TypeError – If periodic is not a bool.

  • TypeError – If dtype is not one of: float16, float32, float64.

  • ValueError – If the value range of window_length is not [0, 1000000].

  • ValueError – If the dimension of window_length is not 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(5, mstype.int32)
>>> output = ops.bartlett_window(window_length, periodic=True, dtype=mstype.float32)
>>> print(output)
[0. 0.4 0.8 0.8 0.4]
tinyms.primitives.batch_norm(input_x, running_mean, running_var, weight, bias, training=False, momentum=0.1, eps=1e-05)[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over inputs to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is weight, \(\beta\) is bias, \(\epsilon\) is eps, \(mean\) is the mean of x, \(variance\) is the variance of x.

Warning

  • For Ascend 310, the result accuracy fails to reach 1‰ due to the square root instruction.

Note

  • If training is False, weight, bias, running_mean and running_var are Tensors.

  • If training is True, weight, bias, running_mean and running_var are Parameters.

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C)\), with float16 or float32 data type.

  • running_mean (Union[Tensor, Parameter]) – The shape \((C,)\), has the same data type with weight.

  • running_var (Union[Tensor, Parameter]) – The shape \((C,)\), has the same data type with weight.

  • weight (Union[Tensor, Parameter]) – The shape \((C,)\), with float16 or float32 data type.

  • bias (Union[Tensor, Parameter]) – The shape \((C,)\), has the same data type with weight.

  • training (bool, optional) – If training is True, mean and variance are computed during training. If training is False, they’re loaded from checkpoint during inference. Default: False.

  • momentum (float, optional) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = (1 - momentum) * running\_mean + momentum * current\_mean\)). The momentum value must be in [0, 1]. Default: 0.1.

  • eps (float, optional) – A small value added for numerical stability. Default: 1e-5.

Returns:

output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

Raises:
  • TypeError – If training is not a bool.

  • TypeError – If dtype of eps or momentum is not float.

  • TypeError – If input_x, weight, bias, running_mean or running_var is not a Tensor.

  • TypeError – If dtype of input_x, weight is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[1.0, 2.0], [3.0, 4.0]], dtype.float32)
>>> running_mean = Tensor([0.5, 1.5], dtype.float32)
>>> running_var = Tensor([0.1, 0.2], dtype.float32)
>>> weight = Tensor([2.0, 2.0], dtype.float32)
>>> bias = Tensor([-1.0, -1.0], dtype.float32)
>>> output = ops.batch_norm(input_x, running_mean, running_var, weight, bias)
>>> print(output)
[[ 2.1621194  1.2360122]
 [14.810596  10.180061 ]]
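The inference result above can be reproduced by applying the formula directly; a NumPy sketch (kept output-free, since exact print formatting may differ):
>>> import numpy as np
>>> x = np.array([[1.0, 2.0], [3.0, 4.0]], np.float32)
>>> mean = np.array([0.5, 1.5], np.float32)
>>> var = np.array([0.1, 0.2], np.float32)
>>> gamma = np.array([2.0, 2.0], np.float32)
>>> beta = np.array([-1.0, -1.0], np.float32)
>>> y = (x - mean) / np.sqrt(var + 1e-5) * gamma + beta  # matches the batch_norm output above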
tinyms.primitives.batch_to_space_nd(input_x, block_shape, crops)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation divides batch dimension N into blocks with block_shape; the output tensor’s N dimension is the corresponding number of blocks after division. The output tensor’s \(w_1, ..., w_M\) dimensions are the products of the original \(w_1, ..., w_M\) dimensions and block_shape, minus the corresponding crop amounts, respectively.

If the input shape is \((n, c_1, ... c_k, w_1, ..., w_M)\), the output shape is \((n', c_1, ... c_k, w'_1, ..., w'_M)\), where

\[\begin{split}\begin{array}{ll} \\ n' = n//(block\_shape[0]*...*block\_shape[M-1]) \\ w'_i = w_i*block\_shape[i-1]-crops[i-1][0]-crops[i-1][1] \end{array}\end{split}\]
Parameters:
  • input_x (Tensor) – The input tensor. It must be at least a 2-D tensor (exactly 4-D on Ascend), and the batch dimension must be divisible by the product of block_shape.

  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing block with all values greater than or equal to 1. If block_shape is a tuple or list, the length of block_shape is M, corresponding to the number of spatial dimensions. If block_shape is an int, the block size of all M dimensions is the same, equal to block_shape. On Ascend, M must be 2.

  • crops (Union[list(int), tuple(int)]) – The crop values for spatial dimensions, containing M sub-lists, each containing 2 integer values. All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to input dimension i + offset, where offset = N-M and N is the number of input dimensions. It is required that \(input\_shape[i+offset]*block\_shape[i] > crops[i][0]+crops[i][1]\)

Returns:

Tensor, the output tensor with the same type as input.

Raises:
  • TypeError – If block_shape is not one of list, tuple, int.

  • TypeError – If crops is neither list nor tuple.

  • ValueError – If block_shape is not one dimensional when block_shape is a list or tuple.

  • ValueError – If the length of block_shape is not 2 on Ascend.

  • ValueError – If the element of block_shape is not an integer larger than or equal to 1.

  • ValueError – If shape of crops is not (M, 2), where M is the length of block_shape.

  • ValueError – If the element of crops is not an integer larger than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_shape = [2, 2]
>>> crops = [[0, 0], [0, 0]]
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = ops.batch_to_space_nd(input_x, block_shape, crops)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
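With nonzero crops the spatial dimensions shrink according to the formula above; a sketch cropping one row from the first spatial dimension (expected output reasoned from the formula and the uncropped result above):
>>> output = ops.batch_to_space_nd(input_x, block_shape, [[0, 1], [0, 0]])
>>> print(output)
[[[[1.  2.]]]]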
tinyms.primitives.bernoulli(input, p=0.5, seed=None)[source]

Randomly sets each element of the output to 0 or 1, where 1 is drawn with probability p, following the Bernoulli distribution.

\[out_{i} \sim Bernoulli(p_{i})\]
Parameters:
  • input (Tensor) – Input Tensor. Data type must be int8, uint8, int16, int32, int64, bool, float32 or float64.

  • p (Union[Tensor, float], optional) – Success probability, representing the probability of setting 1 for the corresponding position of the current Tensor. If p is a Tensor, it must have the same shape as input. The value of p must be in the range [0, 1]. Default: 0.5.

  • seed (Union[int, None], optional) – The seed value for random generating. The value of seed must be -1 or a positive integer, and -1 means using the current timestamp. Default: None, which will be treated as 0.

Returns:

output (Tensor), with the same shape and type as input .

Raises:
  • TypeError – If dtype of input is not one of: int8, uint8, int16, int32, int64, bool, float32, float64.

  • TypeError – If dtype of p is not one of: float32, float64.

  • TypeError – If dtype of seed is not int or None.

  • ValueError – If p is not in range [0, 1].

  • ValueError – If seed is less than 0 and not -1.

  • ValueError – If p is a Tensor but has different shape than input.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int8)
>>> output = ops.bernoulli(input_x, p=1.0)
>>> print(output)
[1 1 1]
>>> input_p = Tensor(np.array([0.0, 1.0, 1.0]), mindspore.float32)
>>> output = ops.bernoulli(input_x, input_p)
>>> print(output)
[0 1 1]
tinyms.primitives.bessel_i0(x)[source]

Computes the Bessel i0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i0(x)
>>> print(output)
[1.266066  1.0634835 1.0634835 1.266066]
tinyms.primitives.bessel_i0e(x)[source]

Computes the Bessel i0e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i0e(x)
>>> print(output)
[0.46575961  0.64503527  0.64503527  0.46575961]
tinyms.primitives.bessel_i1(x)[source]

Computes the Bessel i1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i1(x)
>>> print(output)
[-0.5651591  -0.25789431  0.25789431  0.5651591]
tinyms.primitives.bessel_i1e(x)[source]

Computes the Bessel i1e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i1e(x)
>>> print(output)
[-0.20791042  -0.15642083  0.15642083  0.20791042]
tinyms.primitives.bessel_j0(x)[source]

Computes the Bessel j0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_j0(x)
>>> print(output)
[0.93846981  0.76519769  0.22389078  -0.39714981]
tinyms.primitives.bessel_j1(x)[source]

Computes the Bessel j1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_j1(x)
>>> print(output)
[0.24226846  0.44005059  0.57672481 -0.06604333]
tinyms.primitives.bessel_k0(x)[source]

Computes the Bessel k0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k0(x)
>>> print(output)
[0.92441907  0.42102444  0.11389387  0.01115968]
tinyms.primitives.bessel_k0e(x)[source]

Computes the Bessel k0e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k0e(x)
>>> print(output)
[1.52410939  1.14446308  0.84156822  0.60929767]
tinyms.primitives.bessel_k1(x)[source]

Computes the Bessel k1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k1(x)
>>> print(output)
[1.65644112  0.60190723  0.13986588  0.0124835]
tinyms.primitives.bessel_k1e(x)[source]

Computes the Bessel k1e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k1e(x)
>>> print(output)
[2.73100971  1.63615349  1.03347685  0.68157595]
tinyms.primitives.bessel_y0(x)[source]

Computes the Bessel y0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_y0(x)
>>> print(output)
[-0.44451874  0.08825696  0.51037567  -0.01694074]
tinyms.primitives.bessel_y1(x)[source]

Computes the Bessel y1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_y1(x)
>>> print(output)
[-1.47147239  -0.78121282  -0.10703243  0.39792571]
tinyms.primitives.bias_add(input_x, bias)[source]

Returns the sum of the input_x and the bias Tensor. Before adding, the bias Tensor will be broadcasted to be consistent with the shape of the input_x Tensor.

Parameters:
  • input_x (Tensor) – The input tensor. The shape can be 2-5 dimensions.

  • bias (Tensor) – The bias tensor, with shape \((C)\). C must be the same as channel dimension C of input_x.

Returns:

Tensor, with the same shape and data type as input_x.

Raises:
  • TypeError – If input_x or bias is not a Tensor.

  • TypeError – If dtype of input_x or bias is inconsistent.

  • TypeError – If dimension of input_x is not in the range [2, 5].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3)), mindspore.float32)
>>> output = ops.bias_add(input_x, bias)
>>> print(output.shape)
(2, 3)
tinyms.primitives.binary_cross_entropy(logits, labels, weight=None, reduction='mean')[source]

Computes the binary cross entropy (a measure of the difference between two probability distributions) between the predictive value logits and the target value labels.

Set logits as \(x\), labels as \(y\), output as \(\ell(x, y)\), the weight of nth batch of binary cross entropy is \(w_n\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

Here, \(L\) collects the losses of the whole batch, \(l_n\) is the loss of the nth sample, and \(n\) ranges over \(1, \dots, N\). Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Warning

  • The value of logits must range from 0 to 1.

Parameters:
  • logits (Tensor) – The predictive value whose data type must be float16 or float32.

  • labels (Tensor) – The target value which has the same shape and data type as logits.

  • weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. Its shape must be broadcastable to that of logits and labels, and it must have the same data type as logits. Default: None. If set to None, the loss function will not consider any sample weights, and each sample will be treated as having equal importance when calculating the loss.

  • reduction (str, optional) – Specifies the reduction to apply to the output. Its value must be one of ‘none’, ‘mean’ or ‘sum’, meaning no reduction, averaging, or summation respectively; not case-sensitive. Default: ‘mean’.

Returns:

Tensor or Scalar. Returns Tensor that has the same dtype and shape as logits if reduction is ‘none’. Otherwise, returns a scalar Tensor.

Raises:
  • TypeError – If logits, labels or weight is not a Tensor.

  • TypeError – If dtype of logits, labels or weight (if given) is neither float16 nor float32.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

  • ValueError – If shape of labels is not the same as logits or weight (if given).

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = ops.binary_cross_entropy(logits, labels, weight)
>>> print(output)
0.38240486
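The scalar above can be verified against the formula; a NumPy sketch (the comment states the expected value):
>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1])
>>> y = np.array([0., 1., 0.])
>>> w = np.array([1., 2., 2.])
>>> loss = -(w * (y * np.log(x) + (1 - y) * np.log(1 - x))).mean()  # ≈ 0.38240486, matching the example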
tinyms.primitives.binary_cross_entropy_with_logits(logits, label, weight, pos_weight, reduction='mean')[source]

Adds sigmoid activation function to input logits, and uses the given logits to compute binary cross entropy between the logits and the label.

Sets input logits as \(X\), input label as \(Y\), input weight as \(W\), output as \(L\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})] \end{array}\end{split}\]

\(i\) indicates the \(i^{th}\) sample, \(j\) indicates the category. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

\(\ell\) indicates the method of calculating the loss: ‘none’ returns the loss values directly, ‘mean’ returns their average, and ‘sum’ returns their sum.

This operator will multiply the output by the corresponding weight. The tensor \(weight\) assigns different weights to each piece of data in the batch, and the tensor \(pos\_weight\) adds corresponding weights to the positive examples of each category.

In addition, it can trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:

\[\begin{split}\begin{array}{ll} \\ p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\ L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})] \end{array}\end{split}\]

where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification), n is the number of the sample in the batch and \(P_c\) is the weight of the positive answer for the class c. \(P_c>1\) increases the recall, \(P_c<1\) increases the precision.

Parameters:
  • logits (Tensor) – Input logits. Data type must be float16 or float32.

  • label (Tensor) – Ground truth label, has the same shape as logits. Data type must be float16 or float32.

  • weight (Tensor) – A rescaling weight applied to the loss of each batch element. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • pos_weight (Tensor) – A weight of positive examples. Must be a vector with length equal to the number of classes. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • reduction (str) – Type of reduction to be applied to loss. The optional values are ‘mean’, ‘sum’, and ‘none’, not case sensitive. If ‘none’, do not perform reduction. Default: ‘mean’.

Returns:

Tensor or Scalar, if reduction is ‘none’, it’s a tensor with the same shape and type as input logits. Otherwise, the output is a scalar.

Raises:
  • TypeError – If input logits, label, weight, pos_weight is not Tensor.

  • TypeError – If data type of input logits, label, weight, pos_weight is neither float16 nor float32.

  • TypeError – If data type of input reduction is not string.

  • ValueError – If weight or pos_weight can not be broadcast to a tensor with shape of logits.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> output = ops.binary_cross_entropy_with_logits(logits, label, weight, pos_weight)
>>> print(output)
0.3463612
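Likewise, the scalar above follows from the formula with weight and pos_weight all ones; a NumPy sketch:
>>> import numpy as np
>>> X = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]])
>>> Y = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]])
>>> p = 1 / (1 + np.exp(-X))  # sigmoid
>>> loss = -(Y * np.log(p) + (1 - Y) * np.log(1 - p)).mean()  # ≈ 0.3463612, matching the example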
tinyms.primitives.bincount(input, weights=None, minlength=0)[source]

Counts the number of occurrences of each value in input.

If you don’t specify minlength, the length of the output Tensor will be the maximum value of input plus one.

If minlength is specified, the length of the output Tensor is the maximum of max(input) + 1 and minlength.

Each value in the output Tensor marks the number of occurrences of that index in input. If weights is specified, the output results are weighted, i.e., for each index n = input[i], out[n] += weight[i] instead of out[n] += 1.

Parameters:
  • input (Tensor) – 1-d input tensor.

  • weights (Tensor, optional) – Weights, a tensor of the same shape as input. Defaults to None.

  • minlength (int, optional) – A minimum number of bins for the output tensor. Defaults to 0.

Returns:

Tensor, a tensor of shape [max(input)+1] if input is non-empty, otherwise, the shape is [0].

Raises:
  • TypeError – if input or weights is not a tensor.

  • ValueError – If input is not one-dimensional, or if input and weights do not have the same shape.

  • ValueError – If minlength is a negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([2, 4, 1, 0, 0], dtype=mstype.int64)
>>> print(ops.bincount(x, minlength=7))
[2. 1. 1. 0. 1. 0. 0.]
>>> weights = Tensor([0, 0.25, 0.5, 0.75, 1], dtype=mstype.float32)
>>> print(ops.bincount(x, weights=weights))
[1.75 0.5  0.   0.   0.25]
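The unweighted behavior mirrors numpy.bincount, which can serve as a reference (a sketch):
>>> import numpy as np
>>> print(np.bincount(np.array([2, 4, 1, 0, 0]), minlength=7))
[2 1 1 0 1 0 0]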
tinyms.primitives.bitwise_and(input, other)[source]

Returns bitwise and of two tensors element-wise.

\[out_i = input_{i} \wedge other_{i}\]

Args of input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • input (Tensor) – The first input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • other (Tensor) – The second input tensor with the same dtype as input.

Returns:

Tensor, has the same type as the input.

Raises:

TypeError – If input or other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> other = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> output = ops.bitwise_and(input, other)
>>> print(output)
[ 0  0  1 -1  1  0  1]
tinyms.primitives.bitwise_left_shift(input, other)[source]

Perform a left bitwise shift operation on the input element-wise, where the number of bits to shift is specified by other.

\[\begin{aligned} &out_{i} =input_{i} << other_{i} \end{aligned}\]
Parameters:
  • input (Union[Tensor, Scalar]) – The input to be left shifted.

  • other (Union[Tensor, Scalar]) – The number of bits to shift, applied as a left arithmetic shift.

Returns:

Tensor, the result after bitwise left shift.

Raises:
  • TypeError – If neither input nor other is a tensor.

  • TypeError – If input or other is neither an int nor a tensor of int or uint dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1024, 2]), mindspore.int16)
>>> other = Tensor(np.array([2]), mindspore.int16)
>>> output = ops.bitwise_left_shift(input, other)
>>> print(output)
[4096    8]
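Since other may also be a scalar per the signature above, a minimal sketch of the scalar form (assuming the documented scalar support; 1 << 3 = 8, and so on):

>>> output = ops.bitwise_left_shift(Tensor(np.array([1, 2, 4]), mindspore.int32), 3)
>>> print(output)
[ 8 16 32]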
tinyms.primitives.bitwise_or(input, other)[source]

Returns bitwise or of two tensors element-wise.

\[out_i = input_{i} \mid other_{i}\]

Args of input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the higher-priority data type.

Parameters:
  • input (Tensor) – The first input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • other (Tensor) – The second input tensor with the same dtype as input.

Returns:

Tensor, has the same type as the input.

Raises:

TypeError – If input or other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> other = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> output = ops.bitwise_or(input, other)
>>> print(output)
[ 0  1  1 -1 -1  3  3]
tinyms.primitives.bitwise_right_shift(input, other)[source]

Perform a right bitwise shift operation on the input element-wise, where the number of bits to shift is specified by other.

\[\begin{aligned} &out_{i} =input_{i} >> other_{i} \end{aligned}\]
Parameters:
  • input (Union[Tensor, Scalar]) – The input to be right shifted.

  • other (Union[Tensor, Scalar]) – The number of bits to shift, applied as a right arithmetic shift.

Returns:

Tensor, the result after bitwise right shift.

Raises:
  • TypeError – If neither input nor other is a tensor.

  • TypeError – If input or other is neither an int nor a tensor of int or uint dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1024, 2]), mindspore.int16)
>>> other = Tensor(np.array([2]), mindspore.int16)
>>> output = ops.bitwise_right_shift(input, other)
>>> print(output)
[256   0]
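Because the shift is arithmetic, the sign bit is preserved for negative values; a minimal sketch assuming that documented sign-preserving behavior:

>>> output = ops.bitwise_right_shift(Tensor(np.array([-8, 8]), mindspore.int32), 1)
>>> print(output)
[-4  4]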
tinyms.primitives.bitwise_xor(input, other)[source]

Returns bitwise xor of two tensors element-wise.

\[out_i = input_{i} \oplus other_{i}\]

Args of input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the higher-priority data type.

Parameters:
  • input (Tensor) – The first input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • other (Tensor) – The second input tensor with the same dtype as input.

Returns:

Tensor, has the same type as the input.

Raises:

TypeError – If input or other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> other = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> output = ops.bitwise_xor(input, other)
>>> print(output)
[ 0  1  0  0 -2  3  2]
tinyms.primitives.blackman_window(window_length, periodic=True, *, dtype=None)[source]

Blackman window function, usually used to extract finite signal segment for FFT.

The window_length is an input tensor that determines the returned window size, and its value should be an integer. In particular, if window_length is equal to 1, only a single value 1 exists in the returned window.

Attr periodic determines whether the returned window removes the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions such as the FFT. Therefore, if attr periodic is True, the \(N\) in the formula is \(window\_length + 1\).

\[w[n] = 0.42 - 0.5 \cos(\frac{2\pi n}{N - 1}) + 0.08 \cos(\frac{4\pi n}{N - 1})\]

where \(N\) is the full window size, and \(n\) is a natural number less than \(N\): [0, 1, ..., N-1].

Parameters:
  • window_length (Tensor) – The size of the returned window, with data type int32 or int64. The input data should be an integer with a value in [0, 1000000].

  • periodic (bool, optional) – Indicates whether to returns a window to be used as periodic function or a symmetric window. Default: True.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The data type of returned tensor. Only float16, float32 and float64 is allowed. Default: None.

Returns:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype. If ‘dtype’ is None, output datatype is float32.

Raises:
  • TypeError – If window_length is not a Tensor.

  • TypeError – If periodic is not a bool.

  • TypeError – If dtype is not one of: float16, float32, float64.

  • TypeError – If the type of window_length is not one of: int32, int64.

  • ValueError – If the value range of window_length is not [0, 1000000].

  • ValueError – If the dimension of window_length is not 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(10, mindspore.int32)
>>> output = ops.blackman_window(window_length, periodic=True, dtype=mindspore.float32)
>>> print(output)
[-2.9802322e-08  4.0212840e-02  2.0077014e-01  5.0978714e-01
  8.4922993e-01  1.0000000e+00  8.4922981e-01  5.0978690e-01
  2.0077008e-01  4.0212870e-02]
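With periodic=False the symmetric window is returned directly; a minimal sketch checking only the returned size (values omitted here):

>>> output = ops.blackman_window(Tensor(10, mindspore.int32), periodic=False)
>>> print(output.shape)
(10,)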
tinyms.primitives.block_diag(*inputs)[source]

Creates a block diagonal matrix from the provided tensors.

Parameters:

inputs (Tensor) – One or more tensors, the dimension of Tensor should be 0, 1 or 2.

Returns:

Tensor, two-dimensional with all input tensors arranged in order so that their top left and bottom right corners are diagonally adjacent. All other elements are set to 0.

Raises:
  • TypeError – If the input is not a Tensor.

  • ValueError – If the dimension of Tensor is not 0, 1 or 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([[4], [3], [2]], mstype.int32)
>>> x2 = Tensor([7, 6, 5], mstype.int32)
>>> x3 = Tensor(1, mstype.int32)
>>> x4 = Tensor([[5, 4, 3], [2, 1, 0]], mstype.int32)
>>> x5 = Tensor([[8, 7], [7, 8]], mstype.int32)
>>> out = ops.block_diag(x1, x2, x3, x4, x5)
>>> print(out.asnumpy())
[[4 0 0 0 0 0 0 0 0 0]
 [3 0 0 0 0 0 0 0 0 0]
 [2 0 0 0 0 0 0 0 0 0]
 [0 7 6 5 0 0 0 0 0 0]
 [0 0 0 0 1 0 0 0 0 0]
 [0 0 0 0 0 5 4 3 0 0]
 [0 0 0 0 0 2 1 0 0 0]
 [0 0 0 0 0 0 0 0 8 7]
 [0 0 0 0 0 0 0 0 7 8]]
tinyms.primitives.bmm(input_x, mat2)[source]

Computes matrix multiplication between two tensors by batch.

\[\text{output}[..., :, :] = \text{matrix}(input\_x[..., :, :]) * \text{matrix}(mat2[..., :, :])\]

The dim of input_x cannot be less than 3 and the dim of mat2 cannot be less than 2.

Parameters:
  • input_x (Tensor) – The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions.

  • mat2 (Tensor) – The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\).

Returns:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Raises:
  • ValueError – If dim of input_x is less than 3 or dim of mat2 is less than 2.

  • ValueError – If the length of the third dim of input_x is not equal to the length of the second dim of mat2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> input_x = Tensor(np.arange(24).reshape((2, 4, 1, 3)), ms.float32)
>>> mat2 = Tensor(np.arange(72).reshape((2, 4, 3, 3)), ms.float32)
>>> output = ops.bmm(input_x, mat2)
>>> print(output)
[[[[  15.   18.   21.]]
  [[ 150.  162.  174.]]
  [[ 447.  468.  489.]]
  [[ 906.  936.  966.]]]
 [[[1527. 1566. 1605.]]
  [[2310. 2358. 2406.]]
  [[3255. 3312. 3369.]]
  [[4362. 4428. 4494.]]]]
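The batched product can be cross-checked against NumPy; a minimal sketch, assuming equivalence with np.matmul semantics for these matching batch shapes (continuing the example above):

>>> a = np.arange(24).reshape((2, 4, 1, 3)).astype(np.float32)
>>> b = np.arange(72).reshape((2, 4, 3, 3)).astype(np.float32)
>>> print(np.allclose(ops.bmm(Tensor(a), Tensor(b)).asnumpy(), np.matmul(a, b)))
True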
tinyms.primitives.bounding_box_decode(anchor_box, deltas, max_shape, means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)[source]

Decode the bounding box locations, calculate the offset, and convert the offset into a Bbox, which is used to mark the target in the subsequent images, etc.

Parameters:
  • anchor_box (Tensor) – Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • deltas (Tensor) – Delta of boxes. Which has the same shape with anchor_box.

  • max_shape (tuple) – The max size limit for decoding box calculation.

  • means (tuple, optional) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple, optional) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • wh_ratio_clip (float, optional) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Returns:

Tensor, decoded boxes. It has the same data type and shape as anchor_box.

Raises:
  • TypeError – If means, stds or max_shape is not a tuple.

  • TypeError – If wh_ratio_clip is not a float.

  • TypeError – If anchor_box or deltas is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[4, 1, 2, 1], [2, 2, 2, 3]], mindspore.float32)
>>> deltas = Tensor([[3, 1, 2, 2], [1, 2, 1, 4]], mindspore.float32)
>>> output = ops.bounding_box_decode(anchor_box, deltas, max_shape=(768, 1280), means=(0.0, 0.0, 0.0, 0.0),
...                                  stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)
>>> print(output)
[[ 4.1953125  0.         0.         5.1953125]
 [ 2.140625   0.         3.859375  60.59375  ]]
tinyms.primitives.bounding_box_encode(anchor_box, groundtruth_box, means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))[source]

Encode the bounding box locations, calculate the offset between the predicted bounding boxes and the real bounding boxes, and the offset will be used as a variable for the loss.

Parameters:
  • anchor_box (Tensor) – Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • groundtruth_box (Tensor) – Ground truth boxes. Which has the same shape with anchor_box.

  • means (tuple, optional) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple, optional) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Returns:

Tensor, encoded bounding boxes. It has the same data type and shape as input anchor_box.

Raises:
  • TypeError – If means or stds is not a tuple.

  • TypeError – If anchor_box or groundtruth_box is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]], mindspore.float32)
>>> groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]], mindspore.float32)
>>> output = ops.bounding_box_encode(anchor_box, groundtruth_box, means=(0.0, 0.0, 0.0, 0.0),
...                                  stds=(1.0, 1.0, 1.0, 1.0))
>>> print(output)
[[ -1.  0.25  0.  0.40551758]
 [ -1.  0.25  0.  0.40551758]]
tinyms.primitives.broadcast_to(input, shape)[source]

Broadcasts input tensor to a given shape. The dim of input shape must be smaller than or equal to that of target shape. Suppose input shape is \((x_1, x_2, ..., x_m)\), target shape is \((*, y_1, y_2, ..., y_m)\), where \(*\) means any additional dimension. The broadcast rules are as follows:

Compare the value of \(x_m\) and \(y_m\), \(x_{m-1}\) and \(y_{m-1}\), …, \(x_1\) and \(y_1\) consecutively and decide whether these shapes are broadcastable and what the broadcast result is.

If the value pairs at a specific dim are equal, then that value goes right into that dim of output shape. With an input shape \((2, 3)\), target shape \((2, 3)\) , the inferred output shape is \((2, 3)\).

If the value pairs are unequal, there are three cases:

Case 1: If the value of the target shape in the dimension is -1, the value of the output shape in the dimension is the value of the corresponding input shape in the dimension. With an input shape \((3, 3)\), target shape \((-1, 3)\), the output shape is \((3, 3)\).

Case 2: If the value of target shape in the dimension is not -1, but the corresponding value in the input shape is 1, then the corresponding value of the output shape is that of the target shape. With an input shape \((1, 3)\), target shape \((8, 3)\), the output shape is \((8, 3)\).

Case 3: If the corresponding values of the two shapes do not satisfy the above cases, it means that broadcasting from the input shape to the target shape is not supported.

So far we have the last m dims of the output shape; now focus on the first \(*\) dims, where there are two cases:

If the first \(*\) dims of the output shape do not contain -1, then fill the input shape with ones until their lengths are the same, and then refer to Case 2 mentioned above to calculate the output shape. With target shape \((3, 1, 4, 1, 5, 9)\), input shape \((1, 5, 9)\), the filled input shape will be \((1, 1, 1, 1, 5, 9)\) and thus the output shape is \((3, 1, 4, 1, 5, 9)\).

If the first \(*\) dims of the output shape contain -1, it implies that this -1 corresponds to a non-existing dim, so the shapes are not broadcastable. With target shape \((3, -1, 4, 1, 5, 9)\), input shape \((1, 5, 9)\), instead of performing the dim-filling process first, it raises an error directly.

Parameters:
  • input (Tensor) – The input Tensor. Supported types are: float16, float32, int32, int8, uint8, bool.

  • shape (tuple) – The target shape to broadcast. Can be fully specified, or have -1 in one position where it will be substituted by the input tensor’s shape in that position, see example.

Returns:

Tensor, with the given shape and the same data type as input.

Raises:
  • TypeError – If shape is not a tuple.

  • ValueError – If the target and input shapes are incompatible, or if a -1 in the target shape is in an invalid location.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 3)
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>> shape = (-1, 2)
>>> x = Tensor(np.array([[1], [2]]).astype(np.float32))
>>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 1.]
 [2. 2.]]
tinyms.primitives.cartesian_prod(*inputs)[source]

Performs a Cartesian product for a given tensor sequence. The behavior is similar to Python’s itertools.product.

Parameters:

inputs (List[Tensor]) – Tensor sequence.

Returns:

Tensor, a Cartesian product for a given tensor sequence.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([1, 2])
>>> x2 = Tensor([5])
>>> out = ops.cartesian_prod(x1, x2)
>>> print(out.asnumpy())
[[1 5]
 [2 5]]
>>> x1 = Tensor([1, 2, 3, 4])
>>> x2 = Tensor([5, 6, 7])
>>> x3 = Tensor([8, 9, 0, 1, 2])
>>> out = ops.cartesian_prod(x1, x2, x3)
>>> print(len(out))
60
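Since the behavior is documented to mirror Python’s itertools.product, the 60 rows above (4 × 3 × 5 = 60) can be cross-checked in pure Python:

>>> import itertools
>>> print(len(list(itertools.product([1, 2, 3, 4], [5, 6, 7], [8, 9, 0, 1, 2]))))
60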
tinyms.primitives.cat(tensors, axis=0)[source]

Connects input tensors along the given axis.

The input data is a tuple or a list of tensors. These tensors have the same rank \(R\). Set the given axis as \(m\), and \(0 \le m < R\). Set the number of input tensors as \(N\). For the \(i\)-th tensor \(t_i\), it has the shape of \((x_1, x_2, ..., x_{mi}, ..., x_R)\). \(x_{mi}\) is the \(m\)-th dimension of the \(t_i\). Then, the shape of the output tensor is

\[(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\]
Parameters:
  • tensors (Union[tuple, list]) – A tuple or a list of input tensors. Suppose there are two tensors in this tuple or list, namely t1 and t2. To perform concat in the axis 0 direction, except for the \(0\)-th axis, all other dimensions should be equal, that is, \(t1.shape[1] = t2.shape[1], t1.shape[2] = t2.shape[2], ..., t1.shape[R-1] = t2.shape[R-1]\), where \(R\) represents the rank of tensor.

  • axis (int) – The specified axis, whose value is in range \([-R, R)\). Default: 0.

Returns:

Tensor, the shape is \((x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\).

The data type is the same as that of tensors.

Raises:
  • TypeError – If axis is not an int.

  • ValueError – If the tensors in tensors have different ranks.

  • ValueError – If axis not in range \([-R, R)\).

  • RuntimeError – If the shapes of the tensors in tensors differ in any dimension other than axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> input_x2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = ops.cat((input_x1, input_x2))
>>> print(output)
[[0. 1.]
 [2. 1.]
 [0. 1.]
 [2. 1.]]
>>> output = ops.cat((input_x1, input_x2), 1)
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]
tinyms.primitives.cdist(x1, x2, p=2.0)[source]

Computes p-norm distance between each pair of row vectors of two input Tensors.

Parameters:
  • x1 (Tensor) – Input tensor of shape \((B, P, M)\). Letter \(B\) represents 0 or positive int number. When \(B\) is equal to 0, it means this dimension can be ignored, i.e. shape of the tensor is \((P, M)\). The supported dtype is [float32, float64] on GPU, or [float32] on CPU.

  • x2 (Tensor) – Input tensor of shape \((B, R, M)\), has the same dtype as x1.

  • p (float, optional) – P value for the p-norm distance to calculate between each vector pair, P ∈ [0,∞]. Default: 2.0.

Returns:

Tensor, p-norm distance, has the same dtype as x1, its shape is \((B, P, R)\).

Raises:
  • TypeError – If x1 or x2 is not Tensor.

  • TypeError – If dtype of x1 or x2 is not in [float32, float64] on GPU, or is not in [float32] on CPU.

  • TypeError – If p is not a float.

  • ValueError – If p is negative.

  • ValueError – If dimension of x1 is not the same as x2.

  • ValueError – If dimension of x1 or x2 is neither 2 nor 3.

  • ValueError – If the batch shape of x1 is not the same as the shape of x2.

  • ValueError – If the number of columns of x1 is not the same as that of x2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[1.0, 1.0], [2.0, 2.0]]]).astype(np.float32))
>>> y = Tensor(np.array([[[3.0, 3.0], [3.0, 3.0]]]).astype(np.float32))
>>> output = ops.cdist(x, y, 2.0)
>>> print(output)
[[[2.8284273 2.8284273]
  [1.4142137 1.4142137]]]
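A minimal sketch with the Manhattan distance (p=1.0), reusing x and y from above; for the first pair, \(|1-3| + |1-3| = 4\):

>>> output = ops.cdist(x, y, 1.0)
>>> print(output)
[[[4. 4.]
  [2. 2.]]]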
tinyms.primitives.ceil(input)[source]

Rounds a tensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil = \lfloor x_i \rfloor + 1\]
Parameters:

input (Tensor) – The input tensor with a dtype of float16 or float32.

Returns:

Tensor, has the same shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> output = ops.ceil(x)
>>> print(output)
[ 2.  3. -1.]
tinyms.primitives.celu(x, alpha=1.0)[source]

Computes the CeLU (Continuously differentiable Exponential Linear Unit) activation function of the input tensor element-wise. The formula is defined as follows:

\[\text{CeLU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1))\]

For more details, please refer to celu.

Parameters:
  • x (Tensor) – The input of celu with data type of float16 or float32.

  • alpha (float, optional) – The \(\alpha\) value for the Celu formulation. Default: 1.0

Returns:

Tensor, has the same data type and shape as the input.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
>>> output = ops.celu(x, alpha=1.0)
>>> print(output)
[-0.86466473 -0.63212055  1.          2.        ]
tinyms.primitives.channel_shuffle(x, groups)[source]

Divide the channels in a tensor of shape \((*, C, H, W)\) into g groups and rearrange them as \((*, \frac{C}{g}, g, H, W)\), while keeping the original tensor shape.

Parameters:
  • x (Tensor) – Tensor to be divided, it has shape \((*, C, H, W)\), with float16, float32, int8, int16, int32, int64, uint8, uint16, uint32, uint64 data type.

  • groups (int) – Number of groups to divide channels in.

Returns:

A Tensor, has the same type as the x, and has the shape \((*, C, H, W)\).

Raises:
  • TypeError – If data type of x is not one of the following: float16, float32, int8, int16, int32, int64, uint8, uint16, uint32, uint64.

  • TypeError – If dim of x is < 4.

  • TypeError – If groups is not a positive number.

  • ValueError – If channel number of x is not divisible by groups.

Supported Platforms:

Ascend CPU

Examples

>>> group = 2
>>> x = Tensor(np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2).astype(np.int16))
>>> y = mindspore.ops.channel_shuffle(x, group)
>>> print(y)
[[[[ 0  1]
   [ 2  3]]
  [[ 8  9]
   [10 11]]
  [[ 4  5]
   [ 6  7]]
  [[12 13]
   [14 15]]]]
tinyms.primitives.check_valid(bboxes, img_metas)[source]

Checks whether the bounding box is in the image.

bboxes contain several sets of bounding boxes, each represented by two abscissa points \((x0, x1)\) and two ordinate points \((y0, y1)\) . img_metas provides information about the original image, including three parameters \((height, width, ratio)\) , which specify the valid boundary of the image.

When the following conditions are met:

\(x0 >= 0\)

\(y0 >= 0\)

\(x1 <= width * ratio - 1\)

\(y1 <= height * ratio - 1\)

the bounding box is considered to be within the image.

Warning

The bounding box specified by bboxes and the image information specified by img_metas need to be valid, i.e.: \(x0 <= x1\) , \(y0 <= y1\) , and \((height, width, ratio)\) are all positive.

Parameters:
  • bboxes (Tensor) – Bounding boxes tensor with shape \((N, 4)\) . \(N\) indicates the number of bounding boxes, the value 4 indicates four coordinate points \((x0, y0, x1, y1)\) . Data type must be float16 or float32.

  • img_metas (Tensor) – Raw image size information with the format of \((height, width, ratio)\) , specifying the valid boundary \((height * ratio - 1, width * ratio - 1)\) . Data type must be float16 or float32.

Returns:

Tensor, with shape of \((N,)\) and dtype of bool, specifying whether the bounding boxes are in the image. True indicates valid, while False indicates invalid.

Raises:
  • TypeError – If bboxes or img_metas is not a Tensor.

  • TypeError – If dtype of bboxes or img_metas is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> output = ops.check_valid(bboxes, img_metas)
>>> print(output)
[ True False False]
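In the example above, the valid boundary is \(x \le width \times ratio - 1 = 1 \times 3 - 1 = 2\) and \(y \le height \times ratio - 1 = 2 \times 3 - 1 = 5\). Only the first box \((0, 0.545, 1.09, 1.636)\) satisfies all four conditions, hence the output [ True False False].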
tinyms.primitives.choice_with_mask(input_x, count=256, seed=None)[source]

Generates a random sample as index tensor with a mask tensor from a given tensor.

The input_x must be a tensor whose dimension is not less than 1. If its dimension is greater than or equal to 2, the first dimension specifies the number of samples. The returned index tensor denotes the index of the nonzero sample, the mask tensor denotes which elements in the index tensor are valid.

Parameters:
  • input_x (Tensor[bool]) – The input tensor. The input tensor rank must be greater than or equal to 1 and less than or equal to 5.

  • count (int, optional) – Number of items expected to get and the number must be greater than 0. Default: 256.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output shape is 2-D.

  • mask (Tensor) - The output shape is 1-D.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool_))
>>> output_y, output_mask = ops.choice_with_mask(input_x)
>>> result = output_y.shape
>>> print(result)
(256, 2)
>>> result = output_mask.shape
>>> print(result)
(256,)
tinyms.primitives.cholesky(input_x, upper=False)[source]

Computes the Cholesky decomposition of a single symmetric positive-definite matrix or a batch of symmetric positive-definite matrices.

If upper is True, returns an upper-triangular matrix, \(U\), and the decomposition has the form:

\[A = U^TU\]

If upper is False, returns a lower-triangular matrix, \(L\), and the decomposition has the form:

\[A = LL^T\]

where A is the symmetric positive-definite matrix.

Parameters:
  • input_x (Tensor) – Tensor of shape \((*, N, N)\), where \(*\) is zero or more batch dimensions consisting of symmetric positive-definite matrices, with float32 or float64 data type.

  • upper (bool) – If upper is True, returns an upper-triangular matrix. If upper is False, returns a lower-triangular matrix. Default: False.

Returns:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If upper is not a bool.

  • TypeError – If dtype of input_x is not one of: float64, float32.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If input_x is not a square matrix or a batch of square matrices.

  • ValueError – If input_x is not symmetric positive definite.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 1.0], [1.0, 2.0]]), mindspore.float32)
>>> output = ops.cholesky(input_x, upper=False)
>>> print(output)
[[1. 0.]
 [1. 1.]]
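The factor can be verified by reconstructing the input, since \(A = LL^T\); a minimal sketch continuing the example above:

>>> L = output.asnumpy()
>>> print(np.matmul(L, L.T))
[[1. 1.]
 [1. 2.]]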
tinyms.primitives.cholesky_inverse(input_x, upper=False)[source]

Computes the inverse of a symmetric positive-definite matrix from its Cholesky factor.

If upper is True, \(U\) is an upper triangular such that the output tensor is

\[inv = (U^{T}U)^{-1}\]

If upper is False, \(U\) is a lower triangular such that the output tensor is

\[inv = (UU^{T})^{-1}\]

Note

The input must be either an upper-triangular matrix or a lower-triangular matrix from Cholesky decomposition.

Parameters:
  • input_x (Tensor) – The input tensor with a rank of 2. Supported dtypes: float32, float64.

  • upper (bool) – If upper is True, return an upper triangular matrix. If upper is False, return a lower-triangular matrix. Default: False.

Returns:

Tensor, has the same shape and dtype as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not one of: float32, float64.

  • ValueError – If the dimension of input_x is not equal to 2.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[2,0,0], [4,1,0], [-1,1,2]]), mindspore.float32)
>>> output = ops.cholesky_inverse(input_x)
>>> print(output)
[[ 5.8125 -2.625   0.625 ]
 [-2.625   1.25   -0.25  ]
 [ 0.625  -0.25    0.25  ]]
tinyms.primitives.chunk(input, chunks, axis=0)[source]

Cuts the input Tensor into chunks sub-tensors along the specified axis.

Note

This function may return fewer than the specified number of chunks (see the sketch after the examples below).

Parameters:
  • input (Tensor) – A Tensor to be cut.

  • chunks (int) – Number of sub-tensors to cut.

  • axis (int, optional) – The axis along which to split the input. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If argument chunks is not an int.

  • TypeError – If argument axis is not int.

  • ValueError – If argument axis is out of range of \([-input.ndim, input.ndim)\) .

  • ValueError – If argument chunks is not a positive number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(9).astype("float32")
>>> output = ops.chunk(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
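A sketch of the Note above: requesting 4 chunks of a 9-element tensor yields only 3 chunks, assuming the usual ceil-division split in which each chunk holds \(\lceil 9/4 \rceil = 3\) elements:

>>> output = ops.chunk(Tensor(np.arange(9).astype("float32")), 4)
>>> print(len(output))
3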
tinyms.primitives.clamp(input, min=None, max=None)[source]

Clamps tensor values between the specified minimum value and maximum value.

Limits the value of \(input\) to a range, whose lower limit is min and upper limit is max .

\[\begin{split}out_i= \left\{ \begin{array}{align} max & \text{ if } x_i\ge max \\ x_i & \text{ if } min \lt x_i \lt max \\ min & \text{ if } x_i \le min \\ \end{array}\right.\end{split}\]

Note

  • min and max cannot be None at the same time;

  • When min is None and max is not None, the elements in Tensor larger than max will become max;

  • When min is not None and max is None, the elements in Tensor smaller than min will become min;

  • If min is greater than max, the value of all elements in Tensor will be set to max;

  • The data type of input, min and max should support implicit type conversion and cannot be bool type.

Parameters:
  • input (Union(Tensor, list[Tensor], tuple[Tensor])) – Input data, which type is Tensor or a list or tuple of Tensor. Tensors of arbitrary dimensions are supported.

  • min (Union(Tensor, float, int), optional) – The minimum value. Default: None.

  • max (Union(Tensor, float, int), optional) – The maximum value. Default: None.

Returns:

Union(Tensor, tuple[Tensor], list[Tensor]), a clipped Tensor or a tuple or a list of clipped Tensor. The data type and shape are the same as input.

Raises:
  • ValueError – If both min and max are None.

  • TypeError – If the type of input is not in Tensor or list[Tensor] or tuple[Tensor].

  • TypeError – If the type of min is not in None, Tensor, float or int.

  • TypeError – If the type of max is not in None, Tensor, float or int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: the data type of x is Tensor
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> min_value = Tensor(5, mindspore.float32)
>>> max_value = Tensor(20, mindspore.float32)
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clamp(x, min_value, max_value)
>>> print(output)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
>>> # case 2: the data type of x is list[Tensor]
>>> min_value = 5
>>> max_value = 20
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> y = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clamp([x,y], min_value, max_value)
>>> for out in output:
...     print(out)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
tinyms.primitives.clip(x, min=None, max=None)[source]

Alias for mindspore.ops.clamp().

Supported Platforms:

Ascend GPU CPU
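
Examples

A minimal sketch, since clip simply forwards to mindspore.ops.clamp() (values mirror the clamp example above):

>>> x = Tensor(np.array([1., 25., 5., 7.]), mindspore.float32)
>>> print(ops.clip(x, 5, 20))
[ 5. 20.  5.  7.]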

tinyms.primitives.clip_by_global_norm(x, clip_norm=1.0, use_norm=None)[source]

Clips tensor values by the ratio of the sum of their norms.

Note

  • Input x should be a tuple or list of tensors. Otherwise, it will raise an error.

  • On the SEMI_AUTO_PARALLEL mode or AUTO_PARALLEL mode, if the input x is the gradient, the gradient norm values on all devices will be automatically aggregated by allreduce inserted after the local square sum of the gradients.

Parameters:
  • x (Union(tuple[Tensor], list[Tensor])) – Input data to clip.

  • clip_norm (Union(float, int)) – The clipping ratio, it should be greater than 0. Default: 1.0

  • use_norm (None) – The global norm. Default: None. Currently only none is supported.

Returns:

tuple[Tensor], a clipped Tensor. It has the same data type as x and each Tensor in the output tuple is the same as the original input shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x1 = np.array([[2., 3.], [1., 2.]]).astype(np.float32)
>>> x2 = np.array([[1., 4.], [3., 1.]]).astype(np.float32)
>>> input_x = (Tensor(x1), Tensor(x2))
>>> out = ops.clip_by_global_norm(input_x, 1.0)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.98142403e-01,  4.47213590e-01],
 [ 1.49071202e-01,  2.98142403e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.49071202e-01,  5.96284807e-01],
 [ 4.47213590e-01,  1.49071202e-01]]))
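For the example above, the global norm is \(\sqrt{2^2 + 3^2 + 1^2 + 2^2 + 1^2 + 4^2 + 3^2 + 1^2} = \sqrt{45} \approx 6.708\), so every element is scaled by \(clip\_norm / 6.708 \approx 0.149071\); a minimal sketch of that check, reusing x1 from above:

>>> print(np.round(x1 * (1.0 / np.sqrt(45.0)), 6))
[[0.298142 0.447214]
 [0.149071 0.298142]]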
tinyms.primitives.clip_by_value(x, clip_value_min=None, clip_value_max=None)[source]

Clips tensor values to a specified min and max.

Limits the value of \(x\) to a range, whose lower limit is clip_value_min and upper limit is clip_value_max .

\[\begin{split}out_i= \left\{ \begin{array}{align} clip\_value\_max & \text{ if } x_i\ge clip\_value\_max \\ x_i & \text{ if } clip\_value\_min \lt x_i \lt clip\_value\_max \\ clip\_value\_min & \text{ if } x_i \le clip\_value\_min \\ \end{array}\right.\end{split}\]

Note

  • clip_value_min and clip_value_max cannot be None at the same time;

  • When clip_value_min is None and clip_value_max is not None, the elements in Tensor larger than clip_value_max will become clip_value_max;

  • When clip_value_min is not None and clip_value_max is None, the elements in Tensor smaller than clip_value_min will become clip_value_min;

  • If clip_value_min is greater than clip_value_max, the value of all elements in Tensor will be set to clip_value_max;

  • The data type of x, clip_value_min and clip_value_max should support implicit type conversion and cannot be bool type.

Parameters:
  • x (Union(Tensor, list[Tensor], tuple[Tensor])) – Input data, which type is Tensor or a list or tuple of Tensor. Tensors of arbitrary dimensions are supported.

  • clip_value_min (Union(Tensor, float, int)) – The minimum value. Default: None.

  • clip_value_max (Union(Tensor, float, int)) – The maximum value. Default: None.

Returns:

(Union(Tensor, tuple[Tensor], list[Tensor])), a clipped Tensor or a tuple or a list of clipped Tensor. The data type and shape are the same as x.

Raises:
  • ValueError – If both clip_value_min and clip_value_max are None.

  • TypeError – If the type of x is not in Tensor or list[Tensor] or tuple[Tensor].

  • TypeError – If the type of clip_value_min is not in None, Tensor, float or int.

  • TypeError – If the type of clip_value_max is not in None, Tensor, float or int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: the data type of x is Tensor
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> min_value = Tensor(5, mindspore.float32)
>>> max_value = Tensor(20, mindspore.float32)
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clip_by_value(x, min_value, max_value)
>>> print(output)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
>>> # case 2: the data type of x is list[Tensor]
>>> min_value = 5
>>> max_value = 20
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> y = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clip_by_value([x,y], min_value, max_value)
>>> for out in output:
...     print(out)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
tinyms.primitives.coalesce(x_indices: mindspore.common.tensor.Tensor, x_values: mindspore.common.tensor.Tensor, x_shape: mindspore.common.tensor.Tensor) → Tuple[mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor][source]

Returns the coalesced sparse tensor of the input.

Parameters:
  • x_indices (Tensor) – A 2-D Tensor representing the indices of the nonzero elements of the sparse tensor. Supported data type is int64. Its elements should be non-negative. The shape is \((y, x)\).

  • x_values (Tensor) – A 1-D Tensor representing the values corresponding to the indices in x_indices. Supported data types are float16 and float32. The shape is \((x,)\).

  • x_shape (Tensor) – A 1-D Tensor specifying the shape of the sparse tensor. Supported data type is int64. The shape is \((y,)\).

Returns:

  • y_indices (Tensor) - A 2-D Tensor, represents the indices of the nonzero elements of the sparse tensor. Data type is int64. Its elements are non-negative. The shape is \((y, z)\). z represents the number of different indices in x_indices.

  • y_values (Tensor) - A 1-D Tensor, represents the values corresponding to the indices in y_indices. Data type is the same as that of x_values. The shape is \((z,)\).

  • y_shape (Tensor) - A 1-D Tensor, specifies the shape of the sparse tensor. Data type is int64. The shape is \((y,)\).

Raises:
  • TypeError – If the data type of x_values is neither float32 nor float16.

  • TypeError – If any of the data types of x_indices and x_shape is not int64.

  • ValueError – If any of x_values and x_shape is not a 1-D tensor.

  • ValueError – If x_indices is not a 2-D tensor.

  • ValueError – If sizes of second dimension of x_indices and first dimension of x_values are not the same.

  • ValueError – If sizes of first dimension of x_indices and first dimension of x_shape are not the same.

  • ValueError – If any of the values of elements of x_indices is negative.

  • ValueError – If any of the values of elements of x_indices exceed the limit set by x_shape.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x_indices = Tensor([[0, 0, 1], [1, 1, 2]], dtype=ms.int64)
>>> x_values = Tensor([1, 5, 4], dtype=ms.float32)
>>> x_shape = Tensor([3, 3], dtype=ms.int64)
>>> y_indices, y_values, y_shape = ops.coalesce(x_indices, x_values, x_shape)
>>> print(y_indices)
[[0 1]
 [1 2]]
>>> print(y_values)
[6. 4.]
>>> print(y_shape)
[3 3]
tinyms.primitives.col2im(input_x, output_size, kernel_size, dilation, padding_value, stride)[source]

Combines an array of sliding local blocks into a large containing tensor.

Parameters:
  • input_x (Tensor) – 4D tensor with data type float16 or float32.

  • output_size (Tensor) – 1D tensor with 2 elements of data type int.

  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two int for height and width. If type is int, it means that height equal with width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]]) – The size of the dilation, should be two int for height and width. If type is int, it means that height equal with width. Default: 1.

  • padding_value (Union[int, tuple[int], list[int]]) – The size of the padding, should be two int for height and width. If type is int, it means that height equal with width. Default: 1.

  • stride (Union[int, tuple[int], list[int]]) – The size of the stride, should be two int for height and width. If type is int, it means that height equal with width. Default: 0.

Returns:

A 4D Tensor, with same type as ‘input_x’.

Raises:
  • TypeError – If the data type of kernel_size, dilation, padding_value or stride is not in Union[int, tuple[int], list[int]].

  • ValueError – If the value of kernel_size, dilation or stride is not greater than zero, or if its number of elements is greater than 2.

  • ValueError – If padding_value is less than zero or its number of elements is greater than 2.

  • ValueError – If input_x.shape[2] != kernel_size[0] * kernel_size[1].

  • ValueError – If input_x.shape[3] does not match the calculated number of sliding blocks.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)
>>> output_size = Tensor(input_data=[8, 8], dtype=mstype.int32)
>>> output = ops.col2im(x, output_size, [2, 2], [2, 2], [2, 2], [2, 2])
>>> print(output.shape)
(16, 16, 8, 8)
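The 25 columns in the example are consistent with the usual im2col/col2im arithmetic (an assumption mirroring the standard unfold convention): per spatial dimension the number of sliding blocks is

\[L = \left \lfloor \frac{8 + 2 \times 2 - 2 \times (2 - 1) - 1}{2} \right \rfloor + 1 = 5,\]

giving \(5 \times 5 = 25\) blocks in total.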
tinyms.primitives.column_stack(tensors)[source]

Stacks 1-D tensors as columns into a 2-D tensor. 2-D tensors are stacked as-is, like ops.hstack.

Parameters:

tensors (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. All of them must have the same shape except the axis to be concatenated.

Returns:

2-D Tensor, formed by stacking the given tensors.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> x1 = Tensor([1, 1, 1])
>>> x2 = Tensor([2, 2, 2])
>>> output = ops.column_stack((x1, x2))
>>> print(output)
[[1 2]
 [1 2]
 [1 2]]
tinyms.primitives.combinations(x, r=2, with_replacement=False)[source]

Returns all r-length subsequences of input Tensor.

When with_replacement is set to False, it works similar to Python’s itertools.combinations, and when with_replacement is set to True, it behaves like itertools.combinations_with_replacement.

Parameters:
  • x (Tensor) – A one-dimensional tensor.

  • r (int, optional) – Number of elements to perform combination. Default: 2.

  • with_replacement (bool, optional) – Allow duplication or not. Default: False.

Returns:

Tensor, contains all possible combinations of elements sampled from input Tensor.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 3, -1, 0, 4])
>>> output = ops.combinations(x)
>>> print(output.asnumpy())
[[ 1  3]
 [ 1 -1]
 [ 1  0]
 [ 1  4]
 [ 3 -1]
 [ 3  0]
 [ 3  4]
 [-1  0]
 [-1  4]
 [ 0  4]]
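With with_replacement=True the behavior follows itertools.combinations_with_replacement, so for the same 5-element input and r=2 there are \(\binom{5+2-1}{2} = 15\) rows; a minimal sketch reusing x from above:

>>> output = ops.combinations(x, r=2, with_replacement=True)
>>> print(len(output))
15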
tinyms.primitives.concat(tensors, axis=0)[source]

Alias for mindspore.ops.cat()
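
Examples

A minimal sketch, since concat simply forwards to mindspore.ops.cat():

>>> x = Tensor(np.array([[0., 1.], [2., 1.]]), mindspore.float32)
>>> output = ops.concat((x, x), 1)
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]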

tinyms.primitives.conj(input)[source]

Returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form a + bj, where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form a - bj.

If input is real, it is returned unchanged.

Parameters:

input (Tensor) – The input tensor to compute to. Must have numeric type.

Returns:

Tensor, has the same dtype as the input.

Raises:
  • TypeError – If the dtype of input is not a numeric type.

  • TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3 + 0.4j), mindspore.complex64)
>>> output = ops.conj(x)
>>> print(output)
(1.3-0.4j)
tinyms.primitives.conv1d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 1D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is channel number, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{j}, X_i) + b_j,\]

where \(ccor\) is the cross-correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{j}\) is a slice of kernel, and it has shape \((\text{kernel_size})\), where \(\text{kernel_size}\) is the width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\), where groups is the group number to split the input in the channel dimension.

If the pad_mode is set to be “valid”, the output width will be \(\left \lfloor{ 1 + \frac{W_{in} + \text{padding[0]} - \text{kernel_size} - (\text{kernel_size} - 1) \times(\text{dilation} - 1)} {\text { stride }}} \right \rfloor\).

where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input. For the output width under other pad_mode values, please refer to the formula on mindspore.nn.Conv1d.

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition. More detailed introduction can be found here: ConvNets .

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when groups>1, condition C_{in} = C_{out} = groups must be satisfied.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C_{in}, W_{in})\).

  • weight (Tensor) – Tensor of shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\), then the size of kernel is \((\text{kernel_size})\).

  • bias (Tensor) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.

  • stride (Union(int, tuple[int]), optional) – The distance of kernel moving, an int number or a tuple of one int that represents width of movement. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The width of the output will be equal to that of the input x divided by stride. The padding will be evenly distributed to the left and right where possible. Otherwise, the last extra padding will be applied on the right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The possible largest width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int]), optional) – Implicit paddings on both sides of input, meaning the paddings of left and right are the same, equal to padding or padding[0] when padding is a tuple of 1 integer. Default: 0.

  • dilation (Union(int, tuple[int]), optional) – Gaps between kernel elements. The data type is int or a tuple of 1 integer. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the width of input. Default: 1.

  • groups (int, optional) – Splits input into groups. Default: 1.

Returns:

Tensor, the value that applied 1D convolution. The shape is \((N, C_{out}, W_{out})\).

Raises:
  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If groups is not an int.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \((C_{out})\) .

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 1.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(64).reshape((4, 4, 4)), mindspore.float32)
>>> weight = Tensor(np.arange(8).reshape((2, 2, 2)), mindspore.float32)
>>> bias = Tensor([-0.12345, 2.7683], mindspore.float32)
>>> output = ops.conv1d(x, weight, pad_mode='pad', padding=(1,), bias=bias, groups=2)
>>> print(output.shape)
(4, 2, 5)
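The output width in the example can be checked with the usual convolution arithmetic (a worked sketch mirroring the mindspore.nn.Conv1d formula referenced above): with \(W_{in} = 4\), padding 1 on each side, kernel_size 2, stride 1 and dilation 1,

\[W_{out} = \left \lfloor \frac{4 + 1 + 1 - 1 \times (2 - 1) - 1}{1} + 1 \right \rfloor = 5.\]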
tinyms.primitives.conv2d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 2D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is height, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross-correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel, and it has shape \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{ kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where groups is the group number to split the input in the channel dimension.

If the pad_mode is set to be “valid”, the output height and width will be \(\left \lfloor{ 1 + \frac{H_{in} + \text{padding[0]} + \text{padding[1]} - \text{kernel_size[0]} - (\text{kernel_size[0]} - 1) \times(\text{dilation[0]} - 1)} {\text { stride[0] }}} \right \rfloor\) and

\(\left \lfloor{1 + \frac{W_{in} + \text{padding[2]} + \text{padding[3]} - \text{kernel_size[1]} - (\text{kernel_size[1]} - 1) \times(\text{dilation[1]} - 1)} {\text { stride[1] }}} \right \rfloor\) respectively.

where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input. For the output height and width under other pad_mode values, please refer to the formula on mindspore.nn.Conv2d.

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition. More detailed introduction can be found here: ConvNets .

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when groups>1, condition C_{in} = C_{out} = groups must be satisfied.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) – Tensor of shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), then the size of kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\).

  • bias (Tensor) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.

  • stride (Union(int, tuple[int]), optional) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be equal to those of the input x divided by stride. The padding will be evenly distributed to the top and bottom, left and right where possible. Otherwise, the last extra padding will be applied to the bottom and the right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int]), optional) – Implicit paddings on both sides of the input x. If padding is one integer, the paddings of top, bottom, left and right are the same, equal to padding. If padding is a tuple with two integers, the padding of top and bottom is padding[0], and the padding of left and right is padding[1]. Default: 0.

  • dilation (Union(int, tuple[int]), optional) – Gaps between kernel elements.The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • groups (int, optional) – Splits input into groups. Default: 1.

Returns:

Tensor, the value that applied 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If groups is not an int.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \(C_{out}\) .

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 2.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> output = ops.conv2d(x, weight)
>>> print(output.shape)
(10, 32, 30, 30)
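The (30, 30) spatial size follows the ‘valid’ formula quoted above: with \(H_{in} = W_{in} = 32\), kernel_size 3, stride 1 and dilation 1,

\[H_{out} = W_{out} = \left \lfloor 1 + \frac{32 + 0 + 0 - 3 - (3 - 1) \times (1 - 1)}{1} \right \rfloor = 30.\]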
tinyms.primitives.conv3d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 3D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\) and output shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, and \(H, W\) are the feature height and width respectively. The output value of a layer is calculated as:

\[\operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right), \operatorname{input}\left(N_{i}, k\right))\]

where \(k\) is kernel, \(ccor\) is the cross-correlation , \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the jth channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{X}\) is the input tensor. The shape of full convolution kernel is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where groups is the number of groups to split input in the channel dimension.

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition .

Note

  1. On Ascend platform, \(groups = 1\) must be satisfied.

  2. On Ascend dilation on depth only supports the case of 1.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\).

  • weight (Tensor) – Set size of kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), then the shape is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\).

  • bias (Tensor) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, it can be an int number that represents the depth, height and width of movement or a tuple of three int numbers that represent depth, height and width movement respectively. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to those of the input x divided by stride. The padding will be evenly distributed in the head and tail, top and bottom, left and right directions where possible. Otherwise, the last extra padding will be applied to the tail, bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • padding (Union[int, tuple[int]], optional) – The pad value to be filled. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of 3 integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[0], pad[1], pad[1], pad[2] and pad[2] correspondingly. Default: 0.

  • dilation (Union[int, tuple[int]], optional) – The data type is int or a tuple of 3 integers \((dilation_d, dilation_h, dilation_w)\). Currently, dilation on depth only supports the case of 1 on Ascend backend. Specifies the dilation rate to use for dilated convolution. If set \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. The value ranges for the depth, height, and width dimensions are [1, D], [1, H], and [1, W], respectively. Default: 1.

  • groups (int, optional) – The number of groups into which the filter is divided. in_channels and out_channels must be divisible by groups. Default: 1.

Returns:

Tensor, the value that applied 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in}}{\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) } {\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) } {\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) } {\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} + 2 \times \text{padding[0]} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} + 2 \times \text{padding[1]} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + 2 \times \text{padding[2]} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1 }{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
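
As a cross-check of the formulas above, the following editor-added sketch evaluates the ‘valid’ and ‘pad’ cases in plain Python, using the sizes from the example below; the helper conv_out is hypothetical and not part of this API.

>>> import math
>>> def conv_out(size, pad_total, dilation, kernel, stride):
...     # floor((size + total padding - dilation*(kernel - 1) - 1) / stride + 1)
...     return math.floor((size + pad_total - dilation * (kernel - 1) - 1) / stride + 1)
>>> conv_out(10, 2 + 2, 1, 4, 1)  # 'pad' depth with padding=(2, 1, 1): matches D_out = 11
11
>>> conv_out(32, 1 + 1, 1, 3, 1)  # 'pad' height/width: matches 32
32
>>> conv_out(10, 0, 1, 4, 1)      # 'valid' depth: matches D_out = 7
7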

Raises:
  • TypeError – If out_channel or groups is not an int.

  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \(C_{out}\).

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 3.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> output = ops.conv3d(x, weight, pad_mode="same", padding=0, stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 10, 32, 32)
>>> output = ops.conv3d(x, weight, pad_mode="valid", padding=0, stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 7, 30, 30)
>>> output = ops.conv3d(x, weight, pad_mode="pad", padding=(2, 1, 1), stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 11, 32, 32)
tinyms.primitives.conv3d_transpose(inputs, weight, pad_mode='valid', padding=0, stride=1, dilation=1, group=1, output_padding=0)[source]

Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution).

Parameters:
  • inputs (Tensor) – The gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{in}, D_{out}, H_{out}, W_{out})\). Currently the dout data type only supports float16 and float32.

  • weight (Tensor) – Set size of kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{in}, C_{out}//group, K_d, K_h, K_w)\). Where \(group\) is the Args parameter, \(//\) is the symbol for integer division. Currently weight data type only supports float16 and float32.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to the input x divided by stride, rounded up. The padding will be evenly distributed in the head and tail, top and bottom, left and right directions when possible. Otherwise, the last extra padding will be calculated from the tail, bottom and the right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding and output_padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The amount given by padding will be added to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int])) – The padding value to be filled. Default: 0. If padding is an integer, the paddings of head, tail, top, bottom, left and right are the same, all equal to padding. If padding is a tuple of six integers, the paddings of head, tail, top, bottom, left and right equal padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] correspondingly.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – Specifies the space to use between kernel elements. Default: 1.

  • group (int) – Splits input into groups. Default: 1. Only 1 is currently supported.

  • output_padding (Union(int, tuple[int])) – Add extra size to each dimension of the output. Default: 0.

Outputs:

Tensor, the gradients with respect to the input of the 3D convolution. Tensor of shape \((N, C_{out}//group, D_{out}, H_{out}, W_{out})\), where \(group\) is the Args parameter.

Supported Platforms:

Ascend GPU CPU

Raises:
  • TypeError – If group is not an int.

  • TypeError – If stride, padding, dilation or output_padding is neither an int nor a tuple.

  • ValueError – If the rank of inputs, weight is not equal to 5.

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If inputs[1], weight[1] or weight[2:5], i.e. in_channel, out_channel or kernel_size, is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

  • TypeError – If data type of dout and weight is neither float16 nor float32.

Examples

>>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([16, 3, 4, 6, 2]), mindspore.float16)
>>> output = conv3d_transpose(dout, weight)
>>> print(output.shape)
(32, 3, 13, 37, 33)
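
The shape printed above can be reproduced with the standard transposed-convolution size arithmetic; this formula is an assumption of the editor-added sketch below (it is not stated by this document), and the helper conv_transpose_out is hypothetical.

>>> def conv_transpose_out(size, kernel, stride=1, dilation=1, pad_total=0, output_padding=0):
...     # assumed standard formula: (size - 1)*stride - padding + dilation*(kernel - 1) + output_padding + 1
...     return (size - 1) * stride - pad_total + dilation * (kernel - 1) + output_padding + 1
>>> [conv_transpose_out(s, k) for s, k in zip((10, 32, 32), (4, 6, 2))]
[13, 37, 33]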
tinyms.primitives.coo_abs(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the absolute value of a COOTensor element-wise.

\[out_i = |x_i|\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_abs(x)
>>> print(output.values)
[1. 2.]
tinyms.primitives.coo_acos(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes arccosine of input coo_tensors element-wise.

\[out_i = cos^{-1}(x_i)\]
Parameters:

x (COOTensor) – Input COOTensor.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_acos(x)
>>> print(output.values)
[3.1415927       nan]
tinyms.primitives.coo_acosh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]

Warning

Given an input COOTensor x, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf].

Parameters:

x (COOTensor) – The input COOTensor of inverse hyperbolic cosine function.

Returns:

COOTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_acosh(x)
>>> print(output.values)
[     nan 1.316958]
tinyms.primitives.coo_add(x1: mindspore.common.sparse_tensor.COOTensor, x2: mindspore.common.sparse_tensor.COOTensor, thresh: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes the sum of x1(COOTensor) and x2(COOTensor), and return a new COOTensor based on the computed result and thresh.

Parameters:
  • x1 (COOTensor) – the first COOTensor to sum.

  • x2 (COOTensor) – the second COOTensor to sum.

  • thresh (Tensor) – A 0-D Tensor, represents the magnitude threshold that determines if an output value/index pair takes place. Its dtype should match that of the values if they are real. If an output value’s magnitude is less than thresh, it will vanish.

Returns:

A COOTensor, the result of sum.

Raises:
  • ValueError – If any input(x1/x2)’s indices’s dim is not equal to 2.

  • ValueError – If any input(x1/x2)’s values’s dim is not equal to 1.

  • ValueError – If any input(x1/x2)’s shape’s dim is not equal to 1.

  • ValueError – If thresh’s dim is not equal to 0.

  • TypeError – If any input(x1/x2)’s indices’s type is not equal to int64.

  • TypeError – If any input(x1/x2)’s shape’s type is not equal to int64.

  • ValueError – If any input(x1/x2)’s indices’s length is not equal to its values’s length.

  • TypeError – If any input(x1/x2)’s values’s type is not equal to any of (int8/int16/int32/int64/float32/float64/complex64/complex128).

  • TypeError – If thresh’s type is not equal to any of (int8/int16/int32/int64/float32/float64).

  • TypeError – If x1’s indices’s type is not equal to x2’s indices’s type.

  • TypeError – If x1’s values’s type is not equal to x2’s values’s type.

  • TypeError – If x1’s shape’s type is not equal to x2’s shape’s type.

  • TypeError – If (x1/x2)’s value’s type is not matched with thresh’s type.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, COOTensor
>>> from mindspore import dtype as mstype
>>> from mindspore import context
>>> from mindspore import ops
>>> indics0 = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values0 = Tensor([1, 2], dtype=mstype.int32)
>>> shape0 = (3, 4)
>>> input0 = COOTensor(indics0, values0, shape0)
>>> indics1 = Tensor([[0, 0], [1, 1]], dtype=mstype.int64)
>>> values1 = Tensor([3, 4], dtype=mstype.int32)
>>> shape1 = (3, 4)
>>> input1 = COOTensor(indics1, values1, shape1)
>>> thres = Tensor(0, dtype=mstype.int32)
>>> out = ops.coo_add(input0, input1, thres)
>>> print(out)
COOTensor(shape=[3, 4], dtype=Int32, indices=Tensor(shape=[4, 2], dtype=Int64, value=
[[0 0]
 [0 1]
 [1 1]
 [1 2]]), values=Tensor(shape=[4], dtype=Int32, value=[3 1 4 2]))
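
A minimal editor-added sketch of the semantics in plain Python (coo_add_ref is a hypothetical helper, not part of this API): values at matching coordinates are summed, and an output entry survives only if its magnitude is not below thresh. It reproduces the indices and values printed above.

>>> def coo_add_ref(entries0, entries1, thresh):
...     out = dict(entries0)
...     for idx, v in entries1.items():
...         out[idx] = out.get(idx, 0) + v
...     # an output value/index pair vanishes if its magnitude is below thresh
...     return {idx: v for idx, v in sorted(out.items()) if abs(v) >= thresh}
>>> coo_add_ref({(0, 1): 1, (1, 2): 2}, {(0, 0): 3, (1, 1): 4}, 0)
{(0, 0): 3, (0, 1): 1, (1, 1): 4, (1, 2): 2}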
tinyms.primitives.coo_asin(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes arcsine of input coo_tensors element-wise.

\[out_i = sin^{-1}(x_i)\]
Parameters:

x (COOTensor) – Input COOTensor. The shape of COOTensor is \((N,*)\), where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32, float64, complex64, complex128.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32, float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_asin(x)
>>> print(output.values)
[-1.5707964        nan]
tinyms.primitives.coo_asinh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(input_i)\]
Parameters:

x (COOTensor) – The input COOTensor of inverse hyperbolic sine function.

Returns:

COOTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_asinh(x)
>>> print(output.values)
[-0.8813736  1.4436355]
tinyms.primitives.coo_atan(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = tan^{-1}(x_i)\]
Parameters:

x (COOTensor) – The data type should be one of the following types: float16, float32.

Returns:

A COOTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_atan(x)
>>> print(output.values)
[-0.7853982  1.1071488]
tinyms.primitives.coo_atanh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = tanh^{-1}(x_{i})\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

x (COOTensor) – Input COOTensor. The data type should be one of the following types: float16, float32.

Returns:

A COOTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_atanh(x)
>>> print(output.values)
[-inf  nan]
tinyms.primitives.coo_ceil(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Rounds a COOTensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of float16 or float32.

Returns:

COOTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_ceil(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.coo_concat(sp_input, concat_dim=0)[source]

Concatenates the input SparseTensors (COO format) along the specified dimension.

Warning

This is an experimental API that is subject to change or deletion. Only supported on CPU now.

Parameters:
  • sp_input (Union[list(COOTensor), tuple(COOTensor)]) – the COOTensors to be concatenated.

  • concat_dim (scalar) – the dimension along which to concatenate. The value must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor. Default is 0.

Returns:

  • output (COOTensor) - the result of concatenating the input SparseTensors along the specified dimension. The output shape equals the input shape on every dimension except concat_dim, where it is the sum of the inputs’ sizes along concat_dim.

Raises:
  • ValueError – If only one sparse tensor is given as input.

  • ValueError – If the input COOTensor shape dim > 3. The COOTensor shape dim size must be 2 for now.

Supported Platforms:

CPU

Examples

>>> indices0 = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values0 = Tensor([1, 2], dtype=mstype.int32)
>>> shape0 = (3, 4)
>>> input0 = COOTensor(indices0, values0, shape0)
>>> indices1 = Tensor([[0, 0], [1, 1]], dtype=mstype.int64)
>>> values1 = Tensor([3, 4], dtype=mstype.int32)
>>> shape1 = (3, 4)
>>> input1 = COOTensor(indices1, values1, shape1)
>>> concat_dim = 1
>>> out = ops.coo_concat((input0, input1), concat_dim)
>>> print(out)
COOTensor(shape=[3, 8], dtype=Int32, indices=Tensor(shape=[4, 2], dtype=Int64, value=
[[0 1]
 [0 4]
 [1 2]
 [1 5]]), values=Tensor(shape=[4], dtype=Int32, value=[1 3 2 4]))
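
A minimal editor-added sketch of the index bookkeeping (coo_concat_ref is a hypothetical helper): along concat_dim, the indices of each later input are shifted by the accumulated extent of the earlier inputs, which reproduces the indices and values printed above.

>>> def coo_concat_ref(tensors, concat_dim):
...     entries, offset = [], 0
...     for indices, values, shape in tensors:
...         for idx, v in zip(indices, values):
...             idx = list(idx)
...             idx[concat_dim] += offset  # shift by the extent of earlier inputs
...             entries.append((tuple(idx), v))
...         offset += shape[concat_dim]
...     return sorted(entries)
>>> coo_concat_ref([([(0, 1), (1, 2)], [1, 2], (3, 4)),
...                 ([(0, 0), (1, 1)], [3, 4], (3, 4))], concat_dim=1)
[((0, 1), 1), ((0, 4), 3), ((1, 2), 2), ((1, 5), 4)]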
tinyms.primitives.coo_cos(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes cosine of input element-wise.

\[out_i = cos(x_i)\]

Warning

Using float64 may cause a loss of precision.

Parameters:

x (COOTensor) – Input COOTensor.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not one of float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_cos(x)
>>> print(output.values)
[ 0.5403023  -0.41614684]
tinyms.primitives.coo_cosh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = \cosh(x_i)\]
Parameters:

x (COOTensor) – The input COOTensor of hyperbolic cosine function, its data type must be float16, float32, float64, complex64 or complex128.

Returns:

COOTensor, has the same shape as x.

Raises:
  • TypeError – If the dtype of x is not one of the following types: float16, float32, float64, complex64, complex128.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_cosh(x)
>>> print(output.values)
[1.5430807 3.7621956]
tinyms.primitives.coo_exp(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the element-wise exponential of a COOTensor.

\[out_i = e^{x_i}\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_exp(x)
>>> print(output.values)
[0.36787948 7.3890557 ]
tinyms.primitives.coo_expm1(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the exponential of a COOTensor minus 1, element-wise.

\[out_i = e^{x_i} - 1\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of float16 or float32.

Returns:

COOTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_expm1(x)
>>> print(output.values)
[-0.63212055  6.389056  ]
tinyms.primitives.coo_floor(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Rounds a COOTensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Parameters:

x (COOTensor) – The input COOTensor, its data type must be float16, float32 or float64.

Returns:

COOTensor, has the same shape as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_floor(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.coo_inv(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes Reciprocal of input COOTensor element-wise.

\[out_i = \frac{1}{x_{i} }\]
Parameters:

x (COOTensor) – Input COOTensor. Must be one of the following types: float16, float32 or int32.

Returns:

COOTensor, has the same type and shape as the input x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_inv(x)
>>> print(output.values)
[-1.   0.5]
tinyms.primitives.coo_isfinite(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True\ \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as the input, and the dtype is bool.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_isfinite(x)
>>> print(output.values)
[ True  True]
tinyms.primitives.coo_isinf(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Inf},\ \ True \\ & \text{ if } x_{i} \ne \text{Inf},\ \ False \end{cases}\end{split}\]

where \(Inf\) means positive infinity or negative infinity.

Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as the input, and the dtype is bool.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_isinf(x)
>>> print(output.values)
[False False]
tinyms.primitives.coo_isnan(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Nan} \\ & \ False,\ \text{ if } x_{i} \ne \text{Nan} \end{cases}\end{split}\]

where \(Nan\) means not a number.

Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as the input, and the dtype is bool.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_isnan(x)
>>> print(output.values)
[False False]
tinyms.primitives.coo_log(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the natural logarithm of a COOTensor element-wise.

\[y_i = log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

x (COOTensor) – The value must be greater than 0.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32 or float64 on GPU and CPU.

  • TypeError – If dtype of x is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_log(x)
>>> print(output.values)
[       nan 0.69314575]
tinyms.primitives.coo_log1p(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the natural logarithm of one plus the input COOTensor element-wise.

\[out_i = {log_e}(x_i + 1)\]
Parameters:

x (COOTensor) – The input COOTensor, should have dtype of float16 or float32 and its value should be greater than -1.

Returns:

COOTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_log1p(x)
>>> print(output.values)
[     -inf 1.0986123]
tinyms.primitives.coo_neg(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns a COOTensor with the negative values of the input COOTensor element-wise.

\[out_{i} = - x_{i}\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of Number.

Returns:

COOTensor, has the same shape and dtype as input.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_neg(x)
>>> print(output.values)
[ 1. -2.]
tinyms.primitives.coo_relu(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes ReLU (Rectified Linear Unit activation function) of input coo_tensors element-wise.

It returns \(\max(x,\ 0)\) element-wise. That is, neurons with negative outputs are suppressed to zero and active neurons stay unchanged.

\[ReLU(x) = (x)^+ = max(0, x)\]

Note

In general, this operator is more commonly used. The difference from ReLUV2 is that ReLUV2 outputs an additional mask.

Parameters:

x (COOTensor) – Input COOTensor with shape \((N, *)\), where \(*\) means any number of additional dimensions. Its dtype is number.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_relu(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.coo_relu6(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input coo_tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Parameters:

x (COOTensor) – Input COOTensor, with float16 or float32 data type.

Returns:

COOTensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_relu6(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.coo_round(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Rounds the values of a COOTensor to the nearest integer element-wise, with ties rounded half to even.

\[out_i \approx x_i\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape and type as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_round(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.coo_sigmoid(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{coo_sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)}\]

where \(x_i\) is an element of the x.

Parameters:

x (COOTensor) – Input COOTensor, the data type is float16, float32, float64, complex64 or complex128.

Returns:

COOTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sigmoid(x)
>>> print(output.values)
[0.26894143 0.8807971 ]
tinyms.primitives.coo_sin(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes sine of the input element-wise.

\[out_i = sin(x_i)\]
Parameters:

x (COOTensor) – Input COOTensor.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not one of float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sin(x)
>>> print(output.values)
[-0.84147096  0.9092974 ]
tinyms.primitives.coo_sinh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(x_i)\]
Parameters:

x (COOTensor) – The input COOTensor of hyperbolic sine function.

Returns:

COOTensor, has the same shape as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sinh(x)
>>> print(output.values)
[-1.1752012  3.6268604]
tinyms.primitives.coo_softsign(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{1 + |x|}\]
Parameters:

x (COOTensor) – Input COOTensor, with float16 or float32 data type.

Returns:

COOTensor, with the same type and shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_softsign(x)
>>> print(output.values)
[-0.5        0.6666667]
tinyms.primitives.coo_sqrt(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns sqrt of a COOTensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of Number.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sqrt(x)
>>> print(output.values)
[      nan 1.4142135]
tinyms.primitives.coo_square(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns square of a COOTensor element-wise.

\[out_{i} = (x_{i})^2\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of Number.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_square(x)
>>> print(output.values)
[1. 4.]
tinyms.primitives.coo_tan(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes tangent of x element-wise.

\[out_i = tan(x_i)\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_tan(x)
>>> print(output.values)
[-1.5574077 -2.1850398]
tinyms.primitives.coo_tanh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input COOTensor.

Parameters:

x (COOTensor) – Input COOTensor, with float16 or float32 data type.

Returns:

COOTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_tanh(x)
>>> print(output.values)
[-0.7615942  0.9640276]
tinyms.primitives.copysign(x, other)[source]

Creates a new floating-point tensor with the magnitude of x and the sign of other, element-wise.

Parameters:
  • x (Union[Tensor]) – Values to change the sign of.

  • other (Union[int, float, Tensor]) – The sign of other is copied to x. If x.shape != other.shape, other must be broadcastable to the shape of x (which is also the shape of the output).

Returns:

Tensor with a float dtype. Its values are the magnitudes of x combined with the sign of other, and its shape is the same as x.

Raises:

TypeError – If dtype of the input is not in the given types or the input can not be converted to tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> import mindspore.ops as ops
>>> x = np.array([[0.3, -0.7], [0.5, 0.5]])
>>> other = np.array([[-0.4, 0.6], [0.4, -0.6]])
>>> out = ops.copysign(x, other)
>>> print(out)
[[-0.3  0.7]
 [ 0.5 -0.5]]
tinyms.primitives.cos(input)[source]

Computes cosine of input element-wise.

\[out_i = cos(input_i)\]

Warning

Currently supported dtypes are float16 and float32; using float64 may cause a loss of precision.

Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = ops.cos(x)
>>> print(output)
[0.971338 0.6748758 0.95233357 0.9959527]
tinyms.primitives.cosh(input)[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = cosh(input_i)\]
Parameters:

input (Tensor) – The input tensor of hyperbolic cosine function, its data type must be float16, float32, float64, complex64 or complex128.

Returns:

Tensor, has the same shape as input.

Raises:
  • TypeError – If the dtype of input is not one of the following types: float16, float32, float64, complex64, complex128.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = ops.cosh(x)
>>> print(output)
[1.0289385 1.364684 1.048436 1.0040528]
tinyms.primitives.cosine_embedding_loss(input1, input2, target, margin=0.0, reduction='mean')[source]

CosineEmbeddingLoss creates a criterion to measure the similarity between two tensors using cosine distance.

Given two tensors \(input1\), \(input2\), and a Tensor label \(target\) with values 1 or -1:

\[\begin{split}loss(input1, input2, target) = \begin{cases} 1-cos(input1, input2), & \text{if } target = 1\\ max(0, cos(input1, input2)-margin), & \text{if } target = -1\\ \end{cases}\end{split}\]
Parameters:
  • input1 (Tensor) – Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions.

  • input2 (Tensor) – Tensor of shape \((N, *)\), same shape and dtype as input1.

  • target (Tensor) – Contains value 1 or -1. Suppose the shape of input1 is \((x_1, x_2, x_3, ..., x_R)\), then the shape of target must be \((x_1, x_3, x_4, ..., x_R)\).

  • margin (float, optional) – Should be in [-1.0, 1.0]. Default 0.0.

  • reduction (str, optional) – Specifies the reduction to apply to the output. It must be one of “none”, “mean”, and “sum”, meaning no reduction, mean reduction and sum reduction on the output, respectively. Default “mean”.

Returns:

Tensor or Scalar, if reduction is “none”, its shape is the same as target. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If margin is not a float.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • ValueError – If margin is not in range [-1, 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input1 = Tensor(np.array([[0.3, 0.8], [0.4, 0.3]]), mindspore.float32)
>>> input2 = Tensor(np.array([[0.4, 1.2], [-0.4, -0.9]]), mindspore.float32)
>>> target = Tensor(np.array([1, -1]), mindspore.int32)
>>> output = ops.cosine_embedding_loss(input1, input2, target)
>>> print(output)
0.0003425479
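
As a sanity check, the editor-added NumPy sketch below evaluates the piecewise definition above by hand and agrees with the printed value up to float32 rounding.

>>> import numpy as np
>>> def cos_sim(a, b):
...     return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
>>> loss1 = 1 - cos_sim(np.array([0.3, 0.8]), np.array([0.4, 1.2]))        # target = 1
>>> loss2 = max(0, cos_sim(np.array([0.4, 0.3]), np.array([-0.4, -0.9])))  # target = -1
>>> print(abs((loss1 + loss2) / 2 - 0.0003425479) < 1e-6)  # reduction='mean'
True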
tinyms.primitives.cosine_similarity(x1, x2, dim=1, eps=1e-08)[source]

Calculate cosine similarity between x1 and x2 along the axis, dim.

\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}\]

Note

Currently, broadcast of input is not supported.

Parameters:
  • x1 (Tensor) – The first input Tensor.

  • x2 (Tensor) – The second input Tensor.

  • dim (int, optional) – Axis for calculating cosine similarity. Default: 1.

  • eps (float, optional) – Minimal value to avoid division by zero. Default: 1e-8.

Returns:

Tensor, cosine similarity between x1 and x2.

Raises:

TypeError – If the dtype of x1 or x2 is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> x1 = ms.Tensor([[-0.0256, 0.0127, -0.2475, 0.2316, 0.8037],
...                 [0.5809, -1.2712, -0.7038, -0.2558, 0.7494]], dtype=ms.float32)
>>> x2 = ms.Tensor([[-0.6115, -0.1965, -0.8484, 0.2389, 0.2409],
...                 [1.8940, -2.1997, 0.1915, 0.0856, 0.7542]], dtype=ms.float32)
>>> output = ops.cosine_similarity(x1, x2)
>>> print(output)
[0.4843164  0.81647635]
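
The printed result can be reproduced by evaluating the formula above directly (an editor-added NumPy sketch, not part of this API):

>>> import numpy as np
>>> a = np.array([[-0.0256, 0.0127, -0.2475, 0.2316, 0.8037],
...               [0.5809, -1.2712, -0.7038, -0.2558, 0.7494]])
>>> b = np.array([[-0.6115, -0.1965, -0.8484, 0.2389, 0.2409],
...               [1.8940, -2.1997, 0.1915, 0.0856, 0.7542]])
>>> num = (a * b).sum(axis=1)  # x1 . x2 along dim=1
>>> den = np.maximum(np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1), 1e-8)
>>> print(np.allclose(num / den, [0.4843164, 0.81647635], atol=1e-5))
True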
tinyms.primitives.cov(input, *, correction=1, fweights=None, aweights=None)[source]

Given the input and weights, returns the covariance matrix (the square matrix of the covariance of each pair of variables) of input, where the input row is the variable and the column is the observation value.

The diagonal contains each variable and its own covariance. If input is a scalar or 1D vector of a single variable, its variance will be returned.

The unbiased sample covariance of the variables \(a\) and \(b\) is given by the following formula:

\[\text{cov}_w(a,b) = \frac{\sum^{N}_{i = 1}(a_{i} - \bar{a})(b_{i} - \bar{b})}{N~-~1}\]

where \(\bar{a}\) and \(\bar{b}\) are the simple means of the \(a\) and \(b\) respectively.

If fweights and/or aweights are provided, the unbiased weighted covariance is calculated, which is given by:

\[\text{cov}_w(a,b) = \frac{\sum^{N}_{i = 1}w_i(a_{i} - \mu_a^*)(b_{i} - \mu_b^*)}{\max\left(0,\ \sum^{N}_{i = 1}w_i - \frac{\sum^{N}_{i = 1}w_i a_i}{\sum^{N}_{i = 1}w_i} \times \text{correction}\right)}\]

where \(w\) denotes fweights or aweights based on whichever is provided, or \(w = fweights \times aweights\) if both are provided (with \(a_i\) taken as 1 when aweights is None), and \(\mu_x^* = \frac{\sum^{N}_{i = 1}w_ix_{i} }{\sum^{N}_{i = 1}w_i}\) is the weighted mean of the variable. With correction = 1 and only fweights given, the denominator reduces to \(\sum^{N}_{i = 1}w_i - 1\).

Warning

The values of fweights and aweights cannot be negative; the result with negative weights is undefined.

Note

Currently, complex number is not supported.

Parameters:

input (Tensor) – A 2D matrix, or a scalar or 1D vector of a single variable

Keyword Arguments:
  • correction (int, optional) – The difference between sample size and sample degrees of freedom. Defaults to Bessel’s correction, correction = 1 which returns the unbiased estimate, even if both fweights and aweights are specified. correction = 0 will return the simple average. Default: 1.

  • fweights (Tensor, optional) – Scalar or one-dimensional Tensor containing integer frequency weight, indicating the number of repetition of each observation vector. Its numel must equal the number of columns of input. Ignored if None. Default: None.

  • aweights (Tensor, optional) – A scalar or 1D Tensor containing float observation weights represents the importance of each observation vector. The higher the importance, the greater the corresponding value. Its numel must equal the number of columns of input. Must have floating point dtype. Ignored if None. Default: None.

Returns:

Tensor, The covariance matrix Tensor of input.

Raises:
  • ValueError – If the dimensions of input is greater than 2.

  • ValueError – If the dimensions of fweights is greater than 1.

  • ValueError – If the numel of fweights not equal the number of columns of input.

  • ValueError – If the numel of aweights not equal the number of columns of input.

  • ValueError – If the dimensions of aweights is greater than 1.

  • TypeError – If the dtype of input is bool.

  • TypeError – If the dtype of fweights is not an integer type.

  • TypeError – If the dtype of aweights is not a floating point type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> x = ms.Tensor([[0., 3.], [5., 5.], [7., 0.]]).T
>>> print(x)
[[0. 5. 7.]
 [3. 5. 0.]]
>>> print(ops.cov(x))
[[13.        -3.5      ]
 [-3.5        6.3333335]]
>>> print(ops.cov(x, correction=0))
[[ 8.666667  -2.3333333]
 [-2.3333333  4.2222223]]
>>> fw = ms.Tensor([5, 2, 4], dtype=ms.int64)
>>> aw = ms.Tensor([0.4588, 0.9083, 0.7616], ms.float32)
>>> print(ops.cov(x, fweights=fw, aweights=aw))
[[10.146146 -3.47241 ]
 [-3.47241   4.716825]]
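
The weighted result above can be reproduced from the definition with NumPy (an editor-added sketch; with correction=1 the effective denominator is \(\sum w_i - \sum w_i a_i / \sum w_i\), per the formula above):

>>> import numpy as np
>>> obs = np.array([[0., 5., 7.], [3., 5., 0.]])         # x as printed above
>>> fweights = np.array([5., 2., 4.])
>>> aweights = np.array([0.4588, 0.9083, 0.7616])
>>> w = fweights * aweights
>>> mu = (obs * w).sum(axis=1, keepdims=True) / w.sum()  # weighted mean per row
>>> d = obs - mu
>>> fact = w.sum() - (w * aweights).sum() / w.sum()      # correction = 1
>>> print(np.allclose((w * d) @ d.T / fact,
...                   [[10.146146, -3.47241], [-3.47241, 4.716825]], atol=1e-4))
True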
tinyms.primitives.crop_and_resize(image, boxes, box_indices, crop_size, method='bilinear', extrapolation_value=0.0)[source]

Extracts crops from the input image Tensor and resizes them.

Note

Since the output shape depends on crop_size, crop_size must be constant. For now, the backward of the operator only supports the bilinear method; for other methods, zeros will be returned.

Parameters:
  • image (Tensor) – A 4-D Tensor representing a batch of images. It has shape \((batch, image\_height, image\_width, depth)\).

  • boxes (Tensor) – A 2-D Tensor with shape \((num\_boxes, 4)\) representing the normalized coordinates of the boxes to be cropped. The coordinates are specified in the form \([y1, x1, y2, x2]\), where \((y1, x1)\) is the first corner and \((y2, x2)\) is the second corner of the box. If \(y1 > y2\), the sampled crop is inverted upside down; the width dimension is treated similarly when \(x1 > x2\). If normalized coordinates are not in range \([0, 1]\), extrapolated input image values are used instead. Supported data type: float32.

  • box_indices (Tensor) – A 1-D Tensor of shape \(\text{num\_boxes}\) representing the batch index for each box. Supported type: int32.

  • crop_size (Tuple[int]) – A tuple of two elements: (crop_height, crop_width), representing the output size of the cropped and resized images. Only positive values are supported. Supported type: int32.

  • method (str, optional) – An optional string that specifies the sampling method for resizing. It can be “bilinear”, “nearest” or “bilinear_v2”. The option “bilinear” stands for standard bilinear interpolation algorithm, while “bilinear_v2” may result in better result in some cases. “nearest” is the nearest neighbor interpolation algorithm. Default: “bilinear”.

  • extrapolation_value (float, optional) – An optional float value used for extrapolation, if applicable. Default: 0.0.

Returns:

A 4-D tensor of shape \((num\_boxes, crop\_height, crop\_width, depth)\) with type float32.

Raises:
  • TypeError – If image or boxes or box_indices is not a Tensor.

  • TypeError – If crop_size is not a Tuple with two int32 elements.

  • TypeError – If dtype of boxes is not float or that of box_indices is not int.

  • TypeError – If method is not a str.

  • TypeError – If extrapolation_value is not a float.

  • ValueError – If the shape rank of image is not 4.

  • ValueError – If the shape rank of boxes is not 2.

  • ValueError – If the second dim of boxes is not 4.

  • ValueError – If the shape rank of box_indices is not 1.

  • ValueError – If the first dim of box_indices is not equal to that of boxes.

  • ValueError – If existing element in box_indices is out of range [0, batch).

  • ValueError – If the data of crop_size is not positive.

  • ValueError – If method is not one of ‘bilinear’, ‘nearest’, ‘bilinear_v2’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> BATCH_SIZE = 1
>>> NUM_BOXES = 5
>>> IMAGE_HEIGHT = 256
>>> IMAGE_WIDTH = 256
>>> CHANNELS = 3
>>> image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
>>> boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
>>> box_indices = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
>>> crop_size = (24, 24)
>>> output = ops.crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_indices), crop_size)
>>> print(output.shape)
 (5, 24, 24, 3)
tinyms.primitives.cross(input, other, dim=None)[source]

Computes the cross product of input and other in dimension dim. input and other must have the same shape, and the size of their dim dimension should be 3. If dim is not specified, it is set to be the first dimension found with the size 3.

Parameters:
  • input (Tensor) – input is a tensor.

  • other (Tensor) – The other input Tensor. other must have the same shape and type as input, and the size of their dim dimension should be 3.

  • dim (int, optional) – dimension to apply cross product in. If dim is None, it is set to be the first dimension found with the size 3. Default: None.

Returns:

Tensor, has the same shape and type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If other is not a Tensor.

  • TypeError – If the type of input is not the same as that of other.

  • ValueError – If input and other do not have the same size, or the size of their dim dimension is not 3.

  • ValueError – If input and other do not have the same shape.

  • ValueError – If dim is out of range; dim should be in [-len(input.shape), len(input.shape)-1].

Supported Platforms:

Ascend CPU

Examples

>>> # case 1: dim=None.
>>> x = Tensor([[1, 2, 3], [1, 2, 3]])
>>> other = Tensor([[4, 5, 6], [4, 5, 6]])
>>> output = ops.cross(x, other)
>>> print(output)
[[-3  6 -3]
 [-3  6 -3]]
>>> # case 2: dim=1.
>>> x = Tensor([[1, 2, 3], [1, 2, 3]])
>>> other = Tensor([[4, 5, 6], [4, 5, 6]])
>>> output = ops.cross(x, other, dim=1)
>>> print(output)
[[-3  6 -3]
 [-3  6 -3]]
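
For reference, each row above follows from the component form of the cross product:

\[(a_2b_3 - a_3b_2,\ a_3b_1 - a_1b_3,\ a_1b_2 - a_2b_1) = (2 \cdot 6 - 3 \cdot 5,\ 3 \cdot 4 - 1 \cdot 6,\ 1 \cdot 5 - 2 \cdot 4) = (-3,\ 6,\ -3)\]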
tinyms.primitives.cross_entropy(input, target, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)[source]

The cross entropy loss between input and target.

The cross entropy supports two kinds of targets:

  • Class indices (int) in the range \([0, C)\) where \(C\) is the number of classes, the loss with reduction=none can be described as:

    \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}\]

    where \(x\) is the inputs, \(t\) is the target, \(w\) is the weight, N is the batch size, \(c\) belonging to [0, C-1] is class index, where \(C\) is the number of classes.

    If reduction is not ‘none’ (default ‘mean’), then

    \[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}} l_n, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
  • Probabilities (float) for each class, useful when labels beyond a single class per minibatch item are required, the loss with reduction=none can be described as:

    \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C \exp(x_{n,i})} y_{n,c}\]

    where \(x\) is the inputs, \(t\) is the target, \(w\) is the weight, N is the batch size, \(c\) belonging to [0, C-1] is class index, where \(C\) is the number of classes.

    If reduction is not ‘none’ (default ‘mean’), then

    \[\begin{split}\ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\). input is expected to contain unnormalized logits (log-softmax is applied internally, as in the formulas above), data type must be float16 or float32.

  • target (Tensor) – \((N)\) or \((N, d_1, d_2, ..., d_K)\) for high-dimensional loss.

  • weight (Tensor) – A rescaling weight applied to the loss of each batch element. If not None, the shape is \((C,)\), data type must be float16 or float32. Default: None.

  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default: -100.

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

  • label_smoothing (float) – Label smoothing values, a regularization tool used to prevent the model from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default value: 0.0.

Returns:

Tensor, the computed loss value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # Case 1: Indices labels
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
>>> output = ops.cross_entropy(inputs, target)
>>> # Case 2: Probability labels
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> output = ops.cross_entropy(inputs, target)
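
Since the example above uses random inputs and prints nothing, here is an editor-added NumPy sketch of the class-indices case with reduction='mean', no weight and no ignored targets (cross_entropy_ref is a hypothetical helper, not this API):

>>> import numpy as np
>>> def cross_entropy_ref(logits, target):
...     logits = logits - logits.max(axis=1, keepdims=True)                # stable log-softmax
...     log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
...     return -log_p[np.arange(len(target)), target].mean()               # pick true-class terms
>>> logits = np.array([[2.0, 0.5, 0.3], [0.1, 1.5, 0.2]])
>>> print(round(cross_entropy_ref(logits, np.array([0, 1])), 4))
0.3794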
tinyms.primitives.csr_abs(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the absolute value of a CSRTensor element-wise.

\[out_i = |x_i|\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_abs(x)
>>> print(output.values)
[1. 2.]
tinyms.primitives.csr_acos(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes arccosine of input csr_tensors element-wise.

\[out_i = cos^{-1}(x_i)\]
Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not one of float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_acos(x)
>>> print(output.values)
[3.1415927       nan]
tinyms.primitives.csr_acosh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of inverse hyperbolic cosine function, its elements must be in the range [1, inf].

Returns:

CSRTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_acosh(x)
>>> print(output.values)
[     nan 1.316958]
tinyms.primitives.csr_add(a: mindspore.common.sparse_tensor.CSRTensor, b: mindspore.common.sparse_tensor.CSRTensor, alpha: mindspore.common.tensor.Tensor, beta: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes the linear combination of two input CSRTensors a and b.

\[out = alpha * a + beta * b\]

where both \(a\) and \(b\) are CSRTensor, \(alpha\) and \(beta\) are both Tensor

Note

The user needs to ensure that the input sparse matrix is legal. Otherwise, the behavior of the operator is undefined. For example, when there are multiple elements at the same position, the operator may fail to execute or return an error.

Parameters:
  • a (CSRTensor) – Input sparse CSRTensor.

  • b (CSRTensor) – Input sparse CSRTensor.

  • alpha (Tensor) – Dense Tensor, its shape must be able to broadcast to a.

  • beta (Tensor) – Dense Tensor, its shape must be able to broadcast to b.

Returns:

CSRTensor, a CSRTensor containing the following parts.

  • indptr - Indicates the start and end point for non-zero values in each row.

  • indices - The column positions of all non-zero values of the input.

  • values - The non-zero values of the dense tensor.

  • shape - The shape of the CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> import mindspore.ops as ops
>>> a_indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> a_indices = Tensor([0, 1], dtype=mstype.int32)
>>> a_values = Tensor([1, 2], dtype=mstype.float32)
>>> shape = (2, 6)
>>> b_indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> b_indices = Tensor([0, 1], dtype=mstype.int32)
>>> b_values = Tensor([1, 2], dtype=mstype.float32)
>>> alpha = Tensor(1, mstype.float32)
>>> beta = Tensor(1, mstype.float32)
>>> csra = CSRTensor(a_indptr, a_indices, a_values, shape)
>>> csrb = CSRTensor(b_indptr, b_indices, b_values, shape)
>>> out = ops.csr_add(csra, csrb, alpha, beta)
>>> print(out)
CSRTensor(shape=[2,6], dtype=Float32,
          indptr=Tensor(shape=[3], dtype=Int32, value = [0, 1, 2]),
          indices=Tensor(shape=[2], dtype=Int32, value = [0, 1]),
          values=Tensor(shape=[2], dtype=Float32, value = [2.0, 4.0]))
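
To make the CSR parts listed above concrete, the editor-added sketch below expands the printed result into a dense array; indptr[r]:indptr[r+1] delimits the column indices and values that belong to row r.

>>> import numpy as np
>>> indptr, indices, values, shape = [0, 1, 2], [0, 1], [2.0, 4.0], (2, 6)
>>> dense = np.zeros(shape)
>>> for r in range(shape[0]):
...     for k in range(indptr[r], indptr[r + 1]):
...         dense[r, indices[k]] = values[k]
>>> print(dense)
[[2. 0. 0. 0. 0. 0.]
 [0. 4. 0. 0. 0. 0.]]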
tinyms.primitives.csr_asin(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes arcsine of input csr_tensors element-wise.

\[out_i = sin^{-1}(x_i)\]
Parameters:

x (CSRTensor) – Input CSRTensor. The data type should be one of the following types: float16, float32, float64.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32, float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_asin(x)
>>> print(output.values)
[-1.5707964        nan]
tinyms.primitives.csr_asinh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of inverse hyperbolic sine function.

Returns:

CSRTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_asinh(x)
>>> print(output.values)
[-0.8813736  1.4436355]
tinyms.primitives.csr_atan(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = \tan^{-1}(x_i)\]
Parameters:

x (CSRTensor) – The data type should be one of the following types: float16, float32.

Returns:

A CSRTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_atan(x)
>>> print(output.values)
[-0.7853982  1.1071488]
tinyms.primitives.csr_atanh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = \tanh^{-1}(x_{i})\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

x (CSRTensor) – Input CSRTensor. The shape is \((N, *)\), where \(*\) means any number of additional dimensions. The data type should be one of the following types: float16, float32.

Returns:

A CSRTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_atanh(x)
>>> print(output.values)
[-inf  nan]
tinyms.primitives.csr_ceil(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Rounds a CSRTensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of float16 or float32.

Returns:

CSRTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_ceil(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.csr_cos(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes cosine of input element-wise.

\[out_i = \cos(x_i)\]

Warning

Currently supported data types are float16 and float32. Using float64 may cause a loss of precision.

Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_cos(x)
>>> print(output.values)
[ 0.5403023  -0.41614684]
tinyms.primitives.csr_cosh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = \cosh(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of hyperbolic cosine function, its data type must be float16, float32, float64, complex64 or complex128.

Returns:

CSRTensor, has the same shape as x.

Raises:
  • TypeError – If the dtype of x is not one of the following types: float16, float32, float64, complex64, complex128.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_cosh(x)
>>> print(output.values)
[1.5430807 3.7621956]
tinyms.primitives.csr_div(x: mindspore.common.sparse_tensor.CSRTensor, y: mindspore.common.tensor.Tensor) → mindspore.common.tensor.Tensor[source]

Returns x / y where x is CSRTensor and y is Tensor.

Note

This function returns a dense Tensor representing the non-zero values of the result CSRTensor. If a CSRTensor output is expected, use the / operator directly instead. Only broadcasting a dense tensor to a sparse tensor is supported at the moment.

Parameters:
  • x (CSRTensor) – Sparse CSR Tensor.

  • y (Tensor) – Dense Tensor, its shape must be able to broadcast to x.

Returns:

Dense Tensor, represents the non-zero values of the result.

Supported Platforms:

GPU CPU
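
Examples

For illustration, a minimal sketch assuming a \((1, 1)\) dense divisor that broadcasts to x; per the Note above, the printed result is the dense Tensor of non-zero values, obtained by dividing the stored values [-1, 2] by 2.

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> x = CSRTensor(indptr, indices, values, (3, 4))
>>> y = Tensor([[2.]], mstype.float32)
>>> output = ops.csr_div(x, y)
>>> print(output)
[-0.5  1. ]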

tinyms.primitives.csr_exp(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the exponential of a CSRTensor element-wise.

\[out_i = e^{x_i}\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_exp(x)
>>> print(output.values)
[0.36787948 7.3890557 ]
tinyms.primitives.csr_expm1(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the exponential of a CSRTensor minus 1, element-wise.

\[out_i = e^{x_i} - 1\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of float16 or float32.

Returns:

CSRTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_expm1(x)
>>> print(output.values)
[-0.63212055  6.389056  ]
tinyms.primitives.csr_floor(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Rounds a CSRTensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Parameters:

x (CSRTensor) – The input CSRTensor, its data type must be float16, float32 or float64.

Returns:

CSRTensor, has the same shape as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_floor(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.csr_inv(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes the reciprocal of the input CSRTensor element-wise.

\[out_i = \frac{1}{x_{i} }\]
Parameters:

x (CSRTensor) – Input CSRTensor. Must be one of the following types: float16, float32 or int32.

Returns:

CSRTensor, has the same type and shape as the input.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_inv(x)
>>> print(output.values)
[-1.   0.5]
tinyms.primitives.csr_isfinite(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True\ \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as the input, and the dtype is bool.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_isfinite(x)
>>> print(output.values)
[ True  True]
tinyms.primitives.csr_isinf(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Inf},\ \ True \\ & \text{ if } x_{i} \ne \text{Inf},\ \ False \end{cases}\end{split}\]

where \(Inf\) means infinity.

Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as the input, and the dtype is bool.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_isinf(x)
>>> print(output.values)
[False False]
tinyms.primitives.csr_isnan(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Nan} \\ & \ False,\ \text{ if } x_{i} \ne \text{Nan} \end{cases}\end{split}\]

where \(Nan\) means not a number.

Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as the input, and the dtype is bool.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_isnan(x)
>>> print(output.values)
[False False]
tinyms.primitives.csr_log(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the natural logarithm of a CSRTensor element-wise.

\[y_i = \log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

x (CSRTensor) – The input CSRTensor. Each value must be greater than 0.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32 or float64 on GPU and CPU.

  • TypeError – If dtype of x is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_log(x)
>>> print(output.values)
[       nan 0.69314575]
tinyms.primitives.csr_log1p(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the natural logarithm of one plus the input CSRTensor element-wise.

\[out_i = \log_e(x_i + 1)\]
Parameters:

x (CSRTensor) – The input CSRTensor. With float16 or float32 data type. The value must be greater than -1.

Returns:

CSRTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_log1p(x)
>>> print(output.values)
[     -inf 1.0986123]
tinyms.primitives.csr_mm(a: mindspore.common.sparse_tensor.CSRTensor, b: mindspore.common.sparse_tensor.CSRTensor, trans_a: bool = False, trans_b: bool = False, adjoint_a: bool = False, adjoint_b: bool = False)[source]

Returns the matrix multiplication of a CSRTensor and a right-hand matrix (dense Tensor or CSRTensor). A CSRTensor with shape \([M, N]\) is multiplied by a right-hand matrix with shape \([N, K]\), producing a dense matrix or CSRTensor with shape \([M, K]\).

Note

Currently, only the GPU backend is supported, and the right-hand matrix must be a CSRTensor.

Parameters:
  • a (CSRTensor) – Sparse CSR Tensor, rank should be 2.

  • b (CSRTensor) – Sparse CSR Tensor, rank should be 2.

  • trans_a (bool, optional) – whether to transpose CSRTensor a. Default: False.

  • trans_b (bool, optional) – whether to transpose CSRTensor b. Default: False.

  • adjoint_a (bool, optional) – whether to adjoint CSRTensor a. Default: False.

  • adjoint_b (bool, optional) – whether to adjoint CSRTensor b. Default: False.

Returns:

CSRTensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> a_shape = (4, 5)
>>> a_indptr = Tensor([0, 1, 1, 3, 4], dtype=mstype.int32)
>>> a_indices = Tensor([0, 3, 4, 0],dtype=mstype.int32)
>>> a_values = Tensor([1.0, 5.0, -1.0, -2.0], dtype=mstype.float32)
>>> b_shape = (5, 3)
>>> b_indptr = Tensor([0, 1, 1, 3, 3, 3], dtype=mstype.int32)
>>> b_indices = Tensor([0, 0, 1],dtype=mstype.int32)
>>> b_values = Tensor([2.0, 7.0, 8.0], dtype=mstype.float32)
>>> a = CSRTensor(a_indptr, a_indices, a_values, a_shape)
>>> b = CSRTensor(b_indptr, b_indices, b_values, b_shape)
>>> c = ops.csr_mm(a, b)
>>> print(c.shape)
(4, 3)
>>> print(c.values)
[2. -4.]
>>> print(c.indptr)
[0 1 1 1 2]
>>> print(c.indices)
[0 0]
tinyms.primitives.csr_mul(x: mindspore.common.sparse_tensor.CSRTensor, y: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns x * y where x is CSRTensor and y is Tensor.

Parameters:
  • x (CSRTensor) – Sparse CSR Tensor.

  • y (Tensor) – Dense Tensor, its shape must be able to broadcast to x.

Returns:

CSRTensor.

Supported Platforms:

GPU CPU
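
Examples

For illustration, a minimal sketch assuming a \((1, 1)\) dense multiplier that broadcasts to x; the printed values follow from multiplying the stored non-zero values [-1, 2] by 2.

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> x = CSRTensor(indptr, indices, values, (3, 4))
>>> y = Tensor([[2.]], mstype.float32)
>>> output = ops.csr_mul(x, y)
>>> print(output.values)
[-2.  4.]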

tinyms.primitives.csr_mv(csr_tensor: mindspore.common.sparse_tensor.CSRTensor, dense: mindspore.common.tensor.Tensor) → mindspore.common.tensor.Tensor[source]

Sparse matrix-vector multiplication.

Parameters:
  • csr_tensor (CSRTensor) – Sparse CSR Tensor.

  • dense (Tensor) – Dense Tensor.

Returns:

Dense Tensor.

Supported Platforms:

GPU CPU
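
Examples

For illustration, a minimal sketch assuming the dense right-hand side is a column vector of shape \((4, 1)\); x stores -1 at position (0, 3) and 2 at position (1, 0), so multiplying by an all-ones vector yields -1, 2 and 0 for the three rows.

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> csr_tensor = CSRTensor(indptr, indices, values, (3, 4))
>>> dense = Tensor([[1.], [1.], [1.], [1.]], mstype.float32)
>>> output = ops.csr_mv(csr_tensor, dense)
>>> print(output)
[[-1.]
 [ 2.]
 [ 0.]]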

tinyms.primitives.csr_neg(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns a CSRTensor with the negated values of the input CSRTensor element-wise.

\[out_{i} = - x_{i}\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of Number.

Returns:

CSRTensor, has the same shape and dtype as input.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_neg(x)
>>> print(output.values)
[ 1. -2.]
tinyms.primitives.csr_reduce_sum(csr_tensor: mindspore.common.sparse_tensor.CSRTensor, axis: int) → mindspore.common.tensor.Tensor[source]

Reduces a dimension of a CSRTensor by summing all elements in the dimension.

Parameters:
  • csr_tensor (CSRTensor) – Sparse CSR Tensor.

  • axis (int) – Axis to be reduced.

Returns:

Dense Tensor, represents the non-zero values of the result.

Supported Platforms:

GPU CPU
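
Examples

For illustration, a minimal sketch summing over axis 1; x stores -1 in row 0 and 2 in row 1, with row 2 empty, so the per-row sums are -1, 2 and 0. The printed output shape \((3, 1)\) is an assumption of this sketch.

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> csr_tensor = CSRTensor(indptr, indices, values, (3, 4))
>>> output = ops.csr_reduce_sum(csr_tensor, 1)
>>> print(output)
[[-1.]
 [ 2.]
 [ 0.]]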

tinyms.primitives.csr_relu(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes ReLU (Rectified Linear Unit activation function) of input csr_tensors element-wise.

It returns max(x, 0) element-wise. In particular, neurons with negative outputs are suppressed and active neurons stay unchanged.

\[ReLU(x) = (x)^+ = max(0, x)\]

Note

In general, this operator is the more commonly used one. It differs from ReLUV2 in that ReLUV2 additionally outputs a mask.

Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, with the same dtype and shape as the x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_relu(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.csr_relu6(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input csr_tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Parameters:

x (CSRTensor) – Input CSRTensor, with float16 or float32 data type.

Returns:

CSRTensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_relu6(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.csr_round(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Rounds the values of a CSRTensor to the nearest integer (round half to even) element-wise.

\[out_i \approx x_i\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape and type as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_round(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.csr_sigmoid(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{csr_sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)}\]

where \(x_i\) is an element of the x.

Parameters:

x (CSRTensor) – Input CSRTensor, the data type is float16, float32, float64, complex64 or complex128.

Returns:

CSRTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sigmoid(x)
>>> print(output.values)
[0.26894143 0.8807971 ]
tinyms.primitives.csr_sin(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes sine of the input element-wise.

\[out_i = \sin(x_i)\]
Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sin(x)
>>> print(output.values)
[-0.84147096  0.9092974 ]
tinyms.primitives.csr_sinh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of hyperbolic sine function.

Returns:

CSRTensor, has the same shape as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sinh(x)
>>> print(output.values)
[-1.1752012  3.6268604]
tinyms.primitives.csr_softmax(logits: mindspore.common.sparse_tensor.CSRTensor, dtype: mindspore.common.dtype)[source]

Calculates the softmax of a CSRTensor matrix.

Parameters:
  • logits (CSRTensor) – Input sparse CSRTensor.

  • dtype (dtype) – Input data type.

Returns:

CSRTensor, a CSRTensor containing

  • indptr - Indicates the start and end point for non-zero values in each row.

  • indices - The column positions of all non-zero values of the input.

  • values - The non-zero values of the dense tensor.

  • shape - The shape of the CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> logits_indptr = Tensor([0, 4, 6], dtype=mstype.int32)
>>> logits_indices = Tensor([0, 2, 3, 4, 3, 4], dtype=mstype.int32)
>>> logits_values = Tensor([1, 2, 3, 4, 1, 2], dtype=mstype.float32)
>>> shape = (2, 6)
>>> logits = CSRTensor(logits_indptr, logits_indices, logits_values, shape)
>>> out = ops.csr_softmax(logits, dtype=mstype.float32)
>>> print(out)
CSRTensor(shape=[2, 6], dtype=Float32, indptr=Tensor(shape=[3], dtype=Int32, value=[0 4 6]),
               indices=Tensor(shape=[6], dtype=Int32, value=[0 2 3 4 3 4]),
               values=Tensor(shape=[6], dtype=Float32, value=[ 3.20586003e-02  8.71443152e-02  2.36882806e-01
               6.43914223e-01  2.68941432e-01  7.31058598e-01]))
tinyms.primitives.csr_softsign(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{1 + |x|}\]
Parameters:

x (CSRTensor) – Input CSRTensor, with float16 or float32 data type.

Returns:

CSRTensor, with the same type and shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_softsign(x)
>>> print(output.values)
[-0.5        0.6666667]
tinyms.primitives.csr_sqrt(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns sqrt of a CSRTensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of Number.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sqrt(x)
>>> print(output.values)
[      nan 1.4142135]
tinyms.primitives.csr_square(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns square of a CSRTensor element-wise.

\[out_{i} = (x_{i})^2\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of Number.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_square(x)
>>> print(output.values)
[1. 4.]
tinyms.primitives.csr_tan(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes tangent of x element-wise.

\[out_i = \tan(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_tan(x)
>>> print(output.values)
[-1.5574077 -2.1850398]
tinyms.primitives.csr_tanh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[\tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input CSRTensor.

Parameters:

x (CSRTensor) – Input CSRTensor, with float16 or float32 data type.

Returns:

CSRTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_tanh(x)
>>> print(output.values)
[-0.7615942  0.9640276]
tinyms.primitives.csr_to_coo(tensor: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Converts a CSRTensor to COOTensor.

Note

Only 2-D CSRTensor is supported for now.

Parameters:

tensor (CSRTensor) – A CSRTensor, must be 2-D.

Returns:

2D COOTensor, the input tensor stored in COO format.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> indptr = Tensor([0, 1, 2]).astype("int32")
>>> indices = Tensor([0, 1]).astype("int32")
>>> values = Tensor([2, 1]).astype("float32")
>>> shape = (2, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_to_coo(x)
>>> print(output.indices)
[[0 0]
[1 1]]
tinyms.primitives.csr_to_dense(csr_tensor: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.tensor.Tensor[source]

Converts a CSRTensor to its dense form.

Note

Only 2-D CSRTensor is supported for now.

Parameters:

csr_tensor (CSRTensor) – A CSRTensor, must be 2-D.

Returns:

Tensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, CSRTensor, ops
>>> indptr = Tensor([0, 1, 2]).astype("int32")
>>> indices = Tensor([0, 1]).astype("int32")
>>> values = Tensor([2, 1]).astype("float32")
>>> shape = (2, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_to_dense(x)
>>> print(output)
[[2. 0. 0. 0.]
 [0. 1. 0. 0.]]
tinyms.primitives.ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True)[source]

Performs greedy decoding on the logits given in inputs.

Parameters:
  • inputs (Tensor) – The input Tensor must be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_classes)\). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float32 or float64.

  • sequence_length (Tensor) – A tensor containing sequence lengths with the shape of \((batch\_size, )\). The type must be int32. Each value in the tensor must be equal to or less than max_time.

  • merge_repeated (bool) – If true, merge repeated classes in output. Default: True.

Returns:

decoded_indices (Tensor), A tensor with shape of \((total\_decoded\_outputs, 2)\). Data type is int64.

decoded_values (Tensor), A tensor with shape of \((total\_decoded\_outputs, )\), it stores the decoded classes. Data type is int64.

decoded_shape (Tensor), A tensor with shape of \((batch\_size, max\_decoded\_length)\). Data type is int64.

log_probability (Tensor), A tensor with shape of \((batch\_size, 1)\), containing sequence log-probability, has the same type as inputs.

Raises:
  • TypeError – If merge_repeated is not a bool.

  • ValueError – If length of shape of inputs is not equal to 3.

  • ValueError – If length of shape of sequence_length is not equal to 1.

  • ValueError – If value in the sequence_length is larger than max_time.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = Tensor(np.array([[[0.6, 0.4, 0.2], [0.8, 0.6, 0.3]],
...                           [[0.0, 0.6, 0.0], [0.5, 0.4, 0.5]]]), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> decoded_indices, decoded_values, decoded_shape, log_probability = ops.ctc_greedy_decoder(inputs,
...                                                                                          sequence_length)
>>> print(decoded_indices)
[[0 0]
 [0 1]
 [1 0]]
>>> print(decoded_values)
[0 1 0]
>>> print(decoded_shape)
[2 2]
>>> print(log_probability)
[[-1.2]
 [-1.3]]
tinyms.primitives.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

CTC is a loss function in sequence labeling problems, which is mainly used to deal with the alignment of input and output labels in sequence labeling problems. While traditional sequence labeling algorithms require the input and output symbols to be perfectly aligned at each moment, CTC expands the label collection and adds empty elements. After labeling the sequence using the extended label set, all the prediction sequences that can be converted into real sequences by the mapping function are correct prediction results, that is, the predicted sequence can be obtained without data alignment processing. Its objective function is to maximize the sum of probabilities of all correct prediction sequences.

The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

Parameters:
  • log_probs (Tensor) – A tensor of shape \((T, N, C)\), where T is input length, N is batch size and C is number of classes (including blank).

  • targets (Tensor) – Target sequences. A tensor of shape \((N, S)\), where S is max target length.

  • input_lengths (Union(tuple, Tensor)) – Lengths of the input. A tuple or Tensor of shape \((N)\).

  • target_lengths (Union(tuple, Tensor)) – Lengths of the target. A tuple or Tensor of shape \((N)\).

  • blank (int, optional) – The blank label. Default: 0.

  • reduction (str, optional) – Implements the reduction method to the output with ‘none’, ‘mean’, or ‘sum’, respectively indicate that no calculation is specified, that the mean is used, and that is calculated using summation. Default: ‘mean’.

  • zero_infinity (bool, optional) – Whether to set infinite loss and correlation gradient to 0. Default: False.

Returns:

neg_log_likelihood (Tensor), A loss value with shape \((N)\), which is differentiable with respect to each input node.

log_alpha (Tensor), The probabilities of possible traces from input to target, with shape \((N, T, 2 * S + 1)\).

Raises:
  • TypeError – If zero_infinity is not a bool or reduction is not a string.

  • TypeError – If the dtype of log_probs is not float or double.

  • TypeError – If the dtype of targets, input_lengths or target_lengths is not int32 or int64.

  • ValueError – If the rank of log_probs is not 3.

  • ValueError – If the rank of targets is not 2.

  • ValueError – If the shape of input_lengths does not match N. N is batch size of log_probs .

  • ValueError – If the shape of target_lengths does not match N. N is batch size of log_probs .

  • TypeError – If the types of targets, input_lengths or target_lengths are different.

  • ValueError – If the value of blank is not in range [0, num_labels|C). C is number of classes of log_probs .

  • RuntimeError – If any value of input_lengths is larger than T. T is the length of log_probs.

  • RuntimeError – If any target_lengths[i] is not in range [0, input_length[i]].

Supported Platforms:

Ascend GPU CPU

Examples

>>> log_probs = Tensor(np.array([[[0.3, 0.6, 0.6]],
...                              [[0.9, 0.4, 0.2]]]).astype(np.float32))
>>> targets = Tensor(np.array([[0, 1]]), mstype.int32)
>>> input_lengths = Tensor(np.array([2]), mstype.int32)
>>> target_lengths = Tensor(np.array([1]), mstype.int32)
>>> loss, log_alpha = ops.ctc_loss(log_probs, targets, input_lengths,
...                                target_lengths, 0, 'mean', True)
>>> print(loss)
-2.2986124
>>> print(log_alpha)
[[[0.3       0.3            -inf      -inf      -inf]
  [1.2       1.8931472 1.2            -inf      -inf]]]
tinyms.primitives.cummax(input, axis)[source]

Returns a tuple (values, indices), where values is the cumulative maximum of the input Tensor along the dimension axis, and indices is the index location of each maximum value.

\[\begin{split}\begin{array}{ll} \\ y_{i} = max(x_{1}, x_{2}, ... , x_{i}) \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input Tensor, rank of input > 0.

  • axis (int) – The dimension to do the operation over. The value of axis must be in the range [-input.ndim, input.ndim - 1].

Returns:

tuple [Tensor], a tuple of 2 Tensors containing the cumulative maxima of elements and their indices. The shape of each output tensor is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If axis is out of the range of [-input.ndim, input.ndim - 1].

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> output = ops.cummax(x, axis=0)
>>> print(output[0])
[[ 3.  4.  6. 10.]
 [ 3.  6.  7. 10.]
 [ 4.  6.  8. 10.]
 [ 4.  6.  8. 10.]]
>>> print(output[1])
[[0 0 0 0]
 [0 1 1 0]
 [2 1 2 0]
 [2 1 2 0]]
tinyms.primitives.cummin(input, axis)[source]

Returns a tuple (values, indices), where values is the cumulative minimum of the input Tensor along the dimension axis, and indices is the index location of each minimum value.

\[\begin{split}\begin{array}{ll} \\ y_{i} = min(x_{1}, x_{2}, ... , x_{i}) \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input Tensor, rank of input > 0.

  • axis (int) – The dimension to do the operation over. The value of axis must be in the range [-input.ndim, input.ndim - 1].

Returns:

tuple [Tensor], a tuple of 2 Tensors containing the cumulative minima of elements and their indices. The shape of each output tensor is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If axis is out of the range of [-input.ndim, input.ndim - 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> a = Tensor([-0.2284, -0.6628,  0.0975,  0.2680, -1.3298, -0.4220], mindspore.float32)
>>> output = ops.cummin(a, axis=0)
>>> print(output[0])
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]
>>> print(output[1])
[0 1 1 1 4 4]
tinyms.primitives.cumprod(input, dim, dtype=None)[source]

Computes the cumulative product of the input tensor along dimension dim. For example, if input is a vector of size N, the result will also be a vector of size N, with elements:

\[y_i = x_1 * x_2 * x_3 * ... * x_i\]
Parameters:
  • input (Tensor[Number]) – The input tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions.

  • dim (int) – The dimensions to compute the cumulative product. Only constant value is allowed.

  • dtype (mindspore.dtype, optional) – The desired data type of output. If not specified, remains the same as the original Tensor. Default: None.

Returns:

Tensor, has the same shape and dtype as the input unless dtype is specified.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3], np.float32))
>>> output = ops.cumprod(x, 0)
>>> print(output)
[1. 2. 6.]
tinyms.primitives.cumsum(x, axis, dtype=None)[source]

Computes the cumulative sum of input Tensor along axis.

\[y_i = x_1 + x_2 + x_3 + ... + x_i\]

Note

On Ascend, the dtype of x only supports int8, uint8, int32, float16 or float32 in the case of static shape. For dynamic shape, the dtype of x only supports int32, float16 or float32.

Parameters:
  • x (Tensor) – The input Tensor to accumulate.

  • axis (int) – Axis along which the cumulative sum is computed.

  • dtype (mindspore.dtype, optional) – The desired dtype of returned Tensor. If specified, the input Tensor will be cast to dtype before the computation. This is useful for preventing overflows. If not specified, stay the same as original Tensor. Default: None.

Returns:

Tensor, the shape of the output Tensor is consistent with the input Tensor’s.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> # case 1: along the axis 0
>>> y = ops.cumsum(x, 0)
>>> print(y)
[[ 3.  4.  6. 10.]
 [ 4. 10. 13. 19.]
 [ 8. 13. 21. 26.]
 [ 9. 16. 28. 35.]]
>>> # case 2: along the axis 1
>>> y = ops.cumsum(x, 1)
>>> print(y)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
tinyms.primitives.deformable_conv2d(x, weight, offsets, kernel_size, strides, padding, bias=None, dilations=(1, 1, 1, 1), groups=1, deformable_groups=1, modulated=True)[source]

Given 4D tensor inputs x, weight and offsets, compute a 2D deformable convolution. The deformable convolution operation can be expressed as follow:

Deformable Convolution v1:

\[y(p)=\sum_{k=1}^{K}w_{k}\cdot x(p+p_{k}+\Delta{p_{k}})\]

Deformable Convolution v2:

\[y(p)=\sum_{k=1}^{K}w_{k}\cdot x(p+p_{k}+\Delta{p_{k}})\cdot \Delta{m_{k}}\]

Where \(\Delta{p_{k}}\) and \(\Delta{m_{k}}\) are the learnable offset and modulation scalar for the k-th location. For details, please refer to Deformable ConvNets v2: More Deformable, Better Results and Deformable Convolutional Networks.

Parameters:
  • x (Tensor) – A 4D tensor of input image. With the format “NCHW”, the shape is \((N, C_{in}, H_{in}, W_{in})\). Dtype: float16 or float32.

  • weight (Tensor) – A 4D tensor of learnable filters. Must have the same type as x. The shape is \((C_{out}, C_{in} / groups, H_{f}, W_{f})\).

  • offsets (Tensor) – A 4D tensor of x-y coordinates offset and mask. With the format “NCHW”, the shape is \((batch, 3 * deformable\_groups * H_{f} * W_{f}, H_{out}, W_{out})\). Note the C dimension is stored in the order of (offset_x, offset_y, mask). Must have the same type as x.

  • kernel_size (tuple[int]) – A tuple of 2 integers. The size of kernel.

  • strides (tuple[int]) – A tuple of 4 integers. The stride of the sliding window for each dimension of input. The dimension order is interpreted according to the data format of x. The N and C dimensions must be set to 1.

  • padding (tuple[int]) – A tuple of 4 integers. The number of pixels to add to each (top, bottom, left, right) side of the input.

  • bias (Tensor, optional) – A 1D tensor of additive biases to the filter outputs. The shape is \((C_{out})\). Defaults to None.

  • dilations (tuple[int], optional) – A tuple of 4 integers. The dilation factor for each dimension of input. The dimension order is interpreted according to the data format of x. The N and C dimensions must be set to 1. Defaults to (1, 1, 1, 1).

  • groups (int, optional) – An integer of type int32. The number of blocked connections from input channels to output channels. In_channels and out_channels must both be divisible by groups. Defaults to 1.

  • deformable_groups (int, optional) – An integer of type int32. The number of deformable group partitions. In_channels must be divisible by deformable_groups. Defaults to 1.

  • modulated (bool, optional) – Specifies version of DeformableConv2D, True means v2, False means v1, currently only supports v2. Defaults to True.

Returns:

Tensor, A 4D Tensor of output feature map. With the same type as x. With the format “NCHW”, the shape is \((N, C_{out}, H_{out}, W_{out})\).

\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[0] + padding[1] - (H_{f} - 1) \times \text{dilations[2]} - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[2] + padding[3] - (W_{f} - 1) \times \text{dilations[3]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ \end{array}\end{split}\]

Raises:
  • TypeError – If strides, padding, kernel_size or dilations is not a tuple with integer elements.

  • TypeError – If modulated is not a bool.

  • ValueError – If the tuple size of strides, padding, kernel_size or dilations is not expected.

  • ValueError – If the N or C dimension of strides or dilations is not set to 1.

  • ValueError – If modulated is not set to True.

Warning

This is an experimental API that is subject to change or deletion.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones((4, 3, 10, 10)), mstype.float32)
>>> kh, kw = 3, 3
>>> weight = Tensor(np.ones((5, 3, kh, kw)), mstype.float32)
>>> offsets = Tensor(np.ones((4, 3 * kh * kw, 8, 8)), mstype.float32)
>>> output = ops.deformable_conv2d(x, weight, offsets, (kh, kw), (1, 1, 1, 1), (0, 0, 0, 0))
>>> print(output.shape)
(4, 5, 8, 8)
tinyms.primitives.deg2rad(x)[source]

Converts angles in degrees to angles in radians element-wise.

Parameters:

x (Tensor) – The input tensor. With float16, float32 or float64 data type.

Returns:

Tensor, has the same dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x isn’t float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[90.0, -90.0], [180.0, -180.0], [270.0, -270.0]]).astype(np.float32))
>>> output = ops.deg2rad(x)
>>> print(output)
[[ 1.5707964 -1.5707964]
 [ 3.1415927 -3.1415927]
 [ 4.712389  -4.712389 ]]
tinyms.primitives.dense_to_sparse_coo(tensor: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.COOTensor[source]

Convert a Tensor to COOTensor.

Note

Only 2-D tensor is supported for now.

Parameters:

tensor (Tensor) – A dense tensor, must be 2-D.

Returns:

COOTensor, a sparse representation of the original dense tensor, containing the following parts.

  • indices (Tensor): 2-D integer tensor, indicates the positions of values of the dense tensor.

  • values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.

  • shape (tuple(int)): the shape of the COOTensor, is the same as the original dense tensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore as ms
>>> x = Tensor([[1, 0], [-5, 0]], ms.float32)
>>> output = ops.dense_to_sparse_coo(x)
>>> print(output.indices)
[[0 0]
[1 0]]
>>> print(output.values)
[ 1. -5.]
>>> print(output.shape)
(2, 2)
tinyms.primitives.dense_to_sparse_csr(tensor: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Convert a Tensor to CSRTensor.

Note

Only 2-D tensor is supported for now.

Parameters:

tensor (Tensor) – A dense tensor, must be 2-D.

Returns:

CSRTensor, a sparse representation of the original dense tensor, containing the following parts.

  • indptr (Tensor): 1-D integer tensor, indicates the start and end point for values in each row.

  • indices (Tensor): 1-D integer tensor, indicates the column positions of all non-zero values of the input.

  • values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.

  • shape (tuple(int)): the shape of the CSRTensor, is the same as the original dense tensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore as ms
>>> x = Tensor([[1, 0], [-5, 0]], ms.float32)
>>> output = ops.dense_to_sparse_csr(x)
>>> print(output.indptr)
[0 1 2]
>>> print(output.indices)
[0 0]
>>> print(output.shape)
(2, 2)
tinyms.primitives.derivative(fn, primals, order)[source]

This function is designed to calculate the higher-order differentiation of a given composite function. To compute the order-th order derivatives, the original inputs and the order must be provided together. In particular, the first-order derivative of the input is set to 1, while the others are set to 0.

Note

If primals is Tensor of int type, it will be converted to Tensor of float type.

Parameters:
  • fn (Union[Cell, function]) – Function to do TaylorOperation.

  • primals (Union[Tensor, tuple[Tensor]]) – The inputs to fn.

  • order (int) – For each Tensor, the order-th order of derivative of output with respect to the inputs will be figured out.

Returns:

Tuple, tuple of out_primals and out_series.

  • out_primals (Union[Tensor, list[Tensor]]) - The output of fn(primals).

  • out_series (Union[Tensor, list[Tensor]]) - The order-th order of derivative of output with respect to the inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.sin = ops.Sin()
...         self.exp = ops.Exp()
...     def construct(self, x):
...         out1 = self.sin(x)
...         out2 = self.exp(out1)
...         return out2
>>> primals = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> order = 3
>>> net = Net()
>>> out_primals, out_series = ops.derivative(net, primals, order)
>>> print(out_primals, out_series)
[[2.319777  2.4825778]
 [1.1515628 0.4691642]] [[-4.0515366   3.6724353 ]
 [ 0.5053504  -0.52061415]]
tinyms.primitives.det(input)[source]

Computes the determinant of one or more square matrices.

Parameters:

input (Tensor) – A matrix to be calculated, its shape should be \([..., M, M]\) who must have at least two dimensions, and the last two dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

Returns:

Tensor. The shape is \(input.shape[:-2]\), and the dtype is same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input not float32, float64, complex64 or complex128.

  • ValueError – If the last two dimensions of input is not same size.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> output = ops.det(input)
>>> print(output)
[-16.5 21. ]

tinyms.primitives.diag(input)[source]

Constructs a diagonal tensor with a given diagonal values.

Assume input has dimensions \((D_1,... D_k)\) , the output is a tensor of rank 2k with dimensions \((D_1,..., D_k, D_1,..., D_k)\) where: \(output[i_1,..., i_k, i_1,..., i_k] = input[i_1,..., i_k]\) and 0 everywhere else.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, has the same dtype as the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> input_x = Tensor([1, 2, 3, 4]).astype('int32')
>>> output = ops.diag(input_x)
>>> print(output)
[[1 0 0 0]
 [0 2 0 0]
 [0 0 3 0]
 [0 0 0 4]]
tinyms.primitives.diag_embed(input, offset=0, dim1=-2, dim2=-1)[source]

Creates a tensor with diagonals filled by input. The remaining elements are filled by 0. If the shape of input is \([x_{0}, x_{1}, ..., x_{n-1}, x_{n}]\), the output shape is: the vector obtained by inserting \(x_{n}+|offset|\) into the vector \([x_{0}, x_{1}, ..., x_{n-1}]\) at position dim1 and dim2.

Parameters:
  • input (Tensor) – Values to fill diagonal.

  • offset (int, optional) –

    Offset of the diagonal. \(offset=0\) refers to the main diagonal. Default: 0.

    • If \(offset>0\), fill the diagonals that are offset units upward from the main diagonal.

    • If \(offset<0\), fill the diagonals that are |offset| units downward from the main diagonal.

  • dim1 (int, optional) – The first dimension in input with respect to which to fill diagonal. Default: -2.

  • dim2 (int, optional) – The second dimension in input with respect to which to fill diagonal. Default: -1.

Returns:

Tensor, has the same dtype as input, but the shape of output is one dimension higher than the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not supported.

  • TypeError – If offset is not an int.

  • TypeError – If dim1 or dim2 is not an int.

  • ValueError – If the dimension of input is not 1D-6D.

  • ValueError – If dim1 is not in range of [-len(input.shape) - 1, len(input.shape)].

  • ValueError – If dim2 is not in range of [-len(input.shape) - 1, len(input.shape)].

  • ValueError – If dim1 and dim2 are identical.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2,3,4]), mindspore.float32)
>>> output = ops.diag_embed(x)
>>> print(output)
[[2. 0. 0.]
 [0. 3. 0.]
 [0. 0. 4.]]
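
A sketch of a non-zero offset, continuing the example above; with offset=1 the values are placed one diagonal above the main diagonal, and the output grows to \(4 \times 4\). The printed values follow from the offset rules described in the parameter list.

>>> output = ops.diag_embed(x, offset=1)
>>> print(output)
[[0. 2. 0. 0.]
 [0. 0. 3. 0.]
 [0. 0. 0. 4.]
 [0. 0. 0. 0.]]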
tinyms.primitives.diagflat(input, offset=0)[source]

Creates a 2-D Tensor whose diagonal is the flattened input.

Parameters:
  • input (Tensor) – Input Tensor, which is flattened and set as the diagonal of the output.

  • offset (int, optional) –

    offset controls which diagonal to choose. Default: 0.

    • When offset is zero, the diagonal chosen is the main diagonal.

    • When offset is a positive integer, the diagonal chosen is up the main diagonal.

    • When offset is a negative integer, the diagonal chosen is down the main diagonal.

Returns:

The 2-D Tensor, whose diagonal is the flattened input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 2], mindspore.float32)
>>> output = ops.diagflat(x, 1)
>>> print(output)
[[0. 1. 0.]
 [0. 0. 2.]
 [0. 0. 0.]]
tinyms.primitives.diagonal(input, offset=0, dim1=0, dim2=1)[source]

Returns specified diagonals of input.

If input is 2-D, returns the diagonal of input with the given offset. If input has more than two dimensions, the axes specified by dim1 and dim2 are used to determine the 2-D sub-array whose diagonal is returned. In this case, the dim1 and dim2 dimensions of input are removed, and a new last dimension is inserted, consisting of the diagonal elements determined by dim1 and dim2.

Parameters:
  • input (Tensor) – Array from which the diagonals are taken.

  • offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Default: 0.

  • dim1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

  • dim2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis (1).

Returns:

Tensor, if input is 2-D, a 1-D array containing the diagonal. If input.ndim > 2, the dimensions specified by dim1 and dim2 are removed, and a new axis is inserted at the end corresponding to the diagonal.

Raises:

ValueError – If the input tensor has fewer than two dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[0, 1], [2, 3]], mstype.float32)
>>> output = ops.diagonal(x)
>>> print(output)
[0 3]
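
For inputs with more than two dimensions, a sketch following the description above: the dim1 and dim2 axes are removed and the diagonal becomes the last axis. The printed values assume NumPy-style diagonal semantics.

>>> import numpy as np
>>> x = Tensor(np.arange(8).reshape(2, 2, 2), mstype.float32)
>>> output = ops.diagonal(x, offset=0, dim1=0, dim2=1)
>>> print(output)
[[0. 6.]
 [1. 7.]]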
tinyms.primitives.diff(x, n=1, axis=-1, prepend=None, append=None)[source]

Computes the n-th discrete difference along a specified axis of a given input x.

The first difference is calculated as \(out[i] = x[i+1] - x[i]\) along the specified axis. To compute higher differences, the function is called recursively using the output from the previous iteration as input.

Note

Zero-shaped Tensors are not supported; a ValueError is raised if an empty Tensor is encountered. A Tensor with any dimension of size 0 is considered empty. Tensors with shapes such as \((0,)\) and \((1, 2, 0, 4)\) are all empty.

Parameters:
  • x (Tensor) – Input tensor. Full support for signed integers; partial support for floats and complex numbers.

  • n (int, optional) – The number of times values are differenced. If zero, the input is returned as-is. Currently only 1 is supported. Default: 1.

  • axis (int, optional) – The axis along which the difference is taken, default is the last axis. Default: -1.

  • prepend (Tensor, optional) – Values to prepend to x along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match x except along axis. Default: None.

  • append (Tensor, optional) – Values to append to x along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match x except along axis. Default: None.

Returns:

Tensor, the n-th differences of input. The shape of the output is the same as x except along axis where the size is reduced by n. The type of the output is the same as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 3, -1, 0, 4])
>>> out = ops.diff(x)
>>> print(out.asnumpy())
[ 2 -4  1  4]
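
A sketch of prepend and append, continuing the example above; the attached values are joined to x along axis before differencing, so the output gains one element per attached value.

>>> out = ops.diff(x, prepend=Tensor([0]), append=Tensor([10]))
>>> print(out.asnumpy())
[ 1  2 -4  1  4  6]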
tinyms.primitives.digamma(input)[source]

Computes the grad of the lgamma function on input.

\[P(input) = grad(\ln \Gamma(input))\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

input (Tensor) – The input tensor. With type of float16 or float32 or float64.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1.5, 0.5, 9]).astype(np.float16))
>>> output = ops.digamma(x)
>>> print(output)
[ 0.0365 -1.964   2.14  ]
tinyms.primitives.dist(input, other, p=2)[source]

Computes the batched \(p\)-norm distance between each pair of row vectors in the two input collections.

Note

Since MindSpore only supports integer-order \(p\)-norms, a TypeError will be raised if \(p\) is not an integer.

Parameters:
  • input (Tensor) – The first input tensor. The dtype must be float16 or float32.

  • other (Tensor) – The second input tensor. The dtype must be float16 or float32.

  • p (int, optional) – The order of norm. p is greater than or equal to 0. Default: 2.

Returns:

Tensor with the same dtype as input, whose shape is \((1)\).

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If dtype of input or other is neither float16 nor float32.

  • TypeError – If p is not a non-negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1.0, 1.0], [2.0, 2.0]]])
>>> input_y = Tensor([[[3.0, 3.0], [3.0, 3.0]]])
>>> out = ops.dist(input_x, input_y)
>>> print(out.asnumpy())
3.1622777
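
The result above is the Euclidean norm of the element-wise difference, \(\sqrt{2^2 + 2^2 + 1^2 + 1^2} \approx 3.1623\). A minimal sketch with p=1, which sums the absolute differences instead:

>>> input_x = Tensor([[[1.0, 1.0], [2.0, 2.0]]])
>>> input_y = Tensor([[[3.0, 3.0], [3.0, 3.0]]])
>>> out = ops.dist(input_x, input_y, p=1)
>>> print(out.asnumpy())
6.0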
tinyms.primitives.div(input, other, *, rounding_mode=None)[source]

Divides the first input tensor by the second input tensor in floating-point type element-wise.

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = input_{i} / other_{i}\]
Parameters:
  • input (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • other (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Keyword Arguments:

rounding_mode (str, optional) –

Type of rounding applied to the result. Three modes are defined:

  • None: Default behavior, which is the same as true division in Python or true_divide in NumPy.

  • “floor”: Rounds the division of the inputs down, which is the same as floor division in Python or floor_divide in NumPy.

  • “trunc”: Rounds the division of the inputs towards zero, which is the same as C-style integer division.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and other are not one of the following: Tensor, Number, bool.

  • ValueError – If rounding_mode value is not None, “floor” or “trunc”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> output = ops.div(x, y)
>>> print(output)
[0.25 0.4 0.5]
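
A minimal sketch of the rounding_mode keyword; “floor” rounds toward negative infinity while “trunc” rounds toward zero, so they differ for negative quotients:

>>> x = Tensor(np.array([7.0, -7.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 2.0]), mindspore.float32)
>>> print(ops.div(x, y, rounding_mode='floor'))
[ 3. -4.]
>>> print(ops.div(x, y, rounding_mode='trunc'))
[ 3. -3.]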
tinyms.primitives.divide(input, other, *, rounding_mode=None)[source]

Alias for mindspore.ops.div() .

Supported Platforms:

Ascend GPU CPU
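
Examples

Since divide is an alias, it accepts the same arguments as ops.div, including the rounding_mode keyword; a minimal sketch:

>>> x = Tensor(np.array([1.0, 5.0]), mindspore.float32)
>>> output = ops.divide(x, 2.0)
>>> print(output)
[0.5 2.5]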

tinyms.primitives.dot(input, other)[source]

Computes a dot product between samples in two tensors.

Parameters:
  • input (Tensor) – First tensor in the dot op, with datatype float16 or float32. The rank must be greater than or equal to 2.

  • other (Tensor) – Second tensor in the dot op, with datatype float16 or float32. The rank must be greater than or equal to 2.

Returns:

Tensor, dot product of input and other.

Raises:
  • TypeError – If type of input and other are not the same.

  • TypeError – If dtype of input or other is not float16 or float32.

  • ValueError – If the rank of input or other is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[3. 3.]]
 [[3. 3.]]]
>>> print(output.shape)
(2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]]
  [[3. 3.]]]]
>>> print(output.shape)
(1, 2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]
   [3. 3.]]
  [[3. 3.]
   [3. 3.]]]]
>>> print(output.shape)
(1, 2, 2, 2)
>>> input = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]]
>>> print(output.shape)
(3, 2, 2, 1, 2)
tinyms.primitives.dropout(input, p=0.5, training=True, seed=None)[source]

During training, randomly zeroes some of the elements of the input tensor with probability p from a Bernoulli distribution. This reduces neuron correlation and helps avoid overfitting. The meaning of probability here is opposite to that in ops.Dropout and nn.Dropout.

Parameters:
  • input (Tensor) – The input of Dropout, a Tensor of any shape with data type of float16 or float32.

  • p (float, optional) – The dropping rate, between 0 and 1, e.g. p = 0.1, means dropping out 10% of input units. Default: 0.5.

  • training (bool) – Apply dropout if training is True. Default: True.

  • seed (int, optional) – The seed used as the entropy source for the random number engines that generate pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

  • output (Tensor) - Zeroed tensor, with the same shape and data type as input.

Raises:
  • TypeError – If p is not a float.

  • TypeError – If dtype of input is neither float16 nor float32.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(((20, 16), (50, 50)), mindspore.float32)
>>> output = ops.dropout(input, p=0.5)
>>> print(output.shape)
(2, 2)
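
When training is False, dropout is a no-op and the input is returned unchanged; a minimal sketch reusing the input above:

>>> output = ops.dropout(input, p=0.5, training=False)
>>> print(output)
[[20. 16.]
 [50. 50.]]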
tinyms.primitives.dropout1d(input, p=0.5, training=True)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 3-dimensional tensor with a shape of \(NCL\), the channel feature map refers to a 1-dimensional feature map with the shape of \(L\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 1D tensor input[i,j]. Each channel will be zeroed out independently on every forward call, based on the Bernoulli distribution with probability p.

The paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting introduced this technique, and it has been shown to effectively reduce overfitting and prevent neuron co-adaptation. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors .

dropout1d can improve the independence between channel feature maps.

Parameters:
  • input (Tensor) – A tensor with shape \((N, C, L)\) or \((C, L)\), where N is the batch size, C is the number of channels, L is the feature length. The data type must be int8, int16, int32, int64, float16, float32 or float64.

  • p (float, optional) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means an 80% chance of clearing. Default: 0.5.

  • training (bool, optional) – Apply dropout if training is True. Default: True.

Returns:

Tensor, output, with the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the data type of p is not float.

  • ValueError – If p is out of the range [0.0, 1.0].

  • ValueError – If input shape is not 2D or 3D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randn(4, 3), mindspore.float32)
>>> output = ops.dropout1d(input_x, 0.5)
>>> print(output.shape)
(4, 3)
tinyms.primitives.dropout2d(input, p=0.5, training=True)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 4-dimensional tensor with a shape of \(NCHW\), the channel feature map refers to a 2-dimensional feature map with the shape of \(HW\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 2D tensor input[i,j]. Each channel will be zeroed out independently on every forward call, based on the Bernoulli distribution with probability p. The paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting introduced this technique, and it has been shown to effectively reduce overfitting and prevent neuron co-adaptation. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors .

dropout2d can improve the independence between channel feature maps.

Parameters:
  • input (Tensor) – A 4D tensor with shape \((N, C, H, W)\), where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width. The data type must be int8, int16, int32, int64, float16, float32 or float64.

  • p (float) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means dropping out 80% of channels. Default: 0.5.

  • training (bool) – If training is True, applying dropout, otherwise, not applying. Default: True.

Returns:

Tensor, output, with the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not int8, int16, int32, int64, float16, float32 or float64.

  • TypeError – If the data type of p is not float.

  • ValueError – If p is out of the range [0.0, 1.0].

  • ValueError – If input shape is not 4D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output = ops.dropout2d(input, 0.5)
>>> print(output.shape)
(2, 1, 2, 3)
tinyms.primitives.dropout3d(input, p=0.5, training=True)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 5-dimensional tensor with a shape of \(NCDHW\), the channel feature map refers to a 3-dimensional feature map with a shape of \(DHW\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 3D tensor input[i,j]. Each channel will be zeroed out independently on every forward call, based on the Bernoulli distribution with probability p.

dropout3d can improve the independence between channel feature maps.

Parameters:
  • input (Tensor) – A 5D tensor with shape \((N, C, D, H, W)\), where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. The data type must be int8, int16, int32, int64, float16, float32 or float64.

  • p (float) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means dropping out 80% of channels. Default: 0.5.

  • training (bool) – If training is True, applying dropout, otherwise, not applying. Default: True.

Returns:

Tensor, output, with the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not int8, int16, int32, int64, float16, float32 or float64.

  • TypeError – If the data type of p is not float.

  • ValueError – If p is out of the range [0.0, 1.0].

  • ValueError – If input shape is not 5D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
>>> output = ops.dropout3d(input, 0.5)
>>> print(output.shape)
(2, 1, 2, 1, 2)
tinyms.primitives.dsplit(input, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors along the 3rd axis. It is equivalent to ops.tensor_split with \(axis=2\) .

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – See argument in mindspore.ops.tensor_split().

Returns:

A list of sub-tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(6).reshape((1, 2, 3)).astype('float32')
>>> output = ops.dsplit(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[1, 2, 1], dtype=Float32, value=[[[ 0.00000000e+00], [ 3.00000000e+00]]]),
 Tensor(shape=[1, 2, 1], dtype=Float32, value=[[[ 1.00000000e+00], [ 4.00000000e+00]]]),
 Tensor(shape=[1, 2, 1], dtype=Float32, value=[[[ 2.00000000e+00], [ 5.00000000e+00]]]))
tinyms.primitives.dstack(inputs)[source]

Stacks tensors along the third axis.

1-D tensors of shape \((N,)\) are reshaped to \((1,N,1)\), and 2-D tensors of shape \((M,N)\) are reshaped to \((M,N,1)\) before concatenation.

Parameters:

inputs (Union(List[Tensor], Tuple[Tensor])) – A sequence of tensors. The tensors must have the same shape along all but the third axis. 1-D or 2-D tensors must have the same shape.

Returns:

Stacked Tensor, will be at least 3-D. The output shape is similar to the output of numpy.dstack() function.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.arange(1, 7).reshape(2, 3))
>>> x2 = Tensor(np.arange(7, 13).reshape(2, 3))
>>> out = ops.dstack([x1, x2])
>>> print(out.asnumpy())
[[[ 1.  7.]
  [ 2.  8.]
  [ 3.  9.]]
 [[ 4. 10.]
  [ 5. 11.]
  [ 6. 12.]]]
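
For 1-D inputs, the \((N,) \rightarrow (1, N, 1)\) reshape described above means stacking two length-3 tensors yields shape \((1, 3, 2)\); a minimal sketch:

>>> a = Tensor(np.array([1, 2, 3]))
>>> b = Tensor(np.array([4, 5, 6]))
>>> out = ops.dstack([a, b])
>>> print(out.shape)
(1, 3, 2)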
tinyms.primitives.dyn_shape(input_x)[source]

Returns the shape of the input tensor.

Parameters:

input_x (Tensor) – The input Tensor.

Returns:

Tensor, the shape of input_x .

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> output = ops.dyn_shape(input_x)
>>> print(output)
[3 2 1]
tinyms.primitives.eig(A)[source]

Computes the eigenvalues and eigenvectors of a square matrix (or a batch of square matrices).

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

A (Tensor) – Square matrices of shape \((*, N, N)\), with float32, float64, complex64 or complex128 data type.

Returns:

  • eigen_values (Tensor) - Shape \((*, N)\). The eigenvalues of the corresponding matrix, which may not have an order.

  • eigen_vectors (Tensor) - Shape \((*, N, N)\). The columns of eigen_vectors are the normalized (unit length) eigenvectors of the corresponding eigenvalues.

Raises:
  • TypeError – If dtype of A is not one of: float64, float32, complex64 or complex128.

  • TypeError – If A is not a Tensor.

  • ValueError – If A is not square (or a batch of square matrices).

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 0.0], [0.0, 2.0]]), mindspore.float32)
>>> u, v = ops.eig(input_x)
>>> print(u)
[1.+0.j 2.+0.j]
>>> print(v)
[[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]]
tinyms.primitives.einsum(equation, *operands)[source]

According to the Einstein summation convention (Einsum), products of the input tensor elements are summed along the specified dimensions. You can use this operator to perform diagonal, reduce-sum, transpose, matmul, mul, and inner-product operations, among others.

Note

The sublist format is also supported. For example, ops.einsum(op1, sublist1, op2, sublist2, …, sublist_out). In this format, equation can be derived by the sublists which are made up of Python’s Ellipsis and list of integers in [0, 52). Each operand is followed by a sublist and an output sublist is at the end.

Parameters:
  • equation (str) – Notation based on the Einstein summation convention, representing the operation you want to perform. The value can contain only letters, commas, ellipses and arrows. Letters represent input tensor dimensions, commas separate tensors, an ellipsis indicates tensor dimensions that you do not care about, the left of the arrow indicates the input tensors, and the right of it indicates the desired output dimensions.

  • operands (Tensor) – Input tensors used for calculation. The dtypes of the tensors must be the same.

Returns:

Tensor, the shape of it can be obtained from the equation , and the dtype is the same as input tensors.

Raises:
  • TypeError – If equation is invalid, or the equation does not match the input tensor.

  • ValueError – If the number in sublist is not in [0, 52) in sublist format.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> equation = "i->"
>>> output = ops.einsum(equation, x)
>>> print(output)
[7.]
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> equation = "i,i->i"
>>> output = ops.einsum(equation, x, y)
>>> print(output)
[ 2. 8. 12.]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> y = Tensor(np.array([[2.0, 3.0], [1.0, 2.0], [4.0, 5.0]]), mindspore.float32)
>>> equation = "ij,jk->ik"
>>> output = ops.einsum(equation, x, y)
>>> print(output)
[[16. 22.]
 [37. 52.]]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->ji"
>>> output = ops.einsum(equation, x)
>>> print(output)
[[1. 4.]
 [2. 5.]
 [3. 6.]]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->j"
>>> output = ops.einsum(equation, x)
>>> print(output)
[5. 7. 9.]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "...->"
>>> output = ops.einsum(equation, x)
>>> print(output)
[21.]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 1.0]), mindspore.float32)
>>> equation = "j,i->ji"
>>> output = ops.einsum(equation, x, y)
>>> print(output)
[[ 2. 4. 1.]
 [ 4. 8. 2.]
 [ 6. 12. 3.]]
>>> x = mindspore.Tensor([1, 2, 3, 4], mindspore.float32)
>>> y = mindspore.Tensor([1, 2], mindspore.float32)
>>> output = ops.einsum(x, [..., 1], y, [..., 2], [..., 1, 2])
>>> print(output)
[[1. 2.]
 [2. 4.]
 [3. 6.]
 [4. 8.]]
tinyms.primitives.elu(input_x, alpha=1.0)[source]

Exponential Linear Unit activation function.

Applies the exponential linear unit function element-wise. The activation function is defined as:

\[\begin{split}\text{ELU}(x)= \left\{ \begin{array}{align} \alpha(e^{x} - 1) & \text{if } x \le 0\\ x & \text{if } x \gt 0\\ \end{array}\right.\end{split}\]

Where \(x\) is the element of the input Tensor input_x and \(\alpha\) is the alpha parameter, which determines the smoothness of ELU.

Parameters:
  • input_x (Tensor) – The input of ELU is a Tensor of any dimension with data type of float16 or float32.

  • alpha (float, optional) – The alpha value of ELU, the data type is float. Only 1.0 is currently supported. Default: 1.0.

Returns:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If alpha is not a float.

  • TypeError – If dtype of input_x is neither float16 nor float32.

  • ValueError – If alpha is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.elu(x)
>>> print(output)
[[-0.63212055  4.         -0.99966455]
 [ 2.         -0.99326205  9.        ]]
tinyms.primitives.equal(input, other)[source]

Computes the equivalence between two tensors element-wise.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i} = y_{i} \\ & \text{False, if } x_{i} \ne y_{i} \end{cases}\end{split}\]

Note

  • input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, the shapes of them could be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, Number]) – The first input is a number or a tensor whose data type is number.

  • other (Union[Tensor, Number]) – The second input is a number when the first input is a tensor or a tensor whose data type is number. The data type is the same as the first input.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: The shape of two inputs are different
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> output = ops.equal(x, 2.0)
>>> print(output)
[False True False]
>>> # case 2: The shape of two inputs are the same
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> output = ops.equal(x, y)
>>> print(output)
[ True  True False]
tinyms.primitives.erf(input)[source]

Computes the Gauss error function of input element-wise.

\[erf(x)=\frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Parameters:

input (Tensor) – The input tensor of the Gaussian error function. Its data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> output = ops.erf(x)
>>> print(output)
[-0.8427168   0.          0.8427168   0.99530876  0.99997765]
tinyms.primitives.erfc(input)[source]

Computes the complementary error function of input element-wise.

\[erfc(x) = 1 - \frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Parameters:

input (Tensor) – The input tensor with a dtype of float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> output = ops.erfc(x)
>>> print(output)
[1.8427168e+00 1.0000000e+00 1.5728319e-01 4.6912432e-03 2.2351742e-05]
tinyms.primitives.erfinv(input)[source]

Returns the result of the inverse error function with input, which is defined in the range (-1, 1) as:

\[erfinv(erf(x)) = x\]

where \(x\) is the input.

Parameters:

input (Tensor) – The input tensor to compute to, with data type float32, float16 or float64.

Returns:

Tensor, has the same shape and dtype as input.

Raises:

TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)
>>> output = ops.erfinv(x)
>>> print(output)
[ 0.          0.47695306 -1.1630805 ]
tinyms.primitives.exp(input)[source]

Returns exponential of a tensor element-wise.

\[out_i = e^{x_i}\]
Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.exp(x)
>>> print(output)
[ 2.718282  7.389056 54.598152]
tinyms.primitives.exp2(input)[source]

Computes base two exponential of Tensor input element-wise.

\[out_i = 2^{input_i}\]
Parameters:

input (Tensor) – Input tensor.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 3, 4]), mindspore.float32)
>>> output = ops.exp2(x)
>>> print(output)
[ 4.  8. 16.]
tinyms.primitives.expand(input_x, size)[source]

Returns a new tensor where each singleton dimension of the input is expanded to the larger size given by size.

Note

  • If the size for a dimension is -1, it means no change for the size of that dimension.

  • When a Tensor is expanded to a larger number of dimensions, the new ones will be appended at the front, and for the new dimensions, the size can not be -1.

Parameters:
  • input_x (Tensor) – A Tensor to be expanded. The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • size (Tensor) – The expanded shape of input_x.

Returns:

y (Tensor) - Tensor after expansion whose shape is size.

Raises:
  • TypeError – If input_x or size is not Tensor.

  • TypeError – If the type of size is not one of the following dtype: int16, int32, int64.

  • ValueError – If the size of size is less than the size of input_x.shape.

  • ValueError – If size is not a 1-D tensor.

  • ValueError – If the expanded size is not equal to the existing shape of input_x at a dimension that is not 1.

  • ValueError – If the expanded size < 0 and it is in a leading position, corresponding to a non-existing dimension in input_x.

  • ValueError – If the number of elements of output is more than 1000000.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[2], [3], [4]]), mindspore.float32)
>>> size = Tensor(np.array([3,4]), mindspore.int32)
>>> y = ops.expand(input_x, size)
>>> print(y)
[[2. 2. 2. 2.]
 [3. 3. 3. 3.]
 [4. 4. 4. 4.]]
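
A -1 entry in size keeps the corresponding dimension unchanged, as described in the Note; a minimal sketch:

>>> input_x = Tensor(np.array([[2], [3], [4]]), mindspore.float32)
>>> size = Tensor(np.array([-1, 4]), mindspore.int32)
>>> y = ops.expand(input_x, size)
>>> print(y.shape)
(3, 4)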
tinyms.primitives.expand_dims(input_x, axis)[source]

Adds an additional dimension to input_x at the given axis.

Note

If the specified axis is a negative number, the index is counted backward from the end and starts at 1.

Parameters:
  • input_x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (int) – Specifies the dimension index at which to expand the shape of input_x. The value of axis must be in the range [-input_x.ndim-1, input_x.ndim]. Only constant value is allowed.

Returns:

Tensor, the shape of tensor is \((1, x_1, x_2, ..., x_R)\) if the value of axis is 0. It has the same data type as input_x.

Raises:
  • TypeError – If axis is not an int.

  • ValueError – If axis is not in the valid range \([-a.ndim-1, a.ndim]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.expand_dims(input_tensor, 0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
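
A negative axis counts backward from the end, so -1 inserts a new trailing dimension; a minimal sketch on the same input:

>>> output = ops.expand_dims(input_tensor, -1)
>>> print(output.shape)
(2, 2, 1)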
tinyms.primitives.expm1(input)[source]

Returns exponential then minus 1 of a tensor element-wise.

\[out_i = e^{x_i} - 1\]
Parameters:

input (Tensor) – The input tensor with a dtype of float16 or float32.

Returns:

Tensor, has the same shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.expm1(x)
>>> print(output)
[ 0.        1.718282  6.389056 53.598152]
tinyms.primitives.eye(n, m=None, dtype=None)[source]

Creates a tensor with ones on the diagonal and zeros in the rest.

Note

Combines ReverseV2 operator to get an anti-diagonal Tensor, but ReverseV2 only supports Ascend and GPU platforms currently.

Parameters:
  • n (int) – The number of rows of returned tensor. Constant value only.

  • m (int) – The number of columns of returned tensor. Constant value only. Default: None, the number of columns is the same as n.

  • dtype (mindspore.dtype) – MindSpore’s dtype, the data type of the returned tensor. The data type can be bool or Number. Default: None, the data type of the returned tensor is mindspore.float32.

Returns:

Tensor, a tensor with ones on the diagonal and the rest of elements are zero. The shape of output depends on the user’s Inputs n and m. And the data type depends on Inputs dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.eye(2, 2, mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = ops.eye(1, 2, mindspore.float64)
>>> print(output)
[[1. 0.]]
>>> print(output.dtype)
Float64
>>> output = ops.eye(2, dtype=mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = ops.eye(2)
>>> print(output)
[[1. 0.]
 [0. 1.]]
>>> print(output.dtype)
Float32
tinyms.primitives.fast_gelu(x)[source]

Fast Gaussian Error Linear Units activation function.

FastGeLU is defined as follows:

\[\text{output} = \frac {x} {1 + \exp(-1.702 * \left| x \right|)} * \exp(0.851 * (x - \left| x \right|)),\]

where \(x\) is the element of the input.

Parameters:

x (Tensor) – Input to compute the FastGeLU with data type of float16 or float32.

Returns:

Tensor, with the same type and shape as x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.fast_gelu(x)
>>> print(output)
[[-1.5418735e-01  3.9921875e+00 -9.7473649e-06]
 [ 1.9375000e+00 -1.0052517e-03  8.9824219e+00]]
tinyms.primitives.fill(type, shape, value)[source]

Create a Tensor of the specified shape and fill it with the specified value.

Parameters:
  • type (mindspore.dtype) –

    The specified type of output tensor. The data type only supports bool_ and number .

  • shape (Union(Tensor, tuple[int])) – The specified shape of output tensor.

  • value (Union(Tensor, number.Number, bool)) – Value to fill the returned tensor.

Returns:

Tensor.

Raises:

TypeError – If shape is not a tuple or a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.fill(mindspore.float32, (2, 2), 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ops.fill(mindspore.float32, (3, 3), 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
tinyms.primitives.fills(x, value)[source]

fills is deprecated, please use ops.fill instead.

tinyms.primitives.flatten(input, order='C', *, start_dim=1, end_dim=-1)[source]

Flatten a tensor along dimensions from start_dim to end_dim.

Parameters:
  • input (Tensor) – The input Tensor.

  • order (str, optional) – Only ‘C’ and ‘F’ are supported. ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. Default: ‘C’.

Keyword Arguments:
  • start_dim (int, optional) – The first dimension to flatten. Default: 1.

  • end_dim (int, optional) – The last dimension to flatten. Default: -1.

Returns:

Tensor. If no dimensions are flattened, returns the original input, otherwise return the flattened Tensor. If input is a 0-dimensional Tensor, a 1-dimensional Tensor will be returned.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If order is not string type.

  • ValueError – If order is string type, but not ‘C’ or ‘F’.

  • TypeError – If start_dim or end_dim is not int.

  • ValueError – If start_dim is greater than end_dim after canonicalized.

  • ValueError – If start_dim or end_dim is not in range of [-input.dim, input.dim-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> output = ops.flatten(input_x)
>>> print(output.shape)
(1, 24)
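
The start_dim and end_dim keywords limit flattening to a sub-range of dimensions; a minimal sketch on the same \((1, 2, 3, 4)\) input:

>>> output = ops.flatten(input_x, start_dim=2)
>>> print(output.shape)
(1, 2, 12)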
tinyms.primitives.flip(input, dims)[source]

Reverses the order of elements in a tensor along the given axis.

The shape of the tensor is preserved, but the elements are reordered.

Parameters:
  • input (Tensor) – Input tensor.

  • dims (Union[list[int], tuple[int]]) – Axis or axes along which to flip over. Flipping is performed on all of the axes specified in the tuple. If dims contains negative integers, they count from the last axis to the first.

Returns:

Tensor, with the entries of dims reversed.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.arange(1, 9).reshape((2, 2, 2)))
>>> output = ops.flip(input, (0, 2))
>>> print(output)
[[[6 5]
  [8 7]]
 [[2 1]
  [4 3]]]
tinyms.primitives.fliplr(input)[source]

Flips the elements of each row in the left/right direction, while preserving the columns of the input tensor.

Parameters:

input (Tensor) – Input tensor.

Returns:

Tensor after the flip.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.arange(1, 9).reshape((2, 2, 2)))
>>> output = ops.fliplr(input)
>>> print(output)
[[[3 4]
  [1 2]]
 [[7 8]
  [5 6]]]
tinyms.primitives.flipud(input)[source]

Flips the elements of each column in the up/down direction, while preserving the rows of the input tensor.

Parameters:

input (Tensor) – Input array.

Returns:

Tensor after the flip.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.arange(1, 9).reshape((2, 2, 2)))
>>> output = ops.flipud(input)
>>> print(output)
[[[5 6]
  [7 8]]
 [[1 2]
  [3 4]]]
tinyms.primitives.float_power(input, exponent)[source]

Computes input to the power of the exponent. For the real number type, cast input and exponent to mindspore.float64 to calculate. Currently, complex type calculation is not supported.

Parameters:
  • input (Union[Tensor, Number]) – The first input is a tensor or a number.

  • exponent (Union[Tensor, Number]) – The second input, if the first input is Tensor, the second input can be Number or Tensor. Otherwise, it must be a Tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting. For the complex type, the return value type is the same as the input type. For the real number type, the return value type is mindspore.float64.

Raises:
  • TypeError – If neither input nor exponent is a Tensor.

  • TypeError – If the data type of input or exponent is neither Tensor nor Number.

Supported Platforms:

GPU CPU

Examples

>>> input = Tensor(np.array([-1.5, 0., 2.]))
>>> output = ops.float_power(input, 2)
>>> print(output)
[2.25 0.   4.  ]
tinyms.primitives.floor(input)[source]

Rounds a tensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Parameters:

input (Tensor) – The input tensor, its data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> output = ops.floor(x)
>>> print(output)
[ 1.  2. -2.]
tinyms.primitives.floor_div(x, y)[source]

Divides the first input tensor by the second input tensor element-wise and round down to the closest integer.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = \text{floor}( \frac{x_i}{y_i})\]

where the \(floor\) indicates the Floor operator, for more details, please refer to the mindspore.ops.Floor operator.

Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.floor_div(x, y)
>>> print(output)
[ 0  1 -1]
tinyms.primitives.floor_mod(x, y)[source]

Computes the remainder of division element-wise. It’s a flooring divide. E.g. \(floor(x / y) * y + mod(x, y) = x\).

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} - \text{floor}(x_{i} / y_{i}) \cdot y_{i}\]

where the \(floor\) indicates the Floor operator, for more details, please refer to the mindspore.ops.Floor operator.

Warning

  • Data of input y should not be 0, or the maximum value of its dtype will be returned.

  • When the number of elements of the input exceeds 2048, the operator cannot guarantee an accuracy within two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as \((D_1, D_2, ..., D_n)\), then \(D_1 \cdot D_2 \cdot ... \cdot D_n \le 1000000\) and \(n \le 8\).

Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision of the two inputs.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.floor_mod(x, y)
>>> print(output)
[2 1 2]
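
The flooring identity \(floor(x / y) * y + mod(x, y) = x\) stated above can be checked directly against ops.floor_div; a minimal sketch:

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> d = ops.floor_div(x, y)
>>> m = ops.floor_mod(x, y)
>>> print(d * y + m)
[ 2  4 -1]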
tinyms.primitives.fmax(input, other)[source]

Computes the maximum of input tensors element-wise.

\[output_i = \max(input_i, other_i)\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • Shapes of input and other should be able to broadcast.

  • If one of the elements to be compared is NaN, the other element is returned.

Parameters:
  • input (Tensor) – The first tensor. The supported dtypes are: float16, float32, float64, int32, int64.

  • other (Tensor) – The second tensor. The supported dtypes are: float16, float32, float64, int32, int64.

Returns:

A Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input or other is not Tensor.

  • TypeError – If dtype of input or other is not one of: float16, float32, float64, int32, int64.

  • ValueError – If the shape of input and other can not broadcast.

Supported Platforms:

CPU

Examples

>>> x1 = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> x2 = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.fmax(x1, x2)
>>> print(output)
[4. 5. 6.]
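
A minimal sketch of the NaN behavior from the Note: when exactly one of the compared elements is NaN, the other element is returned:

>>> x1 = Tensor(np.array([1.0, np.nan]), mindspore.float32)
>>> x2 = Tensor(np.array([np.nan, 2.0]), mindspore.float32)
>>> output = ops.fmax(x1, x2)
>>> print(output)
[1. 2.]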
tinyms.primitives.fmin(input, other)[source]

Computes the minimum of input tensors element-wise.

\[output_i = min(input_i, other_i)\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • Shapes of input and other should be able to broadcast.

  • If one of the elements to be compared is NaN, the other element is returned.

Parameters:
  • input (Tensor) – The first tensor. The supported dtypes are: float16, float32, float64, int32, int64.

  • other (Tensor) – The second tensor. The supported dtypes are: float16, float32, float64, int32, int64.

Returns:

A Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input or other is not Tensor.

  • TypeError – If dtype of input or other is not one of: float16, float32, float64, int32, int64.

  • ValueError – If the shape of input and other can not broadcast.

Supported Platforms:

Examples

>>> input = Tensor(np.array([1.0, 5.0, 3.0]), mstype.float32)
>>> other = Tensor(np.array([4.0, 2.0, 6.0]), mstype.float32)
>>> output = ops.fmin(input, other)
>>> print(output)
[1. 2. 3.]
tinyms.primitives.fmod(input, other)[source]

Computes the floating-point remainder of the division operation input/other.

\[out = input - n * other\]

Where \(n\) is \(input/other\) with its fractional part truncated. The returned value has the same sign as input and is less than other in magnitude.

Parameters:
  • input (Union[Tensor, Number]) – the dividend.

  • other (Union[Tensor, Number]) – the divisor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-4., -3.5, 0, 3.5, 4]), mindspore.float32)
>>> output = ops.fmod(input, 2.5)
>>> print(output)
[-1.5 -1.   0.   1.   1.5]
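
other may also be a tensor broadcastable against input; the result keeps the sign of input, as stated above. A minimal sketch:

>>> input = Tensor(np.array([7., -7.]), mindspore.float32)
>>> other = Tensor(np.array([2., 2.]), mindspore.float32)
>>> output = ops.fmod(input, other)
>>> print(output)
[ 1. -1.]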
tinyms.primitives.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)[source]

Combines an array of sliding local blocks into a large containing tensor.

Warning

  • Currently, only 4-D output tensors (batched image-like tensors) are supported.

Parameters:
  • input (Tensor) – 4-D Tensor with data type float16 or float32.

  • output_size (Tensor) – 1D tensor with 2 elements of data type int.

  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two ints for height and width. If type is int, height equals width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]], optional) – The size of the dilation, should be two ints for height and width. If type is int, height equals width. Default: 1.

  • padding (Union[int, tuple[int], list[int]], optional) – The size of the padding, should be two ints for height and width. If type is int, height equals width. Default: 0.

  • stride (Union[int, tuple[int], list[int]], optional) – The size of the stride, should be two ints for height and width. If type is int, height equals width. Default: 1.

Returns:

A Tensor, with same type as input , format of the Tensor is (N, C, H, W).

Raises:
  • TypeError – If kernel_size, dilation, padding, stride data type is not int, tuple or list.

  • ValueError – If kernel_size, dilation, stride value is not greater than zero or elements number more than 2.

  • ValueError – If padding value is less than zero or elements number more than 2.

  • ValueError – If input.shape[2] != kernel_size[0] * kernel_size[1].

  • ValueError – If input.shape[3] does not match the calculated number of sliding blocks.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)
>>> output_size = Tensor(input_data=[8, 8], dtype=mstype.int32)
>>> output = ops.fold(x, output_size, [2, 2], [2, 2], [2, 2], [2, 2])
>>> print(output.shape)
(16, 16, 8, 8)
tinyms.primitives.frac(x)[source]

Calculates the fractional part of each element in the input.

Parameters:

x (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and type as input.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.common import dtype as mstype
>>> import mindspore.ops as ops
>>> x = Tensor([2, 4.2, -2.5], mstype.float16)
>>> output = ops.frac(x)
>>> print(output)
[ 0.      0.1992 -0.5   ]
tinyms.primitives.fractional_max_pool2d(input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

Applies the 2D FractionalMaxPool operation over input. The output Tensor shape can be determined by either output_size or output_ratio, and the step size is determined by _random_samples. output_size and output_ratio cannot be used at the same time.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\), with float16, float32, float64, int32, int64 data type.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. The value must be a positive integer.

  • output_size (Union[int, tuple[int]], optional) – The shape of the target output_size, is an int number that represents height and width, or a tuple of two int numbers that represent height and width respectively. The value must be a positive integer. Default: None.

  • output_ratio (Union[float, tuple[float]], optional) – The ratio of target output shape to input shape. Specifying the size of the output tensor by using a ratio of the input size. Data type: float16, float32, double, and value is between (0, 1). Default: None.

  • return_indices (bool, optional) – Whether to return the indices of max value. Default: False.

  • _random_samples (Tensor, optional) – The random step of FractionalMaxPool2d, which is a 3D tensor. Tensor of data type: float16, float32, double, and value is between (0, 1). Supported shape \((N, C, 2)\) or \((1, C, 2)\). Default: None.

Returns:

  • y (Tensor) - Has the same type as the input. Has the shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\) , where \((H_{out}, W_{out})\) = output_size or \((H_{out}, W_{out})\) = output_ratio * \((H_{in}, W_{in})\).

  • argmax (Tensor) - The indices along with the outputs, which is a Tensor, with the same shape as the y and int64 data type. It will output only when return_indices is True.

Raises:
  • TypeError – If data type of input is not one of the following: float16, float32, float64, int32, int64.

  • TypeError – If data type of _random_samples is not one of the following: float16, float32, float64.

  • ValueError – If kernel_size is not a number and kernel_size is not a tuple of length 2.

  • ValueError – If output_size is not a number and output_size is not a tuple of length 2.

  • ValueError – If the sum of kernel_size , output_size and -1 is larger than the corresponding dimension of input.

  • ValueError – If the dimension of _random_samples is not 3.

  • ValueError – if output_size and output_ratio are None at the same time.

  • ValueError – If the first dimension size of input and _random_samples is not equal.

  • ValueError – If the second dimension size of input and _random_samples is not equal.

  • ValueError – If the third dimension size of _random_samples is not 2.

Supported Platforms:

CPU

Examples

>>> input = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.9581, 0.4760, 0.9014, 0.8522, 0.3664,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([1, 1, 5, 5]), mstype.float32)
>>> _random_samples = Tensor(np.array([[[0.8, 0.8]]]), mstype.float32)
>>> y, argmax = ops.fractional_max_pool2d(input, kernel_size=2, output_size=(2, 2),
...                                       _random_samples=_random_samples, return_indices=True)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
>>> y, argmax = ops.fractional_max_pool2d(input, kernel_size=2, output_ratio=(0.5, 0.5),
...                                       _random_samples=_random_samples, return_indices=True)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
tinyms.primitives.fractional_max_pool3d(input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

Applies the 3D FractionalMaxPool operation over input. The output Tensor shape can be determined by either output_size or output_ratio, and the step size is determined by _random_samples. output_size and output_ratio cannot be used at the same time.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

The input and output data format can be “NCDHW”, where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width.

Parameters:
  • input (Tensor) – The input of FractionalMaxPool3d, which is a 4D or 5D tensor. Tensor of data type: float16, float32, double, int32, int64. Supported shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer.

  • output_size (Union[int, tuple[int]], optional) – The Shape of the target output_size, is an int number that represents depth, height and width, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer. Default: None.

  • output_ratio (Union[float, tuple[float]], optional) – The ratio of target output shape to input shape. Specifying the size of the output tensor by using a ratio of the input size. Data type: float16, float32, double, and value is between (0, 1). Default: None.

  • return_indices (bool, optional) – Whether to return the indices of max value. Default: False.

  • _random_samples (Tensor, optional) – The random step of FractionalMaxPool3d, which is a 3D tensor. Tensor of data type: float16, float32, double, and value is between (0, 1). Supported shape \((N, C, 3)\) or \((1, C, 3)\). Default: None.

Returns:

  • y (Tensor) - A tensor, the output of FractionalMaxPool3d. Has the same data type with input. Has the shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\) , where \((D_{out}, H_{out}, W_{out})\) = output_size or \((D_{out}, H_{out}, W_{out})\) = output_ratio * \((D_{in}, H_{in}, W_{in})\) .

  • argmax (Tensor) - The indices along with the outputs, which is a Tensor, with the same shape as the y and int32 data type. It will output only when return_indices is True.

Raises:
  • TypeError – If input is not a 4D or 5D tensor.

  • TypeError – If _random_samples is not a 3D tensor.

  • TypeError – If data type of input is not float16, float32, double, int32, int64.

  • TypeError – If dtype of _random_samples is not float16, float32, double.

  • TypeError – If dtype of argmax is not int32, int64.

  • ValueError – If output_size is a tuple and if output_size length is not 3.

  • ValueError – If kernel_size is a tuple and if kernel_size length is not 3.

  • ValueError – If numbers in output_size or kernel_size is not positive.

  • ValueError – if output_size and output_ratio are None at the same time.

  • ValueError – If the first dimension size of input and _random_samples is not equal.

  • ValueError – If the second dimension size of input and _random_samples is not equal.

  • ValueError – If the third dimension size of _random_samples is not 3.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...            .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> _random_samples = Tensor(np.array([0.7, 0.7, 0.7]).reshape([1, 1, 3]), mstype.float32)
>>> output, argmax = ops.fractional_max_pool3d(x, kernel_size=(1.0, 1.0, 1.0), output_size=(1, 1, 3),
...                                            _random_samples=_random_samples, return_indices=True)
>>> print(output)
[[[[[13. 14. 16.]]]]]
>>> print(argmax)
[[[[[12 13 15]]]]]
>>> output, argmax = ops.fractional_max_pool3d(x, kernel_size=(1.0, 1.0, 1.0), output_ratio=(0.5, 0.5, 0.5),
...                                            _random_samples=_random_samples, return_indices=True)
>>> print(output)
[[[[[13. 16.]]]]]
>>> print(argmax)
[[[[[12 15]]]]]
tinyms.primitives.full(size, fill_value, *, dtype=None)[source]

Create a Tensor of the specified shape and fill it with the specified value.

Parameters:
  • size (Union(tuple[int], list[int])) – The specified shape of output tensor.

  • fill_value (number.Number) – Value to fill the returned tensor. Complex numbers are not supported for now.

Keyword Arguments:

dtype (mindspore.dtype) – The specified type of output tensor. bool_ and number are supported, for details, please refer to mindspore.dtype . Default: None.

Returns:

Tensor.

Raises:
  • TypeError – If size is not a tuple or list.

  • ValueError – The element in size is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.full((2, 2), 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ops.full((3, 3), 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
tinyms.primitives.full_like(input, fill_value, *, dtype=None)[source]

Return a Tensor of the same shape as input and filled with fill_value.

Parameters:
  • input (Tensor) – input Tensor and the output Tensor have the same shape as input.

  • fill_value (Number) – Value to fill the returned Tensor. Complex numbers are not supported for now.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified type of output tensor. bool_ and number are supported, for details, please refer to mindspore.dtype . Default: None.

Returns:

Tensor.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([[0, 1], [2, 1]], dtype=mindspore.int32)
>>> output = ops.full_like(input, 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> input = Tensor([[0, 1, 1], [2, 1, 2], [1, 3, 4]], dtype=mindspore.int32)
>>> output = ops.full_like(input, 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
tinyms.primitives.gamma(shape, alpha, beta, seed=None)[source]

Generates random numbers according to the Gamma random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated.

  • alpha (Tensor) – The \(\alpha\) distribution parameter. It should be greater than 0 with float32 data type.

  • beta (Tensor) – The \(\beta\) distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of alpha and beta. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If neither alpha nor beta is a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha and beta is not float32.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: alpha_shape is (2, 2)
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> # case 2: alpha_shape is (2, 3), so shape is (3, 1, 3)
>>> shape = (3, 1, 3)
>>> alpha = Tensor(np.array([[1, 3, 4], [2, 5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> # case 3: beta_shape is (1, 2), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0, 2]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(output)
[[[ 2.2132034  5.8855834]
  [ 3.3981476  7.5805717]]
 [[ 3.3981476  7.5805717]
  [ 3.7190282 19.941492 ]]
 [[ 2.9512358  2.5969937]
  [ 3.786061   5.160872 ]]]
>>> # case 4: beta_shape is (2, 1), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([[1.0], [2.0]]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(output)
[[[ 5.6085486  7.8280783]
  [15.97684   16.116285 ]]
 [[ 1.8347423  1.713663 ]
  [ 3.2434065 15.667398 ]]
 [[ 4.2922077  7.3365674]
  [ 5.3876944 13.159832 ]]]
tinyms.primitives.gather(input_params, input_indices, axis, batch_dims=0)[source]

Returns the slice of the input tensor corresponding to the elements of input_indices on the specified axis.

The following figure shows the calculation process of Gather commonly:

[Figure: Gather.png — illustration of the Gather calculation process]

where params represents the input input_params, and indices represents the index to be sliced input_indices.

Note

  1. The value of input_indices must be in the range of [0, input_params.shape[axis]); out of this range, the result is undefined.

  2. The data type of input_params cannot be bool_ on Ascend platform currently.

Parameters:
  • input_params (Tensor) – The original Tensor. The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_indices (Tensor) – Index tensor to be sliced, the shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. The data type can be int32 or int64.

  • axis (int) – Specifies the dimension index to gather indices. It must be greater than or equal to batch_dims.

  • batch_dims (int) – Specifies the number of batch dimensions. It must be less than or equal to the rank of input_indices. Default: 0.

Returns:

Tensor, the shape of tensor is \(input\_params.shape[:axis] + input\_indices.shape[batch\_dims:] + input\_params.shape[axis + 1:]\).

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If input_params is not a tensor.

  • TypeError – If input_indices is not a tensor of type int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: input_indices is a Tensor with shape (5, ).
>>> input_params = Tensor(np.array([1, 2, 3, 4, 5, 6, 7]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 4, 2, 6]), mindspore.int32)
>>> axis = 0
>>> output = ops.gather(input_params, input_indices, axis)
>>> print(output)
[1. 3. 5. 3. 7.]
>>> # case2: input_indices is a Tensor with shape (2, 2). When the input_params has one dimension,
>>> # the output shape is equal to the input_indices shape.
>>> input_indices = Tensor(np.array([[0, 2], [2, 6]]), mindspore.int32)
>>> axis = 0
>>> output = ops.gather(input_params, input_indices, axis)
>>> print(output)
[[1. 3.]
 [3. 7.]]
>>> # case3: input_indices is a Tensor with shape (2, ) and
>>> # input_params is a Tensor with shape (3, 4) and axis is 0.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> axis = 0
>>> output = ops.gather(input_params, input_indices, axis)
>>> print(output)
[[ 1.  2.  3.  4.]
 [ 9. 10. 11. 12.]]
>>> # case4: input_indices is a Tensor with shape (2, ) and
>>> # input_params is a Tensor with shape (3, 4) and axis is 1, batch_dims is 1.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 1]), mindspore.int32)
>>> axis = 1
>>> batch_dims = 1
>>> output = ops.gather(input_params, input_indices, axis, batch_dims)
>>> print(output)
[ 1.  7. 10.]
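As a quick check of the output-shape formula above, here is a hedged sketch with made-up shapes (a zero-filled input chosen only to illustrate the shape rule input_params.shape[:axis] + input_indices.shape[batch_dims:] + input_params.shape[axis + 1:]):

>>> # hypothetical shapes, used only to verify the shape formula
>>> input_params = Tensor(np.zeros((2, 3, 4)), mindspore.float32)
>>> input_indices = Tensor(np.array([[0, 2], [1, 1]]), mindspore.int32)
>>> output = ops.gather(input_params, input_indices, axis=1)
>>> print(output.shape)
(2, 2, 2, 4)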
tinyms.primitives.gather_d(x, dim, index)[source]

Gathers elements along an axis specified by dim.

Refer to mindspore.ops.gather_elements() for more detail.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = ops.gather_d(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
tinyms.primitives.gather_elements(input, dim, index)[source]

Gathers elements along an axis specified by dim.

For a 3-D tensor, the output is:

output[i][j][k] = x[index[i][j][k]][j][k]  # if dim == 0

output[i][j][k] = x[i][index[i][j][k]][k]  # if dim == 1

output[i][j][k] = x[i][j][index[i][j][k]]  # if dim == 2

input and index have the same number of dimensions, and all dimensions except dim have the same size. If dim = i and input is an n-D tensor with shape \((z_0, z_1, ..., z_i, ..., z_{n-1})\), then index must be an n-D tensor with shape \((z_0, z_1, ..., y, ..., z_{n-1})\) where \(y >= 1\), and the output will have the same shape as index.

Parameters:
  • input (Tensor) – The input tensor.

  • dim (int) – The axis along which to index. It must be int32 or int64. The value range is [-input.ndim, input.ndim).

  • index (Tensor) – The indices of elements to gather. It can be one of the following data types: int32, int64. The value range of each index element is [-input.shape[dim], input.shape[dim]).

Returns:

Tensor, has the same shape as the index tensor, i.e. \((z_0, z_1, ..., y, ..., z_{n-1})\), and the same data type as input.

Raises:
  • TypeError – If dtype of dim or index is neither int32 nor int64.

  • ValueError – If length of shape of input is not equal to length of shape of index.

  • ValueError – If the size of the dimension except dim is not equal between input and index.

  • ValueError – If the value of dim is not in the expected range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = mindspore.ops.gather_elements(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
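For comparison, a hedged sketch of the dim=0 case, reusing x and index from above; by the first formula, output[i][j] = x[index[i][j]][j]:

>>> output = mindspore.ops.gather_elements(x, 0, index)
>>> print(output)
[[1 2]
 [3 2]]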
tinyms.primitives.gather_nd(input_x, indices)[source]

Gathers slices from a tensor by indices.

Using given indices to gather slices from a tensor with a specified shape.

indices is a K-dimensional integer tensor. Treating it as a (K-1)-dimensional tensor of index vectors, each element of it defines a slice of input_x:

\[output[(i_0, ..., i_{K-2})] = input\_x[indices[(i_0, ..., i_{K-2})]]\]

The last dimension of indices cannot exceed the rank of input_x: \(indices.shape[-1] <= input\_x.rank\).

Parameters:
  • input_x (Tensor) – The target tensor to gather values.

  • indices (Tensor) – The index tensor, with int32 or int64 data type.

Returns:

Tensor, has the same type as input_x and the shape is \(indices\_shape[:-1] + input\_x\_shape[indices\_shape[-1]:]\).

Raises:

ValueError – If length of shape of input_x is less than the last dimension of indices.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> output = ops.gather_nd(input_x, indices)
>>> print(output)
[-0.1  0.5]
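When indices.shape[-1] is less than the rank of input_x, whole slices are gathered instead of single elements. A minimal sketch reusing input_x from above (exact print formatting may differ):

>>> indices = Tensor(np.array([[1], [0]]), mindspore.int32)
>>> output = ops.gather_nd(input_x, indices)
>>> print(output)
[[ 0.4  0.5 -3.2]
 [-0.1  0.3  3.6]]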
tinyms.primitives.gaussian_nll_loss(x, target, var, full=False, eps=1e-06, reduction='mean')[source]

Gaussian negative log likelihood loss.

The target values are considered to be samples from a Gaussian distribution, where the expectation and variance are predicted by a neural network. For targets modeled on a Gaussian distribution, with the expectations recorded in x and the variances var (all elements positive), the calculated loss is:

\[\text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var}, \ \text{eps}\right)\right) + \frac{\left(\text{x} - \text{target}\right)^2} {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}\]

where \(eps\) is used for the stability of \(log\). When \(full=True\), a constant will be added to the loss. If the shapes of \(var\) and \(x\) are not the same (e.g. due to a homoscedastic assumption), their shapes must allow correct broadcasting.

Parameters:
  • x (Tensor) – Tensor of shape \((N, *)\) or \((*)\) where \(*\) means any number of additional dimensions.

  • target (Tensor) – Tensor of shape \((N, *)\) or \((*)\), same shape as the x, or same shape as the x but with one dimension equal to 1 (to allow broadcasting).

  • var (Tensor) – Tensor of shape \((N, *)\) or \((*)\), same shape as x, or same shape as the x but with one dimension equal to 1, or same shape as the x but with one fewer dimension (to allow for broadcasting).

  • full (bool, optional) – Include the constant term in the loss calculation. When \(full=True\), the constant term will be \(const = 0.5*log(2\pi)\). Default: False.

  • eps (float, optional) – Used to improve the stability of the log function; must be greater than 0. Default: 1e-6.

  • reduction (str, optional) – Apply specific reduction method to the output: “none”, “mean”, or “sum”. Default: “mean”.

Returns:

Tensor or Tensor scalar, the computed loss depending on \(reduction\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> import mindspore.common.dtype as mstype
>>> arr1 = np.arange(8).reshape((4, 2))
>>> arr2 = np.array([2, 3, 1, 4, 6, 4, 4, 9]).reshape((4, 2))
>>> x = Tensor(arr1, mstype.float32)
>>> var = Tensor(np.ones((4, 1)), mstype.float32)
>>> target = Tensor(arr2, mstype.float32)
>>> output = ops.gaussian_nll_loss(x, target, var)
>>> print(output)
1.4374993
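As a hedged sketch of the reduction option, reusing x, target and var from above: with reduction='none' the per-element losses are returned, so the output keeps the broadcast shape of the inputs instead of reducing to a scalar:

>>> output_none = ops.gaussian_nll_loss(x, target, var, reduction='none')
>>> print(output_none.shape)
(4, 2)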
Reference:

Nix, D. A. and Weigend, A. S., “Estimating the mean and variance of the target probability distribution”, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138.

tinyms.primitives.gcd(input, other)[source]

Computes greatest common divisor of input tensors element-wise. The shape of two inputs should be broadcastable, and data type of them should be one of: int32, int64

Parameters:
  • input (Tensor) – The first input tensor.

  • other (Tensor) – The second input tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision of the two inputs.

Raises:
  • TypeError – If the data type of input or other is neither int32 nor int64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([7, 8, 9]))
>>> x2 = Tensor(np.array([14, 6, 12]))
>>> y = ops.gcd(x1, x2)
>>> print(y)
[7 2 3]
tinyms.primitives.ge(x, y)[source]

Computes the boolean value of \(x >= y\) element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

  • If the input Tensor can be broadcast, the low dimension will be extended to the corresponding high dimension in another input by copying the value of the dimension.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}>=y_{i} \\ & \text{False, if } x_{i}<y_{i} \end{cases}\end{split}\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input. When the first input is a tensor, the second input should be a number, a bool, or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.ge(x, y)
>>> print(output)
[True True False]
tinyms.primitives.gelu(input_x, approximate='none')[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). See also BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

When approximate argument is none, GeLU is defined as follows:

\[GELU(x_i) = x_i*P(X < x_i),\]

where \(P\) is the cumulative distribution function of the standard Gaussian distribution, \(x_i\) is the input element.

When approximate argument is tanh, GeLU is estimated with:

\[GELU(x_i) = 0.5 * x_i * (1 + \tanh(\sqrt{2 / \pi} * (x_i + 0.044715 * x_i^3)))\]
Parameters:
  • input_x (Tensor) – The input of the activation function GeLU, the data type is float16, float32 or float64.

  • approximate (str) – the gelu approximation algorithm to use. Acceptable values are ‘none’ and ‘tanh’. Default: ‘none’.

Returns:

Tensor, with the same type and shape as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not float16, float32 or float64.

  • ValueError – If approximate is neither ‘none’ nor ‘tanh’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1.0, 2.0, 3.0], mindspore.float32)
>>> result = ops.gelu(x)
>>> print(result)
[0.841192 1.9545976 2.9963627]
tinyms.primitives.geqrf(input)[source]

Decomposes a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R. The process is called QR decomposition: \(A = QR\).

Both Q and R matrices are stored in the same output tensor y. The elements of R are stored on and above the diagonal, whereas elementary reflectors (or Householder vectors) implicitly defining matrix Q are stored below the diagonal.

This function returns two tensors (y, tau).

Parameters:

input (Tensor) – Tensor of shape \((*, m, n)\), input must be at least 2-D, with dtype of float32, float64, complex64, complex128.

Returns:

  • y (Tensor) - Tensor of shape \((*, m, n)\), has the same dtype as input.

  • tau (Tensor) - Tensor of shape \((*, p)\) and \(p = min(m, n)\), has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the dtype of input is not one of: float32, float64, complex64, complex128.

  • ValueError – If input dimension is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-2.0, -1.0], [1.0, 2.0]]).astype(np.float32))
>>> y, tau = ops.geqrf(input_x)
>>> print(y)
[[ 2.236068   1.7888544]
 [-0.236068   1.3416407]]
>>> print(tau)
[1.8944271 0.       ]
tinyms.primitives.ger(input, vec2)[source]

Ger product of input and vec2. Calculates the outer product of two 1-D tensors. If input is a 1D Tensor of shape \((m,)\) and vec2 is a 1D Tensor of shape \((n,)\), then the output is a 2D Tensor of shape \((m, n)\).

Note

Currently Ascend does not support float64 data input.

Parameters:
  • input (Tensor) – input Tensor, with dtype of float16, float32 or float64.

  • vec2 (Tensor) – input Tensor, with dtype of float16, float32 or float64, must have the same dtype as input.

Returns:

Tensor, output matrix with the same dtype as inputs. With input shape \((m,)\) and vec2 shape of \((n,)\), the output has shape \((m, n)\).

Raises:
  • TypeError – If input or vec2 is not a 1-D Tensor.

  • TypeError – If the dtype of input and vec2 is not float16, float32 or float64.

  • TypeError – If the dtype of input and vec2 are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([1., 2., 3., 4.], mindspore.float32)
>>> vec2 = Tensor([1., 2., 3.], mindspore.float32)
>>> output = ops.ger(input, vec2)
>>> print(output)
[[ 1.  2.  3.]
 [ 2.  4.  6.]
 [ 3.  6.  9.]
 [ 4.  8. 12.]]
tinyms.primitives.get_grad(gradients, identifier)[source]

When return_ids of mindspore.grad() is set to True, pass its return value in as gradients. The specific gradient is then found in gradients according to identifier.

As for gradient, two typical cases are included:

  1. identifier is the position of the specific tensor to get gradient.

  2. identifier is a parameter of a network.

Parameters:
  • gradients (Union[tuple[int, Tensor], tuple[tuple, tuple]]) – The return value of mindspore.grad() when return_ids is set to True.

  • identifier (Union[int, Parameter]) – The position number of a tensor, or a parameter that is used in mindspore.grad().

Returns:

The gradient of the tensor at the position, or of the parameter, specified by the identifier.

Raises:
  • RuntimeError – If gradient is not found.

  • TypeError – If the type of the arguments does not belong to the required ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> from mindspore import grad, get_grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> out_grad = grad(net, grad_position=(1, 2), return_ids=True)(x, y, z)
>>> output = get_grad(out_grad, 1)
>>> print(output)
[0. 6.]
tinyms.primitives.glu(x, axis=-1)[source]

Computes GLU (Gated Linear Unit activation function) of input tensors.

\[{GLU}(a, b)= a \otimes \sigma(b)\]

where \(a\) is the first half of the input matrices and \(b\) is the second half.

Here \(\sigma\) is the sigmoid function, and \(\otimes\) is the Hadamard product. See Language Modeling with Gated Convolutional Networks.

Parameters:
  • x (Tensor) – Tensor to be split. Its dtype is Number, and the shape is \((\ast_1, N, \ast_2)\) where \(\ast\) means any number of additional dimensions.

  • axis (int, optional) – the axis to split the input. It must be int. Default: -1, the last axis of x.

Returns:

Tensor, the same dtype as x, with the shape \((\ast_1, M, \ast_2)\) where \(M=N/2\).

Supported Platforms:

Ascend CPU

Examples

>>> input = Tensor([[0.1,0.2,0.3,0.4],[0.5,0.6,0.7,0.8]])
>>> output = ops.glu(input)
>>> print(output)
[[0.05744425 0.11973753]
 [0.33409387 0.41398472]]
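A hedged sketch of the axis argument, reusing input from above: splitting along axis 0 halves the first dimension instead of the last one, so the output shape becomes (1, 4):

>>> output0 = ops.glu(input, axis=0)
>>> print(output0.shape)
(1, 4)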
tinyms.primitives.grad(fn, grad_position=0, weights=None, has_aux=False, return_ids=False)[source]

A wrapper function to generate the gradient function for the input function.

As for gradient, three typical cases are included:

  1. gradient with respect to inputs. In this case, grad_position is not None while weights is None.

  2. gradient with respect to weights. In this case, grad_position is None while weights is not None.

  3. gradient with respect to inputs and weights. In this case, grad_position and weights are not None.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[NoneType, int, tuple[int]]) – Index to specify which inputs to be differentiated. If int, get the gradient with respect to a single input. If tuple, get the gradients with respect to the selected inputs. grad_position begins with 0. If None, no derivative of any input will be computed, and in this case, weights is required. Default: 0.

  • weights (Union[ParameterTuple, Parameter, list[Parameter]]) – The parameters of the training network that need to calculate the gradient. weights can be got through weights = net.trainable_params() . Default: None.

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs are returned directly. In this case fn must return more than one output. Default: False.

  • return_ids (bool) – Whether to pair each gradient with an identifier. If True, every gradient in the output is replaced by a tuple of the position index of the differentiated input (or the name of the network parameter) and its gradient. Default: False.

Returns:

Function, the gradient function to calculate the gradient for the input function or cell. For example, for out1, out2 = fn(*args), when has_aux is set to True, the gradient function will return outputs like (gradient, out2), where out2 does not contribute to the differentiation; otherwise it returns gradient. When return_ids is set to True, the output has the same format as when return_ids is False, except that every gradient is replaced by a tuple of the position id or parameter name and the gradient.

Raises:
  • ValueError – If both grad_position and weights are None.

  • TypeError – If the type of the arguments does not belong to the required ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> from mindspore import grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> output = grad(net, grad_position=(1, 2))(x, y, z)
>>> print(output)
(Tensor(shape=[2], dtype=Float32, value=[ 0.00000000e+00,  6.00000000e+00]),
 Tensor(shape=[2], dtype=Float32, value=[-2.00000000e+00,  6.00000000e+00]))
>>>
>>> # Function object to be differentiated
>>> def fn(x, y, z):
...     res = x * ops.exp(y) * ops.pow(z, 2)
...     return res, z
>>> x = Tensor([3, 3], mindspore.float32)
>>> y = Tensor([0, 0], mindspore.float32)
>>> z = Tensor([5, 5], mindspore.float32)
>>> gradient, aux = grad(fn, (1, 2), None, True)(x, y, z)
>>> print(gradient)
(Tensor(shape=[2], dtype=Float32, value= [ 7.50000000e+01,  7.50000000e+01]),
 Tensor(shape=[2], dtype=Float32, value= [ 3.00000000e+01,  3.00000000e+01]))
>>> print(aux)
(Tensor(shape=[2], dtype=Float32, value= [ 5.00000000e+00,  5.00000000e+00]),)
>>>
>>> # For given network to be differentiated with both inputs and weights, there are 4 cases.
>>> net = nn.Dense(10, 1)
>>> loss_fn = nn.MSELoss()
>>> def forward(inputs, labels):
...     logits = net(inputs)
...     loss = loss_fn(logits, labels)
...     return loss, logits
>>> inputs = Tensor(np.random.randn(16, 10).astype(np.float32))
>>> labels = Tensor(np.random.randn(16, 1).astype(np.float32))
>>> weights = net.trainable_params()
>>> # Case 1: gradient with respect to inputs.
>>> # Aux value does not contribute to the gradient.
>>> grad_fn = grad(forward, grad_position=(0, 1), weights=None, has_aux=True)
>>> inputs_gradient, (aux_logits,) = grad_fn(inputs, labels)
>>> print(len(inputs_gradient))
2
>>> print(aux_logits.shape)
(16, 1)
>>>
>>> # Case 2: gradient with respect to weights.
>>> grad_fn = grad(forward, grad_position=None, weights=weights, has_aux=True)
>>> params_gradient, (aux_logits,) = grad_fn(inputs, labels)
>>> print(len(weights), len(params_gradient))
2 2
>>> print(aux_logits.shape)
(16, 1)
>>>
>>> # Case 3: gradient with respect to inputs and weights.
>>> grad_fn = grad(forward, grad_position=0, weights=weights, has_aux=False)
>>> inputs_gradient, params_gradient = grad_fn(inputs, labels)
>>> print(len(weights), len(params_gradient))
2 2
>>> # Case 4: return the gradient with ids.
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> from mindspore import grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> output = grad(net, grad_position=(1, 2), return_ids = True)(x, y, z)
>>> print(output)
((1, Tensor(shape=[2], dtype=Float32, value=[ 0.00000000e+00,  6.00000000e+00])),
 (2, Tensor(shape=[2], dtype=Float32, value=[-2.00000000e+00,  6.00000000e+00])))
tinyms.primitives.greater(input, other)[source]

Computes the boolean value of \(input > other\) element-wise.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_ .

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.greater(x, y)
>>> print(output)
[False True False]
tinyms.primitives.greater_equal(input, other)[source]

Computes the boolean value of \(input \geq other\) element-wise.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_ .

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.greater_equal(x, y)
>>> print(output)
[True True False]
tinyms.primitives.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=False)[source]

Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. Only spatial (4-D) and volumetric (5-D) input is supported.

In the spatial (4-D) case, for input with shape \((N, C, H_{in}, W_{in})\) and grid with shape \((N, H_{out}, W_{out}, 2)\), the output will have shape \((N, C, H_{out}, W_{out})\).

For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies input pixel locations x and y, which are used to interpolate the output value output[n, :, h, w]. In the case of 5-D inputs, grid[n, d, h, w] specifies the x, y, z pixel locations for interpolating output[n, :, d, h, w]. The mode argument specifies the “nearest”, “bilinear” or “bicubic” (supported in the 4-D case only) interpolation method to sample the input pixels.

grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of \([-1, 1]\).

If grid has values outside the range of \([-1, 1]\), the corresponding outputs are handled as defined by padding_mode. If padding_mode is set to be “zeros”, use \(0\) for out-of-bound grid locations. If padding_mode is set to be “border”, use border values for out-of-bound grid locations. If padding_mode is set to be “reflection”, use values at locations reflected by the border for out-of-bound grid locations. Locations far away from the border will keep being reflected until they fall in bound.

Parameters:
  • input (Tensor) – input with shape of \((N, C, H_{in}, W_{in})\) (4-D case) or \((N, C, D_{in}, H_{in}, W_{in})\) (5-D case) and dtype of float32 or float64.

  • grid (Tensor) – flow-field with shape of \((N, H_{out}, W_{out}, 2)\) (4-D case) or \((N, D_{out}, H_{out}, W_{out}, 3)\) (5-D case) and same dtype as input.

  • mode (str) – An optional string specifying the interpolation method. The optional values are “bilinear”, “nearest” or “bicubic”. Default: “bilinear”. Note: bicubic supports only 4-D input. When mode=”bilinear” and the input is 5-D, the interpolation mode used internally will actually be trilinear. However, when the input is 4-D, the interpolation mode will legitimately be bilinear.

  • padding_mode (str) – An optional string specifying the pad method. The optional values are “zeros”, “border” or “reflection”. Default: “zeros”.

  • align_corners (bool) – An optional bool. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. Default: False.

Returns:

Tensor, dtype is the same as input and whose shape is \((N, C, H_{out}, W_{out})\) (4-D) and \((N, C, D_{out}, H_{out}, W_{out})\) (5-D).

Raises:
  • TypeError – If input or grid is not a Tensor.

  • TypeError – If the dtypes of input and grid are inconsistent.

  • TypeError – If the dtype of input or grid is not a valid type.

  • TypeError – If align_corners is not a boolean value.

  • ValueError – If the rank of input or grid is not equal to 4(4-D case) or 5(5-D case).

  • ValueError – If the first dimension of input is not equal to that of grid.

  • ValueError – If the last dimension of grid is not equal to 2(4-D case) or 3(5-D case).

  • ValueError – If mode is not “bilinear”, “nearest”, “bicubic” or a string value.

  • ValueError – If padding_mode is not “zeros”, “border”, “reflection” or a string value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(16).reshape((2, 2, 2, 2)).astype(np.float32))
>>> grid = Tensor(np.arange(0.2, 1, 0.1).reshape((2, 2, 1, 2)).astype(np.float32))
>>> output = ops.grid_sample(input_x, grid, mode='bilinear', padding_mode='zeros',
...                          align_corners=True)
>>> print(output)
[[[[ 1.9      ]
   [ 2.1999998]]
  [[ 5.9      ]
   [ 6.2      ]]]
 [[[10.5      ]
   [10.8      ]]
  [[14.5      ]
   [14.8      ]]]]
tinyms.primitives.gt(x, y)[source]

Compares the values of the inputs \(x, y\) element-wise; the output is a bool Tensor.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}>y_{i} \\ & \text{False, if } x_{i}<=y_{i} \end{cases}\end{split}\]

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

  • If the input Tensor can be broadcast, the low dimension will be extended to the corresponding high dimension in another input by copying the value of the dimension.

Parameters:
  • x (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_ .

  • y (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.gt(x, y)
>>> print(output)
[False True False]
tinyms.primitives.gumbel_softmax(logits, tau=1, hard=False, dim=-1)[source]

Returns the samples from the Gumbel-Softmax distribution and optionally discretizes. If hard = True, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across dim.

Parameters:
  • logits (Tensor) – Unnormalized log probabilities. The data type must be float16 or float32.

  • tau (float) – The scalar temperature, which is a positive number. Default: 1.0.

  • hard (bool) – if True, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd. Default: False.

  • dim (int) – Dim for softmax to compute. Default: -1.

Returns:

Tensor, has the same dtype and shape as logits.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> output = ops.gumbel_softmax(input_x, 1.0, True, -1)
>>> print(output.shape)
(2, 3)
tinyms.primitives.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None)[source]

Returns the Hamming window.

\[w[n] = \alpha - \beta \cos \left( \frac{2 \pi n}{N - 1} \right),\]

where \(N\) is the full window size.

Parameters:
  • window_length (int) – The size of the returned window. Must be a non-negative integer.

  • periodic (bool, optional) – If True, return a periodic window. If False, return a symmetric window. Default: True.

  • alpha (float, optional) – The coefficient α. Default: 0.54.

  • beta (float, optional) – The coefficient β. Default: 0.46.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The output window data type. Default: None.

Returns:

Tensor, a 1-D tensor of size (window_length) containing the window.

Raises:
  • TypeError – If window_length is a negative integer.

  • TypeError – If periodic is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> print(ops.hamming_window(6, False))
[0.08 0.39785218 0.91214782 0.91214782 0.39785218 0.08]
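For the periodic (default) case, the window is effectively computed over \(N = window\_length + 1\) points with the last point dropped. A hedged sketch (exact print formatting may differ):

>>> print(ops.hamming_window(6))
[0.08 0.31 0.77 1.   0.77 0.31]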
tinyms.primitives.hann_window(window_length, periodic=True, *, dtype=None)[source]

Generates a Hann Window.

The Hann window is defined as

\[w(n) = \frac{1}{2} - \frac{1}{2} \cos\left(\frac{2\pi{n}}{M-1}\right),\qquad 0 \leq n \leq M-1\]
Parameters:
  • window_length (int) – Length of window.

  • periodic (bool, optional) – When set to True, generates a periodic window for spectral analysis. When set to False, generates a symmetric window for filter design. Default: True.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The output window data type, it must be float. Default: None.

Returns:

Tensor, a Hann window.

Raises:
  • TypeError – If window_length is not an integer.

  • TypeError – If periodic is not a variable of Boolean type.

  • ValueError – If window_length is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = 5
>>> out = ops.hann_window(window_length)
>>> print(out.asnumpy())
[0.        0.3454915 0.9045085 0.9045085 0.3454915]
tinyms.primitives.hardshrink(x, lambd=0.5)[source]

Hard Shrink activation function. Calculates the output according to the input elements.

The formula is defined as follows:

\[\begin{split}\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • x (Tensor) – The input of Hard Shrink with data type of float16 or float32.

  • lambd (float) – The threshold \(\lambda\) defined by the Hard Shrink formula. Default: 0.5.

Returns:

Tensor, has the same data type and shape as the input x.

Raises:
  • TypeError – If lambd is not a float.

  • TypeError – If x is not a tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[ 0.5,  1,  2.0], [0.0533,0.0776,-2.1233]]), mindspore.float32)
>>> output = ops.hardshrink(x)
>>> print(output)
[[ 0.      1.      2.    ]
 [ 0.      0.     -2.1233]]
tinyms.primitives.hardsigmoid(input)[source]

Hard sigmoid activation function.

Applies hard sigmoid activation element-wise. The input is a Tensor with any valid shape.

Hard sigmoid is defined as:

\[\text{hsigmoid}(x_{i}) = max(0, min(1, \frac{x_{i} + 3}{6}))\]

where \(x_i\) is an element of the input Tensor.

Parameters:

input (Tensor) – Hard Sigmoid input, with float16, float32 or float64 data type.

Returns:

A Tensor whose dtype and shape are the same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([ -3.5,  0,  4.3]), mindspore.float32)
>>> output = ops.hardsigmoid(x)
>>> print(output)
[0.  0.5 1. ]
tinyms.primitives.hardswish(x)[source]

Applies hswish-type activation element-wise. The input is a Tensor with any valid shape.

Hard swish is defined as:

\[\text{hswish}(x_{i}) = x_{i} * \frac{ReLU6(x_{i} + 3)}{6}\]

where \(x_i\) is an element of the input Tensor.

Parameters:

x (Tensor) – The input to compute the Hard Swish.

Returns:

Tensor, has the same data type and shape as the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> output = ops.hardswish(x)
>>> print(output)
[-0.3333  -0.3333  0  1.666  0.6665]
tinyms.primitives.hardtanh(input, min_val=-1.0, max_val=1.0)[source]

Applies the hardtanh activation function element-wise. The activation function is defined as:

\[\begin{split}\text{hardtanh}(input) = \begin{cases} max\_val, & \text{ if } input > max\_val \\ min\_val, & \text{ if } input < min\_val \\ input, & \text{ otherwise. } \end{cases}\end{split}\]

Linear region range \([min\_val, max\_val]\) can be adjusted using min_val and max_val.

Parameters:
  • input (Tensor) – Input Tensor.

  • min_val (Union[int, float]) – Minimum value of the linear region range. Default: -1.0.

  • max_val (Union[int, float]) – Maximum value of the linear region range. Default: 1.0.

Returns:

Tensor, with the same dtype and shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of min_val is neither float nor int.

  • TypeError – If dtype of max_val is neither float nor int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([-1, -2, 0, 2, 1], mindspore.float16)
>>> output = ops.hardtanh(x, min_val=-1.0, max_val=1.0)
>>> print(output)
[-1. -1.  0.  1.  1.]
tinyms.primitives.heaviside(input, values)[source]

Computes the Heaviside step function for each element in input.

\[\begin{split}\text { heaviside }(\text { input, values })=\left\{\begin{array}{ll} 0, & \text { if input }<0 \\ \text { values, } & \text { if input }=0 \\ 1, & \text { if input }>0 \end{array}\right.\end{split}\]
Parameters:
  • input (Tensor) – The input tensor. With real number data type.

  • values (Tensor) – The values to use where input is zero. values must be broadcastable with input and have the same dtype as input.

Returns:

Tensor, has the same type as input and values.

Raises:
  • TypeError – If input or values is not a Tensor.

  • TypeError – If the data types of input and values are different.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-5., 1., 0., 2., 0.]))
>>> values = Tensor(np.array([3.]))
>>> y = ops.heaviside(input, values)
>>> print(y)
[0. 1. 3. 1. 3.]
tinyms.primitives.hinge_embedding_loss(inputs, targets, margin=1.0, reduction='mean')[source]

Measures Hinge Embedding Loss given an input Tensor inputs and a labels Tensor targets (containing 1 or -1).

The loss function for \(n\)-th sample in the mini-batch is

\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]

and the total loss function is

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(L = \{l_1,\dots,l_N\}^\top\).

Parameters:
  • inputs (Tensor) – Predicted values, represented as \(x\) in the formula.

  • targets (Tensor) – Label values, represented as \(y\) in the formula. Has the same shape as inputs, contains -1 or 1.

  • margin (float, int) – Threshold defined by Hinge Embedding Loss \(margin\). Represented as \(\Delta\) in the formula. Default: 1.0.

  • reduction (str) – Specify the computing method to be applied to the outputs: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

Returns:

Tensor or Tensor scalar, the computed loss depending on \(reduction\).

Raises:
  • TypeError – If inputs is not a Tensor.

  • TypeError – If targets is not a Tensor.

  • TypeError – If margin is not a float or int.

  • ValueError – If targets does not have the same shape as inputs or they could not broadcast to each other.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.common.dtype as mstype
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> arr1 = np.array([0.9, -1.2, 2, 0.8, 3.9, 2, 1, 0, -1]).reshape((3, 3))
>>> arr2 = np.array([1, 1, -1, 1, -1, 1, -1, 1, 1]).reshape((3, 3))
>>> logits = Tensor(arr1, mstype.float32)
>>> labels = Tensor(arr2, mstype.float32)
>>> loss = ops.hinge_embedding_loss(logits, labels, margin=1.0, reduction='mean')
>>> print(loss)
0.16666666
tinyms.primitives.histc(input, bins=100, min=0.0, max=0.0)[source]

Computes the histogram of a tensor.

The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

Elements lower than min or higher than max are ignored.

Parameters:
  • input (Tensor) – the input tensor. Supported types: float16, float32, int32.

  • bins (int, optional) – Number of histogram bins. If specified, must be positive. Default: 100.

  • min (int, float, optional) – The lower end of the range (inclusive). Default: 0.0.

  • max (int, float, optional) – The upper end of the range (inclusive). Default: 0.0.

Returns:

Tensor, 1-D Tensor with type int32.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([1., 2, 1])
>>> y = ops.histc(x, bins=4, min=0.0, max=3.0)
>>> print(y)
[0 2 1 0]
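A hedged sketch of the default min=0.0, max=0.0 case, reusing x from above: the data range [1, 2] is used instead, giving four bins of width 0.25, so the two 1s fall into the first bin and the 2 into the last:

>>> y = ops.histc(x, bins=4)
>>> print(y)
[2 0 0 1]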
tinyms.primitives.hsplit(input, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors horizontally. It is equivalent to ops.tensor_split with \(axis=1\).

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – See argument in mindspore.ops.tensor_split().

Returns:

A list of sub-tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(6).reshape((2, 3)).astype('float32')
>>> output = ops.hsplit(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[2, 1], dtype=Float32, value=[[ 0.00000000e+00], [ 3.00000000e+00]]),
 Tensor(shape=[2, 1], dtype=Float32, value=[[ 1.00000000e+00], [ 4.00000000e+00]]),
 Tensor(shape=[2, 1], dtype=Float32, value=[[ 2.00000000e+00], [ 5.00000000e+00]]))
tinyms.primitives.hstack(tensors)[source]

Stacks tensors in sequence horizontally. This is equivalent to concatenation along the second axis, except for 1-D tensors where it concatenates along the first axis.

Parameters:

tensors (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. The tensors must have the same shape along all but the second axis, except 1-D tensors which can be any length.

Returns:

Stacked Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> x1 = Tensor([1., 1., 1.])
>>> x2 = Tensor([2., 2., 2.])
>>> output = ops.hstack((x1, x2))
>>> print(output)
[1. 1. 1. 2. 2. 2.]
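For 2-D tensors, the concatenation happens along the second axis. A minimal sketch with made-up values:

>>> x1 = Tensor([[1.], [2.], [3.]])
>>> x2 = Tensor([[4.], [5.], [6.]])
>>> output = ops.hstack((x1, x2))
>>> print(output)
[[1. 4.]
 [2. 5.]
 [3. 6.]]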
tinyms.primitives.huber_loss(input, target, reduction='mean', delta=1.0)[source]

Calculates the error between the predicted value and the target value, combining the advantages of both L1 loss and MSE loss.

Assuming that \(x\) and \(y\) are 1-D Tensors of length \(N\), and the reduction parameter is set to “none”, the loss of \(x\) and \(y\) is calculated without dimensionality reduction. The formula is as follows:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top\]

with

\[\begin{split}l_n = \begin{cases} 0.5 * (x_n - y_n)^2, & \text{if } |x_n - y_n| < delta; \\ delta * (|x_n - y_n| - 0.5 * delta), & \text{otherwise. } \end{cases}\end{split}\]

where \(N\) is the batch size.

If reduction is “mean” or “sum”, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{"mean";}\\ \operatorname{sum}(L), & \text{if reduction} = \text{"sum".} \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – Predicted value, Tensor of any dimension.

  • target (Tensor) – Target value, usually has the same dtype and shape as the input. However, if the shape of target differs from that of input, they must be broadcastable to each other.

  • reduction (str) – Type of reduction to be applied to loss. The optional values are “mean”, “sum” and “none”. Default: “mean”.

  • delta (Union[int, float]) – The threshold to change between two type of loss. The value must be greater than zero. Default: 1.0.

Returns:

Tensor or Scalar, if reduction is “none”, return a Tensor with same shape and dtype as input. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If input or target is not a Tensor.

  • TypeError – If dtype of delta is neither float nor int.

  • ValueError – If delta is less than or equal to 0.

  • ValueError – If reduction is not one of “none”, “mean”, “sum”.

  • ValueError – If input and target have different shapes and cannot be broadcasted to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 2, 10, 2], mindspore.float32)
>>> target = Tensor([1, 5, 1, 20], mindspore.float32)
>>> output = ops.huber_loss(x, target, reduction="mean", delta=2)
>>> print(output)
13.5
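To see where 13.5 comes from, a hedged sketch with reduction="none", reusing x and target from above: the element-wise differences are 0, -3, 9 and -18, so with delta=2 the per-element losses below average to 13.5:

>>> output_none = ops.huber_loss(x, target, reduction="none", delta=2)
>>> print(output_none)
[ 0.  4. 16. 34.]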
tinyms.primitives.hypot(input, other)[source]

Computes hypotenuse of input tensors element-wise as legs of a right triangle. The shape of two inputs should be broadcastable, and data type of them should be one of: float32, float64

\[out_i = \sqrt{input_i^2 + other_i^2}\]
Parameters:
  • input (Tensor) – The first input tensor.

  • other (Tensor) – The second input tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision of the two inputs.

Raises:
  • TypeError – If the data type of input or other is neither float32 nor float64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([3., 5., 7.]))
>>> other = Tensor(np.array([4., 12., 24.]))
>>> y = ops.hypot(input, other)
>>> print(y)
[ 5. 13. 25.]
tinyms.primitives.i0(input)[source]

Alias for mindspore.ops.bessel_i0() .

Supported Platforms:

GPU CPU

tinyms.primitives.igamma(input, other)[source]

Calculates lower regularized incomplete Gamma function.

If we define input as a and other as x, the lower regularized incomplete Gamma function is defined as:

\[P(a, x) = Gamma(a, x) / Gamma(a) = 1 - Q(a, x)\]

where

\[Gamma(a, x) = \int_0^x t^{a-1} e^{-t} dt\]

is the lower incomplete Gamma function.

Above, \(Q(a, x)\) is the upper regularized incomplete Gamma function.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • input (Tensor) – The first input tensor. With type of float32 or float64.

  • other (Tensor) – The second input tensor. With float32 or float64 type. other should have the same dtype with input.

Returns:

Tensor, has the same dtype as input and other.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input and other is neither float32 nor float64.

  • TypeError – If other has different dtype with input.

  • ValueError – If input could not be broadcast to a tensor with shape of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> output = ops.igamma(a, x)
>>> print(output)
[0.593994 0.35276785 0.21486944 0.13337152]
tinyms.primitives.igammac(input, other)[source]

Calculates upper regularized incomplete Gamma function.

If we define input as a and other as x, the upper regularized incomplete Gamma function is defined as:

\[Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\]

where

\[Gamma(a, x) = \int_{x}^{\infty} t^{a-1} e^{-t} dt\]

is the upper incomplete Gamma function.

Above, \(P(a, x)\) is the lower regularized incomplete Gamma function.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • input (Tensor) – The first input tensor. With type of float32 or float64.

  • other (Tensor) – The second input tensor. With float32 or float64 type. other should have the same dtype with input.

Returns:

Tensor, has the same dtype as input and other.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input and other is neither float32 nor float64.

  • TypeError – If other has different dtype with input.

  • ValueError – If input could not be broadcast to a tensor with shape of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> output = ops.igammac(a, x)
>>> print(output)
[0.40600586 0.6472318 0.7851304 0.8666283]
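Since \(P(a, x) + Q(a, x) = 1\), igamma and igammac are complements of each other. A hedged numerical check reusing a and x from above:

>>> total = ops.igamma(a, x) + ops.igammac(a, x)
>>> print(np.allclose(total.asnumpy(), 1.0))
True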
tinyms.primitives.imag(input)[source]

Returns a new tensor containing imaginary value of the input. If input is real, it will return zeros.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, the shape is the same as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3 + 0.4j), mindspore.complex64)
>>> output = ops.imag(x)
>>> print(output)
0.4
tinyms.primitives.index_add(x, indices, y, axis, use_lock=True, check_index_bound=True)[source]

Adds tensor y to specified axis and indices of Parameter x. The axis should be in [0, x.ndim - 1], and indices should be in [0, x.shape[axis] - 1] at the axis dimension.

Parameters:
  • x (Parameter) – The input Parameter to add to.

  • indices (Tensor) – Add the value of x and y along the dimension of the axis according to the specified index value, with data type int32. The indices must be 1D with the same size as the size of y in the axis dimension. The values of indices should be in [0, b), where the b is the size of x in the axis dimension.

  • y (Tensor) – The input tensor with the value to add. Must have same data type as x. The shape must be the same as x except the axis th dimension.

  • axis (int) – The dimension along which to index.

  • use_lock (bool) – Whether to enable a lock to protect the updating process of variable tensors. If true, when updating the value of x, this process will be protected by a lock by using atomic operation. If false, the result may be unpredictable. Default: True.

  • check_index_bound (bool) – If true, check index boundary. If false, don’t check index boundary. Default: True.

Returns:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a Parameter.

  • TypeError – If neither indices nor y is a Tensor.

  • ValueError – If axis is out of x rank’s range.

  • ValueError – If x rank is not the same as y rank.

  • ValueError – If shape of indices is not 1D or size of indices is not equal to dimension of y[axis].

  • ValueError – If y’s shape is not the same as x except the axis th dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> x = Parameter(Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32), name="name_x")
>>> indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> y = Tensor(np.array([[0.5, 1.0], [1.0, 1.5], [2.0, 2.5]]), mindspore.float32)
>>> output = ops.index_add(x, indices, y, 1)
>>> print(output)
[[ 1.5  2.   4. ]
 [ 5.   5.   7.5]
 [ 9.   8.  11.5]]
tinyms.primitives.index_fill(x, axis, index, value)[source]

Fills the elements under the axis dimension of the input Tensor x with the input value by selecting the indices in the order given in index.

Parameters:
  • x (Tensor) – Input Tensor. The supported data type is Number or Bool.

  • axis (Union[int, Tensor]) – Dimension along which to fill the input Tensor. Only supports an int number or a 0-dimensional Tensor, whose data type is int32 or int64.

  • index (Tensor) – Indices of the input Tensor to fill in. The dtype must be int32.

  • value (Union[bool, int, float, Tensor]) – Value to fill the returned Tensor. If value is a Tensor, it must be a 0-dimensional Tensor and has the same dtype as x. Otherwise, the value will be cast to a 0-dimensional Tensor with the same data type as x.

Returns:

Tensor, has the same dtype and shape as input Tensor.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If axis is neither int number nor Tensor.

  • TypeError – When axis is a Tensor, its dtype is not int32 or int64.

  • TypeError – If index is not a Tensor.

  • TypeError – If dtype of index is not int32.

  • TypeError – If value is not a bool, int, float, or Tensor.

  • TypeError – When value is a Tensor, the dtype of x and value are not the same.

  • ValueError – If axis is a Tensor and its rank is not equal to 0.

  • ValueError – If the rank of index is greater than 1D.

  • ValueError – When value is a Tensor and its rank is not equal to 0.

  • RuntimeError – If the value of axis is out the range of [-x.ndim, x.ndim - 1].

  • RuntimeError – If the values of index are out the range of [-x.shape[axis], x.shape[axis]-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32))
>>> index = Tensor([0, 2], mindspore.int32)
>>> value = Tensor(-2.0, mindspore.float32)
>>> y = ops.index_fill(x, 1, index, value)
>>> print(y)
[[-2. 2. -2.]
 [-2. 5. -2.]
 [-2. 8. -2.]]
tinyms.primitives.index_select(input, axis, index)[source]

Generates a new Tensor that accesses the values of input along the specified axis dimension using the indices specified in index. The new Tensor has the same number of dimensions as input, with the size of the axis dimension being equal to the length of index, and the size of all other dimensions will be unchanged from the original input Tensor.

Note

The value of index must be in the range of [0, input.shape[axis]), the result is undefined out of range.

Parameters:
  • input (Tensor) – The input Tensor.

  • axis (int) – The dimension to be indexed.

  • index (Tensor) – A 1-D Tensor with the indices to access in input along the specified axis.

Returns:

Tensor, has the same dtype as input Tensor.

Raises:
  • TypeError – If input or index is not a Tensor.

  • TypeError – If axis is not int number.

  • ValueError – If the value of axis is out the range of [-input.ndim, input.ndim - 1].

  • ValueError – If the dimension of index is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> input = Tensor(np.arange(16).astype(np.float32).reshape(2, 2, 4))
>>> print(input)
[[[ 0.  1.  2.  3.]
  [ 4.  5.  6.  7.]]
 [[ 8.  9. 10. 11.]
  [12. 13. 14. 15.]]]
>>> index = Tensor([0,], mindspore.int32)
>>> y = ops.index_select(input, 1, index)
>>> print(y)
[[[ 0.  1.  2.  3.]]
 [[ 8.  9. 10. 11.]]]
tinyms.primitives.inner(input, other)[source]

Returns the inner product of two tensors.

For 1-D tensors (without complex conjugation), returns the ordinary inner product of vectors.

For higher dimensions, returns a sum product over the last axis.

Note

If input or other is a Tensor scalar, mindspore.ops.inner() will be the same as mindspore.ops.mul() .

Parameters:
  • input (Tensor) – First input.

  • other (Tensor) – Second input.

Returns:

Tensor, the result of the inner product.

Raises:

ValueError – If neither input nor other is a scalar and the last dimensions of the two input tensors do not match.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import dtype as mstype
>>> from mindspore import ops
>>> # case1: 2 1D tensors
>>> input = ms.Tensor([1, 2, 3], mstype.float32)
>>> y = ms.Tensor([4, 5, 6], mstype.float32)
>>> output = ops.inner(input, y)
>>> print(output)
32
>>> # case2: Tensor scalar and tensor
>>> input = ms.Tensor([[[1, 2, 3], [3, 2, 1]], [[4, 5, 6], [4, 5, 6]]], mstype.float32)
>>> y = ms.Tensor(2, mstype.float32)
>>> output = ops.inner(input, y)
>>> print(output)
[[[ 2.  4.  6.]
  [ 6.  4.  2.]]
 [[ 8. 10. 12.]
  [ 8. 10. 12.]]]
>>> # case3: Two tensors
>>> input = ms.Tensor([[[1, 2, 3], [3, 2, 1]], [[4, 5, 6], [4, 5, 6]]], mstype.float32)
>>> y = ms.Tensor([[2, 3, 4], [4, 3, 2]], mstype.float32)
>>> output = ops.inner(input, y)
>>> print(output)
[[[20. 16.]
  [16. 20.]]
 [[47. 43.]
  [47. 43.]]]
tinyms.primitives.inplace_add(x, v, indices)[source]

Adds v into specified rows of x. Computes y = x; y[i,] += v.

Note

indices refers to the left-most dimension.

Parameters:
  • x (Tensor) – The first input is a tensor whose data type is float16, float32, float64 or int32.

  • v (Tensor) – The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as indices’ size. It has the same data type with x.

  • indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to add with v. It is an integer or a tuple, whose value is in [0, the first dimension size of x).

Returns:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple whose elements are not all int.

  • ValueError – If the rank of x is not equal to the rank of v.

  • ValueError – If the length of indices is not equal to v.shape[0].

  • ValueError – If the values of indices are not in range of [0, x.shape[0]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> output = ops.inplace_add(x, input_v, indices)
>>> print(output)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
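
indices may also be a single int, in which case v still needs a matching leading dimension of size 1; a follow-on sketch reusing x from above (v_row is an illustrative name):

>>> v_row = Tensor(np.array([[1.0, 1.0]]), mindspore.float32)
>>> print(ops.inplace_add(x, v_row, 0))
[[2. 3.]
 [3. 4.]
 [5. 6.]]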
tinyms.primitives.inplace_index_add(var, indices, updates, axis)[source]

Adds Tensor updates to specified axis and indices of Tensor var element-wise.

Parameters:
  • var (Parameter) – The input Parameter to add to, with data type uint8, int8, int16, int32, float16, float32, float64.

  • indices (Tensor) – The indices along axis to perform the addition. A 1-D Tensor of shape \((updates.shape[axis],)\), every value of it should be in range \([0, var.shape[axis])\) with data type int32.

  • updates (Tensor) – The input Tensor with the values to add. Must have the same data type as var. The shape must be the same as that of var except for the axis-th dimension.

  • axis (int) – The dimension along which to index. It should be in range \([0, var.ndim)\).

Returns:

Tensor, updated result, has the same shape and dtype as var.

Raises:
  • TypeError – If var is not a Parameter.

  • TypeError – If neither indices nor updates is a Tensor.

  • ValueError – If axis is out of valid range.

  • ValueError – If var rank is not the same as updates rank.

  • ValueError – If shape of indices is not \((updates.shape[axis],)\).

  • ValueError – If the shape of updates is not the same as that of var except for the axis-th dimension.

Supported Platforms:

Ascend CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter, ops
>>> var = Parameter(Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32))
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> var = ops.inplace_index_add(var, indices, updates, axis=0)
>>> print(var)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
tinyms.primitives.inplace_sub(x, v, indices)[source]

Subtracts v from specified rows of x. Computes \(y = x\); \(y[i,] -= input\_v\).

Note

indices refers to the left-most dimension.

Parameters:
  • x (Tensor) – The first input is a tensor whose data type is float16, float32, float64 or int32. Tensors of arbitrary dimensions are supported.

  • v (Tensor) – The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as indices’ size. It has the same data type as x.

  • indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to subtract with v. It is an int or tuple, whose value is in [0, the first dimension size of x).

Returns:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple whose elements are not all int.

  • ValueError – If the rank of x is not equal to the rank of v.

  • ValueError – If the length of indices is not equal to v.shape[0].

  • ValueError – If the values of indices are not in range of [0, x.shape[0]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> output = ops.inplace_sub(x, input_v, indices)
>>> print(output)
[[0.5 1. ]
 [2.  2.5]
 [5.  6. ]]
tinyms.primitives.inplace_update(x, v, indices)[source]

Updates specified values in x to v according to indices.

Warning

This is an experimental API that is subject to change or deletion.

Note

indices can only be indexed along the highest dimension.

Parameters:
  • x (Tensor) – A tensor which to be inplace updated. It can be one of the following data types: float32, float16 and int32.

  • v (Tensor) – A tensor with the same type as x and the same dimension size as x except the first dimension, which must be the same as the size of indices.

  • indices (Union[int, tuple[int], Tensor]) – Determines which rows of x to update with v. It is an int, a tuple of ints, or a 1-D Tensor, whose values are in [-x.shape[0], x.shape[0]). If it is a tuple or Tensor, the size of indices must be the same as the first dimension of v.

Returns:

Tensor, with the same type and shape as the input x.

Raises:
  • TypeError – If indices is neither int nor tuple nor Tensor.

  • TypeError – If indices is a tuple or Tensor, but its element is not an int.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> output = ops.inplace_update(x, v, indices)
>>> print(output)
[[0.5 1. ]
 [1.  1.5]
 [5.  6. ]]
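
indices can equally be given as a 1-D int32 Tensor; the illustrative sketch below (idx and v2 are illustrative names) updates only the last row of the same x:

>>> idx = Tensor(np.array([2]), mindspore.int32)
>>> v2 = Tensor(np.array([[7.0, 8.0]]), mindspore.float32)
>>> print(ops.inplace_update(x, v2, idx))
[[1. 2.]
 [3. 4.]
 [7. 8.]]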
tinyms.primitives.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)[source]

Resizes the input Tensor to the given size or by the given scale_factor, using one of the interpolation algorithms.

Parameters:
  • input (Tensor) – Tensor to be resized. Input tensor must be a 3-D, 4-D, or 5-D tensor with shape \((N, C, [optional D], [optional H], W)\) , with data type of float.

  • size (Union[int, tuple[int], list[int]], optional) – The target size. If size is a tuple or list, its length must be input.ndim - 2 (the number of spatial dimensions of input). One and only one of size and scale_factor can be set to None. Default: None.

  • scale_factor (Union[float, tuple[float], list[float]], optional) – The scale factor of the new size of the tensor. If scale_factor is a tuple or list, its length must be input.ndim - 2 (the number of spatial dimensions of input). One and only one of size and scale_factor can be set to None. Default: None.

  • mode (str) – The sampling algorithm. One of ‘nearest’(3D and 4D), ‘linear’ (3D only), ‘bilinear’ (4D only), ‘bicubic’ (4D only), ‘area’, ‘nearest-exact’(3D and 4D). Default: ‘nearest’.

  • align_corners (bool) –

    If True, rescale input by \((new\_height - 1) / (height - 1)\), which exactly aligns the corners of data and resized data. If False, rescale by \(new\_height / height\).

    old_i = new_length != 1 ? new_i * (old_length - 1) / (new_length - 1) : 0   # 'align_corners' = True
    
    old_i = new_length > 1 ? (new_i + 0.5) * old_length / new_length - 0.5 : 0  # 'align_corners' = False
    
    

    This is only valid for ‘linear’, ‘bilinear’, or ‘bicubic’ modes. Default: False.

  • recompute_scale_factor (bool, optional) – Recalculate scale_factor. If True, the parameter size will be calculated from the value of scale_factor, and interpolation is performed with the resulting size. If False, the value of size or scale_factor will be used for direct interpolation. Default: None.

Note

The ‘nearest-exact’ mode is the same as the nearest-neighbor interpolation algorithm used in scikit-image and PIL. The ‘nearest’ mode produces the same results as the INTER_NEAREST interpolation algorithm used in OpenCV.

Args Support List and Supported Platforms:

mode          | input.dim | align_corners | scale_factor | device
------------- | --------- | ------------- | ------------ | --------------
nearest       | 3         | -             | ×            | Ascend,GPU,CPU
nearest       | 4         | -             | ×            | Ascend,GPU,CPU
linear        | 3         | √             | ×            | GPU,CPU
bilinear      | 4         | √             | ×            | Ascend,GPU,CPU
bicubic       | 4         | √             | ×            | GPU,CPU
area          | 3         | -             | √            | Ascend,GPU,CPU
area          | 4         | -             | √            | GPU
area          | 5         | -             | √            | GPU,CPU
nearest-exact | 3         | -             | ×            | Ascend,CPU
nearest-exact | 4         | -             | ×            | Ascend,CPU

  • - indicates that there is no such parameter.

  • × indicates that this parameter is not currently supported.

  • √ indicates that this parameter is supported.

Returns:

Tensor, resized, whose dimensions and dtype are the same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If both size and scale_factor are set (not None).

  • ValueError – If neither size nor scale_factor is set.

  • ValueError – If size is a tuple or list whose length is not equal to input.ndim - 2.

  • ValueError – If scale_factor is a tuple or list whose length is not equal to input.ndim - 2.

  • ValueError – If mode is not in the list of supported modes.

  • ValueError – If input.ndim is not in the list of supported dimensions for the corresponding mode.

  • ValueError – If size is set and recompute_scale_factor is also set.

  • ValueError – If scale_factor is not in the corresponding list of supported values.

  • ValueError – If align_corners is not in the corresponding list of supported values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor([[[1, 2, 3], [4, 5, 6]]], mindspore.float32)
>>> output = ops.interpolate(input, size=(6,), mode='nearest')
>>> print(output)
[[[1. 1. 2. 2. 3. 3.]
  [4. 4. 5. 5. 6. 6.]]]
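
As a further sketch, a 4-D input can be resized with mode='bilinear'; per the support table above, align_corners applies in this mode while scale_factor does not. Only the output shape is asserted here (input4d is an illustrative name):

>>> input4d = Tensor([[[[1., 2.], [3., 4.]]]], mindspore.float32)
>>> output = ops.interpolate(input4d, size=(4, 4), mode='bilinear', align_corners=True)
>>> print(output.shape)
(1, 1, 4, 4)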
tinyms.primitives.intopk(x1, x2, k)[source]

Determines whether the targets are in the top k predictions.

Parameters:
  • x1 (Tensor) – A 2D Tensor defines the predictions of a batch of samples with float16 or float32 data type.

  • x2 (Tensor) – A 1D Tensor that defines the labels of a batch of samples with int32 data type. The size of x2 must be equal to the first dimension of x1. The values of x2 cannot be negative and must be less than the size of x1’s second dimension.

  • k (int) – Specifies the number of top elements to be used for computing precision along the last dimension.

Returns:

Tensor, a 1-D Tensor of type bool with the same shape as x2. For each sample i, if the label x2[i] is among the top k predictions in x1[i], the output value is True; otherwise it is False.

Raises:
  • TypeError – If k is not an int.

  • TypeError – If x1 or x2 is not a Tensor.

  • TypeError – If dtype of x1 is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
>>> x2 = Tensor(np.array([1, 3]), mindspore.int32)
>>> output = ops.intopk(x1, x2, 3)
>>> print(output)
[ True False]
tinyms.primitives.inv(x)[source]

Computes the reciprocal of the input tensor element-wise.

\[out_i = \frac{1}{x_{i} }\]
Parameters:

x (Tensor) – Tensor of any dimension. Must be one of the following types: float16, float32 or int32.

Returns:

Tensor, has the same type and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.25, 0.4, 0.31, 0.52]), mindspore.float32)
>>> output = ops.inv(x)
>>> print(output)
[4.        2.5       3.2258065 1.923077 ]
tinyms.primitives.inverse(input)[source]

Compute the inverse of the input matrix.

Parameters:

input (Tensor) – A matrix to be calculated. input must have at least two dimensions, and the last two dimensions must be of the same size.

Returns:

Tensor, has the same type and shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the size of the last two dimensions of input is not the same.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor([[1., 2.], [3., 4.]], mstype.float32)
>>> print(ops.inverse(x))
[[-2.   1. ]
 [ 1.5 -0.5]]
tinyms.primitives.invert(x)[source]

Flips all bits of input tensor element-wise.

\[out_i = \sim x_{i}\]
Parameters:

x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type should be one of the following types: int16, uint16.

Returns:

Tensor, has the same shape as x.

Raises:

TypeError – If dtype of x is neither int16 nor uint16.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> output = ops.invert(x)
>>> print(output)
[-26 -5 -14 -10]
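
The same bit pattern reads differently for unsigned types: for uint16, \(\sim x = 65535 - x\), as this illustrative variant shows:

>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.uint16)
>>> output = ops.invert(x)
>>> print(output)
[65510 65531 65522 65526]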
tinyms.primitives.iou(anchor_boxes, gt_boxes, mode='iou')[source]

Calculates intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

\[ \begin{align}\begin{aligned}\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\\\text{IOF} = \frac{\text{Area of Overlap}}{\text{Area of Ground Truth}}\end{aligned}\end{align} \]

Warning

In Ascend, only computation of float16 data is supported. To avoid overflow, the input length and width are scaled by 0.2 internally.

Parameters:
  • anchor_boxes (Tensor) – Anchor boxes, tensor of shape \((N, 4)\) . “N” indicates the number of anchor boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be either float16, float32 or float64.

  • gt_boxes (Tensor) – Ground truth boxes, tensor of shape \((M, 4)\) . “M” indicates the number of ground truth boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be either float16, float32 or float64.

  • mode (string) – The mode is used to specify the calculation method, now supporting ‘iou’ (intersection over union) or ‘iof’ (intersection over foreground) mode. Default: ‘iou’.

Returns:

Tensor, the ‘iou’ values, tensor of shape \((M, N)\) , with the same data type as anchor_boxes.

Raises:

KeyError – When mode is not ‘iou’ or ‘iof’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> mode = 'iou'
>>> output = ops.iou(anchor_boxes, gt_boxes, mode)
>>> print(output.shape)
(3, 3)
tinyms.primitives.is_complex(input)[source]

Return True if the data type of the tensor is complex, otherwise return False.

Parameters:

input (Tensor) – The input tensor.

Returns:

Bool, return whether the data type of the tensor is complex.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops, Tensor
>>> from mindspore import dtype as mstype
>>> input = Tensor([1, 1+1j, 2+2j], mstype.complex64)
>>> output = ops.is_complex(input)
>>> print(output)
True
tinyms.primitives.is_floating_point(input)[source]

Determine whether the data type of input is a floating-point data type, i.e., one of mindspore.float64, mindspore.float32, mindspore.float16.

Parameters:

input (Tensor) – The input Tensor.

Returns:

Bool. If the dtype of input is a floating point data type, return True. Otherwise, return False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = ms.Tensor([1, 2, 3], ms.float32)
>>> y = ms.Tensor([1, 2, 3], ms.int64)
>>> output = ops.is_floating_point(x)
>>> output2 = ops.is_floating_point(y)
>>> print(output)
True
>>> print(output2)
False
tinyms.primitives.is_tensor(obj)[source]

Check whether the input object is a mindspore.Tensor .

Parameters:

obj (Object) – input object.

Returns:

Bool. Return True if obj is a Tensor, otherwise, return False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> a = Tensor([1.9, 2.2, 3.1])
>>> ops.is_tensor(a)
True
tinyms.primitives.isclose(x1, x2, rtol=1e-05, atol=1e-08, equal_nan=False)[source]

Returns a new Tensor with boolean elements representing if each element of x1 is “close” to the corresponding element of x2. Closeness is defined as:

\[|x1 - x2| \leq atol + rtol \times |x2|\]
Parameters:
  • x1 (Tensor) – First Tensor to compare, with data type belongs to float32, float16, int32.

  • x2 (Tensor) – Second Tensor to compare, with data type belongs to float32, float16, int32.

  • rtol (float, optional) – Relative tolerance. Default: 1e-05.

  • atol (float, optional) – Absolute tolerance. Default: 1e-08.

  • equal_nan (bool, optional) – If True, then two NaNs will be considered equal. Default: False.

Returns:

A bool Tensor, with the shape as broadcasted result of the input x1 and x2.

Raises:
  • TypeError – If either of x1 and x2 is not Tensor.

  • TypeError – If either of x1 and x2 is not float16, float32 or int32.

  • TypeError – If either of atol and rtol is not float.

  • TypeError – If equal_nan is not bool.

  • TypeError – If the dtype of x1 is not same as the x2.

  • ValueError – If x1 and x2 can not be broadcast.

  • ValueError – If either of atol and rtol is less than zero.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([1.3, 2.1, 3.2, 4.1, 5.1]), mindspore.float16)
>>> other = Tensor(np.array([1.3, 3.3, 2.3, 3.1, 5.1]), mindspore.float16)
>>> output = ops.isclose(input, other)
>>> print(output)
[ True False False False  True]
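
Because equal_nan defaults to False, NaN never compares close to NaN unless it is enabled; an illustrative sketch:

>>> a = Tensor(np.array([float('nan'), 1.0]), mindspore.float32)
>>> b = Tensor(np.array([float('nan'), 1.0]), mindspore.float32)
>>> print(ops.isclose(a, b))
[False  True]
>>> print(ops.isclose(a, b, equal_nan=True))
[ True  True]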
tinyms.primitives.isfinite(x)[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Parameters:

x (Tensor) – The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = ops.isfinite(x)
>>> print(output)
[False  True False]
tinyms.primitives.isinf(input)[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Inf} \\ & \ False,\ \text{ if } x_{i} \ne \text{Inf} \end{cases}\end{split}\]

where \(Inf\) means infinity.

Parameters:

input (Tensor) – The input tensor. \((N, *)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = ops.isinf(x)
>>> print(output)
[False False  True]
tinyms.primitives.isnan(x)[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Nan} \\ & \ False,\ \text{ if } x_{i} \ne \text{Nan} \end{cases}\end{split}\]

where \(Nan\) means not a number.

Parameters:

x (Tensor) – The input tensor.

Returns:

Tensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = ops.isnan(x)
>>> print(output)
[ True False False]
tinyms.primitives.isneginf(input)[source]

Tests element-wise for negative infinity.

Parameters:

input (Tensor) – Input Tensor.

Returns:

Tensor, true where input is negative infinity, false otherwise.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> output = ops.isneginf(Tensor([[-float("inf"), float("inf")], [1, -float("inf")]], mstype.float32))
>>> print(output)
[[ True False]
 [False  True]]
tinyms.primitives.isposinf(input)[source]

Tests element-wise for positive infinity.

Parameters:

input (Tensor) – Input values.

Returns:

Tensor, true where input is positive infinity, false otherwise.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> output = ops.isposinf(Tensor([[-float("inf"), float("inf")], [1, float("inf")]], mstype.float32))
>>> print(output)
[[False  True]
 [False  True]]
tinyms.primitives.isreal(input)[source]

Tests element-wise for real number. A complex value is considered real when its imaginary part is 0.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, true where input is real number, false otherwise.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import ops, Tensor
>>> from mindspore import dtype as mstype
>>> x = Tensor([1, 1+1j, 2+0j], mstype.complex64)
>>> output = ops.isreal(x)
>>> print(output)
[ True False  True]
tinyms.primitives.jacfwd(fn, grad_position=0, has_aux=False)[source]

Compute Jacobian via forward mode, corresponding to forward-mode differentiation. When the number of outputs is much greater than that of inputs, it’s better to calculate the Jacobian via forward mode than reverse mode for better performance.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[int, tuple[int]], optional) – If int, get the gradient with respect to single input. If tuple, get the gradients with respect to selected inputs. ‘grad_position’ begins with 0. Default: 0.

  • has_aux (bool, optional) – If True, only the first output of fn contributes the gradient of fn, while the other outputs will be returned directly. It means the fn must return more than one output in this case. Default: False.

Returns:

Function, returns the Jacobian function for the input function or cell. For example, for out1, out2 = fn(*args), when has_aux is set True, the gradient function will return outputs like (Jacobian, out2), where out2 does not contribute to the differentiation; otherwise it returns only the Jacobian.

Raises:

TypeErrorgrad_position or has_aux does not belong to required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import jacfwd
>>> from mindspore import Tensor
>>> class MultipleInputsMultipleOutputsNet(nn.Cell):
...     def construct(self, x, y, z):
...         return x ** 2 + y ** 2 + z ** 2, x * y * z
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> z = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> net = MultipleInputsMultipleOutputsNet()
>>> jac, aux = jacfwd(net, grad_position=0, has_aux=True)(x, y, z)
>>> print(jac)
[[[[ 2.,  0.]
   [ 0.,  0.]]
  [[ 0.,  4.]
   [ 0.,  0.]]]
 [[[ 0.,  0.]
   [ 6.,  0.]]
  [[ 0.,  0.]
   [ 0.,  8.]]]]
>>> print(aux)
[[ 1.  4.]
 [ 9. 16.]]
tinyms.primitives.jacrev(fn, grad_position=0, has_aux=False)[source]

Compute Jacobian via reverse mode, corresponding to reverse-mode differentiation. When the number of inputs is much greater than that of outputs, it’s better to calculate the Jacobian via reverse mode than forward mode for better performance.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[int, tuple[int]], optional) – If int, get the gradient with respect to single input. If tuple, get the gradients with respect to selected inputs. ‘grad_position’ begins with 0. Default: 0.

  • has_aux (bool, optional) – If True, only the first output of fn contributes the gradient of fn, while the other outputs will be returned directly. It means the fn must return more than one output in this case. Default: False.

Returns:

Function, returns the Jacobian function for the input function or cell. For example, for out1, out2 = fn(*args), when has_aux is set True, the gradient function will return outputs like (Jacobian, out2), where out2 does not contribute to the differentiation; otherwise it returns only the Jacobian.

Raises:

TypeErrorgrad_position or has_aux does not belong to required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import jacrev
>>> from mindspore import Tensor
>>> class MultipleInputsMultipleOutputsNet(nn.Cell):
...     def construct(self, x, y, z):
...         return x ** 2 + y ** 2 + z ** 2, x * y * z
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> z = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> net = MultipleInputsMultipleOutputsNet()
>>> jac, aux = jacrev(net, grad_position=0, has_aux=True)(x, y, z)
>>> print(jac)
[[[[ 2.,  0.]
   [ 0.,  0.]]
  [[ 0.,  4.]
   [ 0.,  0.]]]
 [[[ 0.,  0.]
   [ 6.,  0.]]
  [[ 0.,  0.]
   [ 0.,  8.]]]]
>>> print(aux)
[[ 1.  4.]
 [ 9. 16.]]
tinyms.primitives.jet(fn, primals, series)[source]

This function is designed to calculate the higher-order differentiation of a given composite function. To figure out the first- to n-th-order derivatives, the original inputs and the first- to n-th-order derivatives of the original inputs must be provided together. Generally, it is recommended to set the first-order derivative to 1 and the others to 0, which corresponds to the derivative of the original input with respect to itself.

Note

If primals is Tensor of int type, it will be converted to Tensor of float type.

Parameters:
  • fn (Union[Cell, function]) – Function to do TaylorOperation.

  • primals (Union[Tensor, tuple[Tensor]]) – The inputs to fn.

  • series (Union[Tensor, tuple[Tensor]]) – If tuple, the length and type of series should be the same as inputs. For each Tensor, the length of its first dimension determines the number of orders of derivatives (1st, 2nd, …) of the output with respect to the inputs that will be figured out.

Returns:

Tuple, tuple of out_primals and out_series.

  • out_primals (Union[Tensor, list[Tensor]]) - The output of fn(primals).

  • out_series (Union[Tensor, list[Tensor]]) - The 1st- to n-th-order derivatives of the output with respect to the inputs.

Raises:
  • TypeError – If primals is not a tensor or tuple of tensors.

  • TypeError – If type of primals is not the same as type of series.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.sin = ops.Sin()
...         self.exp = ops.Exp()
...     def construct(self, x):
...         out1 = self.sin(x)
...         out2 = self.exp(out1)
...         return out2
>>> primals = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> series = Tensor(np.array([[[1, 1], [1, 1]], [[0, 0], [0, 0]], [[0, 0], [0, 0]]]).astype(np.float32))
>>> net = Net()
>>> out_primals, out_series = ops.jet(net, primals, series)
>>> print(out_primals, out_series)
[[2.319777  2.4825778]
 [1.1515628 0.4691642]] [[[ 1.2533808  -1.0331168 ]
  [-1.1400385  -0.3066662 ]]
 [[-1.2748207  -1.8274734 ]
  [ 0.966121    0.55551505]]
 [[-4.0515366   3.6724353 ]
  [ 0.5053504  -0.52061415]]]
tinyms.primitives.jvp(fn, inputs, v, has_aux=False)[source]

Compute the jacobian-vector-product of the given network. jvp matches forward-mode differentiation.

Parameters:
  • fn (Union[Function, Cell]) – The function or net that takes Tensor inputs and returns single Tensor or tuple of Tensors.

  • inputs (Union[Tensor, tuple[Tensor], list[Tensor]]) – The inputs to fn .

  • v (Union[Tensor, tuple[Tensor], list[Tensor]]) – The vector in jacobian-vector-product. The shape and type of v should be the same as inputs .

  • has_aux (bool) – If True, only the first output of fn contributes the gradient of fn, while the other outputs will be returned directly. It means the fn must return more than one output in this case. Default: False.

Returns:

  • net_output (Union[Tensor, tuple[Tensor]]) - The output of fn(inputs). In particular, when has_aux is True, net_output is the first output of fn(inputs).

  • jvp (Union[Tensor, tuple[Tensor]]) - The result of jacobian-vector-product.

  • aux_value (Union[Tensor, tuple[Tensor]], optional) - When has_aux is True, aux_value will be returned. It contains the outputs of fn(inputs) other than the first one. In particular, aux_value does not contribute to the gradient.

Raises:

TypeErrorinputs or v does not belong to required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import jvp
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> class Net(nn.Cell):
...     def construct(self, x, y):
...         return x**3 + y
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> v = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> output = jvp(Net(), (x, y), (v, v))
>>> print(output[0])
[[ 2. 10.]
 [30. 68.]]
>>> print(output[1])
[[ 4. 13.]
 [28. 49.]]
>>>
>>> def fn(x, y):
...     return x ** 3 + y, y
>>> output, jvp_out, aux = jvp(fn, (x, y), (v, v), has_aux=True)
>>> print(output)
[[ 2. 10.]
 [30. 68.]]
>>> print(jvp_out)
[[ 4. 13.]
 [28. 49.]]
>>> print(aux)
[[ 1. 2.]
 [3. 4.]]
tinyms.primitives.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None)[source]

Generates a Kaiser window, which is also known as the Kaiser-Bessel window.

The Kaiser window is defined as

\[w(n) = \frac{I_{0}\left( \beta\sqrt{1 - \frac{4n^{2}}{(M - 1)^{2}}} \right)}{I_{0}(\beta)}\]

with

\[- \frac{M - 1}{2} \leq n \leq \frac{M - 1}{2}\]

where \(I_0\) is the modified zeroth-order Bessel function.

Parameters:
  • window_length (int) – Length of window.

  • periodic (bool, optional) – When set to True, generates a periodic window for spectral analysis. When set to False, generates a symmetric window for filter design. Default: True.

  • beta (float, optional) – Shape parameter, when beta gets large, the window narrows. Default: 12.0.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The output window data type, it must be float. Default: None.

Returns:

Tensor, a Kaiser window.

Raises:
  • TypeError – If window_length is not an int or beta is not a float.

  • TypeError – If periodic is not a variable of Boolean type.

  • ValueError – If window_length is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> window_length = 5
>>> out = ops.kaiser_window(window_length)
>>> print(out.asnumpy())
[5.27734413e-05 1.01719688e-01 7.92939834e-01 7.92939834e-01
 1.01719688e-01]
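
With periodic=False the window is symmetric, so its first and last samples coincide; a quick sanity sketch:

>>> sym = ops.kaiser_window(5, periodic=False)
>>> print(sym[0] == sym[-1])
True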
tinyms.primitives.kl_div(logits, labels, reduction='mean')[source]

Computes the Kullback-Leibler divergence between the logits and the labels.

For input tensors \(x\) and \(target\) with the same shape, the updating formulas of KLDivLoss algorithm are as follows,

\[L(x, target) = target \cdot (\log target - x)\]

Then,

\[\begin{split}\ell(x, target) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{batchmean}(L), & \text{if reduction} = \text{'batchmean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(x\) represents logits. \(target\) represents labels. \(\ell(x, target)\) represents output.

Note

  • Currently it does not support float64 input on Ascend.

  • The output aligns with the mathematical definition of Kullback-Leibler divergence only when reduction is set to ‘batchmean’.

Parameters:
  • logits (Tensor) – The input Tensor. The data type must be float16, float32 or float64.

  • labels (Tensor) – The label Tensor which has the same shape and data type as logits.

  • reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’, ‘batchmean’ or ‘sum’. Default: ‘mean’.

Returns:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as logits. Otherwise, it is a scalar.

Raises:
  • TypeError – If reduction is not a str.

  • TypeError – If neither logits nor labels is a Tensor.

  • TypeError – If dtype of logits or labels is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> output = mindspore.ops.kl_div(logits, labels, 'mean')
>>> print(output)
-0.23333333
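
Since labels above is one-hot, only the position with a nonzero label contributes \(1 \cdot (\log 1 - 0.7) = -0.7\), so reduction='sum' recovers that value directly (and the ‘mean’ above is \(-0.7 / 3\)):

>>> output_sum = mindspore.ops.kl_div(logits, labels, 'sum')
>>> print(output_sum)
-0.7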
tinyms.primitives.kron(x, y)[source]

Computes the Kronecker product \(x ⊗ y\), denoted by ⊗, of x and y.

If x is a \((a_{0} \times a_{1} \times \dots \times a_{n})\) Tensor and y is a \((b_{0} \times b_{1} \times \dots \times b_{n})\) Tensor, the result will be a \((a_{0}*b_{0} \times a_{1}*b_{1} \times \dots \times a_{n}*b_{n})\) Tensor with the following entries:

\[(x ⊗ y)_{k_{0},k_{1},...k_{n}} = x_{i_{0},i_{1},...i_{n}} * y_{j_{0},j_{1},...j_{n}},\]

where \(k_{t} = i_{t} * b_{t} + j_{t}\) for \(0 \leq t \leq n\). If one Tensor has fewer dimensions than the other, it is unsqueezed until it has the same number of dimensions.

Note

Supports real-valued and complex-valued inputs.

Parameters:
  • x (Tensor) – Input Tensor, has the shape \((r0, r1, ... , rN)\).

  • y (Tensor) – Input Tensor, has the shape \((s0, s1, ... , sN)\).

Returns:

Tensor, has the shape \((r0 * s0, r1 * s1, ... , rN * sN)\).

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If y is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]])).astype(np.float32)
>>> y = Tensor(np.array([[-1, -2, -3], [-4, -6, -8]])).astype(np.float32)
>>> output = ops.kron(x, y)
>>> print(output)
[[  0.   0.   0.  -1.  -2.  -3.  -2.  -4.  -6.]
 [  0.   0.   0.  -4.  -6.  -8.  -8. -12. -16.]
 [ -3.  -6.  -9.  -4.  -8. -12.  -5. -10. -15.]
 [-12. -18. -24. -16. -24. -32. -20. -30. -40.]]
tinyms.primitives.l1_loss(input, target, reduction='mean')[source]

Calculate the mean absolute error between the input value and the target value.

Assuming that \(x\) and \(y\) are 1-D Tensors of length \(N\), and reduction is set to “none”, then the loss of \(x\) and \(y\) is calculated without dimensionality reduction.

The formula is as follows:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad \text{with } l_n = \left| x_n - y_n \right|,\]

where \(N\) is the batch size.

If reduction is mean or sum, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – Predicted value, Tensor of any dimension.

  • target (Tensor) – Target value, usually has the same shape as the input. If input and target have different shape, make sure they can broadcast to each other.

  • reduction (str, optional) – Type of reduction to be applied to loss. The optional value is “mean”, “sum” or “none”. Default: “mean”.

Returns:

Tensor or Scalar, if reduction is “none”, return a Tensor with same shape and dtype as input. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If target is not a Tensor.

  • ValueError – If reduction is not one of “none”, “mean” or “sum”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> from mindspore import dtype as mstype
>>> x = ms.Tensor([[1, 2, 3], [4, 5, 6]], mstype.float32)
>>> target = ms.Tensor([[6, 5, 4], [3, 2, 1]], mstype.float32)
>>> output = ops.l1_loss(x, target, reduction="mean")
>>> print(output)
3.0
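
With reduction="none" the per-element absolute errors \(|x_n - y_n|\) are kept, and their mean reproduces the 3.0 above:

>>> print(ops.l1_loss(x, target, reduction="none"))
[[5. 3. 1.]
 [1. 3. 5.]]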
tinyms.primitives.laplace(shape, mean, lambda_param, seed=None)[source]

Generates random numbers according to the Laplace random number distribution. It is defined as:

\[\text{f}(x;μ,λ) = \frac{1}{2λ}\exp(-\frac{|x-μ|}{λ}),\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak. With float32 data type.

  • lambda_param (Tensor) – The parameter used for controlling the variance of this random distribution. The variance of Laplace distribution is equal to twice the square of lambda_param. With float32 data type.

  • seed (int, optional) – The seed is used as an entropy source for the random number engines to generate pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be the broadcasted shape of input shape and shapes of mean and lambda_param. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops as ops
>>> shape = (2, 3)
>>> mean = Tensor(1.0, mindspore.float32)
>>> lambda_param = Tensor(1.0, mindspore.float32)
>>> output = ops.laplace(shape, mean, lambda_param, seed=5)
>>> print(output.shape)
(2, 3)
tinyms.primitives.lcm(input, other)[source]

Computes the least common multiple of the input tensors element-wise. The shapes of the two inputs should be broadcastable, and their data types should be one of: int32, int64.

Parameters:
  • input (Tensor) – The first input tensor.

  • other (Tensor) – The second input tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is one with higher digits in the two inputs.

Raises:
  • TypeError – If the data type of input or other is not int32 or int64.

  • ValueError – If shape of two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([7, 8, 9]))
>>> other = Tensor(np.array([14, 6, 12]))
>>> y = ops.lcm(input, other)
>>> print(y)
[14 24 36]
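
Because the shapes only need to be broadcastable, a column vector paired with a row vector yields the full matrix of pairwise least common multiples:

>>> a = Tensor(np.array([[7], [8]]))
>>> b = Tensor(np.array([14, 6, 12]))
>>> print(ops.lcm(a, b))
[[14 42 84]
 [56 24 24]]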
tinyms.primitives.ldexp(x, other)[source]

Multiplies input Tensor by \(2^{other}\) element-wise.

It takes two arguments, a mantissa x and an exponent other, and returns their product as a floating-point number:

\[out_{i} = x_{i} * ( 2 ^{other_{i}} )\]

Note

This function is commonly used to construct floating-point numbers from their component parts, or to scale a floating-point number by a power of two.

Parameters:
  • x (Tensor) – The input Tensor.

  • other (Tensor) – A Tensor of integers that represent exponents.

Returns:

Tensor, the output Tensor.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> x = Tensor(np.array([1.]), mindspore.float32)
>>> other = Tensor(np.array([1, 2, 3, 4]), mindspore.int32)
>>> out = ops.ldexp(x, other)
>>> print(out)
[ 2.  4.  8. 16.]
>>> x = Tensor(np.array([[1.], [2]]), mindspore.float32)
>>> other = Tensor(np.array([[1.], [2]]), mindspore.int32)
>>> out = ops.ldexp(x, other)
>>> print(out)
[[2.]
 [8.]]
tinyms.primitives.le(x, y)[source]

Computes the boolean value of \(x <= y\) element-wise.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}<=y_{i} \\ & \text{False, if } x_{i}>y_{i} \end{cases}\end{split}\]

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • x (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • y (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.le(x, y)
>>> print(output)
[ True False  True]
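
As noted above, one of the inputs may be a constant scalar; comparing the same x against the scalar 2 gives:

>>> output = ops.le(x, 2)
>>> print(output)
[ True  True False]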
tinyms.primitives.leaky_relu(input, alpha=0.2)[source]

leaky_relu activation function. Elements of input that are less than 0 are multiplied by alpha.

The activation function is defined as:

\[\text{leaky_relu}(input) = \begin{cases}input, &\text{if } input \geq 0; \cr {\alpha} * input, &\text{otherwise.}\end{cases}\]

where \(\alpha\) represents the alpha parameter.

For more details, see Rectifier Nonlinearities Improve Neural Network Acoustic Models.

Parameters:
  • input (Tensor) – The input of leaky_relu is a Tensor of any dimension.

  • alpha (Union[int, float]) – Slope of the activation function when the element of input is less than 0. Default: 0.2.

Returns:

Tensor, has the same type and shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If alpha is not a float or an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> print(ops.leaky_relu(x, alpha=0.2))
[[-0.2  4.  -1.6]
 [ 2.  -1.   9. ]]
tinyms.primitives.lerp(input, end, weight)[source]

Does a linear interpolation of two tensors input and end based on a float or tensor weight.

If weight is a tensor, the shapes of three inputs need to be broadcast; If weight is a float, the shapes of input and end need to be broadcast.

\[output_{i} = input_{i} + weight_{i} * (end_{i} - input_{i})\]
Parameters:
  • input (Tensor) – The tensor with the starting points. Data type must be float16 or float32.

  • end (Tensor) – The tensor with the ending points. Data type must be the same as input.

  • weight (Union[float, Tensor]) – The weight for the interpolation formula. Must be a float or a scalar tensor with float16 or float32 data type.

Returns:

Tensor, has the same type and shape as input input.

Raises:
  • TypeError – If input or end is not a tensor.

  • TypeError – If weight is neither scalar(float) nor tensor.

  • TypeError – If dtype of input or end is neither float16 nor float32.

  • TypeError – If dtype of weight is neither float16 nor float32 when it is a tensor.

  • TypeError – If input and end have different data types.

  • TypeError – If input, end and weight have different data types when weight is a tensor.

  • ValueError – If end could not be broadcast to a tensor with shape of input.

  • ValueError – If weight could not be broadcast to tensors with shapes of input and end when it is a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> end = Tensor(np.array([10., 10., 10., 10.]), mindspore.float32)
>>> output = ops.lerp(input, end, 0.5)
>>> print(output)
[5.5 6. 6.5 7. ]
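
weight may equally be supplied as a scalar tensor of float16 or float32 dtype, which behaves like the float above:

>>> weight = Tensor(0.5, mindspore.float32)
>>> output = ops.lerp(input, end, weight)
>>> print(output)
[5.5 6. 6.5 7. ]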
tinyms.primitives.less(x, y)[source]

Computes the boolean value of \(x < y\) element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}<y_{i} \\ & \text{False, if } x_{i}>=y_{i} \end{cases}\end{split}\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting,and the data type is bool.

Raises:

TypeError – If x and y is not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.less(x, y)
>>> print(output)
[False False  True]
tinyms.primitives.less_equal(input, other)[source]

Computes the boolean value of \(input <= other\) element-wise.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } input_{i}<=other_{i} \\ & \text{False, if } input_{i}>other_{i} \end{cases}\end{split}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, Number, bool]) –

    The first input is a Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, Number, bool]) – The second input, when the first input is a Tensor, the second input should be a Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> other = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.less_equal(x, other)
>>> print(output)
[ True False  True]
tinyms.primitives.lgamma(input)[source]

Computes the natural logarithm of the absolute value of the gamma function on input.

\[\text{out}_{i} = \ln \Gamma(|\text{input}_{i}|)\]
Parameters:

input (Tensor) – The input tensor. With type of float16 or float32 or float64.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.5, 3.2, 8.5]), mindspore.float32)
>>> output = ops.lgamma(x)
>>> print(output)
[0.5723649 0.8854049 9.549267 ]
tinyms.primitives.linearize(fn, inputs)[source]

Produces a linear approximation to fn using jvp() and partial evaluation. This function is mainly useful if you want to apply jvp multiple times.

Parameters:
  • fn (Union[Function, Cell]) – The function or net that takes Tensor inputs and returns single tensor or tuple of Tensors.

  • inputs (Union[Tensor, Tuple or List of Tensors]) – The inputs to fn.

Returns:

Tuple, tuple of output and jvp_fn.

  • netout (Tensor or Tuple of Tensors) - The output of “fn(inputs)”.

  • jvp_fn (Function) - The function that evaluates the Jacobian-vector product.

Raises:

TypeError – If the input is not a tensor or tuple or list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, Parameter, ops
>>> from mindspore import nn
>>> from mindspore.ops.functional import linearize
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...     def construct(self, x, y):
...         out = self.matmul(x, y)
...         return out
>>> x = Tensor(np.array([[1, 2, 3], [3, 4, 5]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4], [5, 6]]).astype(np.float32))
>>> v = (Tensor(np.array([[1, 1, 1], [1, 1, 1]]).astype(np.float32)),
...      Tensor(np.array([[1, 1], [1, 1], [0, 0]]).astype(np.float32)))
>>> output, jvp_fn = linearize(Net(), (x, y))
>>> print(output)
[[22. 28.]
 [40. 52.]]
>>> jvp = jvp_fn(v)
>>> print(jvp)
[[12. 15.]
 [16. 19.]]
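
Since the linearization at (x, y) is computed only once, jvp_fn can be re-applied to new tangent vectors without re-evaluating the network. A follow-on sketch with a second vector (v2 is an illustrative name); for f(x, y) = x @ y the product rule gives dx @ y + x @ dy:

>>> v2 = (Tensor(np.ones((2, 3)).astype(np.float32)),
...       Tensor(np.ones((3, 2)).astype(np.float32)))
>>> print(jvp_fn(v2))
[[15. 18.]
 [21. 24.]]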
tinyms.primitives.linspace(start, end, steps)[source]

Returns a Tensor of steps values evenly spaced over the interval [start, end] (including start and end); the length of the output Tensor is steps.

\[\begin{split}\begin{aligned} &step = (end - start)/(steps - 1)\\ &output = [start, start+step, start+2*step, ... , end] \end{aligned}\end{split}\]
Parameters:
  • start (Union[Tensor, int, float]) – Start value of interval. The tensor data type must be float32 or float64 and with shape of 0-D.

  • end (Union[Tensor, int, float]) – Last value of interval. The tensor data type must be float32 or float64 and with shape of 0-D.

  • steps (Union[Tensor, int]) – Number of ticks in the interval, inclusive of start and end. Must be positive int number or 0D int32/int64 Tensor.

Returns:

Tensor, has the same dtype as start, and the shape of \((steps)\).

Raises:
  • TypeError – If start or end is not a Tensor.

  • TypeError – If dtype of start or dtype of end is not float32 or float64.

  • ValueError – If shape of start or shape of end is not 0-D.

  • TypeError – If steps is not int or 0D int32/int64 Tensor.

  • ValueError – If steps is not positive int number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> start = Tensor(1, mindspore.float32)
>>> end = Tensor(10, mindspore.float32)
>>> steps = 5
>>> output = ops.linspace(start, end, steps)
>>> print(output)
[ 1.    3.25  5.5   7.75 10.  ]
tinyms.primitives.log(input)[source]

Returns the natural logarithm of a tensor element-wise.

\[y_i = log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

input (Tensor) – Input Tensor of any dimension. The value must be greater than 0.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64 on CPU.

  • TypeError – If dtype of input is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.log(x)
>>> print(output)
[0.        0.6931472 1.3862944]
tinyms.primitives.log10(input)[source]

Returns a new Tensor by taking the base 10 logarithm of the elements in the input Tensor.

\[y_i = log_{10}(input_i)\]

Warning

If the input value of operator log10 is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

input (Tensor) – Input Tensor of any dimension. Each element in the Tensor must be greater than 0.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64 on CPU and GPU, if dtype of input is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, 10]).astype(np.float16))
>>> output = ops.log10(x)
>>> print(output)
[0.301 0.602 1.   ]
tinyms.primitives.log1p(input)[source]

Returns the natural logarithm of one plus the input tensor element-wise.

\[out_i = {log_e}(input_i + 1)\]
Parameters:

input (Tensor) – The input tensor. With float16 or float32 data type. The value must be greater than -1. \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.log1p(x)
>>> print(output)
[0.6931472 1.0986123 1.609438 ]
tinyms.primitives.log2(input)[source]

Returns a new Tensor by taking the base 2 logarithm of the elements in the input Tensor.

\[y_i = log_2(input_i)\]

Warning

If the input value of operator log2 is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

input (Tensor) – Input Tensor of any dimension. The value must be greater than 0.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64 on CPU and GPU, if dtype of input is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, 8]).astype(np.float16))
>>> output = ops.log2(x)
>>> print(output)
[1. 2. 3.]
tinyms.primitives.log_matrix_determinant(input)[source]

log_matrix_determinant is deprecated, please use matrix_solve instead.

tinyms.primitives.log_softmax(logits, axis=-1)[source]

Applies the Log Softmax function to the input tensor on the specified axis. Supposes a slice in the given axis, \(x\) for each element \(x_i\), the Log Softmax function is shown as follows:

\[\text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),\]

where \(N\) is the length of the Tensor.

Parameters:
  • logits (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

  • axis (int) – The axis to perform the Log softmax operation. Default: -1.

Returns:

Tensor, with the same type and shape as the logits.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • ValueError – If axis is not in range [-len(logits.shape), len(logits.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.log_softmax(logits)
>>> print(output)
[-4.4519143 -3.4519143 -2.4519143 -1.4519144 -0.4519144]
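
Because log_softmax(x_i) equals \(x_i - \log \sum_{j} \exp(x_j)\), the output differs from the input only by a constant shift along the axis, which the following check makes visible:

>>> print(output - logits)
[-5.4519143 -5.4519143 -5.4519143 -5.4519143 -5.4519143]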
tinyms.primitives.log_uniform_candidate_sampler(true_classes, num_true=1, num_sampled=5, unique=True, range_max=5, seed=0)[source]

Generates random labels with a log-uniform distribution for sampled_candidates.

Randomly samples a tensor of sampled classes from the range of integers [0, range_max).

Parameters:
  • true_classes (Tensor) – The target classes. With data type of int64 and shape \((batch\_size, num\_true)\) .

  • num_true (int) – The number of target classes per training example. Default: 1.

  • num_sampled (int) – The number of classes to randomly sample. Default: 5.

  • unique (bool) – Determines whether to sample with rejection. If unique is True, all sampled classes in a batch are unique. Default: True.

  • range_max (int) – The number of possible classes. When unique is True, range_max must be greater than or equal to num_sampled. Default: 5.

  • seed (int) – Random seed, must be non-negative. Default: 0.

Returns:

Tuple of 3 Tensors.

  • sampled_candidates (Tensor) - A Tensor with shape \((num\_sampled,)\) and the same type as true_classes.

  • true_expected_count (Tensor) - A Tensor with the same shape as true_classes and type float32.

  • sampled_expected_count (Tensor) - A Tensor with the same shape as sampled_candidates and type float32.

Raises:
  • TypeError – If neither num_true nor num_sampled is an int.

  • TypeError – If unique is not a bool.

  • TypeError – If neither range_max nor seed is an int.

  • TypeError – If true_classes is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> output1, output2, output3 = ops.log_uniform_candidate_sampler(
... Tensor(np.array([[1, 7], [0, 4], [3, 3]])), 2, 5, True, 5)
>>> print(output1, output2, output3)
[3 2 0 4 1]
[[0.92312991 0.49336370]
 [0.99248987 0.65806371]
 [0.73553443 0.73553443]]
[0.73553443 0.82625800 0.99248987 0.65806371 0.92312991]
tinyms.primitives.logaddexp(input, other)[source]

Computes the logarithm of the sum of exponentiations of the inputs.

\[out_i = log(exp(input_i) + exp(other_i))\]
Parameters:
  • input (Tensor) – Input Tensor. The dtype of input must be float.

  • other (Tensor) – Input Tensor. The dtype of other must be float. If the shape of input is not equal to the shape of other, they must be broadcastable to a common shape (which becomes the shape of the output).

Returns:

Tensor.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input or other is not float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.float16))
>>> x2 = Tensor(np.array(2).astype(np.float16))
>>> output = ops.logaddexp(x1, x2)
>>> print(output)
[2.312 2.693 3.312]
tinyms.primitives.logaddexp2(input, other)[source]

Computes the logarithm of the sum of exponentiations in base of 2 of the inputs.

\[out_i = log_2(2^{input_i} + 2^{other_i})\]
Parameters:
  • input (Tensor) – Input tensor. The dtype of input must be float.

  • other (Tensor) – Input tensor. The dtype of other must be float. If input.shape != other.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

Returns:

Tensor.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input or other is not float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.array([2, 4, 8]).astype(np.float16))
>>> x2 = Tensor(np.array([2]).astype(np.float16))
>>> output = ops.logaddexp2(x1, x2)
>>> print(output)
[3. 4.32 8.02]
tinyms.primitives.logdet(input)[source]

Calculates log determinant of one or a batch of square matrices.

Parameters:

input (Tensor) – Input Tensor of any dimension.

Returns:

Tensor, the log determinant of input. If the matrix determinant is smaller than 0, nan will be returned. If the matrix determinant is 0, -inf will be returned.

Raises:

TypeError – If dtype of input is not float32, float64, Complex64 or Complex128.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> a = Tensor([[[8, 9], [1, 2]], [[5, 6], [3, 4]]], mindspore.float32)
>>> output = ops.logdet(a)
>>> print(output)
[1.9459091 0.6931454]
tinyms.primitives.logical_and(input, other)[source]

Computes the “logical AND” of two tensors element-wise.

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

\[out_{i} = input_{i} \wedge other_{i}\]

Note

LogicalAnd supports broadcasting.

Parameters:
  • input (Union[Tensor, bool]) – The first input is a bool or a tensor whose data type can be implicitly converted to bool.

  • other (Union[Tensor, bool]) – The second input is a bool when the first input is a tensor or a tensor whose data type can be implicitly converted to bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> output = ops.logical_and(x, y)
>>> print(output)
[ True False False]
tinyms.primitives.logical_not(input)[source]

Computes the “logical NOT” of a tensor element-wise.

\[out_{i} = \neg input_{i}\]
Parameters:

input (Tensor) – The input tensor. \((N,*)\) where \(*\) means,any number of additional dimensions.

Returns:

Tensor, the shape is the same as the input, and the dtype is bool.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> output = ops.logical_not(x)
>>> print(output)
[False  True False]
tinyms.primitives.logical_or(input, other)[source]

Computes the “logical OR” of two tensors element-wise.

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

\[out_{i} = input_{i} \vee other_{i}\]

Note

LogicalOr supports broadcasting.

Parameters:
  • input (Union[Tensor, bool]) – The first input is a bool or a tensor whose data type can be implicitly converted to bool.

  • other (Union[Tensor, bool]) – The second input is a bool when the first input is a tensor or a tensor whose data type can be implicitly converted to bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> output = ops.logical_or(x, y)
>>> print(output)
[ True  True  True]
tinyms.primitives.logical_xor(input, other)[source]

Computes the “logical XOR” of two tensors element-wise.

\[out_{i} = input_{i} \oplus other_{i}\]
Parameters:
  • input (Tensor) – The first input is a tensor whose data type can be implicitly converted to bool.

  • other (Tensor) – The second input is a tensor whose data type can be implicitly converted to bool to compute XOR with the first input.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:
  • TypeError – If neither input nor other is a Tensor whose data type is bool.

  • ValueError – If the shape of two inputs cannot be broadcast.

Supported Platforms:

Ascend CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> output = ops.logical_xor(x, y)
>>> print(output)
[False True True]
tinyms.primitives.logit(input, eps=None)[source]

Calculates the logit of a tensor element-wise. When eps is not None, elements of input are clamped to [eps, 1-eps]. When eps is None, the input is not clamped.

\[\begin{split}\begin{align} y_{i} & = \ln(\frac{z_{i}}{1 - z_{i}}) \\ z_{i} & = \begin{cases} input_{i} & \text{if eps is None} \\ \text{eps} & \text{if } input_{i} \lt \text{eps} \\ input_{i} & \text{if } \text{eps} \leq input_{i} \leq 1 - \text{eps} \\ 1 - \text{eps} & \text{if } input_{i} \gt 1 - \text{eps} \end{cases} \end{align}\end{split}\]
Parameters:
  • input (Tensor) – The input tensor.

  • eps (float, optional) – The epsilon. If eps is not None, the input clamp bound is defined as [eps, 1-eps]; otherwise, input is not clamped. Default: None.

Returns:

Tensor, with the same shape and dtype as the input.

Raises:
  • TypeError – If eps is not a float.

  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.1, 0.2, 0.3]).astype(np.float32))
>>> output = ops.logit(x, eps=1e-5)
>>> print(output)
[-2.1972246 -1.3862944 -0.8472978]
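
Since logit is the inverse of the sigmoid function, applying the sigmoid to the result recovers the input, which makes for a quick NumPy sanity check of the formula (a hedged check, not part of this API):

>>> import numpy as np
>>> p = np.array([0.1, 0.2, 0.3])
>>> y = np.log(p / (1 - p))  # logit by the defining formula
>>> print(np.round(1 / (1 + np.exp(-y)), 6))  # sigmoid(logit(p)) recovers p
[0.1 0.2 0.3]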
tinyms.primitives.logsigmoid(x)[source]

Applies logsigmoid activation element-wise. The input is a Tensor with any valid shape.

Logsigmoid is defined as:

\[\text{logsigmoid}(x_{i}) = log(\frac{1}{1 + \exp(-x_i)}),\]

where \(x_{i}\) is the element of the input.

Parameters:

x (Tensor) – The input of LogSigmoid with data type of float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, with the same type and shape as x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = ops.logsigmoid(x)
>>> print(output)
[-0.31326166 -0.12692806 -0.04858734]
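
Numerically, \(\log(\frac{1}{1 + e^{-x}}) = -\log(1 + e^{-x})\), so the same values can be reproduced with NumPy's log1p (a hedged equivalence check in float64, hence the extra digits):

>>> import numpy as np
>>> print(np.round(-np.log1p(np.exp(-np.array([1.0, 2.0, 3.0]))), 8))
[-0.31326169 -0.12692801 -0.04858735]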
tinyms.primitives.logspace(start, end, steps, base=10, *, dtype=mindspore.float32)[source]

Returns a Tensor whose values are evenly spaced on a logarithmic scale.

\[\begin{split}\begin{aligned} &step = (end - start)/(steps - 1)\\ &output = [base^{start}, base^{start + 1 * step}, ... , base^{start + (steps-2) * step}, base^{end}] \end{aligned}\end{split}\]

Note

  • Input base must be an integer.

Parameters:
  • start (Union[float, Tensor]) – Start value of interval.

  • end (Union[float, Tensor]) – End value of interval.

  • steps (int) – The steps must be a non-negative integer.

  • base (int, optional) – The base must be a non-negative integer. Default: 10.

  • dtype (mindspore.dtype, optional) – The dtype of output. Default: mstype.float32.

Returns:

Tensor of shape \((steps,)\). Its datatype is set by the attr dtype.

Raises:
  • TypeError – If start is not a float or a Tensor.

  • TypeError – If end is not a float or a Tensor.

  • TypeError – If steps is not an int.

  • TypeError – If base is not an int.

  • ValueError – If steps is not a non-negative integer.

  • ValueError – If base is not a non-negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> start = Tensor(1, mindspore.float32)
>>> end = Tensor(10, mindspore.float32)
>>> output = ops.logspace(start, end, steps = 10, base = 10, dtype=mstype.float32)
>>> print(output)
[1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10]
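The same sequence can be obtained with NumPy's logspace, which follows the identical base**linspace(start, end, steps) construction (a hedged equivalence, shown for orientation):

>>> import numpy as np
>>> print(np.logspace(1, 10, num=10, base=10))
[1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10]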
tinyms.primitives.logsumexp(input, axis, keep_dims=False)[source]

Reduces a dimension of a tensor by calculating the exponential of all elements in the dimension and then calculating the logarithm of the sum.

\[logsumexp(input) = \log(\sum(e^{input-input_{max}})) + input_{max}\]
Parameters:
  • input (Tensor) – The input tensor. With float16 or float32 data type.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

  • keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default: False.

Returns:

Tensor, has the same dtype as the input.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the log-sum-exp of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.logsumexp(x, 1, keep_dims=True)
>>> print(output.shape)
(3, 1, 5, 6)
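
The max-subtraction in the formula above is what keeps the reduction stable; a minimal NumPy sketch of the identity, assuming nothing beyond the formula itself:

>>> import numpy as np
>>> x = np.array([1000.0, 1000.0])
>>> m = x.max()
>>> print(round(float(np.log(np.sum(np.exp(x - m))) + m), 4))  # stable: 1000 + log(2)
1000.6931
>>> with np.errstate(over='ignore'):
...     print(np.log(np.sum(np.exp(x))))  # the naive form overflows to inf
inf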
tinyms.primitives.lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False)[source]

Applies a 1D LP pooling operation on an input Tensor, which can be regarded as forming a 1D input plane.

Typically the input is of shape \((N, C, L_{in})\) or \((C, L_{in})\), the output is of shape \((N, C, L_{out})\) or \((C, L_{out})\).

\[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor\]

The operation is as follows.

\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]
Parameters:
  • x (Tensor) – Tensor of shape \((N, C, L_{in})\) or \((C, L_{in})\).

  • norm_type (Union[int, float]) –

    Type of normalization, represents p in the formula, and cannot be 0,

    • if p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling),

    • if p = \(\infty\), the result is that of maximum pooling.

  • kernel_size (int) – The size of kernel window.

  • stride (int) – The distance the kernel moves, an int number that represents the width of movement. If the value is None, the default value kernel_size is used. Default: None.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Returns:

  • output (Tensor) - LPPool1d result, with shape \((N, C, L_{out})\) or \((C, L_{out})\), which has the same data type as x, where

    \[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor\]

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If kernel_size or stride is not an int.

  • TypeError – If ceil_mode is not a bool.

  • TypeError – If norm_type is neither float nor int.

  • ValueError – If norm_type is equal to 0.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If length of shape of x is not equal to 2 or 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
>>> out = ops.lp_pool1d(x, norm_type=1, kernel_size=3, stride=1, ceil_mode=False)
>>> print(out)
[[[ 3.  6.]
  [15. 18.]
  [27. 30.]]
 [[39. 42.]
  [51. 54.]
  [63. 66.]]]
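
With norm_type=1 the pool reduces to a sliding-window sum, so the example can be reproduced with plain NumPy (a hedged cross-check; sliding_window_view requires NumPy >= 1.20):

>>> import numpy as np
>>> x = np.arange(2 * 3 * 4).reshape(2, 3, 4).astype(np.float32)
>>> windows = np.lib.stride_tricks.sliding_window_view(x, 3, axis=-1)
>>> print(windows.sum(-1)[0])  # first batch matches the output above
[[ 3.  6.]
 [15. 18.]
 [27. 30.]]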
tinyms.primitives.lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False)[source]

Applying 2D LPPooling operation on an input Tensor can be regarded as forming a 2D input plane.

Typically the input is of shape \((N, C, H_{in}, W_{in})\) and the output is of shape \((N, C, H_{out}, W_{out})\), with \(H_{out}\) and \(W_{out}\) given in the Returns section below. The operation is as follows.

\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]
Parameters:
  • x (Tensor) – Tensor of shape \((N, C, H_{in}, W_{in})\).

  • norm_type (Union[int, float]) –

    Type of normalization, represents p in the formula, and cannot be 0,

    • if p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling),

    • if p = \(\infty\), the result is that of maximum pooling.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel window. The data type of kernel_size must be int and the value represents the height and width, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively, if the value is None, the default value kernel_size is used.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Returns:

  • output (Tensor) - LPPool2d result, with shape \((N, C, H_{out}, W_{out})\), which has the same data type as x, where

    \[H_{out} = \left\lfloor\frac{H_{in} - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
    \[W_{out} = \left\lfloor\frac{W_{in} - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If kernel_size or stride is neither int nor tuple.

  • TypeError – If ceil_mode is not a bool.

  • TypeError – If norm_type is neither float nor int.

  • ValueError – If norm_type is equal to 0.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
>>> out = ops.lp_pool2d(x, norm_type=1, kernel_size=3, stride=1, ceil_mode=False)
>>> print(out)
[[[[  54.   63.   72.]
   [  99.  108.  117.]]
  [[ 234.  243.  252.]
   [ 279.  288.  297.]]
  [[ 414.  423.  432.]
   [ 459.  468.  477.]]]
 [[[ 594.  603.  612.]
   [ 639.  648.  657.]]
  [[ 774.  783.  792.]
   [ 819.  828.  837.]]
  [[ 954.  963.  972.]
   [ 999. 1008. 1017.]]]]
tinyms.primitives.lrn(x, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5, norm_region='ACROSS_CHANNELS')[source]

Local Response Normalization.

\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]

where \(a_{c}\) indicates the value of the pixel at position \(c\) in the feature map, \(n/2\) indicates depth_radius, \(k\) indicates bias, \(\alpha\) indicates alpha and \(\beta\) indicates beta.

Parameters:
  • depth_radius (int) – Half-width of the 1-D normalization window, a 0-D value. Default: 5.

  • bias (float) – An offset (usually positive to avoid dividing by 0). Default: 1.0.

  • alpha (float) – A scale factor, usually positive. Default: 1.0.

  • beta (float) – An exponent. Default: 0.5.

  • norm_region (str) – Specifies normalization region. Options: “ACROSS_CHANNELS”. Default: “ACROSS_CHANNELS”.

  • x (Tensor) – A 4-D Tensor with float16 or float32 data type.

Returns:

Tensor, with the same shape and data type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> output = ops.lrn(input_x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
tinyms.primitives.lstsq(input, A)[source]

Computes the solutions of the least squares and minimum norm problems of full-rank matrix x of size \((m \times n)\) and matrix a of size \((m \times k)\).

If \(m \geq n\), lstsq solves the least-squares problem:

\[\begin{array}{ll} \min_y & \|xy-a\|_2. \end{array}\]

If \(m < n\), lstsq solves the least-norm problem:

\[\begin{array}{llll} \min_y & \|y\|_2 & \text{subject to} & xy = a. \end{array}\]

where y is the returned tensor.

Parameters:
  • input (Tensor) – The \((m \times n)\) matrix equivalent to \(x\) in above. The input tensor whose data type is float16, float32 or float64.

  • A (Tensor) – The \((m \times k)\) matrix equivalent to \(a\) in above. The input tensor whose data type is float16, float32 or float64.

Returns:

Tensor, the least squares or minimum norm problems solution, which has shape \((n \times k)\). The data type is the same as input.

Raises:
  • TypeError – If input or A is not a Tensor.

  • TypeError – If dtype of input or A is not one of: float16, float32, float64.

  • TypeError – If the dtypes of input and A are not the same.

  • ValueError – If the dimension of input is not equal to 2.

  • ValueError – If the dimension of A is not equal to 2 or 1.

  • ValueError – If the length of input_dims[0] is not equal to the length of A_dims[0].

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[2, 1, 5], [3, 5, 1], [1, 1, 1]]), mindspore.float32)
>>> a = Tensor(np.array([[10, 5], [15, 8], [7, 4]]), mindspore.float32)
>>> output = ops.lstsq(x, a)
>>> print(output)
[[17.000002  11.000002 ]
 [-6.5000005 -4.500001 ]
 [-3.500002  -2.5000017]]
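
Since this x is square and full-rank, the least-squares solution coincides with the exact solve, so the result can be verified against np.linalg.solve (a hedged check, not part of this API):

>>> import numpy as np
>>> xn = np.array([[2., 1., 5.], [3., 5., 1.], [1., 1., 1.]])
>>> an = np.array([[10., 5.], [15., 8.], [7., 4.]])
>>> print(np.allclose(np.linalg.solve(xn, an), [[17., 11.], [-6.5, -4.5], [-3.5, -2.5]]))
True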
tinyms.primitives.lt(input, other)[source]

Alias for mindspore.ops.less() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True)[source]

Converts LU_data and LU_pivots back into P, L and U matrices, where P is a permutation matrix, L is a lower triangular matrix, and U is an upper triangular matrix. Typically, LU_data and LU_pivots are generated from the LU decomposition of a matrix.

Parameters:
  • LU_data (Tensor) – The packed LU factorization data. A Tensor of shape \((*, M, N)\), where * is batch dimensions. The dim of LU_data must be equal to or greater than 2.

  • LU_pivots (Tensor) – The packed LU factorization pivots. A Tensor of shape \((*, min(M, N))\), where * is batch dimensions, with data type int8, uint8, int16, int32, int64.

  • unpack_data (bool, optional) – A flag indicating if the LU_data should be unpacked. If False, then the returned L and U are None. Default: True.

  • unpack_pivots (bool, optional) – A flag indicating if the LU_pivots should be unpacked into a permutation matrix P. If False, then the returned P is None. Default: True.

Returns:

  • pivots (Tensor) - The permutation matrix of LU factorization. The shape is \((*, M, M)\), and the dtype is the same as LU_data.

  • L (Tensor) - The L matrix of LU factorization. The dtype is the same as LU_data.

  • U (Tensor) - The U matrix of LU factorization. The dtype is the same as LU_data.

Raises:
  • TypeError – If the dtype of LU_data is int, uint or float.

  • TypeError – If the dtype of LU_pivots is not one of the following: int8, uint8, int16, int32, int64.

  • ValueError – If the dimension of LU_data is less than 2.

  • ValueError – If the dimension of LU_pivots is less than 1.

  • ValueError – If the size of the last dimension of LU_pivots is not equal to the minimum of the sizes of the last two dimensions of LU_data.

  • ValueError – If the batch dimensions of LU_data do not match the batch dimensions of LU_pivots.

  • ValueError – On the CPU platform, if the values of LU_pivots are out of the range \([1, LU_data.shape[-2])\).

  • RuntimeError – On the Ascend platform, if the values of LU_pivots are out of the range \([1, LU_data.shape[-2])\).

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> LU_data = Tensor(np.array([[[-0.3806, -0.4872,  0.5536],
...                             [-0.1287,  0.6508, -0.2396],
...                             [ 0.2583,  0.5239,  0.6902]],
...                            [[ 0.6706, -1.1782,  0.4574],
...                             [-0.6401, -0.4779,  0.6701],
...                             [ 0.1015, -0.5363,  0.6165]]]), mstype.float64)
>>> LU_pivots = Tensor(np.array([[1, 3, 3],
...                              [2, 3, 3]]), mstype.int32)
>>> pivots, L, U = ops.lu_unpack(LU_data, LU_pivots)
>>> print(pivots)
[[[1. 0. 0.]
  [0. 0. 1.]
  [0. 1. 0.]]
 [[0. 0. 1.]
  [1. 0. 0.]
  [0. 1. 0.]]]
>>> print(L)
[[[ 1.       0.       0.]
  [-0.1287   1.       0.]
  [ 0.2583   0.5239   1.]]
 [[ 1.0000   0.       0.]
  [-0.6401   1.       0.]
  [ 0.1015  -0.5363   1.]]]
>>> print(U)
[[[-0.3806  -0.4872   0.5536]
  [ 0.       0.6508  -0.2396]
  [ 0.       0.       0.6902]]
 [[ 0.6706  -1.1782   0.4574]
  [ 0.      -0.4779   0.6701]
  [ 0.       0.       0.6165]]]
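
Continuing the example above, a structural sanity check on the outputs: L should be unit lower triangular, U upper triangular, and every matrix in pivots a permutation matrix (a hedged check using Tensor.asnumpy() and NumPy):

>>> P, Ln, Un = pivots.asnumpy(), L.asnumpy(), U.asnumpy()
>>> print(np.allclose(Ln, np.tril(Ln)), np.allclose(Un, np.triu(Un)))
True True
>>> print(bool((P.sum(-1) == 1).all() and (P.sum(-2) == 1).all()))
True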
tinyms.primitives.make_row_tensor(indices, values, dense_shape)[source]

Calls make_row_tensor_inner in this function.

tinyms.primitives.make_sparse_tensor(indices, values, dense_shape)[source]

Calls make_coo_tensor in this function.

tinyms.primitives.margin_ranking_loss(input1, input2, target, margin=0.0, reduction='mean')[source]

MarginRankingLoss creates a criterion that measures the ranking loss between two inputs given a target.

For details, please refer to mindspore.nn.MarginRankingLoss.

tinyms.primitives.masked_fill(input_x, mask, value)[source]

Fills elements of Tensor with value where mask is True. The shapes of input_x and mask need to be the same or broadcastable.

Parameters:
  • input_x (Tensor) – The source Tensor whose data type is one of bool, uint8, int8, int16, int32, int64, float16, float32, float64, complex64, complex128.

  • mask (Tensor[bool]) – The boolean mask.

  • value (Union[float, Tensor]) – The value to fill in with, whose dtype is the same as input_x.

Returns:

Tensor, has the same type and shape as input_x.

Raises:
  • TypeError – If dtype of mask is not bool.

  • TypeError – If input_x or mask is not a Tensor.

  • ValueError – If the shapes of input_x and mask could not be broadcast.

  • TypeError – If dtype of input_x or value is not one of bool, uint8, int8, int16, int32, int64, float16, float32, float64, complex64, complex128.

  • TypeError – If dtype of value is different from that of input_x.

  • TypeError – If value is neither float number nor Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> mask = Tensor(np.array([True, True, False, True]), mindspore.bool_)
>>> output = ops.masked_fill(input_x, mask, 0.5)
>>> print(output)
[0.5 0.5 3.  0.5]
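
For a scalar value, the same result follows from np.where on the mask (a hedged equivalence, shown for orientation):

>>> import numpy as np
>>> print(np.where(np.array([True, True, False, True]), 0.5, np.array([1., 2., 3., 4.])))
[0.5 0.5 3.  0.5]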
tinyms.primitives.masked_select(input, mask)[source]

Returns a new 1-D Tensor which indexes the input tensor according to the boolean mask. The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • mask (Tensor[bool]) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

Returns:

A 1-D Tensor, with the same type as input.

Raises:
  • TypeError – If input or mask is not a Tensor.

  • TypeError – If dtype of mask is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> mask = Tensor(np.array([1, 0, 1, 0]), mindspore.bool_)
>>> output = ops.masked_select(x, mask)
>>> print(output)
[1 3]
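
For intuition, this matches NumPy boolean indexing on the same data (a hedged equivalence):

>>> import numpy as np
>>> xn = np.array([1, 2, 3, 4])
>>> print(xn[np.array([True, False, True, False])])
[1 3]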
tinyms.primitives.matmul(input, other)[source]

Returns the matrix product of two tensors.

Note

Numpy arguments out, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16 and np.float32. On CPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • input (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

  • other (Tensor) – Input tensor, scalar not allowed. Its second-to-last dimension must be the same size as the last dimension of input. And the shape of input and other could be broadcast.

Returns:

Tensor or scalar, the matrix product of the inputs. This is a scalar only when both input and other are 1-D vectors.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of other, or if a scalar value is passed in.

  • ValueError – If the shape of input and other could not broadcast together.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1 : Reasonable application of broadcast mechanism
>>> input = Tensor(np.arange(2*3*4).reshape(2, 3, 4), mindspore.float32)
>>> other = Tensor(np.arange(4*5).reshape(4, 5), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[[[  70.   76.   82.   88.   94.]
  [ 190.  212.  234.  256.  278.]
  [ 310.  348.  386.  424.  462.]]
 [[ 430.  484.  538.  592.  646.]
  [ 550.  620.  690.  760.  830.]
  [ 670.  756.  842.  928. 1014.]]]
>>> print(output.shape)
(2, 3, 5)
>>> # case 2 : the rank of `other` is 1
>>> input = Tensor(np.ones([1, 2]), mindspore.float32)
>>> other = Tensor(np.ones([2,]), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[2.]
>>> print(output.shape)
(1,)
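
The batch broadcasting in case 1 follows the same rule as NumPy's matmul, where the 2-D other is broadcast across the batch dimension of input (a hedged equivalence):

>>> import numpy as np
>>> a = np.arange(2 * 3 * 4).reshape(2, 3, 4)
>>> b = np.arange(4 * 5).reshape(4, 5)
>>> print(np.matmul(a, b).shape)
(2, 3, 5)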
tinyms.primitives.matrix_band_part(x, lower, upper)[source]

Copies a tensor, setting everything outside a central band in each innermost matrix to zero.

Parameters:
  • x (Tensor) – Input tensor. \((*, m, n)\) where \(*\) means, any number of additional dimensions. The data type must be float16, float32, float64, int32 or int64.

  • lower (Union[int, Tensor]) – Number of subdiagonals to keep. The data type must be int32 or int64. If negative, keep entire lower triangle.

  • upper (Union[int, Tensor]) – Number of superdiagonals to keep. The data type must be int32 or int64. If negative, keep entire upper triangle.

Returns:

Tensor, has the same type and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not one of float16, float32, float64, int32 or int64.

  • TypeError – If lower is neither a number nor a Tensor.

  • TypeError – If upper is neither a number nor a Tensor.

  • TypeError – If dtype of lower is neither int32 nor int64.

  • TypeError – If dtype of upper is neither int32 nor int64.

  • ValueError – If the rank of x is less than 2.

  • ValueError – If the shape of lower is not equal to 0D.

  • ValueError – If the shape of upper is not equal to 0D.

Supported Platforms:

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones([2, 4, 4]).astype(np.float32))
>>> output = ops.matrix_band_part(x, 2, 1)
>>> print(output)
[[[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]
 [[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]]
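
The band condition can be written as an explicit index mask: element (i, j) is kept iff (lower < 0 or i - j <= lower) and (upper < 0 or j - i <= upper). A minimal NumPy sketch reproducing the mask of the example above:

>>> import numpy as np
>>> m, n, lower, upper = 4, 4, 2, 1
>>> i, j = np.ogrid[:m, :n]
>>> mask = ((lower < 0) | (i - j <= lower)) & ((upper < 0) | (j - i <= upper))
>>> print(mask.astype(np.float32))
[[1. 1. 0. 0.]
 [1. 1. 1. 0.]
 [1. 1. 1. 1.]
 [0. 1. 1. 1.]]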
tinyms.primitives.matrix_determinant(input)[source]

matrix_determinant is deprecated, please use det instead.

tinyms.primitives.matrix_diag(x, k=0, num_rows=-1, num_cols=-1, padding_value=0, align='RIGHT_LEFT')[source]

Returns a Tensor with the contents in x as the k[0]-th to k[1]-th diagonals of a matrix, with everything else padded with padding_value. num_rows and num_cols specify the dimensions of the innermost matrix of the output. If neither is specified, the op assumes the innermost matrix of the output Tensor is square and infers its size from k and the innermost dimension of x. If only one of num_rows and num_cols is specified, the operator derives the smallest legal value for the other as the dimension of the output. Moreover, when only one diagonal is given (k is an integer or k[0] == k[1]), the first to the second innermost dimension of x is the batch size. Otherwise, the second innermost dimension is not a part of batch size.

Parameters:
  • x (Tensor) – The diagonal Tensor.

  • k (Union[int, Tensor], optional) – Diagonal offsets. A Tensor of type int32. Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. The value must be in the range of the given or derived num_rows and num_cols, meaning the value of k must be in (-num_rows, num_cols). Default: 0.

  • num_rows (Union[int, Tensor], optional) – The number of rows of the output Tensor. A Tensor of type int32 with only one value. If num_rows is -1, the innermost matrix of the output Tensor is a square matrix, and the real number of rows is derived from the other inputs, that is \(num\_rows = x.shape[-1] - min(k[1], 0)\). Otherwise, the value must be greater than or equal to \(x.shape[-1] - min(k[1], 0)\). Default: -1.

  • num_cols (Union[int, Tensor], optional) – The number of columns of the output Tensor. A Tensor of type int32 with only one value. If num_cols is -1, the innermost matrix of the output Tensor is a square matrix, and the real number of columns is derived from the other inputs, that is \(num\_cols = x.shape[-1] + max(k[0], 0)\). Otherwise, the value must be greater than or equal to \(x.shape[-1] + max(k[0], 0)\). Default: -1.

  • padding_value (Union[int, float, Tensor], optional) – The number to fill the area outside the specified diagonal band. A Tensor with only one value. Have the same dtype as x. Default: 0.

  • align (str, optional) –

    specifies how superdiagonals and subdiagonals should be aligned. Supported values: “RIGHT_LEFT”, “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Default: “RIGHT_LEFT”.

    • When set to “RIGHT_LEFT”, the alignment of superdiagonals will be towards the right side (padding the row on the left), while subdiagonals will be towards the left side (padding the row on the right)

    • When set to “LEFT_RIGHT”, the alignment of superdiagonals will be towards the left side (padding the row on the right), while subdiagonals will be towards the right side (padding the row on the left)

    • When set to “LEFT_LEFT”, the alignment of both superdiagonals and subdiagonals will be towards the left side (padding the row on the right).

    • When set to “RIGHT_RIGHT”, the alignment of both superdiagonals and subdiagonals will be towards the right side (padding the row on the left).

Returns:

A Tensor. Has the same type as x. Suppose x has r dimensions with shape \((I, J, ..., M, N)\) . The output Tensor has rank r + 1 with shape \((I, J, ..., M, num_rows, num_cols)\) when only one diagonal is given (k is an integer or k[0] == k[1]). Otherwise, it has rank r with shape \((I, J, ..., num_rows, num_cols)\) .

Raises:
  • TypeError – If x is not Tensor.

  • TypeError – If input x and padding_value are not the same dtype.

  • TypeError – If k, num_rows or num_cols is not int32 dtype.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If rank of num_rows, num_cols or padding_value is not equal to 0.

  • ValueError – If size of k is not equal to 1 or 2.

  • ValueError – If the value of k is not in (-num_rows, num_cols).

  • ValueError – If k[1] is not greater than or equal to k[0] when k[0] != k[1].

  • ValueError – If the rank of x is not greater than or equal to 1 when k is an integer or k[0] == k[1].

  • ValueError – If the rank of x is not greater than or equal to 2 when k[0] != k[1].

  • ValueError – If x.shape[-2] is not equal to k[1] - k[0] + 1 when k[0] != k[1].

  • ValueError – If num_rows and num_cols do not match the dimensions of x and the values of k.

  • ValueError – If align is not a string or not in the valid set of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> x = Tensor(np.array([[8, 9, 0],
...                      [1, 2, 3],
...                      [0, 4, 5]]), mindspore.float32)
>>> k =Tensor(np.array([-1, 1]), mindspore.int32)
>>> num_rows = Tensor(np.array(3), mindspore.int32)
>>> num_cols = Tensor(np.array(3), mindspore.int32)
>>> padding_value = Tensor(np.array(11), mindspore.float32)
>>> output = ops.matrix_diag(x, k, num_rows, num_cols, padding_value, align='LEFT_RIGHT')
>>> print(output)
[[ 1.  8. 11.]
 [ 4.  2.  9.]
 [11.  5.  3.]]
>>> print(output.shape)
(3, 3)
tinyms.primitives.matrix_diag_part(x, k=0, padding_value=0, align='RIGHT_LEFT')[source]

Returns the diagonal part of the input tensor: a tensor with the k[0]-th to k[1]-th diagonals of x. Some diagonals are shorter than max_diag_len and need to be padded. Input k and padding_value must be const Tensors when taking Graph mode.

Parameters:
  • x (Tensor) – The input Tensor with rank r, where r >= 2.

  • k (Union[int, Tensor], optional) – A Tensor of type int32. Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. The value of k has restrictions, meaning the value of k must be in (-x.shape[-2], x.shape[-1]). Default: 0.

  • padding_value (Union[int, float, Tensor], optional) – A Tensor with only one value. Have the same dtype as x. The number to fill the area outside the specified diagonal band. Default: 0.

  • align (str, optional) – An optional string from: “RIGHT_LEFT”(default), “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row).

Returns:

A Tensor. Has the same type as x. Assume x has r dimensions \((I, J, ..., L, M, N)\) . Let max_diag_len be the maximum length among all diagonals to be extracted, \(max\_diag\_len = min(M + min(k[1], 0), N + min(-k[0], 0))\) Let num_diags be the number of diagonals to extract, \(num\_diags = k[1] - k[0] + 1\). If \(num\_diags == 1\), the output tensor is of rank r - 1 with shape \((I, J, ..., L, max\_diag\_len)\) Otherwise, the output tensor has rank r with dimensions \((I, J, ..., L, num\_diags, max\_diag\_len)\) .

Raises:
  • TypeError – If x is not Tensor.

  • TypeError – If input x and padding_value are not the same dtype.

  • TypeError – If k is not int32 dtype.

  • ValueError – If align is not a string or not in the valid range.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If rank of padding_value is not equal to 0.

  • ValueError – If the rank of x is not greater than or equal to 2.

  • ValueError – If the size of k is not equal to 1 or 2.

  • ValueError – If k[1] is not greater than or equal to k[0] in case the size of k is 2.

  • ValueError – If the value of k is not in (-x.shape[-2], x.shape[-1]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[1, 2, 3, 4],
...                      [5, 6, 7, 8],
...                      [9, 8, 7, 6]]), mindspore.float32)
>>> k =Tensor(np.array([1, 3]), mindspore.int32)
>>> padding_value = Tensor(np.array(9), mindspore.float32)
>>> output = ops.matrix_diag_part(x, k, padding_value, align='RIGHT_LEFT')
>>> print(output)
[[9. 9. 4.]
 [9. 3. 8.]
 [2. 7. 6.]]
>>> print(output.shape)
(3, 3)
tinyms.primitives.matrix_exp(input)[source]

Computes the exponential of a single or a batch of square matrices.

\[matrix\_exp(x) = \sum_{k=0}^{\infty} \frac{1}{k !} x^{k} \in \mathbb{K}^{n \times n}\]

where \(x\) corresponds to input .

Parameters:

input (Tensor) – The shape of tensor is \((*, n, n)\) where * is zero or more batch dimensions. Must be one of the following types: float16, float32, float64, complex64, complex128.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the dtype of input is not one of the following dtype: float16, float32, float64, complex64, complex128.

  • ValueError – If the rank of input is less than 2.

  • ValueError – If the sizes of the last two dimensions of input are not equal.

Supported Platforms:

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[1, 2], [0, 1]]), mindspore.float32)
>>> output = ops.matrix_exp(input)
>>> print(output)
[[2.7182817 5.436563 ]
 [0.        2.7182817]]
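
This example can be checked by hand: the input splits as \(I + N\) with \(N = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}\) nilpotent (\(N^2 = 0\)), and since \(I\) and \(N\) commute,

\[\exp(I + N) = e \cdot (I + N) = \begin{pmatrix} e & 2e \\ 0 & e \end{pmatrix} \approx \begin{pmatrix} 2.71828 & 5.43656 \\ 0 & 2.71828 \end{pmatrix},\]

which agrees with the printed output.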
tinyms.primitives.matrix_power(input, n)[source]

Raises a square matrix to the (integer) power n.

  • When \(n=0\) , returns the identity matrix, which has the same shape as input .

  • When \(n<0\) and input is invertible, returns the inverse of input to the power of \(-n\) .

Parameters:
  • input (Tensor) – A 3-D Tensor. Supported data types are float16 and float32. The shape is \((b, m, m)\), representing b m-by-m square matrices.

  • n (int) – The exponent, a required int.

Returns:

A 3-D Tensor. Data type and shape are the same as input’s.

Raises:
  • TypeError – If the data type of n is not int.

  • TypeError – If the data type of input is neither float32 nor float16.

  • TypeError – If input is not a Tensor.

  • ValueError – If input is not a 3-D tensor.

  • ValueError – If shape[1] and shape[2] of input are not the same.

  • ValueError – If n is negative and input contains singular matrices.

Supported Platforms:

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> input = Tensor([[[0, 1], [-1, 0]], [[1, 0], [0, -1]]], dtype=ms.float32)
>>> y = ops.matrix_power(input, 2)
>>> print(y)
[[[-1.  0.]
  [-0. -1.]]
 [[ 1.  0.]
  [ 0.  1.]]]
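
The first matrix in the batch can be cross-checked with NumPy's matrix_power (a hedged check; \([[0, 1], [-1, 0]]\) is a 90-degree rotation, so its square is \(-I\)):

>>> import numpy as np
>>> print(np.linalg.matrix_power(np.array([[0., 1.], [-1., 0.]]), 2))
[[-1.  0.]
 [ 0. -1.]]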
tinyms.primitives.matrix_set_diag(x, diagonal, k=0, align='RIGHT_LEFT')[source]

Returns a batched matrix tensor with new batched diagonal values. Given x and diagonal, this operation returns a tensor with the same shape and values as x, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal. Some diagonals are shorter than max_diag_len and need to be padded. The diagonal \(shape[-2]\) must be equal to num_diags calculated by \(k[1] - k[0] + 1\). The diagonal \(shape[-1]\) must be equal to the longest diagonal value max_diag_len calculated by \(min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\). Let x have r + 1 dimensions \((I, J, ..., L, M, N)\) . The diagonal tensor has rank r with shape \((I, J, ..., L, max\_diag\_len)\) when k is an integer or \(k[0] == k[1]\). Otherwise, it has rank r + 1 with shape \((I, J, ... L, num\_diags, max\_diag\_len)\) .

Parameters:
  • x (Tensor) – Rank r + 1, where r >= 1.

  • diagonal (Tensor) – A Tensor. Have the same dtype as x. Rank r when k is an integer or \(k[0] == k[1]\). Otherwise, it has rank r + 1.

  • k (Union[int, Tensor], optional) – An int32 Scalar or int32 Tensor. Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. The value of k has restrictions, meaning the value of k must be in \((-x.shape[-2], x.shape[-1])\). Input k must be a const Tensor when taking Graph mode. Default: 0.

  • align (str, optional) – An optional string from: “RIGHT_LEFT”(default), “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row).

Returns:

Tensor, the same type as x. Let x have r+1 dimensions \((I, J, ..., L, M, N)\) . The output is a tensor of rank r+1 with dimensions \((I, J, ..., L, M, N)\) , the same as input x.

Raises:
  • TypeError – If input x or diagonal is not Tensor.

  • TypeError – If input x and diagonal are not the same dtype.

  • TypeError – If k is not int32 dtype.

  • ValueError – If align is not a string or not in the valid range.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If the rank of x is not greater than or equal to 2.

  • ValueError – If size of k is not equal to 1 or 2.

  • ValueError – If k[1] is not greater than or equal to k[0] in case the size of k is 2.

  • ValueError – If the rank of diagonal does not match the rank of input x.

  • ValueError – If the shape of diagonal does not match the shape of input x.

  • ValueError – If the diagonal \(shape[-2]\) is not equal to num_diags calculated by \(k[1]-k[0]+1\).

  • ValueError – If the value of k is not in \((-x.shape[-2], x.shape[-1])\).

  • ValueError – If the diagonal.shape[-1] is not equal to the max_diag_len calculated by \(min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[7, 7, 7, 7],
...                      [7, 7, 7, 7],
...                      [7, 7, 7, 7]]), mindspore.float32)
>>> diagonal = Tensor(np.array([[0, 9, 1],
...                             [6, 5, 8],
...                             [1, 2, 3],
...                             [4, 5, 0]]), mindspore.float32)
>>> k = Tensor(np.array([-1, 2]), mindspore.int32)
>>> align = 'RIGHT_LEFT'
>>> output = ops.matrix_set_diag(x, diagonal, k, align)
>>> print(output)
[[1. 6. 9. 7.]
 [4. 2. 5. 1.]
 [7. 5. 3. 8.]]
>>> print(output.shape)
(3, 4)
tinyms.primitives.matrix_solve(matrix, rhs, adjoint=False)[source]

Solves systems of linear equations.

\[\begin{split}\begin{aligned} &matrix[..., M, M] * x[..., M, K] = rhs[..., M, K]\\ &adjoint(matrix[..., M, M]) * x[..., M, K] = rhs[..., M, K] \end{aligned}\end{split}\]

Warning

On GPU, if the matrix is not invertible, an error may be reported or an unknown result may be returned.

Parameters:
  • matrix (Tensor) – The shape of tensor is \((..., M, M)\) .

  • rhs (Tensor) – The shape of tensor is \((..., M, K)\) . rhs must have the same dtype as matrix.

  • adjoint (bool) – Indicating whether to solve with matrix or its (block-wise) adjoint. Default: False.

Returns:

x (Tensor), whose dtype and shape are the same as rhs.

Raises:
  • TypeError – If adjoint is not the type of bool.

  • TypeError – If the type of matrix is not one of the following dtype: mstype.float16, mstype.float32, mstype.float64, mstype.complex64, mstype.complex128.

  • TypeError – If the type of matrix is not the same as that of rhs.

  • ValueError – If the rank of matrix is less than 2.

  • ValueError – If the dimension of matrix is not the same as rhs.

  • ValueError – If the innermost two dimensions of matrix are not the same.

  • ValueError – If the innermost two dimensions of rhs do not match matrix.

  • ValueError – If the matrix is irreversible.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> matrix = Tensor([[5, 4], [3, 1]], mindspore.float32)
>>> rhs = Tensor([[7], [2]], mindspore.float32)
>>> result = ops.matrix_solve(matrix, rhs)
>>> print(result)
[[0.14285707]
 [1.5714287 ]]
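
A quick verification that the result satisfies the linear system (a hedged NumPy check, not part of this API):

>>> import numpy as np
>>> mn = np.array([[5., 4.], [3., 1.]])
>>> rn = np.array([[7.], [2.]])
>>> print(np.allclose(mn @ np.linalg.solve(mn, rn), rn))
True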
tinyms.primitives.max(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

Calculates the maximum value along the given axis for the input tensor, and returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple maximum values, the index of the first maximum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input”.

Also see: mindspore.ops.ArgMaxWithValue.

Parameters:
  • input (Tensor) – The input tensor, can be any dimension. Complex tensor is not supported for now.

  • axis (int, optional) – The dimension to reduce. Default: None.

  • keepdims (bool) – Whether to keep the reduced dimensions; if True, the output keeps the same number of dimensions as the input, with the reduced dimension having length 1; if False, the reduced dimensions are removed. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the value in input with initial. If True, the value is kept; if False, it is replaced by initial. For every position where where is False, a corresponding initial value must be provided. Default: None, which is treated as all True.

Returns:

tuple (Tensor), tuple of 2 tensors, containing the maximum value of the input tensor and its index.

  • values (Tensor) - The maximum value of the input tensor, with the same shape as index, and the same dtype as input.

  • index (Tensor) - The index for the maximum value of the input tensor, with dtype int32. If keepdims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output, index = ops.max(x, keepdims=True)
>>> print(output, index)
0.7 0
tinyms.primitives.max_pool2d(x, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

Performs a 2D max pooling on the input Tensor.

Typically the input is a Tensor with shape \((N_{in}, C_{in}, H_{in}, W_{in})\), outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel_size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • x (Tensor) – Tensor of shape \((N_{in}, C_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both stride, or a tuple of two int numbers that represent height and width of movement respectively. Default: kernel_size.

  • padding (Union[int, tuple[int]]) – An int number that represents the amount of implicit padding on the height and width directions, or a tuple of two int numbers that represent the height and width padding respectively. Default: 0.

  • dilation (Union[int, tuple[int]]) – Control the stride of elements in the kernel. Default: 1.

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • return_indices (bool) – Whether to output the indices of max value. Default: False.

Returns:

If return_indices is False, return a Tensor output, else return a tuple (output, argmax).

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64. It is returned only when return_indices is True.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 4.

  • TypeError – If kernel_size , stride , padding or dilation is not int or tuple.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • TypeError – If ceil_mode is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(20 * 16 * 50 * 32).reshape((20, 16, 50, 32)), mindspore.float32)
>>> output_tensor, argmax = ops.max_pool2d(x, kernel_size=(3, 2), stride=(2, 1), return_indices=True)
>>> print(output_tensor.shape)
(20, 16, 24, 31)
>>> print(argmax.shape)
(20, 16, 24, 31)
tinyms.primitives.max_pool3d(x, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

Performs a 3D max pooling on the input Tensor.

Typically the input is a Tensor with shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel_size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows:

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • x (Tensor) – Tensor of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both stride, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: kernel_size.

  • padding (Union[int, tuple[int]]) – An int number that represents the amount of implicit padding on the depth, height and width directions, or a tuple of three int numbers that represent the depth, height and width padding respectively. Default: 0.

  • dilation (Union[int, tuple[int]]) – Control the stride of elements in the kernel. Default: 1.

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • return_indices (bool) – Whether to output the indices of max value. Default: False.

Returns:

If return_indices is False, return a Tensor output, else return a tuple (output, argmax).

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64. It is returned only when return_indices is True.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 5.

  • TypeError – If kernel_size , stride , padding or dilation is not int or tuple.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(2 * 1 * 2 * 2 * 2).reshape((2, 1, 2, 2, 2)), mindspore.float32)
>>> output_tensor, argmax = ops.max_pool3d(x, kernel_size=2, stride=1, padding=1, return_indices=True)
>>> print(output_tensor.shape)
(2, 1, 3, 3, 3)
>>> print(argmax.shape)
(2, 1, 3, 3, 3)
tinyms.primitives.max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

Computes the inverse of max_pool1d.

max_unpool1d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, H_{in})\) or \((C, H_{in})\), and the output is of shape \((N, C, H_{out})\) or \((C, H_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ \end{array}\end{split}\]
Parameters:
  • x (Tensor) – The input Tensor to invert. Tensor of shape \((N, C, H_{in})\) or \((C, H_{in})\).

  • indices (Tensor) – Index of maximum value. Its shape must be the same as that of the input x. Values of indices must belong to \([0, H_{in} - 1]\). Data type must be int32 or int64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving. If stride is 0, (0) or None, then stride is equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0.

  • output_size (tuple[int], optional) – The output shape. Default: None. If output_size == (), then the shape of output is computed from kernel_size, stride and padding. If output_size != (), then output_size must be \((N, C, H)\) , \((C, H)\) or \((H)\) and output_size must belong to \([(N, C, H_{out} - stride[0]), (N, C, H_{out} + stride[0])]\).

Returns:

Tensor, with shape \((N, C, H_{out})\) or \((C, H_{out})\), with the same data type as x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride, padding or kernel_size are not positive (for stride and padding, 0 and (0) are also supported).

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If the rank of x is not 2 or 3.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If the length of output_size is not 0, 2 or 3.

  • ValueError – If output_size is not close to the output size computed from kernel_size, stride and padding.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[2, 4, 6, 8]]).astype(np.float32))
>>> indices = Tensor(np.array([[1, 3, 5, 7]]).astype(np.int64))
>>> output = ops.max_unpool1d(x, indices, kernel_size =2, stride=2, padding=0)
>>> print(output.asnumpy())
[[0. 2. 0. 4. 0. 6. 0. 8.]]
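
A quick shape check of this example against the formula above: \(H_{out} = (4 - 1) \times 2 - 2 \times 0 + 2 = 8\), matching the printed output of length 8.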
tinyms.primitives.max_unpool2d(x, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

Computes the inverse of max_pool2d.

max_unpool2d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\), and the output is of shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\ \end{array}\end{split}\]
Parameters:
  • x (Tensor) – The input Tensor to invert. Tensor of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).

  • indices (Tensor) – Indices of the max values. Its shape must be the same as that of the input x. Values of indices must belong to \([0, H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both stride, or a tuple of two int numbers that represent height and width of movement respectively. If stride is None, then stride equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of height and width are the same, equal to padding. If padding is a tuple of two integers, the padding of height and width equal to padding[0] and padding[1] correspondingly.

  • output_size (tuple[int], optional) – The target output size. Default: None. If output_size == (), then the shape of output is computed from kernel_size, stride and padding. If output_size != (), then output_size must be \((N, C, H, W)\) , \((C, H, W)\) or \((H, W)\) and output_size must belong to \([(N, C, H_{out} - stride[0], W_{out} - stride[1]), (N, C, H_{out} + stride[0], W_{out} + stride[1])]\).

Returns:

Tensor, with shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), with the same data type as x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride, padding or kernel_size are not positive (for stride and padding, 0 and (0, 0) are also supported).

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If kernel_size, stride or padding is a tuple whose length is not equal to 2.

  • ValueError – If the rank of x is not 3 or 4.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If the length of output_size is not 0, 3 or 4.

  • ValueError – If output_size is not close to the output size computed from kernel_size, stride and padding.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[[[0, 1], [8, 9]]]]).astype(np.float32))
>>> indices = Tensor(np.array([[[[0, 1], [2, 3]]]]).astype(np.int64))
>>> output = ops.max_unpool2d(x, indices, kernel_size=1, stride=1, padding=0)
>>> print(output.asnumpy())
[[[[0. 1.]
   [8. 9.]]]]
tinyms.primitives.max_unpool3d(x, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

Computes the inverse of mindspore.ops.max_pool3d().

max_unpool3d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\), and the output is of shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\ W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\ \end{array}\end{split}\]
Parameters:
  • x (Tensor) – The input Tensor to invert. Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).

  • indices (Tensor) – Indices of the max values. Its shape must be the same as that of the input x. Values of indices must belong to \([0, D_{in} \times H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance the kernel moves: an int that is used for the depth, height and width of movement, or a tuple of three ints that give the depth, height and width of movement respectively. If stride is None, it defaults to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of depth, height and width are the same, equal to padding. If padding is a tuple of three integers, the padding of depth, height and width equal to padding[0], padding[1] and padding[2] correspondingly.

  • output_size (tuple[int], optional) – The target output size. Default: None. If output_size == (), the output shape is computed from kernel_size, stride and padding. If output_size != (), then output_size must be \((N, C, D, H, W)\) or \((C, D, H, W)\) or \((D, H, W)\) and must lie in the range \([(N, C, D_{out} - stride[0], H_{out} - stride[1], W_{out} - stride[2]), (N, C, D_{out} + stride[0], H_{out} + stride[1], W_{out} + stride[2])]\).

Returns:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), with the same data type as x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride, padding (0 and (0, 0, 0) are also allowed) or kernel_size are not positive.

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If kernel_size, stride or padding is a tuple whose length is not equal to 3.

  • ValueError – If the rank of x is not 4 or 5.

  • ValueError – If the length of output_size is not 0, 4 or 5.

  • ValueError – If output_size is not a tuple.

  • ValueError – If output_size is not within the range computed from kernel_size, stride and padding.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[[0, 1], [8, 9]]]]]).astype(np.float32))
>>> indices= Tensor(np.array([[[[[0, 1], [2, 3]]]]]).astype(np.int64))
>>> output = ops.max_unpool3d(x, indices, kernel_size=2, stride=1, padding=0)
>>> print(output)
[[[[[0. 1. 8.]
    [9. 0. 0.]
    [0. 0. 0.]]
   [[0. 0. 0.]
    [0. 0. 0.]
    [0. 0. 0.]]]]]
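The printed result matches the shape formula above: \(D_{out} = (1 - 1) \times 1 + 2 = 2\) and \(H_{out} = W_{out} = (2 - 1) \times 1 + 2 = 3\), so the output has shape \((1, 1, 2, 3, 3)\):

>>> print(output.shape)
(1, 1, 2, 3, 3)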
tinyms.primitives.maximum(x, y)[source]

Computes the maximum of input tensors element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

  • If one of the elements being compared is a NaN, then that element is returned.

\[output_i = max(x_i, y_i)\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If the shapes of x and y cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.maximum(x, y)
>>> print(output)
[4. 5. 6.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.maximum(x, y)
>>> print(output.dtype)
Float32
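A sketch of the NaN behaviour described in the Note above; the expected output is inferred from that rule, not taken from the original examples:

>>> # case 3 : NaN propagation
>>> x = Tensor(np.array([1.0, float('nan')]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0]), mindspore.float32)
>>> output = ops.maximum(x, y)
>>> print(output)
[ 4. nan]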
tinyms.primitives.mean(x, axis=None, keep_dims=False)[source]

By default, reduces all dimensions of a tensor by averaging all elements in those dimensions. When axis is given, reduces a dimension of x along the specified axis. keep_dims determines whether the dimensions of the output and input are the same.

Parameters:
  • x (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of x is r, and the value range is [-r,r).

  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keep_dims is False, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keep_dims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.mean(x, 1, keep_dims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by averaging all elements in the dimension.
>>> x = Tensor(np.array([[[2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2]],
... [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
... [[6, 6, 6, 6, 6, 6], [8, 8, 8, 8, 8, 8], [10, 10, 10, 10, 10, 10]]]),
... mindspore.float32)
>>> output = ops.mean(x)
>>> print(output)
5.0
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along the axis 0
>>> output = ops.mean(x, 0, True)
>>> print(output)
[[[4. 4. 4. 4. 4. 4.]
  [5. 5. 5. 5. 5. 5.]
  [6. 6. 6. 6. 6. 6.]]]
>>> # case 3: Reduces a dimension along the axis 1
>>> output = ops.mean(x, 1, True)
>>> print(output)
[[[2. 2. 2. 2. 2. 2.]]
 [[5. 5. 5. 5. 5. 5.]]
 [[8. 8. 8. 8. 8. 8.]]]
>>> # case 4: Reduces a dimension along the axis 2
>>> output = ops.mean(x, 2, True)
>>> print(output)
[[[ 2.]
  [ 2.]
  [ 2.]]
 [[ 4.]
  [ 5.]
  [ 6.]]
 [[ 6.]
  [ 8.]
  [10.]]]
tinyms.primitives.median(input, axis=-1, keepdims=False)[source]

Computes the median and indices of input tensor.

Parameters:
  • input (Tensor) – A Tensor of any dimension whose data type is int16, int32, int64, float32 or float64.

  • axis (int, optional) – The dimension need to reduce. Default: -1.

  • keepdims (bool, optional) – Whether the output tensor need to retain axis dimension or not. Default: False.

Returns:

y (Tensor), has the same dtype as the input. If keepdims is true, y has the same shape as the input except that its size in dimension axis is 1. Otherwise, y lacks the axis dimension compared with the input.

indices (Tensor), has the same shape as the y, but dtype is int64.

Raises:
  • TypeError – If dtype of input is not one of the following: int16, int32, int64, float32, float64.

  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is not in range of [-x.dim, x.dim-1].

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[0.57, 0.11, 0.21],[0.38, 0.50, 0.57], [0.36, 0.16, 0.44]]).astype(np.float32))
>>> y = ops.median(x, axis=0, keepdims=False)
>>> print(y)
(Tensor(shape=[3], dtype=Float32, value= [ 3.79999995e-01,  1.59999996e-01,  4.39999998e-01]),
Tensor(shape=[3], dtype=Int64, value= [1, 2, 2]))
tinyms.primitives.meshgrid(*inputs, indexing='xy')[source]

Generates coordinate matrices from given coordinate tensors.

Given N one-dimensional coordinate tensors, returns a tuple outputs of N N-D coordinate tensors for evaluating expressions on an N-D grid.

Parameters:

inputs (List[Tensor]) – List of 1-D tensors. The length of inputs should be greater than 1. The data type is Number.

Keyword Arguments:

indexing (str, optional) – Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. Valid options: ‘xy’ or ‘ij’. In the 2-D case with inputs of length M and N, the outputs are of shape \((N, M)\) for ‘xy’ indexing and \((M, N)\) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape \((N, M, P)\) for ‘xy’ indexing and \((M, N, P)\) for ‘ij’ indexing. Default: ‘xy’.

Returns:

Tensors, a Tuple of N N-D Tensor objects. The data type is the same as the inputs.

Raises:
  • TypeError – If indexing is not a str or inputs is not a tuple.

  • ValueError – If indexing is neither ‘xy’ nor ‘ij’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
>>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
>>> output = ops.meshgrid(x, y, z, indexing='xy')
>>> print(output)
(Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5]],
  [[6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6]],
  [[7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]]]))
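With indexing='ij' the two leading output dimensions are swapped relative to 'xy', as described above; a minimal shape check (not part of the original example):

>>> output_ij = ops.meshgrid(x, y, z, indexing='ij')
>>> print(output_ij[0].shape)
(4, 3, 5)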
tinyms.primitives.min(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

Calculates the minimum value along with the given axis for the input tensor. It returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple minimum values, the index of the first minimum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “x”.

Parameters:
  • input (Tensor) – The input tensor, can be any dimension. Complex tensor is not supported for now.

  • axis (int) – The dimension to reduce. Default: None.

  • keepdims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The maximum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the primitive value in input with the value in initial. If True, do not replace, otherwise replace. For the index of True in where, the corresponding value in initial must be assigned. Default: None, which indicates True by default.

Returns:

tuple (Tensor), tuple of 2 tensors, containing the minimum value of the input tensor and the corresponding index.

  • values (Tensor) - The minimum value of the input tensor, with the same shape as index, and the same dtype as input.

  • index (Tensor) - The index for the minimum value of the input tensor, with dtype int32. If keepdims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output, index = ops.min(x, keepdims=True)
>>> print(output, index)
0.0 0
tinyms.primitives.minimum(x, y)[source]

Computes the minimum of input tensors element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Shapes of them are supposed to be broadcast.

  • If one of the elements being compared is a NaN, then that element is returned.

\[output_i = min(x_i, y_i)\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y is not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape after broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.minimum(x, y)
>>> print(output)
[1. 2. 3.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.minimum(x, y)
>>> print(output.dtype)
Float32
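A sketch of the tensor-and-scalar case permitted by the Note above; the expected output follows from the element-wise minimum and is not taken from the original examples:

>>> # case 3 : one tensor and one scalar
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> output = ops.minimum(x, 2.0)
>>> print(output)
[1. 2. 2.]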
tinyms.primitives.mirror_pad(input_x, paddings, mode)[source]

Pads the input tensor according to the paddings and mode.

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • paddings (Tensor) – Paddings requires constant tensor. The value of paddings is a matrix(list), and its shape is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in the D th dimension, paddings[D, 0] indicates how many sizes to be extended ahead of the input tensor in the D th dimension, and paddings[D, 1] indicates how many sizes to be extended behind the input tensor in the D th dimension. Both paddings[D, 0] and paddings[D, 1] must be no greater than input_x.dim_size(D) (or input_x.dim_size(D) - 1) if mode is SYMMETRIC (if REFLECT, respectively).

  • mode (str) – Specifies the padding mode. The optional values are “REFLECT” and “SYMMETRIC”. Default: “REFLECT”.

Returns:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it fills by copying symmetrically through the axis of symmetry. If input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the output is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]]. For a more intuitive understanding, please see the example below.

  • If mode is “SYMMETRIC”, the filling method is similar to “REFLECT”. It is also copied according to the symmetry axis, except that it includes the symmetry axis. If input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the output is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]]. For a more intuitive understanding, please see the example below.

Raises:
  • TypeError – If input_x or paddings is not a Tensor.

  • TypeError – If mode is not a str.

  • ValueError – If paddings.size is not equal to 2 * rank of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[1,2,3], [4,5,6], [7,8,9]])
>>> mode = "REFLECT"
>>> paddings = Tensor([[1, 1], [2, 2]])
>>> output = ops.mirror_pad(input_x, paddings, mode)
>>> print(output)
[[6 5 4 5 6 5 4]
 [3 2 1 2 3 2 1]
 [6 5 4 5 6 5 4]
 [9 8 7 8 9 8 7]
 [6 5 4 5 6 5 4]]
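The “SYMMETRIC” case can be run the same way; the expected output below is the one given in the mode description above:

>>> output = ops.mirror_pad(input_x, paddings, "SYMMETRIC")
>>> print(output)
[[2 1 1 2 3 3 2]
 [2 1 1 2 3 3 2]
 [5 4 4 5 6 6 5]
 [8 7 7 8 9 9 8]
 [8 7 7 8 9 9 8]]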
tinyms.primitives.mish(x)[source]

Computes MISH(A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.

The function is shown as follows:

\[\text{output} = x * \tanh(\log(1 + \exp(\text{x})))\]

See more details in A Self Regularized Non-Monotonic Neural Activation Function.

Parameters:

x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Returns:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.mish(input_x)
>>> print(output)
[[-3.0340147e-01  3.9974129e+00 -2.6831189e-03]
 [ 1.9439590e+00 -3.3576239e-02  8.9999999e+00]]
tinyms.primitives.mm(input, mat2)[source]

Returns the matrix product of two arrays. If input is a \((n \times m)\) Tensor, mat2 is a \((m \times p)\) Tensor, out will be a \((n \times p)\) Tensor.

Note

This function cannot support broadcasting. Refer to mindspore.ops.matmul() instead if you need a broadcastable function.

Parameters:
  • input (Tensor) – The first matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

  • mat2 (Tensor) – The second matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

Returns:

Tensor or scalar, the matrix product of the inputs.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of mat2.

  • ValueError – If input or mat2 is not a matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> x1 = ms.Tensor(np.random.rand(2, 3))
>>> x2 = ms.Tensor(np.random.rand(3, 4))
>>> out = ops.mm(x1, x2)
>>> print(out.shape)
(2, 4)
tinyms.primitives.moveaxis(x, source, destination)[source]

Alias for ops.movedim. Moves axis of an array from source to destination.

Refer to mindspore.ops.movedim() for more detail.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops, Tensor
>>> import numpy as np
>>> x = Tensor(np.zeros((3, 4, 5)))
>>> output = ops.moveaxis(x, 0, -1)
>>> print(output.shape)
(4, 5, 3)
tinyms.primitives.movedim(x, source, destination)[source]

Moves axis of an array from source to destination.

Other axes remain in their original order.

Parameters:
  • x (Tensor) – The tensor array whose axis should be reordered.

  • source (Union[int, sequence[int]]) – Original positions of the axis to move. These must be unique.

  • destination (Union[int, sequence[int]]) – Destination positions for each of the original axis. These must also be unique.

Returns:

Tensor, array with moved axis.

Raises:

ValueError – If the axes are out of the range of [-a.ndim, a.ndim), or if they contain duplicates.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1 : moving single axis
>>> from mindspore import ops, Tensor
>>> import numpy as np
>>> x = Tensor(np.zeros((3, 4, 5)))
>>> output = ops.movedim(x, 0, -1)
>>> print(output.shape)
(4, 5, 3)
>>> # case 2 : moving multiple axes
>>> from mindspore import ops, Tensor
>>> import numpy as np
>>> x = Tensor(np.zeros((3, 4, 5)))
>>> output = ops.movedim(x, (0, 2), (1, 2))
>>> print(output.shape)
(4, 3, 5)
tinyms.primitives.mse_loss(input, target, reduction='mean')[source]

Calculates the mean squared error between the predicted value and the label value.

For detailed information, please refer to mindspore.nn.MSELoss.

Parameters:
  • input (Tensor) – Tensor of any dimension.

  • target (Tensor) – The input label. Tensor of any dimension, same shape as the input in common cases. However, it supports that the shape of input is different from the shape of target and they should be broadcasted to each other.

  • reduction (str, optional) – Type of reduction to be applied to loss. The optional values are “mean”, “none” and “sum”. Default: “mean”.

Returns:

Tensor, loss of type float; the output is a scalar if reduction is ‘mean’ or ‘sum’, while the shape of the output is the broadcasted shape if reduction is ‘none’.

Raises:
  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

  • ValueError – If input and target have different shapes and cannot be broadcasted.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = ops.mse_loss(logits, labels, reduction='none')
>>> print(output)
[[0. 1. 4.]
 [0. 0. 1.]]
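With the default reduction='mean', the same inputs reduce to the mean of the element-wise squared errors above, \((0+1+4+0+0+1)/6 = 1.0\) (a sketch, not part of the original example):

>>> output = ops.mse_loss(logits, labels)
>>> print(output)
1.0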
tinyms.primitives.msort(input)[source]

Sorts the elements in Tensor in ascending order of value along its first dimension.

ops.msort(t) is equivalent to ops.Sort(axis=0)(t)[0]. See also mindspore.ops.Sort().

Parameters:

input (Tensor) – The input to sort, with float16 or float32 data type.

Returns:

A tensor whose values are the sorted values, with the same shape and data type as input.

Raises:

TypeError – If dtype of input is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), ms.float16)
>>> output = ops.msort(input)
>>> print(output)
[[4. 2. 1.]
 [5. 6. 3.]
 [8. 9. 7.]]
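The equivalence stated above can be checked directly with mindspore.ops.Sort:

>>> output2 = ops.Sort(axis=0)(input)[0]
>>> print(output2)
[[4. 2. 1.]
 [5. 6. 3.]
 [8. 9. 7.]]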
tinyms.primitives.mul(input, other)[source]

Multiplies two tensors element-wise.

\[out_{i} = input_{i} * other_{i}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, number.Number, bool]) – The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and other are not one of the following: Tensor, number.Number, bool.

  • ValueError – If the shapes of input and other cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> output = ops.mul(x, y)
>>> print(output)
[ 4. 10. 18.]
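A sketch of the tensor-and-scalar case permitted by the Note above; the expected output follows from element-wise multiplication and is not taken from the original examples:

>>> output = ops.mul(x, 2.0)
>>> print(output)
[2. 4. 6.]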
tinyms.primitives.multi_margin_loss(input, target, p=1, margin=1, weight=None, reduction='mean')[source]

Hinge loss for optimizing a multi-class classification.

Optimizes a multi-class classification hinge loss (margin-based loss) between input and output.

For each mini-batch sample, the loss in terms of the 1D input \(x\) and scalar output \(y\) is:

\[\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}\]

where \(i\in \{0,⋯,x.size(0)−1\}\) and \(i \ne y\).

Parameters:
  • input (Tensor) – Input, with shape \((N, C)\). Data type only supports float32, float16 or float64. It is \(x\) in the above formula.

  • target (Tensor) – Ground truth labels, with shape \((N,)\). Data type only supports int64. The value of target should be non-negative and less than C. It is \(y\) in the above formula.

  • p (int, optional) – The norm degree for pairwise distance. Should be 1 or 2. Default: 1.

  • margin (int, optional) – A parameter to change pairwise distance. Default: 1.

  • weight (Tensor, optional) – The rescaling weight to each class with shape \((C,)\). Data type only supports float16, float32 or float64. Default: None.

  • reduction (str, optional) –

    Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

    • ’none’: no reduction will be applied.

    • ’mean’: the sum of the output will be divided by the number of elements in the output.

    • ’sum’: the output will be summed.

Returns:

Tensor. If reduction is ‘none’, returns a Tensor with the same shape as target. Otherwise, it is a scalar.

Raises:
  • TypeError – If dtype of p or target is not int.

  • TypeError – If dtype of margin is not int.

  • TypeError – If dtype of reduction is not str.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • TypeError – If dtype of weight and input is not the same.

  • ValueError – If p is not 1 or 2.

  • ValueError – If reduction is not one of {‘none’,’sum’,’mean’}.

  • ValueError – If shape[0] of input is not equal to shape[0] of target.

  • ValueError – If shape[1] of input is not equal to shape[0] of weight.

  • ValueError – If rank of weight is not 1, rank of target is not 1, or rank of input is not 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = Tensor(np.ones(shape=[3, 3]), mindspore.float32)
>>> target = Tensor(np.array([1, 2, 1]), mindspore.int64)
>>> weight = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> output = ops.multi_margin_loss(inputs, target, weight=weight)
>>> print(output)
0.6666667
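This value follows from the formula above (a worked check, not part of the original documentation): every \(x[i] = 1\), so each of the two non-target classes contributes \(\max(0, 1 - x[y] + x[i])^{1} = 1\); the per-sample loss is \(2/3\), and the mean over the three samples remains \(2/3 \approx 0.6666667\).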
tinyms.primitives.multilabel_margin_loss(input, target, reduction='mean')[source]

Hinge loss for optimizing a multi-label classification.

Creates a criterion that optimizes a multi-label multi-classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 2D Tensor of target class indices). For each sample in the mini-batch:

\[\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}\]

where \(x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\), \(y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}\), \(0 \leq y[j] \leq \text{x.size}(0)-1\), and \(i \neq y[j]\) for all \(i\) and \(j\). \(y\) and \(x\) must have the same size. The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes.

Parameters:
  • input (Tensor) – Predict data. Tensor of shape \((C)\) or \((N, C)\), where \(N\) is the batch size and \(C\) is the number of classes. Data type must be float16 or float32.

  • target (Tensor) – Ground truth data, with the same shape as input, data type must be int32 and label targets padded by -1.

  • reduction (str, optional) –

    Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

    • ’none’: no reduction will be applied.

    • ’mean’: the sum of the output will be divided by the number of elements in the output.

    • ’sum’: the output will be summed.

Returns:

  • outputs (Union[Tensor, Scalar]) - The loss of MultilabelMarginLoss. If reduction is “none”, its shape is \((N)\). Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If input or target is not a Tensor.

  • TypeError – If dtype of input is neither float16 nor float32.

  • TypeError – If dtype of target is not int32.

  • ValueError – If length of shape of input is neither 1 nor 2.

  • ValueError – If shape of input is not the same as target.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

Ascend GPU

Examples

>>> inputs = Tensor(np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[1, 2, 0, 3], [2, 3, -1, 1]]), mindspore.int32)
>>> output = ops.multilabel_margin_loss(inputs, target)
>>> print(output)
0.325
tinyms.primitives.multilabel_soft_margin_loss(input, target, weight=None, reduction='mean')[source]

Calculates the MultiLabelSoftMarginLoss. The multi-label soft margin loss is a commonly used loss function in multi-label classification tasks where an input sample can belong to multiple classes. Given an input \(input\) and binary labels \(output\) of size \((N,C)\), where \(N\) denotes the number of samples and \(C\) denotes the number of classes.

\[\mathcal{loss\left( input , output \right)} = - \frac{1}{N}\frac{1}{C}\sum_{i = 1}^{N} \sum_{j = 1}^{C}\left(output_{ij}\log\frac{1}{1 + e^{- input_{ij}}} + \left( 1 - output_{ij} \right)\log\frac{e^{-input_{ij}}}{1 + e^{-input_{ij}}} \right)\]

where \(input_{ij}\) represents the predicted score of sample \(i\) for class \(j\). \(output_{ij}\) represents the binary label of sample \(i\) for class \(j\), where sample \(i\) belongs to class \(j\) if \(output_{ij}=1\) , and sample \(i\) does not belong to class \(j\) if \(output_{ij}=0\). For a multi-label classification task, each sample may have multiple labels with a value of 1 in the binary label \(output\). weight will multiply to the loss of each class if given.

Parameters:
  • input (Tensor) – A tensor of shape (N, C), where N is batch size and C is number of classes.

  • target (Tensor) – The label target Tensor which has the same shape as input.

  • weight (Union[Tensor, int, float]) – The manual rescaling weight given to each class. Default: None.

  • reduction (str) – Specifies which reduction to be applied to the output. It must be one of ‘none’, ‘mean’, and ‘sum’, meaning no reduction, reduce mean and sum on output, respectively. Default: ‘mean’.

Returns:

Tensor, the data type is the same as input; if reduction is ‘none’, its shape is (N), otherwise it is a scalar.

Raises:

ValueError – If the rank of input or target is not 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([[0.3, 0.6, 0.6], [0.9, 0.4, 0.2]])
>>> target = Tensor([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
>>> loss = ops.multilabel_soft_margin_loss(input, target, reduction='mean')
>>> print(loss.asnumpy())
0.84693956
tinyms.primitives.multinomial(input, num_samples, replacement=True, seed=None)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • input (Tensor) – The input tensor containing probabilities, must be 1 or 2 dimensions, with float32 data type.

  • num_samples (int) – Number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not, default: True.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None.

Returns:

Tensor, has the same rows as input. The number of sampled indices of each row is num_samples. The dtype is int32.

Raises:
  • TypeError – If input is not a Tensor or its dtype is not float32.

  • TypeError – If num_samples is not an int.

  • TypeError – If seed is neither an int nor None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> # case 1: The output is random, and the length of the output is the same as num_sample.
>>> input = Tensor([0, 9, 4, 0], mindspore.float32)
>>> output = ops.multinomial(input, 2)
>>> # print(output)
>>> # [1 2] or [2 1]
>>> # Across multiple runs, [1 2] is the more frequent result,
>>> # because the weight at index 1 is larger than the weight at index 2.
>>> print(len(output))
2
>>> # case 2: The output is random, and the length of the output is the same as num_sample.
>>> # replacement is True (the default), so the same index can be drawn repeatedly.
>>> # Indices 0 and 3 have zero weight and are never returned.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4)
>>> print(output)
[1 1 2 1]
>>> # case 3: The output is random, num_samples == input length == 4, and replacement is True,
>>> # so the same elements can be drawn more than once.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4, True)
>>> print(output)
[1 1 2 2]
tinyms.primitives.multinomial_with_replacement(x, seed, offset, numsamples, replacement=False)[source]

Returns a tensor where each row contains numsamples indices sampled from the multinomial distribution with replacement. It is different from multinomial in that it allows the same outcome to be chosen multiple times.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • x (Tensor) – the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions. Must be one of the following types: float16, float32, float64.

  • seed (int) – If seed is set to be -1, and offset is set to be 0, the random number generator is seeded by a random seed. Otherwise, it is seeded by the given seed.

  • offset (int) – Offset used to avoid seed collision.

  • numsamples (int) – the number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not. Default: False.

Returns:

Tensor with the same rows as x, each row has numsamples sampled indices.

Raises:
  • TypeError – If x is not a 1D or 2D Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

  • TypeError – If numsamples is not an int.

  • TypeError – If replacement is not a bool.

  • ValueError – If numsamples is greater than x_shape[-1] when replacement is False.

  • ValueError – If the sum of any row of x is less than 0.

  • ValueError – If any element of a row of x is less than 0.

  • ValueError – If numsamples is equal to or less than 0.

Supported Platforms:

CPU

Examples

>>> x = Tensor([[0., 9., 4., 0.]], mstype.float32)
>>> output = ops.multinomial_with_replacement(x, 2, 5, 2, True)
>>> print(output)
[[1 1]]
tinyms.primitives.multiply(input, other)[source]

Alias for mindspore.ops.mul().

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.mv(mat, vec)[source]

Multiplies matrix mat and vector vec.

If mat is a Tensor with \((N, M)\), vec is a 1-D Tensor of size \(M\), out will be 1-D of size \(N\).

Parameters:
  • mat (Tensor) – Input matrix of shape \((N, M)\).

  • vec (Tensor) – Input vector of shape \((M,)\).

Returns:

Tensor, the shape of the output Tensor is \((N,)\).

Raises:
  • TypeError – If mat or vec is not a Tensor.

  • ValueError – If mat is not a 2-D Tensor or vec is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> mat = Tensor(np.array([[3., 4.], [1., 6.], [1., 3.]]).astype(np.float32))
>>> vec = Tensor(np.array([1., 2.]).astype(np.float32))
>>> output = ops.mv(mat, vec)
>>> print(output)
[11. 13. 7.]
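The result is the ordinary matrix-vector product (a worked check, not part of the original documentation): \(3 \cdot 1 + 4 \cdot 2 = 11\), \(1 \cdot 1 + 6 \cdot 2 = 13\), \(1 \cdot 1 + 3 \cdot 2 = 7\).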
tinyms.primitives.mvlgamma(input, p)[source]

Returns the results of the multivariate log-gamma function with dimension p element-wise.

The mathematical calculation process of Mvlgamma is shown as follows:

\[\log (\Gamma_{p}(input))=C+\sum_{i=1}^{p} \log (\Gamma(input-\frac{i-1}{2}))\]

where \(C = \log(\pi) \times \frac{p(p-1)}{4}\) and \(\Gamma(\cdot)\) is the Gamma function.

Parameters:
  • input (Tensor) – The input tensor of the multivariate log-gamma function, which must be one of the following types: float32, float64. The shape is \((N,*)\), where \(*\) means any number of additional dimensions. And the value of any element in input must be greater than \((p - 1) / 2\).

  • p (int) – The number of dimensions. And the value of p must be greater than or equal to 1.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If dtype of input is neither float32 nor float64.

  • TypeError – If p is not an int.

  • ValueError – If p is less than 1.

  • ValueError – If not all elements of input are greater than \((p - 1) / 2\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)
>>> y = ops.mvlgamma(x, p=3)
>>> print(y)
[[2.694925 5.402975 9.140645]
 [5.402975 1.596312 13.64045]]
tinyms.primitives.nan_to_num(input, nan=0.0, posinf=None, neginf=None)[source]

Replace the NaN, positive infinity and negative infinity values in ‘input’ with the specified values in nan, posinf and neginf respectively.

Parameters:
  • input (Tensor) – The shape of tensor is \((input_1, input_2, ..., input_R)\). With float32 or float16 data type.

  • nan (float) – The replace value of ‘NaN’. Default value is 0.0.

  • posinf (float) – the value to replace positive infinity values with. Default: None, replacing positive infinity with the maximum value supported by the data type of input.

  • neginf (float) – the value to replace negative infinity values with. Default: None, replacing negative infinity with the minimum value supported by the data type of input.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32.

Supported Platforms:

Ascend CPU

Examples

>>> input = Tensor(np.array([float('nan'), float('inf'), -float('inf'), 5.0]), mindspore.float32)
>>> output = ops.nan_to_num(input, 1.0, 2.0, 3.0)
>>> print(output)
[1. 2. 3. 5.]
tinyms.primitives.nanquantile(input, q, axis=None, keepdims=False)[source]

This operator is derived from mindspore.ops.quantile() that ‘ignores’ NaN values. It computes quantiles as though the input has no NaN values. If all values in a reduced dimension are NaN then the quantiles for that reduction will be NaN.

Refer to mindspore.ops.quantile() for more details.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). Supported dtypes: float32, float64.

  • q (Union[float, Tensor]) – A scalar or 1D tensor of quantile values in the range [0, 1]. Supported dtypes: float32, float64.

  • axis (int, optional) – The dimension to reduce. By default, axis is None resulting in the input tensor being flattened before computation. Default: None.

  • keepdims (bool, optional) – Whether the output tensor has dim retained or not. Default: False.

Returns:

Tensor, has the same dtype as the input.

Suppose the shape of input is \((x_0, x_1, ..., x_i, ..., x_R)\), axis = \(i\), and m is the element count of input q.

  • If q is scalar and keepdims is True, the shape of output is \((x_0, x_1, ..., 1, ..., x_R)\).

  • If q is scalar and keepdims is False, the shape of output is \((x_0, x_1, ..., x_R)\) with the axis dimension removed.

  • If q is a 1D Tensor and keepdims is True, the shape of output is \((m, x_0, x_1, ..., 1, ..., x_R)\).

  • If q is a 1D Tensor and keepdims is False, the shape of output is \((m, x_0, x_1, ..., x_R)\) with the axis dimension removed.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If q is not a Tensor or float.

  • TypeError – If dtype of input is not float32 or float64.

  • TypeError – If dtype of q is not float32 or float64.

  • TypeError – If dtype of input and the dtype of q is different.

  • ValueError – If the q values are not in the range [0, 1].

  • ValueError – If the axis value is out of range.

Supported Platforms:

Examples

>>> x = Tensor(np.array([0.0700, -0.5446,  0.9214]), mindspore.float32)
>>> q = Tensor(np.array([0, 0.5, 1]), mindspore.float32)
>>> output = ops.nanquantile(x, q)
>>> print(output.asnumpy())
[-0.5446  0.07  0.9214]
tinyms.primitives.nansum(input, axis=None, keepdims=False, *, dtype=None)[source]

Computes sum of input over a given dimension, treating NaNs as zero.

Parameters:
  • input (Tensor) – The input Tensor.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Supposed the rank of input is r, axis must be in the range [-rank(input), rank(input)). Default: None, all dimensions are reduced.

  • keepdims (bool, optional) – Whether the output Tensor keeps dimensions or not. Default: False.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The dtype of output Tensor. Default: None.

Returns:

Tensor, the sum of input in the given dimension axis, treating NaNs as zero.

  • If axis is None, keepdims is False, the output is a 0-D Tensor representing the sum of all elements in the input Tensor.

  • If axis is int, set as 2, and keepdims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int) or list(int), set as (2, 3), and keepdims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Raises:
  • TypeError – If input is not Tensor.

  • TypeError – If keepdims is not a bool.

  • TypeError – If the dtype of input or of dtype is a complex type.

  • ValueError – If axis is not in [-rank(input), rank(input)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[float("nan"), 2, 3], [1, 2, float("nan")]]), mindspore.float32)
>>> output1 = ops.nansum(x, axis=0, keepdims=False, dtype=mindspore.float32)
>>> output2 = ops.nansum(x, axis=0, keepdims=True, dtype=mindspore.float32)
>>> print(output1)
[1. 4. 3.]
>>> print(output2)
[[1. 4. 3.]]
tinyms.primitives.narrow(input, axis, start, length)[source]

Returns a narrowed tensor from the input tensor. Along dimension axis, the output spans from start to start + length.

Parameters:
  • input (Tensor) – the tensor to narrow.

  • axis (int) – the axis along which to narrow.

  • start (int) – the starting index along axis.

  • length (int) – the length of the slice taken along axis.

Returns:

Tensor.

  • output (Tensor) - The narrowed tensor.

Raises:

TypeError – If the input is not a tensor or tuple or list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> from mindspore import Tensor
>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mindspore.int32)
>>> output = ops.narrow(x, 0, 0, 2)
>>> print(output)
[[ 1 2 3]
 [ 4 5 6]]
>>> output = ops.narrow(x, 1, 1, 2)
>>> print(output)
[[ 2 3]
 [ 5 6]
 [ 8 9]]
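narrow selects the index range [start, start + length) along axis, so the second call above matches basic slicing; a hypothetical numpy cross-check (not part of the original documentation):

>>> import numpy as np
>>> print(np.array_equal(ops.narrow(x, 1, 1, 2).asnumpy(), x.asnumpy()[:, 1:3]))
True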
tinyms.primitives.ne(x, y)[source]

Computes the non-equivalence of two tensors element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, the shapes of them could be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i} \ne y_{i} \\ & \text{False, if } x_{i} = y_{i} \end{cases}\end{split}\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:
  • TypeError – If x and y is not one of the following: Tensor, Number, bool.

  • TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> output = ops.ne(x, 2.0)
>>> print(output)
[ True False  True]
>>>
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> output = ops.ne(x, y)
>>> print(output)
[False False  True]
tinyms.primitives.neg(input)[source]

Returns a tensor with negative values of the input tensor element-wise.

\[out_{i} = - input_{i}\]
Parameters:

input (Tensor) – The input tensor with a dtype of Number.

Returns:

Tensor, has the same shape and dtype as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = ops.neg(input)
>>> print(output)
[-1.  -2.   1.  -2.   0.   3.5]
tinyms.primitives.negative(input)[source]

Alias for mindspore.ops.neg() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.nextafter(input, other)[source]

Returns the next representable floating-point value after input towards other element-wise.

Say there are two float32 numbers \(a\), \(b\), and let the representable delta of the float32 datatype be \(eps\). If \(a < b\), then the next representable value of \(a\) towards \(b\) is \(a+eps\), and the next representable value of \(b\) towards \(a\) is \(b-eps\).

\[out_{i} = nextafter({input_{i}, other_{i}})\]
Parameters:
  • input (Tensor) – The first input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

  • other (Tensor) – The second input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

Returns:

Tensor, has the same shape and data type as input.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input and other is not one of: float32, float64.

  • TypeError – If the dtypes of input and other are not same.

  • ValueError – If input’s shape is not the same as that of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_ = Tensor(np.asarray([0.0]), mindspore.float32)
>>> other_ = Tensor(np.asarray([0.1]), mindspore.float32)
>>> output_ = ops.nextafter(input_, other_)
>>> print(output_)
[1.e-45]
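Here the next representable float32 value after \(0.0\) towards \(0.1\) is the smallest positive subnormal, \(2^{-149} \approx 1.4 \times 10^{-45}\), printed as 1.e-45 (a worked note, not part of the original documentation).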
tinyms.primitives.nll_loss(inputs, target, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)[source]

Gets the negative log likelihood loss between inputs and target.

The nll loss with reduction=none can be described as:

\[\ell(x, t)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=-w_{t_{n}} x_{n, t_{n}}, \quad w_{c}=\text { weight }[c] \cdot \mathbb{1} \{c \not= \text{ignore_index}\},\]

where \(x\) is the inputs, \(t\) is the target, \(w\) is the weight, N is the batch size, \(c\) belonging to [0, C-1] is class index, where \(C\) is the number of classes.

If reduction is not ‘none’ (default ‘mean’), then

\[\begin{split}\ell(x, t)=\left\{\begin{array}{ll} \sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{t n}} l_{n}, & \text { if reduction }=\text { 'mean', } \\ \sum_{n=1}^{N} l_{n}, & \text { if reduction }=\text { 'sum' } \end{array}\right.\end{split}\]
Parameters:
  • inputs (Tensor) – \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\). inputs is expected to be log-probabilities, data type must be float16 or float32.

  • target (Tensor) – \((N)\) or \((N, d_1, d_2, ..., d_K)\) for high-dimensional loss, data type must be int32.

  • weight (Tensor) – A rescaling weight applied to the loss of each batch element. If not None, the shape is \((C,)\). The data type must be float16 or float32. Default: None.

  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default: -100

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

  • label_smoothing (float) – Label smoothing values, a regularization tool used to prevent the model from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default value: 0.0.

Returns:

Tensor, the computed loss value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
>>> output = ops.nll_loss(inputs, target)
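With the default reduction='mean' the returned loss is a scalar Tensor; a minimal shape check (not part of the original example):

>>> print(output.shape)
()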
tinyms.primitives.nonzero(input)[source]

Return a Tensor of the positions of all non-zero values.

Parameters:

input (Tensor) – The shape of Tensor is \((x_1, x_2, ..., x_R)\). The data type is int, float or bool.

Returns:

Tensor, a 2-D Tensor whose data type is int64, containing the positions of all non-zero values of the input.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([[[1,  0], [-5, 0]]]), mindspore.int32)
>>> output = ops.nonzero(x)
>>> print(output)
[[0 0 0]
 [0 1 0]]
>>> x = Tensor(np.array([1, 0, 2, 0, 3]), mindspore.int32)
>>> output = ops.nonzero(x)
>>> print(output)
[[0]
 [2]
 [4]]
tinyms.primitives.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None)[source]

Returns the matrix norm or vector norm of a given tensor.

ord is the calculation mode of norm. The following norm modes are supported.

ord                  | norm for matrices                | norm for vectors
---------------------|----------------------------------|----------------------------------
None (default)       | Frobenius norm                   | 2-norm (see below)
‘fro’                | Frobenius norm                   | – not supported –
‘nuc’                | nuclear norm                     | – not supported –
inf                  | \(max(sum(abs(x), dim=1))\)      | \(max(abs(x))\)
-inf                 | \(min(sum(abs(x), dim=1))\)      | \(min(abs(x))\)
0                    | – not supported –                | \(sum(x != 0)\)
1                    | \(max(sum(abs(x), dim=0))\)      | as below
-1                   | \(min(sum(abs(x), dim=0))\)      | as below
2                    | largest singular value           | as below
-2                   | smallest singular value          | as below
other int or float   | – not supported –                | \(sum(abs(x)^{ord})^{(1 / ord)}\)

Note

Currently, complex numbers are not supported.

Parameters:
  • A (Tensor) – Tensor of shape \((*, n)\) or \((*, m, n)\) where * is zero or more batch dimensions.

  • ord (Union[int, float, inf, -inf, 'fro', 'nuc'], optional) – norm’s mode. Refer to the table above for behavior. Default: None.

  • dim (Union[int, Tuple(int)], optional) –

    calculate the dimension of vector norm or matrix norm. Default: None.

    • When dim is int, it will be calculated by vector norm.

    • When dim is a 2-tuple, it will be calculated by matrix norm.

    • If dim is None and ord is None, A will be flattened to 1D and the 2-norm of the vector will be calculated.

    • If dim is None and ord is not None, A must be 1D or 2D.

  • keepdim (bool) – whether the output Tensor retains the original dimension. Default: False.

Keyword Arguments:

dtype (mindspore.dtype, optional) – When set, A will be converted to the specified type, dtype, before execution, and dtype of returned Tensor will also be dtype. Default: None.

Returns:

Tensor, the result of norm calculation on the specified dimension, dim, has the same dtype as A.

Raises:
  • ValueError – If dim is out of range.

  • TypeError – If dim is neither an int nor a tuple of int.

  • TypeError – If A is a vector and ord is a str.

  • ValueError – If A is a matrix and ord is not a valid mode.

  • ValueError – If A is a matrix and ord is an integer but not in [1, -1, 2, -2].

  • ValueError – If two elements of dim are the same after normalization.

  • ValueError – If any elements of dim is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> x = ops.arange(-12, 13, dtype=ms.float32)
>>> y = x.reshape(5, 5)
>>> print(ops.norm(x))
36.05551
>>> print(ops.norm(x, float('inf')))
12.0
>>> print(ops.norm(x, float('-inf')))
0.0
>>> print(ops.norm(x, 0))
24.0
>>> print(ops.norm(x, 1))
156.0
>>> print(ops.norm(x, -1))
0.0
>>> print(ops.norm(x, 2))
36.05551
>>> print(ops.norm(x, -2))
0.0
>>> print(ops.norm(x, 3))
23.000631
>>> print(ops.norm(x, -3))
0.0
>>> print(ops.norm(y))
36.05551
>>> print(ops.norm(y, 'fro'))
36.05551
>>> print(ops.norm(y, 'nuc'))
42.42641
>>> print(ops.norm(y, float('inf')))
50.0
>>> print(ops.norm(y, float('-inf')))
6.0
>>> print(ops.norm(y, 1))
32.0
>>> print(ops.norm(y, -1))
30.0
>>> print(ops.norm(y, 2))
35.355343
>>> m = ms.Tensor([[1., -1., 2.], [-2., 3., -4.]])
>>> print(ops.norm(m, dim=0))
[2.236068  3.1622777 4.472136 ]
>>> print(ops.norm(m, dim=1))
[2.4494898 5.3851647]
>>> print(ops.norm(m, ord=1, dim=1))
[4. 9.]
>>> n = ops.arange(27, dtype=ms.float32).reshape(3, 3, 3)
>>> print(ops.norm(n, dim=(1, 2)))
[14.282857 39.76179  66.45299 ]
>>> print(ops.norm(n[0, :, :]), ops.norm(n[1, :, :]), ops.norm(n[2, :, :]))
14.282857 39.76179 66.45299
tinyms.primitives.normal(shape, mean, stddev, seed=None)[source]

Generates random numbers according to the Normal (or Gaussian) random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Union[Tensor, int, float]) – The mean μ distribution parameter, which specifies the location of the peak, with data type in [int8, int16, int32, int64, float16, float32].

  • stddev (Union[Tensor, int, float]) – The deviation σ distribution parameter. It should be greater than 0, with data type in [int8, int16, int32, int64, float16, float32].

  • seed (int) – Seed is used as entropy source for the Random number engines to generate pseudo-random numbers. The value must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of mean and stddev. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> shape = (3, 1, 2)
>>> mean = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[1, 2, 3], [3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 3, 3)
tinyms.primitives.not_equal(input, other)[source]

Alias for mindspore.ops.ne() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.numel(input)[source]

Returns a Scalar of type int that represents the total number of elements in the Tensor.

Parameters:

input (Tensor) – Input Tensor.

Returns:

int. A scalar representing the total number of elements in the Tensor.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> print(ops.numel(input_x))
4
tinyms.primitives.one_hot(indices, depth, on_value, off_value, axis=-1)[source]

Computes a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters:
  • indices (Tensor) – A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\). Data type must be uint8, int32 or int64.

  • depth (int) – A scalar defining the depth of the one-hot dimension.

  • on_value (Union[Tensor, int, float]) – A value to fill in output when indices[j] = i. Support uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, bool, complex64, complex128.

  • off_value (Union[Tensor, int, float]) – A value to fill in output when indices[j] != i. Has the same data type as on_value.

  • axis (int) – Position to insert the value. e.g. If shape of indices is \((N, C)\), and axis is -1, the output shape will be \((N, C, depth)\); if axis is 0, the output shape will be \((depth, N, C)\). Default: -1.

Returns:

Tensor, one-hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)\).

Raises:
  • TypeError – If axis or depth is not an int.

  • TypeError – If dtype of indices is not uint8, int32 or int64.

  • TypeError – If indices, on_value or off_value is not a Tensor.

  • ValueError – If axis is not in range [-1, ndim].

  • ValueError – If depth is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> output = ops.one_hot(indices, depth, on_value, off_value, axis=-1)
>>> print(output)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
tinyms.primitives.ones(shape, dtype=None)[source]

Creates a tensor filled with value ones.

Creates a tensor whose shape is described by the first argument and fills it with ones of the type given by the second argument.

Parameters:
  • shape (Union[tuple[int], int]) – The specified shape of output tensor. Only constant positive int is allowed.

  • dtype (mindspore.dtype) – The specified type of output tensor. If dtype is None, mindspore.float32 will be used. Default: None.

Returns:

Tensor, with the specified shape and dtype.

Raises:

TypeError – If shape is neither tuple nor int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.ones((2, 2), mindspore.float32)
>>> print(output)
[[1. 1.]
 [1. 1.]]
tinyms.primitives.ones_like(input, *, dtype=None)[source]

Returns a Tensor filled with the value 1, with the same shape as the input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, has the same shape as input but filled with ones.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = ops.ones_like(x)
>>> print(output)
[[1 1]
 [1 1]]
tinyms.primitives.orgqr(input, input2)[source]

Calculates the explicit representation of the orthogonal matrix \(Q\) returned by mindspore.ops.Geqrf.

Take the case of input without a batch dimension as an example: this computes the first \(N\) columns of a product of Householder matrices. Suppose input is a matrix of size \((M, N)\) after the Householder transformation. With the diagonal of input set to 1, every column of the lower triangular part of input is denoted as \(w_j\) for \(j=1, \ldots, M\); this function returns the first \(N\) columns of the matrix

\[H_{1} H_{2} \ldots H_{k} \quad \text { with } \quad H_{j}=\mathrm{I}_{M}-\tau_{j} w_{j} w_{j}^{\mathrm{H}}\]

where \(\mathrm{I}_{M}\) is the \(M\)-dimensional identity matrix. When \(w\) is complex, \(w^{\mathrm{H}}\) is the conjugate transpose; otherwise it is the transpose. The output matrix is the same size as the input matrix input. \(\tau\) corresponds to input2.

Parameters:
  • input (Tensor) – Tensor of shape \((*, M, N)\), indicating 2D or 3D matrices, with float32, float64, complex64 and complex128 data type.

  • input2 (Tensor) – Tensor of shape \((*, K)\), where K is less than or equal to N, indicating the reflecting coefficients in the Householder transformation, which has the same dtype as input.

Returns:

Tensor, has the same shape and data type as input.

Raises:
  • TypeError – If input or input2 are not Tensors.

  • TypeError – If dtype of input and input2 is not one of: float64, float32, complex64, complex128.

  • ValueError – If input and input2 have different batch size.

  • ValueError – If input.shape[-2] < input.shape[-1].

  • ValueError – If input.shape[-1] < input2.shape[-1].

  • ValueError – If rank(input) - rank(input2) != 1.

  • ValueError – If rank(input) != 2 or 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[-114.6, 10.9, 1.1], [-0.304, 38.07, 69.38], [-0.45, -0.17, 62.]]),
... mindspore.float32)
>>> input2 = Tensor(np.array([1.55, 1.94, 0.0]), mindspore.float32)
>>> y = ops.orgqr(input, input2)
>>> print(y)
[[-0.54999995 -0.2128925   0.8137956 ]
 [ 0.47119996 -0.8752807   0.08240613]
 [ 0.69749993  0.42560163  0.57772595]]
tinyms.primitives.outer(input, vec2)[source]

Return outer product of input and vec2. If input is a vector of size \(n\) and vec2 is a vector of size \(m\) , then output must be a matrix of shape \((n, m)\) .

Note

This function does not broadcast.

Parameters:
  • input (Tensor) – 1-D input vector.

  • vec2 (Tensor) – 1-D input vector.

Returns:

out (Tensor, optional), 2-D matrix, the outer product of two vectors.

Raises:
  • TypeError – If input or vec2 is not a Tensor.

  • ValueError – If input or vec2 is not an 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> input = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> vec2 = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> out = ops.outer(input, vec2)
>>> print(out)
[[1 2 3]
 [2 4 6]
 [3 6 9]]
tinyms.primitives.pad(input_x, padding, mode='constant', value=None)[source]

Pads the input tensor according to the padding.

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • padding (Union[tuple[int], list[int], Tensor]) –

    Filling position of pad. \(\left\lfloor\frac{\text{len(padding)}}{2}\right\rfloor\) dimensions of input_x will be padded.

    Example: to pad only the last dimension of the input tensor, then padding has the form \((\text{padding_left}, \text{padding_right})\);

    Example: to pad the last 2 dimensions of the input tensor, then use \((\text{padding_left}, \text{padding_right}, \text{padding_top}, \text{padding_bottom})\);

    Example: to pad the last 3 dimensions, use \((\text{padding_left}, \text{padding_right}, \text{padding_top}, \text{padding_bottom}, \text{padding_front}, \text{padding_back})\) and so on.

  • mode (str, optional) –

    Pad filling mode, “constant”, “reflect” or “replicate”. Default: “constant”.

    For “constant” mode, please refer to mindspore.nn.ConstantPad1d as an example to understand this filling pattern and extend the padding pattern to n dimensions.

    For “reflect” mode, please refer to mindspore.nn.ReflectionPad1d as an example to understand this filling pattern. The reflect mode is used to pad the last two dimensions of 3D or 4D input, or the last dimension of 2D or 3D input.

    For “replicate” mode, please refer to mindspore.nn.ReplicationPad1d as an example to understand this filling pattern. The replicate mode is used to pad the last three dimensions of 4D or 5D input, the last two dimensions of 3D or 4D input, or the last dimension of 2D or 3D input.

  • value (Union[int, float, None], optional) – Valid only in “constant” mode. Set the padding value in “constant” mode. If the value is None, 0 is used as the default padding value.

Returns:

Tensor, the tensor after padding.

Raises:
  • TypeError – If padding is not a tuple of int, a list of int or a Tensor.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If length of padding is not even.

  • ValueError – If length of padding is greater than 6.

  • ValueError – If mode is not “constant” and value is not None.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> x = ms.Tensor(np.arange(1 * 2 * 2 * 2).reshape((1, 2, 2, 2)), dtype=ms.float64)
>>> output = ops.pad(x, [1, 0, 0, 1], mode='constant', value=6.0)
>>> print(output)
[[[[6. 0. 1.]
   [6. 2. 3.]
   [6. 6. 6.]]
  [[6. 4. 5.]
   [6. 6. 7.]
   [6. 6. 6.]]]]
>>> output1 = ops.pad(x, (1, 0, 0, 1), mode='reflect')
>>> print(output1)
[[[[1. 0. 1.]
   [3. 2. 3.]
   [1. 0. 1.]]
  [[5. 4. 5.]
   [7. 6. 7.]
   [5. 4. 5.]]]]
>>> output2 = ops.pad(x, (1, 1, 2, 1), mode='replicate')
>>> print(output2)
[[[[0. 0. 1. 1.]
   [0. 0. 1. 1.]
   [0. 0. 1. 1.]
   [2. 2. 3. 3.]
   [2. 2. 3. 3.]]
  [[4. 4. 5. 5.]
   [4. 4. 5. 5.]
   [4. 4. 5. 5.]
   [6. 6. 7. 7.]
   [6. 6. 7. 7.]]]]
tinyms.primitives.padding(x, pad_dim_size=8)[source]

Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0.

Parameters:
  • x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The rank of x must be at least 2. The last dimension of x must be 1. The data type is Number.

  • pad_dim_size (int) – The value of the last dimension of x to be extended, which must be positive. Default: 8.

Returns:

Tensor, has the same dtype as x; the shape equals the shape of x with the last dimension extended from 1 to pad_dim_size.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
>>> pad_dim_size = 4
>>> output = ops.padding(x, pad_dim_size)
>>> print(output)
[[ 8.  0.  0.  0.]
 [10.  0.  0.  0.]]
tinyms.primitives.pdist(input, p=2.0)[source]

Calculates the distance between every pair of row vectors in the input using the p-norm. If input is a 2D Tensor with shape \((N, M)\), the output must be a 1D Tensor with shape \((N * (N - 1) / 2,)\). If input has batch dimensions with shape \((*B, N, M)\), then the output must be a Tensor with shape \((*B, N * (N - 1) / 2)\).

\[y[n] = \sqrt[p]{{\mid x_{i} - x_{j} \mid}^p}\]

where \(x_{i}, x_{j}\) are two different row vectors in the input.

Parameters:
  • input (Tensor) – Input tensor of shape \((*B, N, M)\). \(*B\) is batch size, one-dim or multi-dim. dtype: float16, float32 or float64.

  • p (float) – The order of norm distance, \(p∈[0, ∞)\). Default: 2.0.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • TypeError – If p is not a float.

  • ValueError – If p is a negative float.

  • ValueError – If dimension of input is less than 2.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]).astype(np.float32))
>>> y = ops.pdist(x, p=2.0)
>>> print(y)
[1.4142135 2.828427 1.4142135]
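
The output is ordered by row pairs \((i, j)\) with \(i < j\). A NumPy cross-check of the example above (illustrative only, not part of the operator's implementation):

>>> import numpy as np
>>> x = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
>>> # pairwise Euclidean distances in pair order (0,1), (0,2), (1,2)
>>> [float(np.linalg.norm(x[i] - x[j])) for i in range(3) for j in range(i + 1, 3)]
[1.4142135623730951, 2.8284271247461903, 1.4142135623730951]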
tinyms.primitives.permute(input, axis)[source]

Permutes the dimensions of the input tensor according to axis.

Parameters:
  • input (Tensor) – Input Tensor.

  • axis (Union[tuple(int), int]) – The desired order of dimensions; the tensor is permuted to match this axis order.

Returns:

Tensor, has the same dimension as input tensor, with axis suitably permuted.

Raises:
  • ValueError – If axis is None.

  • ValueError – If the number of elements of axis is not equal to input ndim.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> print(ops.permute(input_x, input_perm))
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
tinyms.primitives.pinv(x, *, atol=None, rtol=None, hermitian=False)[source]

Computes the (Moore-Penrose) pseudo-inverse of a matrix. This function is computed using SVD. If \(x=U*S*V^{T}\), then the pseudo-inverse of x is \(x^{+}=V*S^{+}*U^{T}\), where \(S^{+}\) takes the reciprocal of each non-zero element on the diagonal of S and leaves zeros in place.

Batch matrices are supported. If x is a batch matrix, the output has the same batch dimensions when atol or rtol is a float. If atol or rtol is a Tensor, its shape must be broadcastable to the shape of the singular values returned by x.svd . If x.shape is \((B, M, N)\) and the shape of atol or rtol is \((K, B)\), the output shape is \((K, B, N, M)\).

When hermitian is True, only the real domain is currently supported; x is treated as real symmetric, so x is not checked internally, and only the lower triangular part is used in the computations.

Singular values of x (or the norms of the eigenvalues when hermitian is True) below the threshold \(max(atol, \sigma \cdot rtol)\), where \(\sigma\) is the largest singular value or eigenvalue, are set to zero and are not used in the computations. If rtol is not specified and x is a matrix of dimensions (M, N), then rtol is set to \(rtol=max(M, N)*\varepsilon\), where \(\varepsilon\) is the eps value of x.dtype. If rtol is not specified and atol specifies a value larger than zero, rtol is set to zero.

Note

This function uses svd internally, (or eigh , when hermitian = True). So it has the same problem as these functions. For details, see the warnings in svd() and eigh().

Parameters:

x (Tensor) –

A matrix to be calculated. Only float32, float64 are supported Tensor dtypes. shape is \((*, M, N)\), * is zero or more batch dimensions.

  • When hermitian is true, batch dimensions are not supported temporarily.

Keyword Arguments:
  • atol (float, Tensor) – absolute tolerance value. Default: None.

  • rtol (float, Tensor) – relative tolerance value. Default: None.

  • hermitian (bool) – An optional bool. x is assumed to be symmetric if real. Default: False.

Outputs:
  • output (Tensor) - same type as input. Shape is \((*, N, M)\), * is zero or more batch dimensions.

Raises:
Supported Platforms:

CPU

Examples

>>> x = Tensor([[4., 0.], [0., 5.]], mindspore.float32)
>>> output = ops.pinv(x)
>>> print(output)
[[0.25  0. ]
[0.  0.2 ]]
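
A minimal NumPy sketch of the SVD-based construction described above (assuming no batch dimensions; illustrative only):

>>> import numpy as np
>>> x = np.array([[4.0, 0.0], [0.0, 5.0]])
>>> u, s, vt = np.linalg.svd(x)
>>> s_plus = np.where(s > 1e-15, 1.0 / s, 0.0)  # reciprocal of non-zero singular values
>>> x_pinv = (vt.T * s_plus) @ u.T              # V * S^+ * U^T
>>> np.allclose(x_pinv, np.linalg.pinv(x))
True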
tinyms.primitives.pixel_shuffle(input, upscale_factor)[source]

Applies the PixelShuffle operation over input, which implements sub-pixel convolutions with stride \(1/r\) . For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network .

Typically, the input is of shape \((*, C \times r^2, H, W)\) , and the output is of shape \((*, C, H \times r, W \times r)\), where r is an upscale factor and * is zero or more batch dimensions.

Parameters:
  • input (Tensor) – Tensor of shape \((*, C \times r^2, H, W)\) . The dimension of input is larger than 2, and the length of the third-to-last dimension must be divisible by upscale_factor squared.

  • upscale_factor (int) – factor to shuffle the input Tensor, and is a positive integer. upscale_factor is the above-mentioned \(r\).

Returns:

  • output (Tensor) - Tensor of shape \((*, C, H \times r, W \times r)\) .

Raises:
  • ValueError – If upscale_factor is not a positive integer.

  • ValueError – If the length of third to last dimension is not divisible by upscale_factor squared.

  • TypeError – If the dimension of input is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(3 * 2 * 9 * 4 * 4).reshape((3, 2, 9, 4, 4))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = ops.pixel_shuffle(input_x, 3)
>>> print(output.shape)
(3, 2, 1, 12, 12)
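
The shape transformation can be reproduced with a reshape/transpose pair, assuming the standard sub-pixel layout (an illustrative NumPy sketch, not the operator's implementation):

>>> import numpy as np
>>> c, r, h, w = 2, 3, 4, 4
>>> x = np.arange(c * r * r * h * w).reshape((c * r * r, h, w))
>>> # (C*r^2, H, W) -> (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
>>> y = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2).reshape(c, h * r, w * r)
>>> y.shape
(2, 12, 12)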
tinyms.primitives.pixel_unshuffle(input, downscale_factor)[source]

Applies the PixelUnshuffle operation over input, which is the inverse of PixelShuffle. For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network .

Typically, the input is of shape \((*, C, H \times r, W \times r)\) , and the output is of shape \((*, C \times r^2, H, W)\) , where r is a downscale factor and * is zero or more batch dimensions.

Parameters:
  • input (Tensor) – Tensor of shape \((*, C, H \times r, W \times r)\) . The dimension of input is larger than 2, and the lengths of the second-to-last and last dimensions must be divisible by downscale_factor .

  • downscale_factor (int) – factor to unshuffle the input Tensor, and is a positive integer. downscale_factor is the above-mentioned \(r\).

Returns:

  • output (Tensor) - Tensor of shape \((*, C \times r^2, H, W)\) .

Raises:
  • ValueError – If downscale_factor is not a positive integer.

  • ValueError – If the length of second to last dimension or last dimension is not divisible by downscale_factor .

  • TypeError – If the dimension of input is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(8 * 8).reshape((1, 1, 8, 8))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = ops.pixel_unshuffle(input_x, 2)
>>> print(output.shape)
(1, 4, 4, 4)
tinyms.primitives.poisson(shape, mean, seed=None)[source]

ops.poisson is deprecated; please use mindspore.ops.random_poisson instead. Generates random numbers according to the Poisson random number distribution.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input “shape” and shapes of mean. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

  • TypeError – If seed is not an int.

Supported Platforms:

deprecated

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> # case 1: It can be broadcast.
>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(4, 2)
>>> # case 2: It can not be broadcast. It is recommended to use the same shape.
>>> shape = (2, 2)
>>> mean = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(2, 2)
tinyms.primitives.polar(abs, angle)[source]

Converts polar coordinates to Cartesian coordinates.

Returns a complex tensor, its elements are Cartesian coordinates constructed with the polar coordinates which is specified by radial distance abs and polar angle angle.

\[y_{i} = abs_{i} * cos(angle_{i}) + abs_{i} * sin(angle_{i}) * j\]
Parameters:
  • abs (Tensor) – Radial distance. The shape of tensor is \((N,*)\) where \(N\) means the batchsize of the input tensor, \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

  • angle (Tensor) – Polar angle. It has the same shape and dtype as abs.

Returns:

Tensor, has the same shape as abs.

  • If the inputs are float32, the output dtype is complex64.

  • If the inputs are float64, the output dtype is complex128.

Raises:
  • TypeError – If neither abs nor angle is a Tensor.

  • TypeError – If the dtype of input is not one of: float32, float64.

  • TypeError – If the dtypes of abs and angle are not the same.

  • ValueError – If abs’s shape is not the same as angle.

Supported Platforms:

GPU CPU

Examples

>>> abs = Tensor(np.array([1, 2]), mindspore.float64)
>>> angle = Tensor(np.array([np.pi / 2, 5 * np.pi / 4]), mindspore.float64)
>>> output = ops.polar(abs, angle)
>>> print(output)
[ 6.12323400e-17+1.j         -1.41421356e+00-1.41421356j]
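
This is equivalent to \(abs \cdot e^{j \cdot angle}\); a NumPy cross-check of the example (illustrative only):

>>> import numpy as np
>>> print(np.array([1, 2]) * np.exp(1j * np.array([np.pi / 2, 5 * np.pi / 4])))
[ 6.12323400e-17+1.j         -1.41421356e+00-1.41421356j]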
tinyms.primitives.polygamma(n, input)[source]

Computes the \(n\)-th derivative of the polygamma function on input.

\[\psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x)\]

where \(\psi(x)\) is the digamma function.

Parameters:
  • n (Tensor) – The order of the polygamma function. Supported dtypes: int32, int64. The shape of n is \(()\).

  • input (Tensor) – The tensor to compute the \(n\)-th derivative of the polygamma function with.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of: float16, float32, float64.

  • TypeError – If dtype of n is not one of: int32, int64.

  • TypeError – If shape of n is not \(()\).

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([3.14, -2.71]), mindspore.float64)
>>> a = Tensor(np.array(1), mindspore.int64)
>>> output = ops.polygamma(a, x)
>>> print(output)
[ 0.37446456 15.49884838]
tinyms.primitives.population_count(input_x)[source]

Computes the element-wise population count (a.k.a. bitsum, bitcount). For each entry in input_x, calculates the number of 1 bits in the binary representation of that entry.

Parameters:

input_x (Tensor) – Tensor of any dimension. The data type must be int16 or uint16 (Ascend). The data type must be int8, int16, int32, int64, uint8, uint16, uint32, uint64 (CPU and GPU).

Returns:

Tensor, with the same shape as the input, and the data type is uint8.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not int16, uint16 (Ascend).

  • TypeError – If dtype of input_x is not int8, int16, int32, int64, uint8, uint16, uint32, uint64 (CPU and GPU).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([0, 1, 3], mindspore.int16)
>>> output = ops.population_count(input_x)
>>> print(output)
[0 1 2]
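
Equivalently, in plain Python (illustrative only):

>>> [bin(v).count("1") for v in (0, 1, 3)]
[0, 1, 2]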
tinyms.primitives.positive(input)[source]

Returns the input Tensor itself.

Parameters:

input (Tensor) – Input Tensor.

Returns:

Tensor, the input itself.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mstype.float32)
>>> print(ops.positive(x))
[ -5.    1.5   3.  100. ]
tinyms.primitives.pow(input, exponent)[source]

Calculates the exponent power of each element in input.

\[out_{i} = input_{i} ^{ exponent_{i}}\]

Note

  • Inputs of input and exponent comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • exponent (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and exponent are not one of the following: Tensor, number.Number or bool.

  • ValueError – If the shapes of input and exponent are different and cannot be broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = 3.0
>>> output = ops.pow(x, y)
>>> print(output)
[ 1.  8. 64.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> output = ops.pow(x, y)
>>> print(output)
[ 1. 16. 64.]
tinyms.primitives.prelu(x, weight)[source]

Parametric Rectified Linear Unit activation function.

PReLU is described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Defined as follows:

\[prelu(x_i)= \max(0, x_i) + \min(0, w * x_i),\]

where \(x_i\) is an element of a channel of the input, w is the weight of the channel.

Note

Scalar or 1-D Tensor is not supported on Ascend.

Parameters:
  • x (Tensor) – The input Tensor of the activation function. The data type is float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • weight (Tensor) – Weight Tensor. The data type is float16 or float32. The weight can only be a Tensor, and the length is the same as the number of channels C of the input_x. On GPU devices, when the input is a scalar, the shape is (1,).

Returns:

Tensor, with the same shape and dtype as x.

For detailed information, please refer to mindspore.nn.PReLU.

Raises:
  • TypeError – If dtype of x or weight is neither float16 nor float32.

  • TypeError – If the x or the weight is not a Tensor.

  • ValueError – If the x is a 0-D or 1-D Tensor on Ascend.

  • ValueError – If the weight is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(-6, 6).reshape((2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> output = ops.prelu(x, weight)
>>> print(output)
[[[-0.60 -0.50]
  [-2.40 -1.80]
  [ 0.60  0.30]]
 [[ 0.00  1.00]
  [ 2.00  3.00]
  [ 4.0   5.00]]]
tinyms.primitives.print_(*input_x)[source]

Outputs the inputs to stdout. The outputs are printed to screen by default. It can also be saved in a file by setting the parameter print_file_path in context. Once set, the output will be saved in the file specified by print_file_path. mindspore.parse_print() can be employed to reload the data. For more information, please refer to mindspore.set_context() and mindspore.parse_print().

Note

In pynative mode, please use python print function. In Ascend platform with graph mode, the bool, int and float would be converted into Tensor to print, and str remains unchanged. This function is used for debugging. When too much data is printed at the same time, in order not to affect the main process, the framework may discard some data. If you need to record the data completely, you are recommended to use the Summary function, and can check Summary.

Parameters:

input_x (Union[Tensor, bool, int, float, str, tuple, list]) – The inputs of print_. Supports multiple inputs which are separated by ‘,’.

Returns:

Invalid value, should be ignored.

Raises:

TypeError – If input_x is not one of the following: Tensor, bool, int, float, str, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([2, 1]).astype(np.int32))
>>> y = Tensor(np.ones([2, 2]).astype(np.int32))
>>> result = ops.print_('Print Tensor x and Tensor y:', x, y)
Print Tensor x and Tensor y:
Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [1]])
Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [1, 1]])
tinyms.primitives.prod(input, axis=None, keep_dims=False)[source]

Reduces a dimension of a tensor by multiplying all elements along that dimension. By default, all dimensions are reduced; axis selects the dimensions of input to reduce. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:
  • input (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of input is r, and the value range is [-r,r).

  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keep_dims is False, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((input_0, input_2, ..., input_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((input_0, input_3, ..., input_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keep_dims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.prod(x, 1, keep_dims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by multiplying all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = ops.prod(x)
>>> print(output)
2.2833798e+33
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.prod(x, 0, True)
>>> print(output)
[[[ 28.  28.  28.  28.  28.  28.]
  [ 80.  80.  80.  80.  80.  80.]
  [162. 162. 162. 162. 162. 162.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.prod(x, 1, True)
>>> print(output)
[[[  6.   6.   6.   6.   6.   6.]]
 [[120. 120. 120. 120. 120. 120.]]
 [[504. 504. 504. 504. 504. 504.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = ops.prod(x, 2, True)
>>> print(output)
[[[1.00000e+00]
  [6.40000e+01]
  [7.29000e+02]]
 [[4.09600e+03]
  [1.56250e+04]
  [4.66560e+04]]
 [[1.17649e+05]
  [2.62144e+05]
  [5.31441e+05]]]
tinyms.primitives.qr(input, mode='reduced')[source]

Returns the QR decomposition of one or more matrices. If mode is ‘reduced’ (the default), compute the P columns of Q where P is the minimum of the 2 innermost dimensions of input. If mode is ‘complete’, compute full-sized Q and R.

Parameters:
  • input (Tensor) – A matrix to be calculated. The matrix must be at least two dimensions, the supported dtype are float16, float32, float64, complex64 and complex128. Define the shape of input as \((..., m, n)\), p as the minimum values of m and n.

  • mode (Union['reduced', 'complete'], optional) – If mode is ‘reduced’, computing reduce-sized QR decomposition, otherwise, computing the full-sized QR decomposition. Default: ‘reduced’.

Returns:

  • Q (Tensor) - The orthonormal matrices of input. If mode is ‘complete’, the shape is \((m, m)\), else the shape is \((m, p)\). The dtype of Q is same as input.

  • R (Tensor) - The upper triangular matrices of input. If mode is ‘complete’, the shape is \((m, n)\), else the shape is \((p, n)\). The dtype of R is same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If mode is neither ‘reduced’ nor ‘complete’.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[20., -31, 7], [4, 270, -90], [-8, 17, -32]]), mstype.float32)
>>> Q, R = ops.qr(input)
>>> print(Q)
[[-0.912871    0.16366126  0.37400758]
 [-0.18257418 -0.9830709  -0.01544376]
 [ 0.36514837 -0.08238228  0.92729706]]
>>> print(R)
[[ -21.908903  -14.788506  -1.6431675]
[    0.       -271.9031    92.25824  ]
[    0.          0.       -25.665514 ]]
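
A NumPy cross-check that Q and R indeed reconstruct the input (illustrative only; Q and R are unique only up to signs):

>>> import numpy as np
>>> a = np.array([[20., -31, 7], [4, 270, -90], [-8, 17, -32]])
>>> q, r = np.linalg.qr(a)           # reduced QR, as with mode='reduced'
>>> np.allclose(q @ r, a)
True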
tinyms.primitives.quantile(input, q, axis=None, keepdims=False)[source]

Computes the q-th quantiles of all elements in input, when the q-th quantile lies between two data points, a linear interpolation is implemented between them.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). Supported dtypes: float32, float64.

  • q (Union[float, Tensor]) – A scalar or 1D tensor of quantile values in the range [0, 1]. Supported dtypes: float32, float64.

  • axis (int, optional) – The dimension to reduce. By default, axis is None resulting in the input tensor being flattened before computation. Default: None.

  • keepdims (bool, optional) – Whether the output tensor has dim retained or not. Default: False.

Returns:

Tensor, has the same dtype as the input.

Suppose the shape of input is \((x_0, x_1, ..., x_i, ..., x_R)\), axis = \(i\) and m is the element count of q.

  • If q is scalar and keepdims is True, the shape of output is \((x_0, x_1, ..., 1, ..., x_R)\).

  • If q is scalar and keepdims is False, the shape of output is \((x_0, x_1, ..., x_R)\) with axis \(i\) removed.

  • If q is a 1D Tensor and keepdims is True, the shape of output is \((m, x_0, x_1, ..., 1, ..., x_R)\).

  • If q is a 1D Tensor and keepdims is False, the shape of output is \((m, x_0, x_1, ..., x_R)\) with axis \(i\) removed.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If q is not a Tensor or float.

  • TypeError – If dtype of input is not float32 or float64.

  • TypeError – If dtype of q is not float32 or float64.

  • TypeError – If dtype of input and the dtype of q is different.

  • ValueError – If the q values are not in the range [0, 1].

  • ValueError – If the axis value is out of range.

Supported Platforms:

Examples

>>> x = Tensor(np.array([0.0700, -0.5446,  0.9214]), mindspore.float32)
>>> q = Tensor(np.array([0, 0.5, 1]), mindspore.float32)
>>> output = ops.quantile(x, q)
>>> print(output.asnumpy())
[-0.5446  0.07  0.9214]
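
A NumPy cross-check of the example (np.quantile also interpolates linearly between data points; illustrative only):

>>> import numpy as np
>>> print(np.quantile(np.array([0.0700, -0.5446, 0.9214]), [0, 0.5, 1]).tolist())
[-0.5446, 0.07, 0.9214]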
tinyms.primitives.rad2deg(x)[source]

Converts angles in radians to angles in degrees element-wise.

Parameters:

x (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x isn’t float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor([[6.283, -3.142],[1.570, -6.283],[3.142, -1.570]], mindspore.float32)
>>> output = ops.rad2deg(x)
>>> print(output)
[[ 359.98935 -180.02333]
 [  89.95438 -359.98935]
 [ 180.02333  -89.95438]]
tinyms.primitives.rand(*size, dtype=None, seed=None)[source]

Returns a new tensor filled with numbers drawn from the uniform distribution over the interval \([0, 1)\), based on the given shape and dtype.

Parameters:

size (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g. \((2, 3)\) or \(2\).

Keyword Arguments:
  • dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, mindspore.float32 will be applied. Default: None.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Returns:

Tensor, with the designated shape and dtype, filled with random numbers from the uniform distribution on the interval \([0, 1)\).

Raises:
  • TypeErrorseed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> print(ops.rand((2,3)))
[[4.1702199e-01 9.9718481e-01 7.2032452e-01]
 [9.3255734e-01 1.1438108e-04 1.2812445e-01]]
tinyms.primitives.rand_like(input, seed=None, *, dtype=None)[source]

Returns a new tensor filled with numbers drawn from the uniform distribution over the interval \([0, 1)\), based on the given shape and dtype.

Parameters:
  • input (Tensor) – Input Tensor to specify the output shape and its default dtype.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, the same dtype of input will be applied. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with random numbers from the uniform distribution on the interval \([0, 1)\).

Raises:
  • TypeError – If seed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> a = Tensor([[2, 3, 4], [1, 2, 3]])
>>> print(ops.rand_like(a, dtype=ms.float32))
[[4.1702199e-01 9.9718481e-01 7.2032452e-01]
 [9.3255734e-01 1.1438108e-04 1.2812445e-01]]
tinyms.primitives.randint(low, high, size, seed=None, *, dtype=None)[source]

Returns a Tensor whose elements are random integers in the range of [low, high).

Parameters:
  • low (int) – Start value of interval.

  • high (int) – End value of interval.

  • size (tuple) – Shape of the new tensor.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be int type. If None, mindspore.int64 will be used. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with random integers from low (inclusive) to high (exclusive).

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> print(ops.randint(1, 10, (2,3)))
[[4 9 7]
 [9 1 2]]
tinyms.primitives.randint_like(input, low, high, seed=None, *, dtype=None)[source]

Returns a tensor with the same shape as Tensor input, whose elements are random integers in the range of [low, high).

Parameters:
  • input (Tensor) – Input Tensor to specify the output shape and its default dtype.

  • low (int) – Start value of interval.

  • high (int) – End value of interval.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be int type. If None, mindspore.int64 will be used. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with random integers from low (inclusive) to high (exclusive).

Raises:
  • TypeErrorseed is not a non-negative integer.

  • TypeErrorlow or high is not an integer.

  • ValueError – If dtype is not a mstype.int_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> a = Tensor([[1, 2, 3], [3, 2, 1]])
>>> print(ops.randint_like(a, 1, 10))
[[4 9 7]
 [9 1 2]]
tinyms.primitives.randn(*size, dtype=None, seed=None)[source]

Returns a new Tensor with given shape and dtype, filled with a sample (or samples) from the standard normal distribution.

Parameters:

size (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g., \((2, 3)\) or \(2\).

Keyword Arguments:
  • dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, mindspore.float32 will be used. Default: None.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Returns:

Tensor, with the designated shape and dtype, filled with a sample (or samples) from the “standard normal” distribution.

Raises:
  • TypeErrorseed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type.

  • ValueError – If size contains invalid number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> print(ops.randn((2, 2)))
[[ 0.30639967 -0.42438635]
 [-0.4287376   1.3054721 ]]
tinyms.primitives.randn_like(input, seed=None, *, dtype=None)[source]

Returns a new Tensor with given shape and dtype, filled with a sample (or samples) from the standard normal distribution.

Parameters:
  • input (Tensor) – Input Tensor to specify the output shape and its default dtype.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, mindspore.float32 will be used. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with a sample (or samples) from the “standard normal” distribution.

Raises:
  • TypeErrorseed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> a = Tensor([[1, 2, 3], [4, 5, 6]])
>>> print(ops.randn_like(a, dtype=ms.float32))
[[ 0.30639967 -0.42438635 -0.20454668]
 [-0.4287376   1.3054721   0.64747655]]
tinyms.primitives.random_categorical(logits, num_sample, seed=0, dtype=mindspore.int64)[source]

Generates random samples from a given categorical distribution tensor.

Parameters:
  • logits (Tensor) – The input tensor. 2-D Tensor with shape \((batch\_size, num\_classes)\).

  • num_sample (int) – Number of samples to be drawn. Only constant values are allowed.

  • seed (int) – Random seed. Only constant values are allowed. Default: 0.

  • dtype (mindspore.dtype) – The type of output. Its value must be one of mindspore.int16, mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Returns:

Tensor, The output Tensor with shape \((batch\_size, num\_samples)\).

Raises:
  • TypeError – If dtype is not one of the following: mindspore.int16, mindspore.int32, mindspore.int64.

  • TypeError – If logits is not a Tensor.

  • TypeError – If num_sample or seed is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> import numpy as np
>>> logits = Tensor(np.random.random((10, 5)).astype(np.float32), mstype.float32)
>>> output = ops.random_categorical(logits, 8)
>>> result = output.shape
>>> print(result)
(10, 8)
tinyms.primitives.random_gamma(shape, alpha, seed=None)[source]

Outputs random values from the Gamma distribution(s) described by alpha.

Parameters:
  • shape (Tensor) – The shape of random tensor to be generated. Must be one of the following types: int32, int64. 1-D integer tensor.

  • alpha (Tensor) – The \(\alpha\) distribution parameter. A Tensor. Must be one of the following types: half, float32, float64.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the concat shape between the input shape and the broadcast of alpha. The dtype is the same type as alpha.

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If alpha is not a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha is not half, float32 or float64.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> shape = Tensor(np.array([7, 5]), mindspore.int32)
>>> alpha = Tensor(np.array([0.5, 1.5]), mindspore.float32)
>>> output = ops.random_gamma(shape, alpha, seed=5)
>>> result = output.shape
>>> print(result)
(7, 5, 2)
tinyms.primitives.random_poisson(shape, rate, seed=None, dtype=mindspore.float32)[source]

Generates random number Tensor with shape shape according to a Poisson distribution with mean rate.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • shape (Tensor) – The shape of random tensor to be sampled from each poisson distribution, 1-D Tensor whose dtype is mindspore.dtype.int32 or mindspore.dtype.int64.

  • rate (Tensor) – The \(μ\) parameter the distribution is constructed with. It represents the mean of the distribution and also the variance of the distribution. It should be a Tensor whose dtype is mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – The data type of output: mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16. Default: mindspore.dtype.float32.

Returns:

A Tensor whose shape is the concatenation of shape and the shape of rate, and whose data type is equal to the argument dtype.

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If the datatype of shape is neither mindspore.dtype.int64 nor mindspore.dtype.int32.

  • ValueError – If shape of shape is not 1-D.

  • TypeError – If rate is not a Tensor nor a scalar.

  • TypeError – If datatype of rate is not in [mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16].

  • TypeError – If seed is not a non-negative int.

  • TypeError – If dtype is not in [mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16].

  • ValueError – If any element of input shape tensor is not positive.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: 1-D shape, 2-D rate, float64 output
>>> shape = Tensor(np.array([2, 2]), mindspore.int64)
>>> rate = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.random_poisson(shape, rate, seed=5, dtype=mindspore.float64)
>>> print(output.shape, output.dtype)
(2, 2, 2, 2) Float64
>>> # case 2: 1-D shape, scalar rate, int64 output
>>> shape = Tensor(np.array([2, 2]), mindspore.int64)
>>> rate = Tensor(5.0, mindspore.float64)
>>> output = ops.random_poisson(shape, rate, seed=5, dtype=mindspore.int64)
>>> print(output.shape, output.dtype)
(2, 2) Int64
tinyms.primitives.randperm(n, seed=0, offset=0, dtype=mindspore.int64)[source]

Generates random permutation of integers from 0 to n-1.

Returns the tensor with the determined shape inferred by n, the random numbers in it drawn from the data range that a given type can represent.

Parameters:
  • n (Union[Tensor, int]) – The input n Tensor with shape: () or (1,) and with data type of int64. The value of n must be greater than zero.

  • seed (int, optional) – Random seed. Default: 0. When seed is -1 (the only negative value allowed) and offset is 0, the seed is determined by time.

  • offset (int, optional) – Offset to generate random numbers. Priority is higher than random seed. Default: 0. It must be non-negative.

  • dtype (mindspore.dtype, optional) – The type of output. Its value must be one of the following types: int32, int16, int8, uint8, int64, float64, float32, float16. Default: int64.

Returns:

Tensor. Its shape is specified by the required argument n, and its dtype is specified by dtype.

Raises:
  • TypeError – If dtype is not allowed.

  • ValueError – If n is a negative or 0 element.

  • ValueError – If seed is a negative element.

  • ValueError – If n is larger than the maximal data of the set dtype.

Supported Platforms:

CPU

Examples

>>> n = 4
>>> seed = 0
>>> offset = 0
>>> output = ops.randperm(n, seed, offset, dtype=mstype.int64)
>>> print(output)
[1 0 2 3]
tinyms.primitives.range(start, end, step)[source]

Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end.

The types of all 3 inputs must be the same. The type of the resulting tensor is the same as the type of the inputs.

Parameters:
  • start (Tensor) – A scalar Tensor. The first number in the sequence. Must have type: int32 ,int64, float32 or float64.

  • end (Tensor) – A scalar Tensor. Upper limit of the sequence, exclusive. Must have type: int32 ,int64, float32 or float64.

  • step (Tensor) – A scalar Tensor. Number that increments start. Must have type: int32 ,int64, float32 or float64.

Returns:

A 1-D Tensor, with the same type as the inputs.

Raises:
  • TypeError – If start, end or step is not scalar Tensor.

  • TypeError – If datatype of start, end or step is not same.

  • TypeError – If datatype of start, end or step is not supported.

  • ValueError – If step = 0.

  • ValueError – If start >= end when step > 0.

  • ValueError – If start <= end when step < 0.

Supported Platforms:

GPU CPU

Examples

>>> start = Tensor(0, mstype.int32)
>>> end = Tensor(10, mstype.int32)
>>> step = Tensor(4, mstype.int32)
>>> output = ops.range(start, end, step)
>>> print(output)
[0 4 8]
tinyms.primitives.rank(input_x)[source]

Returns the rank of a tensor.

Returns a 0-D int32 Tensor representing the rank of input; the rank of a tensor is the number of indices required to uniquely select each element of the tensor.

Parameters:

input_x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Returns:

Tensor. 0-D int32 Tensor representing the rank of input, i.e., \(R\). The data type is an int.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.rank(input_tensor)
>>> print(output)
2
>>> print(type(output))
<class 'int'>
tinyms.primitives.ravel(input)[source]

Flattens the multidimensional Tensor into a 1-D Tensor along the 0-axis direction.

Parameters:

input (Tensor) – A tensor to be flattened.

Returns:

Tensor, a 1-D tensor, containing the same elements of the input.

Raises:

TypeError – If argument input is not Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = ops.ravel(x)
>>> print(output)
[0. 1. 2. 1.]
>>> print(output.shape)
(4,)
tinyms.primitives.real(input)[source]

Returns a Tensor that is the real part of the input. If input is real, it is returned unchanged.

Parameters:

input (Tensor) – The input tensor to compute to.

Returns:

Tensor, the shape is the same as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.asarray(1.3 + 0.4j), ms.complex64)
>>> output = ops.real(input)
>>> print(output)
1.3
tinyms.primitives.reciprocal(input)[source]

Returns reciprocal of a tensor element-wise.

\[out_{i} = \frac{1}{x_{i}}\]
Parameters:

input (Tensor) – The input tensor. \((N, *)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.array([1.0, 2.0, 4.0]), ms.float32)
>>> output = ops.reciprocal(input)
>>> print(output)
[1.   0.5  0.25]
tinyms.primitives.relu(input)[source]

Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise.

It returns \(\max(input,\ 0)\) element-wise. Specially, the neurons with the negative output will be suppressed and the active neurons will stay the same.

\[ReLU(input) = (input)^+ = max(0, input)\]

Note

In general, this operator is more commonly used. The difference from ReLuV2 is that ReLuV2 outputs an additional mask.

Parameters:

input (Tensor) –

Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, data type is number.

Returns:

Tensor of shape \((N, *)\), with the same dtype and shape as the input.

Raises:
  • TypeError – If dtype of input is not a number.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.relu(input_x)
>>> print(output)
[[0. 4. 0.]
 [2. 0. 9.]]
tinyms.primitives.relu6(x)[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Parameters:

x (Tensor) – Tensor of shape \((N, *)\) with float16 or float32 data type.

Returns:

Tensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> result = ops.relu6(input_x)
>>> print(result)
[[0. 4. 0.]
 [2. 0. 6.]]
tinyms.primitives.remainder(input, other)[source]

Computes the remainder of dividing the first input tensor by the second input tensor element-wise.

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[remainder(input, other) = input - input.div(other, rounding\_mode="floor") * other\]

Warning

  • When the elements of input exceed 2048, there might be accuracy problems.

  • The calculation results of this operator on Ascend and CPU might be inconsistent.

  • If shape is expressed as \((D1, D2, ..., Dn)\), then \(D1*D2*...*Dn \le 1000000\) and \(n \le 8\).

Parameters:
  • input (Union[Tensor, numbers.Number, bool]) – The first input is a number, a bool or a tensor whose data type is number.

  • other (Union[Tensor, numbers.Number, bool]) – When the first input is a tensor, The second input could be a number, a bool or a tensor whose data type is number.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision.

Raises:
  • TypeError – If neither input nor other is one of the following: Tensor, number, bool.

  • ValueError – If the shape input and other cannot be broadcasted to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-4.0, 5.0, 6.0]).astype(np.float16))
>>> y = Tensor(np.array([3.0, 2.0, 3.0]).astype(np.float16))
>>> output = ops.remainder(x, y)
>>> print(output)
[2.  1.  0.]
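
The formula above is the floor-mod convention; a NumPy cross-check of the example (illustrative only):

>>> import numpy as np
>>> x = np.array([-4.0, 5.0, 6.0])
>>> y = np.array([3.0, 2.0, 3.0])
>>> x - np.floor(x / y) * y          # same as np.remainder(x, y)
array([2., 1., 0.])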
tinyms.primitives.renorm(input, p, axis, maxnorm)[source]

Renormalizes the sub-tensors along dimension axis so that each sub-tensor’s p-norm does not exceed maxnorm. If the p-norm of a sub-tensor is less than maxnorm, its values are left unchanged; otherwise, each element of the sub-tensor is divided by the sub-tensor’s p-norm and then multiplied by maxnorm.

Parameters:
  • input (Tensor) – A Tensor, types: float32 or float16.

  • p (int) – Power of norm calculation.

  • axis (int) – The dimension that expected to get the slice-tensor.

  • maxnorm (float32) – Max norm.

Returns:

Tensor, has the same dtype and shape as input.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), mindspore.float32)
>>> y = ops.renorm(x, p=1, axis=0, maxnorm=5.)
>>> print(y)
[[1.       1.        1.        ]
[1.6666666 1.6666666 1.6666666 ]
[1.6666667 1.6666667 1.6666667 ]]
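
A NumPy sketch of the rule described above, reproducing the example (rows are the sub-tensors along axis 0; illustrative only):

>>> import numpy as np
>>> x = np.array([[1., 1., 1.], [2., 2., 2.], [3., 3., 3.]])
>>> norms = np.abs(x).sum(axis=1, keepdims=True)      # p=1 norm of each row
>>> scale = np.where(norms > 5.0, 5.0 / norms, 1.0)   # only rows exceeding maxnorm are rescaled
>>> print(np.round(x * scale, 4))
[[1.     1.     1.    ]
 [1.6667 1.6667 1.6667]
 [1.6667 1.6667 1.6667]]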
tinyms.primitives.repeat_elements(x, rep, axis=0)[source]

Repeat elements of a tensor along an axis, like np.repeat .

Parameters:
  • x (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • rep (int) – The number of times to repeat, must be positive.

  • axis (int) – The axis along which to repeat, default 0.

Returns:

One tensor with values repeated along the specified axis. If x has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * rep, ..., sn)\). The output type will be the same as the type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : repeat on axis 0
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
>>> # case 2 : repeat on axis 1
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 1)
>>> print(output)
[[0 0 1 1 2 2]
 [3 3 4 4 5 5]]
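
As noted above, the behavior matches np.repeat; a cross-check of case 1 (illustrative only):

>>> import numpy as np
>>> np.repeat(np.array([[0, 1, 2], [3, 4, 5]]), 2, axis=0)
array([[0, 1, 2],
       [0, 1, 2],
       [3, 4, 5],
       [3, 4, 5]])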
tinyms.primitives.repeat_interleave(input, repeats, axis=None)[source]

Repeat elements of a tensor along an axis, like numpy.repeat.

Parameters:
  • input (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • repeats (int) – The number of times to repeat, must be positive.

  • axis (int, optional) – The axis along which to repeat. Default: None. If axis is None, the input Tensor will be flattened and the output will also be flattened.

Returns:

One tensor with values repeated along the specified axis. If input has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * repeats, ..., sn)\). The output type will be the same as the type of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_interleave(input, repeats=2, axis=0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
tinyms.primitives.reshape(input, shape)[source]

Rearranges the input Tensor based on the given shape.

The shape can contain at most one -1, in which case that dimension is inferred from the remaining dimensions and the number of elements in the input.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • shape (Union[tuple[int], Tensor[int]]) – Constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). Only constant value is allowed.

Returns:

Tensor, the shape of tensor is \((y_1, y_2, ..., y_S)\).

Raises:

ValueError – Given a shape tuple: if it contains more than one -1; if the product of its elements is less than or equal to 0, or the total number of input elements is not divisible by it; or if it does not match the input’s total number of elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> output = ops.reshape(input, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
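
A -1 entry lets the remaining dimension be inferred, as described above (same input as the example; illustrative sketch):

>>> output = ops.reshape(input, (-1, 2))   # -1 is inferred as 3 from 6 elements
>>> print(output.shape)
(3, 2)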
tinyms.primitives.reverse(x, axis)[source]

Reverses specific dimensions of a tensor.

Warning

The value range of axis is [-dims, dims - 1], where dims is the rank of x.

Parameters:
  • x (Tensor) – The target tensor. The data type is Number except float64. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[tuple(int), list(int)]) – The indices of the dimensions to reverse.

Outputs:

Tensor, has the same shape and type as x.

Raises:
  • TypeError – If axis is neither list nor tuple.

  • TypeError – If element of axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> output = ops.reverse(input_x, axis=[1])
>>> print(output)
[[4 3 2 1]
 [8 7 6 5]]
>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> output = ops.reverse(input_x, axis=[1, 0])
>>> print(output)
[[8 7 6 5]
 [4 3 2 1]]
tinyms.primitives.reverse_sequence(x, seq_lengths, seq_dim, batch_dim=0)[source]

Reverses variable length slices: for each slice taken along batch_dim, the first seq_lengths[i] elements along seq_dim are reversed.

Parameters:
  • x (Tensor) – The input to reverse, supporting all number types including bool.

  • seq_lengths (Tensor) – Specified reversing length, must be a 1-D vector with int32 or int64 types.

  • seq_dim (int) – The dimension where reversal is performed. Required.

  • batch_dim (int) – The input is sliced in this dimension. Default: 0.

Returns:

Tensor, with the same shape and data type as x.

Raises:
  • TypeError – If seq_dim or batch_dim is not an int.

  • ValueError – If the value of batch_dim is greater than or equal to the rank of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[1. 2. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=0, batch_dim=1)
>>> print(output)
[[1. 5. 9.]
 [4. 2. 6.]
 [7. 8. 3.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([2, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[2. 1. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([3, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[3. 2. 1.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([4, 4]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[4. 3. 2. 1.]
 [8. 7. 6. 5.]]
tinyms.primitives.roll(input, shifts, dims=None)[source]

Rolls the elements of a tensor along an axis.

Parameters:
  • input (Tensor) – Input tensor.

  • shifts (Union[list(int), tuple(int), int]) – Specifies the number of places by which elements are shifted positively (towards larger indices) along the specified dimension. Negative shifts will roll the elements in the opposite direction.

  • dims (Union[list(int), tuple(int), int], optional) – Specifies the dimension indexes of shape to be rolled. Default: None. If dims is None, the Tensor will be flattened before rolling and then restored to the original shape.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If shifts is not an int, a tuple or a list.

  • TypeError – If dims is not an int, a tuple or a list.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([0, 1, 2, 3, 4]).astype(np.float32))
>>> output = ops.roll(input_x, shifts=2, dims=0)
>>> print(output)
[3. 4. 0. 1. 2.]
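
When dims is None, the tensor is flattened, rolled, and restored to its original shape. A minimal sketch of that behavior on a 2-D input:

>>> input_x = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))
>>> output = ops.roll(input_x, shifts=1)
>>> print(output)
[[3. 0.]
 [1. 2.]]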
tinyms.primitives.rot90(input, k, dims)[source]

Rotates an n-D tensor by 90 degrees in the plane specified by the dims axes. The rotation direction is from the first towards the second axis if k > 0, and from the second towards the first if k < 0.

Parameters:
  • input (Tensor) – Input tensor.

  • k (int) – Number of times to rotate.

  • dims (Union[list(int), tuple(int)]) – Axis to rotate.

Returns:

Tensor.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If k is not an integer.

  • TypeError – If dims is not a tuple or a list of integers.

  • ValueError – If the length of dims is not 2.

  • ValueError – If any element of dims is outside the Tensor’s range [-input.ndim, input.ndim).

  • RuntimeError – If the two elements of dims are the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))
>>> k = 1
>>> dims = [0, 1]
>>> output = ops.rot90(x, k, dims)
>>> print(output)
[[1. 3.]
 [0. 2.]]
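
Reversing the order of dims flips the rotation direction, so the same k rotates the other way. A minimal sketch with the same input:

>>> x = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))
>>> output = ops.rot90(x, 1, [1, 0])
>>> print(output)
[[2. 0.]
 [3. 1.]]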
tinyms.primitives.round(input)[source]

Rounds the elements of the input tensor to the nearest integer element-wise, with ties rounded half to even.

\[out_i = \text{round}(input_i)\]
Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and type as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> output = ops.round(input)
>>> print(output)
[ 1.  2.  2.  2. -4.]
tinyms.primitives.rrelu(input, lower=0.125, upper=0.3333333333333333)[source]

Randomized Leaky ReLU activation function.

The activation function is defined as:

\[\text{rrelu}(input_{ji}) = \begin{cases}input_{ji}, &\text{if } input_{ji} \geq 0; \cr {\alpha_{ji}} * input_{ji}, &\text{otherwise.}\end{cases}\]

where \(\alpha_{ji} \sim U(l, u)\), \(l \le u\).

Applies the rrelu function element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network.

Parameters:
  • input (Tensor) – The input of rrelu is a Tensor of any dimension.

  • lower (Union[int, float]) – Lower bound of the uniform distribution for the slope of the activation function at x < 0. Default: 1.0/8.

  • upper (Union[int, float]) – Upper bound of the uniform distribution for the slope of the activation function at x < 0. Default: 1.0/3.

Returns:

Tensor, after rrelu, has the same type and shape as the input.

Raises:
  • TypeError – If lower is not a float or an int.

  • TypeError – If upper is not a float or an int.

  • TypeError – If input is not a Tensor.

  • TypeError – If input is not a Tensor of mindspore.float16 or mindspore.float32.

  • ValueError – If lower is greater than upper.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0], [2.0, 0]]), mindspore.float32)
>>> output = ops.rrelu(x)
>>> print(output)
[[-0.31465699  4.        ]
 [ 2.          0.        ]]
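
Because \(\alpha_{ji}\) is sampled from \(U(l, u)\), outputs for negative inputs vary between runs. As a sketch (assuming equal bounds are accepted), setting lower equal to upper pins the slope and makes the result deterministic:

>>> x = Tensor(np.array([[-1.0, 4.0], [2.0, 0]]), mindspore.float32)
>>> output = ops.rrelu(x, lower=0.1, upper=0.1)
>>> print(output)
[[-0.1  4. ]
 [ 2.   0. ]]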
tinyms.primitives.rsqrt(input)[source]

Computes reciprocal of square root of input tensor element-wise.

\[out_{i} = \frac{1}{\sqrt{input_{i}}}\]
Parameters:

input (Tensor) – The input of rsqrt. Each element must be a non-negative number; if an element is negative, the result for that element is nan.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([-0.0370,  0.2970,  1.5420, -0.9105])
>>> output = ops.rsqrt(input)
>>> print(output)
[       nan 1.8349396  0.80530024        nan]
tinyms.primitives.scalar_cast(input_x, input_y)[source]

Casts the input scalar to another type.

Parameters:
  • input_x (scalar) – The input scalar. Only constant value is allowed.

  • input_y (mindspore.dtype) – The type to be cast. Only constant value is allowed.

Returns:

Scalar. The type is the same as the python type corresponding to input_y.

Raises:

TypeError – If input_x or input_y is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.scalar_cast(255.0, mindspore.int32)
>>> print(output)
255
tinyms.primitives.scalar_to_array(input_x)[source]

This interface is deprecated. Please use mindspore.ops.scalar_to_tensor() instead.

tinyms.primitives.scalar_to_tensor(input_x, dtype=mindspore.float32)[source]

Converts a scalar to a Tensor, and converts the data type to the specified type.

Parameters:
  • input_x (Union[bool, int, float]) – The input is a scalar. Only constant value is allowed.

  • dtype (mindspore.dtype) – The target data type. Default: mindspore.float32. Only constant value is allowed.

Returns:

Tensor, a 0-D Tensor whose content is the input.

Raises:

TypeError – If input_x is neither bool nor int nor float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data = 1
>>> output = ops.scalar_to_tensor(data, mindspore.float32)
>>> print(output)
1.0
tinyms.primitives.scatter(input, axis, index, src)[source]

Updates the value of input with values from src according to the specified index. Refer to mindspore.ops.tensor_scatter_elements() for more details.

Parameters:
  • input (Tensor) – The target tensor. The rank of input must be at least 1.

  • axis (int) – Which axis to scatter. Accepted range is [-r, r) where r = rank(input).

  • index (Tensor) – The index to do the update operation, whose data type must be mindspore.int32 or mindspore.int64. It has the same rank as input, and the accepted range is [-s, s) where s is the size along axis.

  • src (Tensor) – The tensor doing the update operation with input; it has the same type as input, and the shape of src should be equal to the shape of index.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If the dtype of index is neither int32 nor int64.

  • ValueError – If any of input, index and src has a rank less than 1.

  • ValueError – If the shape of src is not equal to the shape of index.

  • ValueError – If the rank of src is not equal to the rank of input.

  • RuntimeError – If a data type conversion between input and src is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[1, 2, 3, 4, 5]]), dtype=ms.float32)
>>> src = Tensor(np.array([[8, 8]]), dtype=ms.float32)
>>> index = Tensor(np.array([[2, 4]]), dtype=ms.int64)
>>> out = ops.scatter(input=input, axis=1, index=index, src=src)
>>> print(out)
[[1. 2. 8. 4. 8.]]
>>> input = Tensor(np.zeros((5, 5)), dtype=ms.float32)
>>> src = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), dtype=ms.float32)
>>> index = Tensor(np.array([[0, 0, 0], [2, 2, 2], [4, 4, 4]]), dtype=ms.int64)
>>> out = ops.scatter(input=input, axis=0, index=index, src=src)
>>> print(out)
[[1. 2. 3. 0. 0.]
[0. 0. 0. 0. 0.]
[4. 5. 6. 0. 0.]
[0. 0. 0. 0. 0.]
[7. 8. 9. 0. 0.]]
>>> input = Tensor(np.zeros((5, 5)), dtype=ms.float32)
>>> src = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), dtype=ms.float32)
>>> index = Tensor(np.array([[0, 2, 4], [0, 2, 4], [0, 2, 4]]), dtype=ms.int64)
>>> out = ops.scatter(input=input, axis=1, index=index, src=src)
>>> print(out)
[[1. 0. 2. 0. 3.]
[4. 0. 5. 0. 6.]
[7. 0. 8. 0. 9.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]]
tinyms.primitives.scatter_add(input_x, indices, updates)[source]

Updates the tensor value by adding the given values to input_x at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The index to do the add operation, whose data type must be int32 or int64.

  • updates (Tensor) – The tensor doing the add operation with input_x; the data type is the same as input_x, and the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_add(input_x, indices, updates)
>>> print(output)
[[ 1.  1.  1.]
 [19. 19. 19.]]
tinyms.primitives.scatter_div(input_x, indices, updates)[source]

Updates the tensor value by dividing input_x by the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]\]

The inputs input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do divide operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor doing the divide operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same type and shape as input_x.

Raises:
  • TypeError – If the type of indices is not one of the following dtype: int32, int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – If, on the Ascend platform, the dimension of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
 [  3.   3.   3.]]
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0 63.0 63.0]
>>> # input_x[1] = [63.0 63.0 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
 [ 9.  9.  9.]]
tinyms.primitives.scatter_max(input_x, indices, updates)[source]

Updates the tensor value by taking the element-wise maximum of input_x and the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = max(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

The inputs input_x and updates follow the implicit type conversion rules to keep the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do the max operation, whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor doing the max operation with input_x; the data type is the same as input_x, and the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, the type and shape same as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – If, on the Ascend platform, the dimension of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> output = ops.scatter_max(input_x, indices, updates)
>>> print(output)
[[88. 88. 88.]
 [88. 88. 88.]]
tinyms.primitives.scatter_min(input_x, indices, updates)[source]

Updates the tensor value by taking the element-wise minimum of input_x and the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = min(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

The inputs input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do min operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor doing the min operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – If, on the Ascend platform, the dimension of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> input_x = Parameter(Tensor(np.zeros((2, 3)), mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([1, 0]), mindspore.int32)
>>> update = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> output = ops.scatter_min(input_x, indices, update)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]]
tinyms.primitives.scatter_mul(input_x, indices, updates)[source]

Updates the tensor value by multiplying input_x by the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{*}= \text{updates}[i, ..., j, :]\]

The inputs input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type. A RuntimeError will be reported when the data types of the parameters need to be converted.

Parameters:
  • input_x (Parameter) – The target tensor to be updated, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The index to do mul operation whose data type must be int32 or int64.

  • updates (Tensor) – The tensor doing the mul operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
 [4. 4. 4.]]
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [7.0, 7.0, 7.0] = [42.0, 42.0, 42.0]
>>> # input_x[1] = [42.0, 42.0, 42.0] * [9.0, 9.0, 9.0] = [378.0, 378.0, 378.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[  1.   1.   1.]
 [378. 378. 378.]]
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [1.0, 1.0, 1.0] = [2.0, 2.0, 2.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [7.0, 7.0, 7.0] = [14.0, 14.0, 14.0]
>>> # input_x[1] = [14.0, 14.0, 14.0] * [9.0, 9.0, 9.0] = [126.0, 126.0, 126.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[  3.   3.   3.]
 [126. 126. 126.]]
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [7.0, 7.0, 7.0] = [7.0, 7.0, 7.0]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [9.0, 9.0, 9.0] = [54.0, 54.0, 54.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 7.  7.  7.]
 [54. 54. 54.]]
tinyms.primitives.scatter_nd(indices, updates, shape)[source]

Scatters a tensor into a new tensor depending on the specified indices.

Creates an empty tensor with the given shape, and sets values by scattering the update tensor according to indices. The empty tensor has rank \(P\) and indices has rank \(Q\).

The shape is \((s_0, s_1, ..., s_{P-1})\), where \(P \ge 1\).

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\), where \(Q \ge 2\) and \(N \le P\).

The last dimension of indices (with length \(N\) ) indicates slices along the \(N\) th dimension of the empty tensor.

updates is a tensor of rank \(Q-1+P-N\), and its shape is \((i_0, i_1, ..., i_{Q-2}, s_N, s_{N+1}, ..., s_{P-1})\).

If indices contains duplicates, the duplicate updates are summed.

The following figure shows the calculation process of inserting two new value matrices into the first dimension of a rank-3 tensor:

(figure: ScatterNd.png)
Parameters:
  • indices (Tensor) – Defines the index of scattering in the new tensor, with int32 or int64 data type. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – Defines the source Tensor to be updated. It has shape indices.shape[:-1] + shape[indices.shape[-1]:].

  • shape (tuple[int]) – Defines the shape of the output tensor; its elements have the same data type as the elements of indices. shape cannot be empty, and each element of shape must be greater than or equal to 1.

Returns:

Tensor, the new tensor, has the same type as updates and the same shape as shape.

Raises:
  • TypeError – If shape is not a tuple.

  • ValueError – If any element of shape is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]]]), mindspore.float32)
>>> shape = (4, 4, 4)
>>> output = ops.scatter_nd(indices, updates, shape)
>>> print(output)
[[[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]
 [[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = ops.scatter_nd(indices, updates, shape)
>>> # To facilitate understanding, the operator's pseudo-operation process is explained step by step:
>>> # Step 1: Generate an empty Tensor with the specified shape
>>> # [
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> # ]
>>> # Step 2: Modify the data at the specified locations according to the indices
>>> # The 0th row of indices is [0, 1] and the 0th row of updates is 3.2,
>>> # meaning that position (0, 1) of the empty tensor is set to 3.2
>>> # [
>>> #     [0. 3.2. 0.]
>>> #     [0. 0.   0.]
>>> #     [0. 0.   0.]
>>> # ]
>>> # The 1st row of indices is [1, 1] and the 1st row of updates is 1.1,
>>> # meaning that position (1, 1) of the empty tensor is set to 1.1
>>> # [
>>> #     [0. 3.2. 0.]
>>> #     [0. 1.1  0.]
>>> #     [0. 0.   0.]
>>> # ]
>>> # The final result is as follows:
>>> print(output)
[[0. 3.2 0.]
 [0. 1.1 0.]
 [0. 0.  0.]]
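
Because duplicate entries in indices are summed, writing twice to the same coordinate accumulates the updates. A minimal sketch (3.2 + 1.1 lands at position (0, 1)):

>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = ops.scatter_nd(indices, updates, shape)
>>> print(output)
[[0.  4.3 0. ]
 [0.  0.  0. ]
 [0.  0.  0. ]]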
tinyms.primitives.scatter_nd_add(input_x, indices, updates, use_locking=False)[source]

Applies sparse addition to individual values or slices in a tensor.

Updates the tensor value by adding the given values to input_x at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do the add operation, whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – The tensor doing the addition operation with input_x, the data type is same as input_x, the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_add(input_x, indices, updates, False)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_add(input_x, indices, updates, False)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]]
tinyms.primitives.scatter_nd_div(input_x, indices, updates, use_locking=False)[source]

Applies sparse division to individual values or slices in a tensor.

Updates the tensor value by dividing input_x by the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q, where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do div operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – The tensor to do the div operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_div(input_x, indices, updates, False)
>>> print(output)
[1.         0.25       0.5        4.         0.71428573 6.
 7.         0.8888889 ]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.float32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.float32)
>>> output = ops.scatter_nd_div(input_x, indices, updates, False)
>>> print(output)
[[[1.         1.         1.         1.        ]
  [0.5        0.5        0.5        0.5       ]
  [0.33333334 0.33333334 0.33333334 0.33333334]
  [0.25       0.25       0.25       0.25      ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]
 [[0.2        0.2        0.2        0.2       ]
  [0.16666667 0.16666667 0.16666667 0.16666667]
  [0.14285715 0.14285715 0.14285715 0.14285715]
  [0.125      0.125      0.125      0.125     ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]]
tinyms.primitives.scatter_nd_max(input_x, indices, updates, use_locking=False)[source]

Applies sparse maximum to individual values or slices in a tensor.

Updates the parameter value by taking the element-wise maximum of input_x and the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do maximum operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – The tensor to do the max operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_max(input_x, indices, updates, False)
>>> print(output)
[1. 8. 6. 4. 7. 6. 7. 9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_max(input_x, indices, updates, False)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
tinyms.primitives.scatter_nd_min(input_x, indices, updates, use_locking=False)[source]

Applies sparse minimum to individual values or slices in a tensor.

Updates the tensor value by taking the element-wise minimum of input_x and the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter. The shape is \((N,*)\), where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The index to do min operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – The tensor to do the min operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.ones(8) * 10, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_min(input_x, indices, updates, False)
>>> print(output)
[10.  8.  6. 10.  7. 10. 10.  9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)) * 10, mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_min(input_x, indices, updates, False)
>>> print(output)
[[[ 1  1  1  1]
  [ 2  2  2  2]
  [ 3  3  3  3]
  [ 4  4  4  4]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]
 [[ 5  5  5  5]
  [ 6  6  6  6]
  [ 7  7  7  7]
  [ 8  8  8  8]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]]
tinyms.primitives.scatter_nd_mul(input_x, indices, updates, use_locking=False)[source]

Applies sparse multiplication to individual values or slices in a tensor.

Updates the parameter value by multiplying input_x by the given values at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q, where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – Input parameter.

  • indices (Tensor) – The index to do multiplication operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – The tensor to do the multiplication operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[ 1. 16. 18.  4. 35.  6.  7. 72.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
tinyms.primitives.scatter_nd_sub(input_x, indices, updates, use_locking=False)[source]

Applies sparse subtraction to individual values or slices in a tensor.

Updates the tensor value by subtracting the given values from input_x at the positions specified by indices. This operation returns input_x after the update, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N ) indicates slices along the N th dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index of input tensor, with int32 or int64 data type. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – The tensor doing the subtraction operation with input_x, has the same type as input. The shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_sub(input_x, indices, updates, False)
>>> print(output)
[ 1. -6. -3.  4. -2.  6.  7. -1.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_sub(input_x, indices, updates, False)
>>> print(output)
[[[-1 -1 -1 -1]
  [-2 -2 -2 -2]
  [-3 -3 -3 -3]
  [-4 -4 -4 -4]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]
 [[-5 -5 -5 -5]
  [-6 -6 -6 -6]
  [-7 -7 -7 -7]
  [-8 -8 -8 -8]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]]
tinyms.primitives.scatter_update(input_x, indices, updates)[source]

Updates tensor values by using input indices and value.

Updates the tensor value with the given values at the positions specified by indices.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = \text{updates}[i, ..., j, :]\]

The inputs input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower-priority data type will be converted to the relatively highest-priority data type.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index of the input tensor, with int32 or int64 data type. If there are duplicates in indices, the order of updating is undefined.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape = indices.shape + input_x.shape[1:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
>>> updates = Tensor(np_updates, mindspore.float32)
>>> output = ops.scatter_update(input_x, indices, updates)
>>> print(output)
[[2. 1.2  1.]
 [3. 1.2  1.]]
tinyms.primitives.searchsorted(sorted_sequence, values, *, out_int32=False, right=False)[source]

Returns the position indices such that, after inserting the values into sorted_sequence, the order of the innermost dimension of sorted_sequence remains unchanged.

Parameters:
  • sorted_sequence (Tensor) – The input tensor. It must contain a monotonically increasing sequence on the innermost dimension.

  • values (Tensor) – The value that should be inserted.

Keyword Arguments:
  • out_int32 (bool, optional) – Output datatype. If True, the output datatype will be int32; if False, the output datatype will be int64. Default: False.

  • right (bool, optional) – Search Strategy. If True, return the last suitable index found; if False, return the first such index. Default: False.

Returns:

Tensor of indices from the innermost dimension of sorted_sequence, such that inserting the corresponding values from values at those positions would preserve the order of sorted_sequence. Its datatype is int32 if out_int32 is True, otherwise int64, and its shape is the same as the shape of values.

Raises:

ValueError – If sorted_sequence is not 1-D and its dimensions other than the last differ from those of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sorted_sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
>>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
>>> output = ops.searchsorted(sorted_sequence, values)
>>> print(output)
[[2 4 5]
 [1 2 4]]
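
With right=True the search returns the last suitable position instead of the first, which only matters for values already present in the sequence. A sketch with the same inputs:

>>> sorted_sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
>>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
>>> output = ops.searchsorted(sorted_sequence, values, right=True)
>>> print(output)
[[3 4 5]
 [1 3 4]]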
tinyms.primitives.select(cond, x, y)[source]

For each element, the condition tensor determines whether the corresponding element of the output is selected from x (if true) or y (if false).

It can be defined as:

\[\begin{split}out_i = \begin{cases} x_i, & \text{if } cond_i \\ y_i, & \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • cond (Tensor[bool]) – The condition tensor, decides which element is chosen. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

  • x (Union[Tensor, int, float]) – The first Tensor or number to be selected. If x is a Tensor, the shape is or can be broadcast to \((x_1, x_2, ..., x_N, ..., x_R)\). If x is an int or a float, it will be cast to int32 or float32 respectively, and broadcast to the same shape as y. One of x and y must be a Tensor.

  • y (Union[Tensor, int, float]) – The second Tensor or number to be selected. If y is a Tensor, the shape is or can be broadcast to \((x_1, x_2, ..., x_N, ..., x_R)\). If y is an int or a float, it will be cast to int32 or float32 respectively, and broadcast to the same shape as x. One of x and y must be a Tensor.

Returns:

Tensor, has the same shape as cond.

Raises:
  • TypeError – If x or y is not a Tensor, int or float.

  • ValueError – If the shapes of the inputs cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # 1) Both inputs are Tensor
>>>
>>> cond = Tensor([True, False])
>>> x = Tensor([2,3], mindspore.float32)
>>> y = Tensor([1,2], mindspore.float32)
>>> output = ops.select(cond, x, y)
>>> print(output)
[2. 2.]
>>> # 2) y is a float
>>> cond = Tensor([True, False])
>>> x = Tensor([2,3], mindspore.float32)
>>> y = 2.0
>>> output = ops.select(cond, x, y)
>>> print(output)
[2. 2.]
tinyms.primitives.selu(input_x)[source]

SeLU (Scaled Exponential Linear Unit) activation function.

The activation function is defined as:

\[E_{i} = scale * \begin{cases} x_{i}, &\text{if } x_{i} \geq 0; \cr \text{alpha} * (\exp(x_i) - 1), &\text{otherwise.} \end{cases}\]

where \(\alpha\) and \(scale\) are pre-defined constants (\(\alpha = 1.67326324\) and \(scale = 1.05070098\)).

See more details in Self-Normalizing Neural Networks.

Parameters:

input_x (Tensor) – Tensor of any dimension, the data type is float16 or float32.

Returns:

Tensor, with the same type and shape as the input_x.

Raises:

TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.selu(input_x)
>>> print(output)
[[-1.1113307 4.202804 -1.7575096]
 [ 2.101402  -1.7462534  9.456309 ]]
tinyms.primitives.sequence_mask(lengths, maxlen=None)[source]

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape \((d_1, d_2, ..., d_n)\), then the resulting tensor mask has bool dtype and shape \((d_1, d_2, ..., d_n, maxlen)\), with mask \([i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])\).

Parameters:
  • lengths (Tensor) – Tensor to calculate the mask for. All values in this tensor should be less than or equal to maxlen. Values greater than maxlen will be treated as maxlen.

  • maxlen (int) – The size of the last dimension of the returned tensor. Must be positive and of the same type as the elements of lengths. Default: None, in which case the maximum value in lengths is used.

Returns:

One mask tensor of shape lengths.shape + (maxlen,).

Raises:
  • TypeError – If lengths is not a Tensor.

  • TypeError – If maxlen is not an int.

  • TypeError – If dtype of lengths is neither int32 nor int64.

Supported Platforms:

GPU CPU

Examples

>>> # case 1: When maxlen is assigned
>>> x = Tensor(np.array([1, 2, 3, 4]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[ True False False False False]
 [ True  True False False False]
 [ True  True  True False False]
 [ True  True  True  True False]]
>>> # case 2: When there is 0 in x
>>> x = Tensor(np.array([[1, 3], [2, 0]]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[[ True False False False False]
  [ True  True  True False False]]
 [[ True  True False False False]
  [False False False False False]]]
>>> # case 3: when the maxlen is not assigned
>>> x = Tensor(np.array([[1, 3], [2, 4]]))
>>> output = ops.sequence_mask(x)
>>> print(output)
[[[ True False False False]
  [ True  True  True False]]
 [[ True  True False False]
  [ True  True  True  True]]]
tinyms.primitives.sgn(input)[source]

Extension of mindspore.ops.sign() to the complex domain. For real-valued input, this function is the same as mindspore.ops.sign(). For complex input, it is calculated according to the following formula.

\[\begin{split}\text{out}_{i} = \begin{cases} 0 & |\text{input}_i| = 0 \\ \frac{{\text{input}_i}}{|{\text{input}_i}|} & \text{otherwise} \end{cases}\end{split}\]
Parameters:

input (Tensor) – The input value.

Returns:

Tensor, the sgn of input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([[3 + 4j, 7 - 24j, 0, 6 + 8j, 8], [15 + 20j, 7 - 24j, 0, 3 + 4j, 20]], dtype=ms.complex64)
>>> output = ops.sgn(input)
>>> print(output)
[[0.6 +0.8j  0.28-0.96j 0.  +0.j   0.6 +0.8j  1.  +0.j  ]
 [0.6 +0.8j  0.28-0.96j 0.  +0.j   0.6 +0.8j  1.  +0.j  ]]
tinyms.primitives.shape(input_x)[source]

Returns the shape of the input tensor.

Parameters:

input_x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

Returns:

tuple[int], the output tuple is constructed by multiple integers, \((x_1, x_2, ..., x_R)\).

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> output = ops.shape(input_x)
>>> print(output)
(3, 2, 1)
tinyms.primitives.shuffle(x, seed=None)[source]

Randomly shuffles a Tensor along its first dimension.

Parameters:
  • x (Tensor) – The Tensor to be shuffled.

  • seed (int, optional) – Random seed used for random number generation; must be non-negative. If seed is 0, it will be replaced with a randomly generated value. Default: None, which will be treated as 0.

Returns:

Tensor. The shape and type are the same as the input x.

Raises:

TypeError – If seed is neither None nor a non-negative int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mstype.float32)
>>> output = ops.shuffle(x, seed=1)
>>> print(output)
[3. 4. 2. 1.]
tinyms.primitives.sigmoid(input)[source]

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{sigmoid}(input_i) = \frac{1}{1 + \exp(-input_i)}\]

where \(input_i\) is an element of the input.

Parameters:

input (Tensor) – Tensor of any dimension, the data type is float16, float32, float64, complex64 or complex128.

Returns:

Tensor, with the same type and shape as the input.

Raises:
  • TypeError – If dtype of input is not float16, float32, float64, complex64 or complex128.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.sigmoid(input)
>>> print(output)
[0.7310586  0.880797   0.95257413 0.98201376 0.9933072 ]
tinyms.primitives.sign(input)[source]

Returns an element-wise indication of the sign of a number.

\[\begin{split}\text{out}_{i} = \begin{cases} -1 & \text{input}_{i} < 0 \\ 0 & \text{input}_{i} = 0 \\ 1 & \text{input}_{i} > 0 \end{cases}\end{split}\]
Parameters:

input (Tensor) – Input Tensor.

Returns:

Tensor, the sign of input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([[-1, 0, 2, 4, 6], [2, 3, 5, -6, 0]])
>>> output = ops.sign(input)
>>> print(output)
[[-1  0  1  1  1]
 [ 1  1  1 -1  0]]
>>> ms.set_context(device_target="CPU")
>>> x = ms.Tensor([[-1, 0, float('inf'), 4, float('nan')], [2, 3, float('-inf'), -6, 0]])
>>> output = ops.sign(x)
>>> print(output)
[[-1.  0.  1.  1.  0.]
 [ 1.  1. -1. -1.  0.]]
tinyms.primitives.signbit(input)[source]

Determines the sign of each element. If the element value is less than 0, the corresponding output position is True; otherwise, it is False.

Parameters:

input (Tensor) – The input value.

Returns:

Tensor, the signbit of input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([0.3, 1.2, 0., -2.5])
>>> output = ops.signbit(input)
>>> print(output)
[False False False  True]
tinyms.primitives.silu(x)[source]

Computes Sigmoid Linear Unit of input element-wise. The SiLU function is defined as:

\[\text{SiLU}(x) = x * \sigma(x),\]

where the Logistic Sigmoid function is defined as:

\[\sigma(x_i) = \frac{1}{1 + \exp(-x_i)},\]

where \(x_i\) is an element of the x.

For more details, please refer to mindspore.nn.SiLU.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([-1, 2, -3, 2, -1]), mindspore.float16)
>>> output = ops.silu(x)
>>> print(output)
[-0.269  1.762  -0.1423  1.762  -0.269]
tinyms.primitives.sin(input)[source]

Computes sine of the input element-wise.

\[out_i = sin(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = ops.sin(input)
>>> print(output)
[0.5810352 0.27635565 0.41687083 0.5810352]
tinyms.primitives.sinc(input)[source]

Computes the normalized sinc of input.

\[\begin{split}out_i = \begin{cases} \frac{sin(\pi input_i)}{\pi input_i} & input_i\neq 0\\ 1 & input_i=0 \end{cases}\end{split}\]
Parameters:

input (Tensor) – The input Tensor.

Returns:

Tensor, has the same shape as the input. The dtype of output is float32 when dtype of input is in [int, bool]. Otherwise output has the same dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = ops.sinc(input)
>>> print(output)
[0.47735003 0.8759357  0.7224278  0.47735003]
tinyms.primitives.sinh(input)[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(input_i)\]
Parameters:

input (Tensor) – The input tensor of hyperbolic sine function.

Returns:

Tensor, has the same shape as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = ops.sinh(input)
>>> print(output)
[0.6604918  0.28367308 0.44337422 0.6604918 ]
tinyms.primitives.size(input_x)[source]

Returns a scalar of type int that represents the total number of elements in the input Tensor.

Parameters:

input_x (Tensor) –

Input parameters, the shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is number.

Returns:

int. A scalar representing the number of elements in input_x, \(size=x_1*x_2*...*x_R\). The data type is an int.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.size(input_x)
>>> print(output)
4
tinyms.primitives.slice(input_x, begin, size)[source]

Slices a tensor in the specified shape.

Slice the tensor input_x in shape of size and starting at the location specified by begin. The slice begin represents the offset in each dimension of input_x. The slice size represents the size of the output tensor.

Note

begin is zero-based and size is one-based.

If size[i] is -1, all remaining elements in dimension i are included in the slice. This is equivalent to setting \(size[i] = input\_x.shape(i) - begin[i]\).

Parameters:
  • input_x (Tensor) – The target tensor.

  • begin (Union[tuple, list]) – The beginning of the slice. Only constant value(>=0) is allowed.

  • size (Union[tuple, list]) – The size of the slice. Only constant value is allowed.

Returns:

Tensor, the shape is the same as that specified by size, and the data type is the same as input_x.

Raises:

TypeError – If begin or size is neither tuple nor list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
...                         [[3, 3, 3], [4, 4, 4]],
...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> output = ops.slice(data, (1, 0, 0), (1, 1, 3))
>>> print(output)
[[[3 3 3]]]
>>> output = ops.slice(data, (1, 0, 0), (1, 1, 2))
>>> print(output)
[[[3 3]]]
>>> output = ops.slice(data, (1, 0, 0), (1, 1, 1))
>>> print(output)
[[[3]]]
>>> output = ops.slice(data, (1, 1, 0), (1, 1, 3))
>>> print(output)
[[[4 4 4]]]
>>> output = ops.slice(data, (1, 0, 1), (1, 1, 2))
>>> print(output)
[[[3 3]]]
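The note above about size[i] = -1 can be illustrated with the same data tensor; a minimal sketch (an illustrative continuation of the example above), where -1 expands dim 2 to all remaining elements:

>>> # size[2] = -1 is equivalent to size[2] = data.shape[2] - begin[2] = 3
>>> output = ops.slice(data, (1, 0, 0), (1, 1, -1))
>>> print(output)
[[[3 3 3]]]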
tinyms.primitives.slogdet(input)[source]

Computes the sign and the log of the absolute value of the determinant of one or more square matrices.

Parameters:

input (Tensor) – A matrix to be calculated, its shape is \([..., M, M]\). The matrix must be at least two dimensions, and the last two dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

Returns:

Tensor. The signs of the determinants. The shape is \(input.shape[:-2]\), and the dtype is the same as input.

Tensor. The log of the absolute values of the determinants. The shape is \(input.shape[:-2]\). The dtype is always real-valued, even if input is complex.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float32, float64, complex64 or complex128.

  • ValueError – If the last two dimensions of input are not the same size.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> sign, output = ops.slogdet(input_x)
>>> print(sign)
[-1.   1.]
>>> print(output)
[2.80336046e+00    3.04452229e+00]
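As an illustrative cross-check (pure NumPy, not part of this API), np.linalg.slogdet computes the same sign and log-absolute-determinant pair; for the first matrix, det = -4.5*6 - (-1.5)*7 = -16.5, so the sign is -1 and log(16.5) ≈ 2.8034:

>>> import numpy as np
>>> sign_np, logdet_np = np.linalg.slogdet(np.array([[[-4.5, -1.5], [7.0, 6.0]],
...                                                  [[2.5, 0.5], [3.0, 9.0]]]))
>>> print(sign_np, logdet_np)
[-1.  1.] [2.80336038 3.04452244]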
tinyms.primitives.smooth_l1_loss(input, target, beta=1.0, reduction='none')[source]

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a Loss similar to MSELoss but less sensitive to outliers as described in the Fast R-CNN by Ross Girshick.

Given two input \(x,\ y\) of length \(N\), the unreduced SmoothL1Loss can be described as follows:

\[\begin{split}L_{i} = \begin{cases} \frac{0.5 (x_i - y_i)^{2}}{\beta}, & \text{if } |x_i - y_i| < \beta \\ |x_i - y_i| - 0.5 * \beta, & \text{otherwise. } \end{cases}\end{split}\]

If reduction is not none, then:

\[\begin{split}L = \begin{cases} \operatorname{mean}(L_{i}), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L_{i}), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Here \(\beta\) controls the point where the loss function changes from quadratic to linear, \(\beta > 0\), and its default value is 1.0. \(N\) is the batch size.

Parameters:
  • input (Tensor) – Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions.

  • target (Tensor) – Ground truth data, tensor of shape \((N, *)\), same shape and dtype as the input.

  • beta (float) – A parameter used to control the point where the function changes between L1 and L2 loss. The value should be greater than zero. Default: 1.0.

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’ or ‘sum’. Default: ‘none’.

Returns:

Tensor, if reduction is ‘none’, then output is a tensor with the same shape as input. Otherwise, the shape of output tensor is (1,).

Raises:
  • TypeError – If beta is not a float.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • TypeError – If dtype of input or target is not one of float16, float32, float64.

  • ValueError – If beta is less than or equal to 0.

  • ValueError – If shape of input is not the same as target.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = ops.smooth_l1_loss(logits, labels)
>>> print(output)
[0.  0.  0.5]
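To see where the 0.5 in the last position comes from, the piecewise formula above can be re-derived with a short NumPy sketch (illustrative only): |3 - 2| = 1 is not less than beta = 1.0, so the linear branch |x - y| - 0.5 * beta = 0.5 applies.

>>> import numpy as np
>>> x, y, beta = np.array([1., 2., 3.]), np.array([1., 2., 2.]), 1.0
>>> d = np.abs(x - y)
>>> print(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta))
[0.  0.  0.5]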
tinyms.primitives.soft_shrink(input, lambd=0.5)[source]

soft_shrink is deprecated, please use softshrink instead.

tinyms.primitives.softmax(x, axis=-1, *, dtype=None)[source]

Applies the Softmax operation to the input tensor on the specified axis. Suppose a slice in the given axis \(x\), then for each element \(x_i\), the Softmax function is shown as follows:

\[\text{output}(x_i) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)},\]

where \(N\) is the length of the tensor.

Parameters:
  • x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions, with float16 or float32 data type.

  • axis (Union[int, tuple[int]], optional) – The axis to perform the Softmax operation. Default: -1.

Keyword Arguments:

dtype (mindspore.dtype, optional) – When set, x will be converted to the specified type, dtype, before execution, and dtype of returned Tensor will also be dtype. Default: None.

Returns:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If axis is not an int or a tuple.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in range [-len(x.shape), len(x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.softmax(x)
>>> print(output)
[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
tinyms.primitives.softmin(x, axis=-1, *, dtype=None)[source]

Applies the Softmin operation to the input tensor on the specified axis. Suppose a slice in the given axis \(x\), then for each element \(x_i\), the Softmin function is shown as follows:

\[\text{output}(x_i) = \frac{exp(-x_i)}{\sum_{j = 0}^{N-1}\exp(-x_j)},\]

where \(N\) is the length of the tensor.

Parameters:
  • x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions, with float16 or float32 data type.

  • axis (Union[int, tuple[int]], optional) – The axis to perform the Softmin operation. Default: -1.

Keyword Arguments:

dtype (mindspore.dtype, optional) – When set, x will be converted to the specified type, dtype, before execution, and dtype of returned Tensor will also be dtype. Default: None.

Returns:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If axis is not an int or a tuple.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in range [-len(x.shape), len(x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> output = ops.softmin(x)
>>> print(output)
[0.2341  0.636  0.0862  0.01165  0.03168 ]
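Mathematically, softmin(x) equals softmax(-x); a quick NumPy sketch of the formula above (illustrative, computed in float64 and rounded, so it matches the float16 output only approximately):

>>> import numpy as np
>>> a = np.array([-1., -2., 0., 2., 1.])
>>> e = np.exp(-a)
>>> print(np.round(e / e.sum(), 4))
[0.2341 0.6364 0.0861 0.0117 0.0317]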
tinyms.primitives.softshrink(x, lambd=0.5)[source]

Applies the Softshrink function element-wise.

\[\begin{split}\text{SoftShrink}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • x (Tensor) – The input of soft shrink with data type of float16 or float32.

  • lambd (float) – The \(\lambda\) must be no less than zero. Default: 0.5.

Returns:

Tensor, has the same shape and data type as x.

Raises:
  • TypeError – If lambd is not a float.

  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If lambd is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[ 0.5297,  0.7871,  1.1754], [ 0.7836,  0.6218, -1.1542]]), mindspore.float16)
>>> output = ops.softshrink(x)
>>> print(output)
[[ 0.02979  0.287    0.676  ]
 [ 0.2837   0.1216  -0.6543 ]]
tinyms.primitives.softsign(x)[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{1 + |x|}\]
Parameters:

x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Returns:

Tensor, with the same type and shape as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> output = ops.softsign(x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
tinyms.primitives.sort(input_x, axis=-1, descending=False)[source]

Sorts the elements of the input tensor along the given dimension in the specified order.

Parameters:
  • input_x (Tensor) – The input tensor to sort. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (int, optional) – The dimension to sort along. Default: -1.

  • descending (bool, optional) – Controls the sort order. If descending is True, the elements are sorted in descending order, or else sorted in ascending order. Default: False.

Warning

Currently, the data types Float16, UInt8, Int8, Int16, Int32 and Int64 are well supported. If Float32 is used, it may cause loss of accuracy.

Returns:

  • y1, a tensor whose values are the sorted values, with the same shape and data type as input_x.

  • y2, a tensor that consists of the indices of the elements in the original input tensor. Data type is int32.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If descending is not a bool.

  • TypeError – If dtype of input_x is not one of float16, float32, uint8, int8, int16, int32, int64.

  • ValueError – If axis is not in range of [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> output = ops.sort(x)
>>> # The output below is based on the Ascend platform.
>>> print(output)
(Tensor(shape=[3, 3], dtype=Float16, value=
[[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
[ 3.0000e+00,  5.0000e+00,  9.0000e+00],
[ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
[[2, 1, 0],
[2, 0, 1],
[0, 1, 2]]))
tinyms.primitives.space_to_batch_nd(input_x, block_size, paddings)[source]

Divides a tensor’s spatial dimensions into blocks and combines the block sizes with the original batch.

This operation will divide spatial dimensions into blocks with block_size, and after division, the output tensor’s spatial dimension is the corresponding number of blocks. The output tensor’s batch dimension is the product of the original batch and the product of block_size. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary. Assume input shape is \((n, c_1, ... c_k, w_1, ..., w_M)\), then the shape of the output tensor will be \((n', c_1, ... c_k, w'_1, ..., w'_M)\), where

\[\begin{split}\begin{array}{ll} \\ n' = n*(block\_size[0] * ... * block\_size[M-1]) \\ w'_i = (w_i + paddings[i][0] + paddings[i][1])//block\_size[i] \end{array}\end{split}\]
Parameters:
  • input_x (Tensor) – The input tensor. It must be a 4-D tensor on Ascend.

  • block_size (Union[list(int), tuple(int), int]) – The block size of dividing block with all value greater than 1. If block_size is a tuple or list, the length of block_size is M corresponding to the number of spatial dimensions. If block_size is an int, the block size of M dimensions are the same, equal to block_size. M must be 2 on Ascend.

  • paddings (Union[tuple, list]) – The padding values for the spatial dimensions, containing M sublists, each containing 2 integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i + offset. It is required that input_shape[i+offset]+paddings[i][0]+paddings[i][1] is divisible by block_size[i]. M must be 2 on Ascend.

Returns:

Tensor, the output tensor with the same data type as input.

Raises:
  • ValueError – If block_size is not one dimensional when block_size is a list or tuple.

  • ValueError – If the length of block_size is not 2 on Ascend.

  • ValueError – If the element of block_size is not an integer larger than 1.

  • ValueError – If shape of paddings is not (M, 2), where M is the length of block_size.

  • ValueError – If the element of paddings is not an integer greater than or equal to 0.

  • TypeError – If block_size is not one of list, tuple, int.

  • TypeError – If paddings is neither list nor tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_size = [2, 2]
>>> paddings = [[0, 0], [0, 0]]
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = ops.space_to_batch_nd(input_x, block_size, paddings)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
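The shape formula above can be checked with a small pure-Python sketch (illustrative; space_to_batch_nd_shape is a hypothetical helper, not part of the API). For the example input of shape (1, 1, 2, 2) it reproduces the output shape (4, 1, 1, 1):

>>> def space_to_batch_nd_shape(input_shape, block_size, paddings):
...     # n' = n * prod(block_size); w'_i = (w_i + pad_left + pad_right) // block_size[i]
...     n, *rest = input_shape
...     m = len(block_size)
...     channels, spatial = rest[:len(rest) - m], rest[len(rest) - m:]
...     for b in block_size:
...         n *= b
...     new_spatial = [(w + p[0] + p[1]) // b for w, p, b in zip(spatial, paddings, block_size)]
...     return tuple([n] + channels + new_spatial)
...
>>> print(space_to_batch_nd_shape((1, 1, 2, 2), [2, 2], [[0, 0], [0, 0]]))
(4, 1, 1, 1)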
tinyms.primitives.sparse_segment_mean(x, indices, segment_ids)[source]

Computes a Tensor such that \(output_i = \frac{\sum_j x_{indices[j]}}{N}\) where mean is over \(j\) such that \(segment\_ids[j] == i\) and \(N\) is the total number of values summed. If the mean is empty for a given segment ID \(i\), \(output[i] = 0\).

Note

  • On CPU, values in segment_ids are always validated to be sorted, and an error is thrown for segment ids that are not increasing. Moreover, values in indices are validated to be bounded, and an error is thrown when indices are out of the range [0, x.shape[0]).

  • On GPU, this does not throw an error for unsorted segment_ids and out-of-bound indices. Out-of-order segment_ids result in safe but unspecified behavior, while out-of-range indices will be ignored.

Parameters:
  • x (Tensor) – A Tensor, and its rank must be greater than or equal to 1.

  • indices (Tensor) – A 1-D Tensor, with int32 or int64 data type.

  • segment_ids (Tensor) – A 1-D Tensor, must have the same dtype as indices. Values should be sorted and can be repeated.

Returns:

Tensor, whose dtype and rank is the same as x. The first dimension is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of x.

Raises:
  • TypeError – If x, indices or segment_ids is not a Tensor.

  • TypeError – If the dtype of x is not one of the following dtype: float16, float32, float64.

  • TypeError – If the dtype of indices and segment_ids are not one of the following dtype: int32, int64.

  • TypeError – If the dtype of indices and segment_ids are not the same.

  • ValueError – If the shape of x, indices or segment_ids doesn't meet the parameter description.

  • ValueError – If the sizes of indices and segment_ids are not the same.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor([[0, 1, 2], [1, 2, 3], [3, 6, 7]], dtype=mindspore.float32)
>>> indices = Tensor([0, 1, 2], dtype=mindspore.int32)
>>> segment_ids = Tensor([1,2,2], dtype=mindspore.int32)
>>> out = ops.sparse_segment_mean(x, indices, segment_ids)
>>> print(out)
[[0. 0. 0.]
 [0. 1. 2.]
 [2. 4. 5.]]
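The output above can be reproduced with a small NumPy sketch of the definition (illustrative only): segment 0 has no entries and is zero-filled, segment 1 averages row indices[0] = 0, and segment 2 averages rows indices[1] = 1 and indices[2] = 2.

>>> import numpy as np
>>> a = np.array([[0, 1, 2], [1, 2, 3], [3, 6, 7]], dtype=np.float32)
>>> idx, seg = np.array([0, 1, 2]), np.array([1, 2, 2])
>>> out = np.zeros((seg[-1] + 1, a.shape[1]), dtype=a.dtype)
>>> for i in range(seg[-1] + 1):
...     rows = idx[seg == i]
...     if rows.size:
...         out[i] = a[rows].mean(axis=0)
...
>>> print(out)
[[0. 0. 0.]
 [0. 1. 2.]
 [2. 4. 5.]]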
tinyms.primitives.split(tensor, split_size_or_sections, axis=0)[source]

Splits the Tensor into chunks along the given axis.

Parameters:
  • tensor (Tensor) – A Tensor to be divided.

  • split_size_or_sections (Union[int, tuple(int), list(int)]) – If split_size_or_sections is an int type, tensor will be split into equally sized chunks, each chunk with size split_size_or_sections. Last chunk will be smaller than split_size_or_sections if tensor.shape[axis] is not divisible by split_size_or_sections. If split_size_or_sections is a list type, then tensor will be split into len(split_size_or_sections) chunks with sizes split_size_or_sections along the given axis.

  • axis (int) – The axis along which to split. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument tensor is not a Tensor.

  • TypeError – If argument axis is not an int.

  • ValueError – If argument axis is out of range of \([-tensor.ndim, tensor.ndim)\) .

  • TypeError – If each element in split_size_or_sections is not an integer.

  • TypeError – If argument split_size_or_sections is not int, tuple(int) or list(int).

  • ValueError – If the sum of split_size_or_sections is not equal to tensor.shape[axis].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(9).astype("float32")
>>> output = ops.split(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
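When split_size_or_sections is a list, the chunk sizes are taken from it directly and must sum to tensor.shape[axis]; an illustrative continuation of the example above:

>>> output = ops.split(Tensor(input_x), [4, 5])
>>> print([o.shape for o in output])
[(4,), (5,)]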
tinyms.primitives.sqrt(x)[source]

Returns sqrt of a tensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Parameters:

x (Tensor) – The input tensor with a dtype of number.Number.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> output = ops.sqrt(x)
>>> print(output)
[1. 2. 3.]
tinyms.primitives.square(input)[source]

Returns square of a tensor element-wise.

\[y_i = input_i ^ 2\]
Parameters:

input (Tensor) – The input tensor with a dtype of Number.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = ops.square(input)
>>> print(output)
[1. 4. 9.]
tinyms.primitives.squeeze(input, axis=None)[source]

Return the Tensor after deleting the dimension of size 1 in the specified axis.

If \(axis=None\), all dimensions of size 1 are removed. If axis is specified, the dimensions of size 1 in the given axis are removed. For example, if the dimension is not specified (\(axis=None\)) and the input shape is (A, 1, B, C, 1, D), then the shape of the output Tensor is (A, B, C, D). If the dimension is specified, the squeeze operation is performed only in that dimension. If the input shape is (A, 1, B), the input Tensor is not changed when \(axis=0\), but when \(axis=1\), the shape of the output Tensor becomes (A, B).

Note

  • Please note that in dynamic graph mode, the output Tensor will share data with the input Tensor, and there is no Tensor data copy process.

  • The dimension index starts at 0 and must be in the range [-input.ndim, input.ndim).

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (Union[int, tuple(int)]) – Specifies the dimension indexes of shape to be removed, which will remove all the dimensions of size 1 in the given axis parameter. If specified, it must be int32 or int64. Default: None, an empty tuple will be used.

Returns:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_S)\).

Raises:
  • TypeError – If input is not a tensor.

  • TypeError – If axis is neither an int nor tuple.

  • TypeError – If axis is a tuple whose elements are not all int.

  • ValueError – If the corresponding dimension of the specified axis isn’t equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> output = ops.squeeze(input)
>>> print(output)
[[1. 1.]
 [1. 1.]
 [1. 1.]]
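When axis is given, only that dimension of size 1 is removed; an illustrative continuation of the example above:

>>> output = ops.squeeze(input, axis=2)
>>> print(output.shape)
(3, 2)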
tinyms.primitives.stack(tensors, axis=0)[source]

Stacks a list of tensors in specified axis.

Stacks the list of input tensors with the same rank R, output is a tensor of rank (R+1).

Given input tensors of shape \((x_1, x_2, ..., x_R)\). Set the number of input tensors as N. If \(axis \ge 0\), the shape of the output tensor is \((x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)\).

Parameters:
  • tensors (Union[tuple, list]) – A Tuple or list of Tensor objects with the same shape and type.

  • axis (int) – Dimension to stack. Default: 0. Negative values wrap around. The range is [-(R+1), R+1).

Returns:

Tensor. A stacked Tensor with the same type as tensors.

Raises:
  • TypeError – If the data types of elements in tensors are not the same.

  • ValueError – If the length of tensors is not greater than 0; or if axis is out of the range [-(R+1), R+1); or if the shapes of elements in tensors are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> input_x2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> output = ops.stack((input_x1, input_x2), 0)
>>> print(output)
[[0. 1.]
 [2. 3.]]
tinyms.primitives.standard_laplace(shape, seed=None)[source]

Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). It is defined as:

\[\text{f}(x) = \frac{1}{2}\exp(-|x|)\]
Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated. Only constant value is allowed when the input type is tuple. And the operator supports dynamic shape only when the input type is Tensor.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If shape is a tuple containing non-positive items.

  • ValueError – If shape is a Tensor, and the rank of the Tensor is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> shape = (4, 4)
>>> output = ops.standard_laplace(shape)
>>> result = output.shape
>>> print(result)
(4, 4)
tinyms.primitives.standard_normal(shape, seed=None)[source]

Generates random numbers according to the standard Normal (or Gaussian) random number distribution.

Returns the tensor with the given shape, the random numbers in it drawn from normal distributions whose mean is 0 and standard deviation is 1.

\[f(x)=\frac{1}{\sqrt{2 \pi}} e^{\left(-\frac{x^{2}}{2}\right)}\]
Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated. Only constant value is allowed when the input type is tuple. And the operator supports dynamic shape only when the input type is Tensor.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If shape is a tuple containing non-positive items.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> shape = (4, 4)
>>> output = ops.standard_normal(shape)
>>> result = output.shape
>>> print(result)
(4, 4)
tinyms.primitives.std(input, axis=None, ddof=0, keepdims=False)[source]

Returns the standard-deviation of each row of the input Tensor by default, or it can calculate them in specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the Bessel correction unbiased estimation is used. If ddof is False, the biased estimation is used to calculate the standard deviation. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, the standard deviation. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns a 0-D Tensor, indicating the standard deviation of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensor has shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2) and keepdims is set to False, then the returned Tensor has shape \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output = ms.ops.std(input, 1, 2, True)
>>> print(output)
[[1.5811388]
 [7.3824115]]
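The ddof=2 result above can be cross-checked with NumPy, which uses the same \(N - ddof\) divisor (illustrative only):

>>> import numpy as np
>>> a = np.array([[1, 2, 3, 4], [-1, 1, 4, -10]], dtype=np.float32)
>>> print(np.std(a, axis=1, ddof=2, keepdims=True))
[[1.5811388]
 [7.3824115]]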
tinyms.primitives.std_mean(input, axis=None, ddof=0, keepdims=False)[source]

Returns the standard-deviation and mean of each row of the input Tensor by default, or it can calculate them in specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – Specifies the dimensions from which to calculate the standard deviation and mean. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the Bessel correction unbiased estimation is used. If ddof is False, the biased estimation is used to calculate the standard deviation. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

A tuple containing the standard deviation and mean. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns 0-D Tensors, indicating the standard deviation and mean of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensors have shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2) and keepdims is set to False, then the returned Tensors have shape \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output_std, output_mean = ms.ops.std_mean(input, 1, 2, True)
>>> print(output_std)
[[1.5811388]
 [7.3824115]]
>>> print(output_mean)
[[ 2.5]
 [-1.5]]
tinyms.primitives.stft(x, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='REFLECT', normalized=False, onesided=None, return_complex=None)[source]

STFT segments the signal into narrow time intervals and takes the Fourier transform of each segment to quantify the change of a nonstationary signal’s frequency and phase content over time.

Ignoring the optional batch dimension, this operation computes the following expression:

\[X[\omega, m] = \sum_{k=0}^{win\_length-1} window[k] \cdot input[m \times hop\_length + k] \cdot \exp\left(-j \frac{2 \pi \omega k}{win\_length}\right)\]

where \(m\) is the index of the sliding window, and \(\omega\) is the frequency in range \(0 \leq \omega < n\_fft\).

Parameters:
  • x (Tensor) – Time sequences of stft, must be either a 1-D time tensor or a 2-D tensor.

  • n_fft (int) – The size of Fourier transform.

  • hop_length (int, optional) – The distance between neighboring sliding window frames. Default: None (treated as equal to \(floor(n\_fft / 4)\)).

  • win_length (int, optional) – the size of window frame and STFT filter. Default: None (treated as equal to n_fft).

  • window (Tensor, optional) – the optional window function, a 1-D tensor of size win_length. Default: None (treated as a window of all ones). If win_length < n_fft, window will be padded on both sides with ones to length n_fft before it takes effect.

  • center (bool, optional) – whether to pad x on both sides. Default: True.

  • pad_mode (str, optional) – controls the padding method used when center is True. Default: ‘REFLECT’.

  • normalized (bool, optional) – controls whether to return the normalized STFT results Default: False.

  • onesided (bool, optional) – controls whether to return half of results to avoid redundancy for real inputs. Default: None. True for real x and window, False otherwise.

  • return_complex (bool, optional) – whether to return a complex tensor, or a real tensor with an extra last dimension for the real and imaginary components. Default: None. True for complex x or window, False otherwise.

Returns:

  • output (Tensor) - A tensor containing the STFT result.

    If return_complex is True, it returns a complex Tensor with shape \((*, N, T)\). If return_complex is False, it returns a real Tensor with shape \((*, N, T, 2)\).

    N is the size of the Fourier transform, which depends on the parameter onesided:

      - If onesided is False, \(N = n\_fft\).

      - If onesided is True, \(N = n\_fft // 2 + 1\).

    T is the total number of frames used, calculated by \(T = 1 + (len - n\_fft) / hop\_length\), where len depends on the parameter center:

      - If center is False, \(len = signal\_length\).

      - If center is True, \(len = signal\_length + (n\_fft // 2) * 2\).

    Here \(signal\_length\) is the signal length, equal to \(x.shape[-1]\).

Raises:
  • TypeError – If x is not a 1-D or 2-D tensor.

  • TypeError – If window is not a 1-D tensor.

  • TypeError – If any one of center , normalized , onesided and return_complex is assigned a nonboolean value.

  • TypeError – If pad_mode is assigned a value that is not a string.

  • TypeError – If n_fft, hop_length or win_length is not an int.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> import numpy as np
>>> x = ms.Tensor(np.random.rand(2,7192), ms.float32)
>>> output = ops.stft(n_fft=64, x=x)
>>> print(output.shape)
(2, 33, 450, 2)
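The printed shape can be re-derived from the formulas above with a few lines of pure Python (illustrative; assumes the defaults used in this example: center=True, real input so onesided=True, and return_complex=False, which adds the trailing dimension of 2):

>>> n_fft, signal_length = 64, 7192
>>> hop_length = n_fft // 4                      # default: floor(n_fft / 4)
>>> length = signal_length + (n_fft // 2) * 2    # center=True pads both sides
>>> N = n_fft // 2 + 1                           # onesided=True
>>> T = 1 + (length - n_fft) // hop_length
>>> print((N, T))
(33, 450)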
tinyms.primitives.stop_gradient(value)[source]

StopGradient is used for eliminating the effect of a value on the gradient, such as truncating the gradient propagation from an output of a function. For more details, please refer to Stop Gradient.

Parameters:

value (Any) – The value whose effect on the gradient to be eliminated.

Returns:

The same as value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> def net(x, y):
...     out1 = ops.MatMul()(x, y)
...     out2 = ops.MatMul()(x, y)
...     out2 = ops.stop_gradient(out2)
...     return out1, out2
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> grad_fn = ops.grad(net)
>>> output = grad_fn(x, y)
>>> print(output)
[[1.4100001 1.6       6.5999994]
 [1.4100001 1.6       6.5999994]]
tinyms.primitives.strided_slice(input_x, begin, end, strides, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0)[source]

Extracts a strided slice of a Tensor based on begin/end index and strides.

This operation extracts a fragment of size (end-begin)/strides from the given input_x. Starting from the begin position, the fragment continues adding strides to the index until all dimensions are not less than the end position.

Warning

  • begin , end and strides must have the same shape.

  • begin , end and strides are all 1-D, and their shape sizes must not be greater than the dim of input_x.

During the slicing process, the fragment (end-begin)/strides is extracted from each dimension.

Example: For Tensor input_x with shape \((5, 6, 7)\), set begin, end and strides to (1, 3, 2), (3, 5, 6), (1, 1, 2) respectively, then elements from index 1 to 3 are extracted for dim 0, index 3 to 5 are extracted for dim 1 and index 2 to 6 with a stride of 2 are extracted for dim 2. This process is equivalent to the pythonic slice input_x[1:3, 3:5, 2:6:2].

If the length of begin, end and strides is smaller than the dim of input_x, then all elements are extracted from the missing dims; it behaves as if the missing entries of begin, end and strides were filled with 0, the size of the missing dim, and 1 respectively.

Example: For Tensor input_x with shape \((5, 6, 7)\), set begin, end and strides to (1, 3), (3, 5), (1, 1) respectively, then elements from index 1 to 3 are extracted for dim 0, index 3 to 5 are extracted for dim 1 and index 0 to 7 are extracted for dim 2. This process is equivalent to the pythonic slice input_x[1:3, 3:5, 0:7].

Here's how a mask works: For each specific mask, it is converted to a binary representation internally, and the result is then reversed to start the calculation. For Tensor input_x with shape \((5, 6, 7)\), given a mask value of 3, which can be represented as 0b011, the reverse gives 0b110, which implies the first and second dims of the original Tensor will be affected by this mask. In the examples below, for simplicity, all masks are given in their reversed binary form:

  • begin_mask and end_mask

    If the ith bit of begin_mask is 1, begin[i] is ignored and the fullest possible range in that dimension is used instead. end_mask is analogous, except with the end range. For Tensor input_x with shape \((5, 6, 7, 8)\), if begin_mask is 0b110, end_mask is 0b011, the slice input_x[0:3, 0:6, 2:7:2] is produced.

  • ellipsis_mask

    If the ith bit of ellipsis_mask is 1, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in ellipsis_mask. For Tensor input_x with shape \((5, 6, 7, 8)\), input_x[2:,…,:6] is equivalent to input_x[2:5,:,:,0:6] , input_x[2:,…] is equivalent to input_x[2:5,:,:,:].

  • new_axis_mask

    If the ith bit of new_axis_mask is 1, begin, end and strides are ignored and a new length 1 dimension is added at the specified position in the output Tensor. For Tensor input_x with shape \((5, 6, 7)\), if new_axis_mask is 0b110, a new dim is added to the second dim, which will produce a Tensor with shape \((5, 1, 6, 7)\).

  • shrink_axis_mask

    If the ith bit of shrink_axis_mask is 1, begin, end and strides are ignored and dimension i is shrunk to a scalar and removed from the output shape. For Tensor input_x with shape \((5, 6, 7)\), if shrink_axis_mask is 0b010, it is equivalent to the slice x[:, 5, :] and results in an output shape of \((5, 7)\).

Note

new_axis_mask and shrink_axis_mask are not recommended to use at the same time, it might incur unexpected result.

Parameters:
  • input_x (Tensor) – The input Tensor to be extracted from.

  • begin (tuple[int]) – A tuple which represents the location where to start. Only non-negative int is allowed.

  • end (tuple[int]) – A tuple which represents the maximum location where to end. Only non-negative int is allowed.

  • strides (tuple[int]) – A tuple which represents the strides is continuously added before reaching the maximum location. Only int is allowed, it can be negative which results in reversed slicing.

  • begin_mask (int, optional) – A bitmask; if the ith bit is 1, begin[i] is ignored and the fullest possible range of that dimension is used instead. Default: 0.

  • end_mask (int, optional) – A bitmask analogous to begin_mask, applied to the end positions. Default: 0.

  • ellipsis_mask (int, optional) – A bitmask; a set bit inserts as many unspecified dimensions as needed at that position. Only one non-zero bit is allowed. Default: 0.

  • new_axis_mask (int, optional) – An int mask for adding new dims. Default: 0.

  • shrink_axis_mask (int, optional) – An int mask for shrinking dims. Default: 0.

Returns:

Tensor, the extracted strided slice of the input Tensor, based on the begin/end indices and strides.

Raises:
  • TypeError – If begin_mask, end_mask, ellipsis_mask, new_axis_mask or shrink_axis_mask is not an int.

  • TypeError – If begin, end or strides is not tuple[int].

  • ValueError – If begin_mask, end_mask, ellipsis_mask, new_axis_mask or shrink_axis_mask is less than 0.

  • ValueError – If begin, end and strides have different shapes.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> output = ops.strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
>>> # Take this " output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1)) " as an example,
>>> # start = [1, 0, 2] , end = [3, 1, 3], strides = [1, 1, 1], Find a segment of (start, end),
>>> # note that end is an open interval
>>> # To facilitate understanding, this operator can be divided into three steps:
>>> # Step 1: Calculation of the first dimension:
>>> # start = 1, end = 3, strides = 1, So can take 1st, 2nd rows, and then gets the final output at this time.
>>> # output_1th =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #         [4,4,4]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #         [6,6,6]
>>> #     ]
>>> # ]
>>> # Step 2: Calculation of the second dimension
>>> # 2nd dimension, start = 0, end = 1, strides = 1. So only 0th rows
>>> # can be taken, and the output at this time.
>>> # output_2nd =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #     ]
>>> # ]
>>> # Step 3: Calculation of the third dimension
>>> # 3nd dimension,start = 2, end = 3, strides = 1, So can take 2th cols,
>>> # and you get the final output at this time.
>>> # output_3ed =
>>> # [
>>> #     [
>>> #         [3]
>>> #     ]
>>> #     [
>>> #         [5]
>>> #     ]
>>> # ]
>>> # The final output after finishing is:
>>> print(output)
[[[3.]]
 [[5.]]]
>>> # another example like :
>>> output = ops.strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> print(output)
[[[3. 3. 3.]]]
tinyms.primitives.sub(input, other)[source]

Subtracts the second input tensor from the first input tensor element-wise.

\[out_{i} = input_{i} - other_{i}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If input and other are not number.Number or bool or Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> other = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> output = ops.sub(input, other)
>>> print(output)
[-3 -3 -3]
tinyms.primitives.subtract(input, other, *, alpha=1)[source]

Performs the element-wise subtraction of input tensors.

\[output[i] = input[i] - alpha * other[i]\]
Parameters:
  • input (Union[Tensor, number.Number]) – Tensor or Number involved in subtraction.

  • other (Union[Tensor, number.Number]) – Tensor or Number involved in subtraction.

Keyword Arguments:

alpha (Number) – The multiplier for other. Default: 1.

Returns:

Tensor, has the same shape and dtype as input tensors.

Raises:

TypeErrorinput or other is neither Tensor nor number.Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> y = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> z = ops.subtract(input, y, alpha=1)
>>> print(z)
[3. 3. 3.]
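With a non-default alpha, other is scaled before the subtraction, i.e. input - alpha * other; an illustrative continuation of the example above with alpha=2:

>>> z = ops.subtract(input, y, alpha=2)
>>> print(z)
[2. 1. 0.]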
tinyms.primitives.sum(input, dim=None, keepdim=False, *, dtype=None)[source]

Calculate sum of Tensor elements over a given dim.

Parameters:
  • input (Tensor) – The input tensor.

  • dim (Union[None, int, tuple(int), list(int)]) – Dimensions along which a sum is performed. If None, sum all the elements of the input tensor. If the dim is a tuple or list of ints, a sum is performed on all the dimensions specified in the tuple. Must be in the range \([-input.ndim, input.ndim)\) . Default: None.

  • keepdim (bool) – Whether the output tensor has dim retained or not. If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default: False.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The desired data type of returned Tensor. Default: None.

Returns:

A Tensor, sum of elements over a given dim in input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dim is not an int, tuple(int), list(int) or None.

  • ValueError – If dim is not in the range \([-input.ndim, input.ndim)\) .

  • TypeError – If keepdim is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mstype.float32)
>>> out = ops.sum(x)
>>> print(out)
270.0
>>> out = ops.sum(x, dim=2)
>>> print(out)
[[ 6. 12. 18.]
 [24. 30. 36.]
 [42. 48. 54.]]
>>> out = ops.sum(x, dim=2, keepdim=True)
>>> print(out)
[[[ 6.]
 [12.]
 [18.]]
[[24.]
 [30.]
 [36.]]
[[42.]
 [48.]
 [54.]]]
tinyms.primitives.svd(input, full_matrices=False, compute_uv=True)[source]

Computes the singular value decompositions of one or more matrices.

If \(A\) is a matrix, the svd returns the singular values \(S\), the left singular vectors \(U\) and the right singular vectors \(V\). It meets:

\[A=U*diag(S)*V^{T}\]
Parameters:
  • input (Tensor) – Tensor of the matrices to be decomposed. The shape should be \((*, M, N)\), the supported dtype are float32 and float64.

  • full_matrices (bool, optional) – If true, compute full-sized \(U\) and \(V\). If false, compute only the leading P singular vectors, with P is the minimum of M and N. Default: False.

  • compute_uv (bool, optional) – If true, compute the left and right singular vectors. If false, compute only the singular values. Default: True.

Returns:

  • s (Tensor) - Singular values. The shape is \((*, P)\).

  • u (Tensor) - Left singular vectors. If compute_uv is False, u will not be returned. The shape is \((*, M, P)\). If full_matrices is True, the shape will be \((*, M, M)\).

  • v (Tensor) - Right singular vectors. If compute_uv is False, v will not be returned. The shape is \((*, N, P)\). If full_matrices is True, the shape will be \((*, N, N)\).

Raises:
  • TypeError – If full_matrices or compute_uv is not a bool.

  • TypeError – If the rank of input is less than 2.

  • TypeError – If the type of input is not one of the following dtype: float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, set_context
>>> from mindspore import ops
>>> set_context(device_target="CPU")
>>> input = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = ops.svd(input, full_matrices=True, compute_uv=True)
>>> print(s)
[7.0652843 1.040081 ]
>>> print(u)
[[ 0.30821905 -0.48819482 0.81649697]
 [-0.90613353  0.11070572 0.40824813]
 [ 0.2896955   0.8656849  0.4082479 ]]
>>> print(v)
[[ 0.63863593 0.769509  ]
 [ 0.769509  -0.63863593]]
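As an illustrative sanity check (pure NumPy, not part of this API), reconstructing the input from the leading P singular vectors, \(U[:, :P] \cdot diag(S) \cdot V^T\), recovers the original matrix up to float rounding:

>>> import numpy as np
>>> a = np.array([[1, 2], [-4, -5], [2, 1]], dtype=np.float32)
>>> u_np, s_np, vt_np = np.linalg.svd(a, full_matrices=True)
>>> print(np.allclose(u_np[:, :2] @ np.diag(s_np) @ vt_np, a, atol=1e-5))
True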
tinyms.primitives.swapaxes(input, axis0, axis1)[source]

Interchange two axes of a tensor.

Parameters:
  • input (Tensor) – Input tensor.

  • axis0 (int) – First axis.

  • axis1 (int) – Second axis.

Returns:

Transposed tensor, has the same data type as input.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If axis0 or axis1 is not integer.

  • ValueError – If axis0 or axis1 is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = ops.swapaxes(input, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.primitives.swapdims(input, dim0, dim1)[source]

Interchange two dims of a tensor. This function is equivalent to mindspore.ops.swapaxes() function.

Parameters:
  • input (Tensor) – Input tensor.

  • dim0 (int) – First dim.

  • dim1 (int) – Second dim.

Returns:

Transposed tensor, has the same data type as input.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If dim0 or dim1 is not integer.

  • ValueError – If dim0 or dim1 is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = ops.swapdims(input, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.primitives.t(input)[source]

Transposes a 2-D Tensor. A 1-D Tensor is returned as it is.

Parameters:

input (Tensor) – The input Tensor.

Returns:

Tensor, the transpose of input .

Raises:

ValueError – If the dimension of input is larger than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [2, 3, 4]], mstype.float32)
>>> output = ops.t(x)
>>> print(output)
[[1. 2.]
 [2. 3.]
 [3. 4.]]
tinyms.primitives.tan(input)[source]

Computes tangent of input element-wise.

\[out_i = tan(input_i)\]
Parameters:

input (Tensor) – The input Tensor, valid for any dimensions.

Returns:

Tensor, has the same shape as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)
>>> output = ops.tan(input)
>>> print(output)
[-1.5574081 0. 1.5574081]
tinyms.primitives.tanh(input)[source]

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Parameters:

input (Tensor) – Input of Tanh, with float16 or float32 data type.

Returns:

Tensor, with the same type and shape as the input.

Raises:
  • TypeError – If dtype of input is neither float16 nor float32.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.tanh(input)
>>> print(output)
[0.7615941 0.9640276 0.9950547 0.9993293 0.9999092]
tinyms.primitives.tanhshrink(input)[source]

Tanhshrink Activation, \(Tanhshrink(x)=x-Tanh(x)\) , where \(x\) corresponds to input . See mindspore.nn.Tanhshrink for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> input = Tensor(np.array([1, 2, 3, 2, 1]), ms.float16)
>>> output = ops.tanhshrink(input)
>>> print(output)
[0.2383 1.036  2.004  1.036  0.2383]
tinyms.primitives.tensor_scatter_add(input_x, indices, updates)[source]

Creates a new tensor by adding the values from the positions in input_x indicated by indices, with values from updates. When multiple values are given for the same index, the updated result will be the sum of all values. This operation is almost equivalent to using ScatterNdAdd, except that the updates are applied on output Tensor instead of input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

On GPU, if some values of the indices are out of bound, instead of raising an index error, the corresponding updates are not applied to the output tensor. On CPU, if some values of the indices are out of bound, an index error is raised. On Ascend, out of bound checking is not supported; if some values of the indices are out of bound, unknown errors may be caused.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x. Its shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> output = ops.tensor_scatter_add(input_x, indices, updates)
>>> print(output)
[[ 3.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
tinyms.primitives.tensor_scatter_div(input_x, indices, updates)[source]

Creates a new tensor by dividing the values from the positions in input_x indicated by indices, with values from updates. When multiple values are given for the same index, the input value is divided by each of these update values in turn. The updates are applied on the output Tensor instead of the input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

  • If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

  • The operator can’t handle division by 0 exceptions, so the user needs to make sure there is no 0 value in updates.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is out of the valid index range of input_x.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn, ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.0]), mindspore.float32)
>>> output = ops.tensor_scatter_div(input_x, indices, updates)
>>> print(output)
[[-0.05  0.3  3.6  ]
 [ 0.4   0.5  -3.2 ]]
tinyms.primitives.tensor_scatter_elements(input_x, indices, updates, axis=0, reduction='none')[source]

Updates the value of the input tensor through the reduction operation.

tensor_scatter_elements takes three inputs input_x, updates, and indices of the same rank r >= 1, an optional attribute axis that identifies an axis of input_x (default is 0), and another optional attribute reduction that identifies the reduction operation. When reduction is set to “none”, the update values are assigned to the output according to the indices. When reduction is set to “add”, the update values are added to the output according to the indices.

For a 3-D tensor, the output is:

output[indices[i][j][k]][j][k] = updates[i][j][k]  # if axis == 0, reduction == "none"

output[i][indices[i][j][k]][k] += updates[i][j][k]  # if axis == 1, reduction == "add"

output[i][j][indices[i][j][k]] = updates[i][j][k]  # if axis == 2, reduction == "none"

Warning

  • The order in which updates are applied is nondeterministic, meaning that if there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

  • On Ascend, reduction can only be set to “none” for now.

  • On Ascend, the data type of input_x must be float16 or float32.

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Parameters:
  • input_x (Tensor) – The target tensor. The rank of input must be at least 1.

  • indices (Tensor) – The index to do the scatter operation, whose data type must be mindspore.int32 or mindspore.int64. It has the same rank as input_x, and the accepted range is [-s, s) where s is the size along axis.

  • updates (Tensor) – The tensor to update input_x with; it has the same type as input_x, and updates.shape should be equal to indices.shape.

  • axis (int) – Which axis to scatter, default is 0. Accepted range is [-r, r) where r = rank(input_x).

  • reduction (str) – Which reduction operation to scatter, default is “none”. Other option: “add”.

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the rank of any one of input_x, indices or updates is less than 1.

  • ValueError – If the shape of updates is not equal to the shape of indices.

  • ValueError – If the rank of updates is not equal to the rank of input_x.

  • RuntimeError – If data type conversion of the Parameter input_x is required but is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter, ops
>>> input_x = Parameter(Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
>>> updates = Tensor(np.array([[8, 8]]), mindspore.float32)
>>> axis = 1
>>> reduction = "none"
>>> output = ops.tensor_scatter_elements(input_x, indices, updates, axis, reduction)
>>> print(output)
[[1. 2. 8. 4. 8.]]
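When reduction is set to “add”, the updates are accumulated into the output instead of overwriting it. A minimal sketch continuing the example above (the printed values follow from output[0][indices[0][j]] += updates[0][j]):

>>> output = ops.tensor_scatter_elements(input_x, indices, updates, axis, "add")
>>> print(output)
[[ 1.  2. 11.  4. 13.]]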
tinyms.primitives.tensor_scatter_max(input_x, indices, updates)[source]

Creates a new tensor by comparing the values at the positions in input_x indicated by indices with the values in updates; the value at each index in the output is the largest among the original value and all corresponding updates.

The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices].

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is out of the valid index range of input_x.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> output = ops.tensor_scatter_max(input_x, indices, updates)
>>> # 5, Perform the max operation for the first time:
>>> #      first_input_x = Max(input_x[0][0], updates[0]) = [[1.0, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the max operation for the second time:
>>> #      second_input_x = Max(input_x[0][0], updates[1]) = [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> print(output)
[[ 2.2  0.3  3.6]
 [ 0.4  0.5 -3.2]]
tinyms.primitives.tensor_scatter_min(input_x, indices, updates)[source]

Creates a new tensor by comparing the values at the positions in input_x indicated by indices with the values in updates; the value at each index in the output is the smallest among the original value and all corresponding updates.

The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see case below.

Note

If some values of the indices are out of range, instead of raising an index error, the corresponding updates will not be written to input_x.

Parameters:
  • input_x (Tensor) – The input tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is out of the valid index range of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
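>>> # Both index vectors point to input_x[0][0] = -0.1; the min is taken with
>>> # each update in turn: min(min(-0.1, 1.0), 2.2) = -0.1, so the value is unchanged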
>>> output = ops.tensor_scatter_min(input_x, indices, updates)
>>> print(output)
[[-0.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
tinyms.primitives.tensor_scatter_mul(input_x, indices, updates)[source]

Creates a new tensor by multiplying the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the value at that position is multiplied by these values one after another. The updates are applied to the output Tensor instead of the input Parameter.

The last axis of indices is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

  • If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is out of the valid index range of input_x.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> # 5, Perform the multiply operation for the first time:
>>> #      first_input_x = input_x[0][0] * updates[0] = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the multiply operation for the second time:
>>> #      second_input_x = input_x[0][0] * updates[1] = [[-0.22, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = ops.tensor_scatter_mul(input_x, indices, updates)
>>> print(output)
[[-0.22  0.3   3.6  ]
 [ 0.4   0.5   -3.2 ]]
tinyms.primitives.tensor_scatter_sub(input_x, indices, updates)[source]

Creates a new tensor by subtracting the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, these values are subtracted one after another. This operation is almost equivalent to using mindspore.ops.ScatterNdSub, except that the updates are applied to the output Tensor instead of the input Parameter.

The last axis of indices is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

On GPU, if some values of the indices are out of bounds, the corresponding updates will not be applied, instead of raising an index error. On CPU, if some values of the indices are out of bounds, an index error is raised. On Ascend, out-of-bounds checking is not supported; if some values of the indices are out of bounds, unknown errors may occur.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is out of the valid index range of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> output = ops.tensor_scatter_sub(input_x, indices, updates)
>>> print(output)
[[-3.3000002  0.3        3.6      ]
 [ 0.4        0.5       -3.2      ]]
tinyms.primitives.tensor_split(input, indices_or_sections, axis=0)[source]

Splits a tensor into multiple sub-tensors along the given axis.

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) –

    • If indices_or_sections is an integer n, input tensor will be split into n sections.

      • If \(input.size(axis)\) is divisible by n, sub-sections will have equal size \(input.size(axis) / n\) .

      • If \(input.size(axis)\) is not divisible by n, the first \(input.size(axis) % n\) sections will have size \(input.size(axis) // n + 1\) , and the rest will have size \(input.size(axis) // n\) .

    • If indices_or_sections is of type tuple(int) or list(int), the input tensor will be split at the indices in the list or tuple. For example, given parameters \(indices\_or\_sections=[1, 4]\) and \(axis=0\) , the input tensor will be split into sections \(input[:1]\) , \(input[1:4]\) , and \(input[4:]\) .

  • axis (int) – The axis along which to split. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If argument axis is not int.

  • ValueError – If argument axis is out of range of \([-input.ndim, input.ndim)\) .

  • TypeError – If each element in indices_or_sections is not an integer.

  • TypeError – If argument indices_or_sections is not int, tuple(int) or list(int).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = np.arange(9).astype("float32")
>>> output = ops.tensor_split(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
Tensor(shape=[3], dtype=Float32, value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
Tensor(shape=[3], dtype=Float32, value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
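When indices_or_sections is a list, the tensor is split at the given indices as described above. A minimal sketch reusing input_x from the example (printing only the resulting shapes, to avoid depending on Tensor print formatting):

>>> output = ops.tensor_split(Tensor(input_x), [1, 4])
>>> print([t.shape for t in output])
[(1,), (3,), (5,)]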
tinyms.primitives.threshold(input, thr, value)[source]

Returns each element of input after thresholding by thr as a Tensor.

The formula is defined as follows:

\[\begin{split}y = \begin{cases} input, &\text{ if } input > \text{thr} \\ \text{value}, &\text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – The input of threshold with data type of float16 or float32.

  • thr (Union[int, float]) – The value of the threshold.

  • value (Union[int, float]) – The value to replace with when the element is less than or equal to thr.

Returns:

Tensor, the same shape and data type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If thr is not a float or an int.

  • TypeError – If value is not a float or an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> inputs = mindspore.Tensor([0.0, 2, 3], mindspore.float32)
>>> outputs = ops.threshold(inputs, 1, 100)
>>> print(outputs)
[100.   2.   3.]
tinyms.primitives.tile(input, multiples)[source]

Replicates an input tensor the number of times given by multiples.

Creates a new tensor by replicating input multiples times. The i’th dimension of the output tensor has input.shape[i] * multiples[i] elements, and the values of input are replicated multiples[i] times along the i’th dimension.

Note

The length of multiples must be greater than or equal to the number of dimensions of input.

Parameters:
  • input (Tensor) – 1-D or higher dimensional Tensor. Set the shape of input tensor as \((x_1, x_2, ..., x_S)\) .

  • multiples (tuple[int]) – The parameter that specifies the number of replications, the parameter type is tuple, and the data type is int, i.e., \((y_1, y_2, ..., y_S)\). The length of multiples cannot be smaller than the length of the shape of input. Only constant value is allowed.

Returns:

Tensor, has the same data type as the input. Suppose the length of multiples is d, the dimension of input is input.dim, and the shape of input is \((x_1, x_2, ..., x_S)\).

  • If input.dim = d, then the shapes at corresponding positions are multiplied, and the shape of the output is \((x_1*y_1, x_2*y_2, ..., x_S*y_S)\).

  • If input.dim < d, the shape of input is prepended with 1s until its length is d, e.g. \((1, ..., 1, x_1, x_2, ..., x_S)\); the shapes at corresponding positions are then multiplied, and the shape of the output is \((1*y_1, ..., x_1*y_{d-S+1}, ..., x_S*y_d)\).

Raises:
  • TypeError – If multiples is not a tuple or its elements are not all int.

  • ValueError – If the elements of multiples are not all greater than 0.

  • ValueError – If the length of multiples is smaller than the number of dimensions of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> output = ops.tile(input_x, multiples)
>>> print(output)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
>>> multiples = (2, 3, 2)
>>> output = ops.tile(input_x, multiples)
>>> print(output)
[[[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]
 [[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]]
tinyms.primitives.top_k(input_x, k, sorted=True)[source]

top_k is deprecated, please use ops.topk instead.

tinyms.primitives.topk(input, k, dim=None, largest=True, sorted=True)[source]

Finds values and indices of the k largest or smallest entries along a given dimension.

Warning

  • If sorted is set to False, it will use the aicpu operator, the performance may be reduced. In addition, due to different memory layout and traversal methods on different platforms, the display order of calculation results may be inconsistent when sorted is False.

If the input is a one-dimensional Tensor, finds the k largest or smallest entries in the Tensor, and outputs their values and indices as Tensors; values[k] is the k-th largest (or smallest) item in input, and its index is indices[k].

For a multi-dimensional matrix, calculates the first or last k entries in a given dimension, therefore:

\[values.shape = indices.shape\]

If the two compared elements are the same, the one with the smaller index value is returned first.

Parameters:
  • input (Tensor) – Input to be computed, data type must be float16, float32 or int32.

  • k (int) – The number of top or bottom elements to be computed along the last dimension, constant input is needed.

  • dim (int, optional) – The dimension to sort along. If None, the last dimension is used. Default: None.

  • largest (bool, optional) – If largest is False then the k smallest elements are returned. Default: True.

  • sorted (bool, optional) – If True, the obtained elements will be sorted by the values in descending order. If False, the obtained elements will not be sorted. Default: True.

Returns:

A tuple consisting of values and indexes.

  • values (Tensor): The k largest or smallest elements in each slice of the given dimension.

  • indices (Tensor): The indices of values within the last dimension of input.

Raises:
  • TypeError – If sorted is not a bool.

  • TypeError – If input is not a Tensor.

  • TypeError – If k is not an int.

  • TypeError – If dtype of input is not one of the following: float16, float32 or int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> x = ms.Tensor([[0.5368, 0.2447, 0.4302, 0.9673],
...                [0.4388, 0.6525, 0.4685, 0.1868],
...                [0.3563, 0.5152, 0.9675, 0.8230]], dtype=ms.float32)
>>> output = ops.topk(x, 2, dim=1)
>>> print(output)
(Tensor(shape=[3, 2], dtype=Float32, value=
[[ 9.67299998e-01,  5.36800027e-01],
 [ 6.52499974e-01,  4.68499988e-01],
 [ 9.67499971e-01,  8.23000014e-01]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[3, 0],
 [1, 2],
 [2, 3]]))
>>> output2 = ops.topk(x, 2, dim=1, largest=False)
>>> print(output2)
(Tensor(shape=[3, 2], dtype=Float32, value=
[[ 2.44700000e-01,  4.30200011e-01],
 [ 1.86800003e-01,  4.38800007e-01],
 [ 3.56299996e-01,  5.15200019e-01]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[1, 2],
 [3, 0],
 [0, 1]]))
tinyms.primitives.trace(input)[source]

Returns a new tensor that is the sum of the elements on the main diagonal of the input.

Note

The input must be a matrix; complex numbers are not supported at present.

Parameters:

input (Tensor) – A matrix to be calculated. The matrix must be two dimensional.

Returns:

Tensor, with the same data type as the input, and size equal to 1.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the dimension of input is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[10, 11, 12], [13, 14, 15], [16, 17, 18]]), mindspore.float32)
>>> output = ops.trace(input)
>>> print(output)
42.0
>>> input = Tensor(np.arange(1, 13).reshape(3, 4), mindspore.float32)
>>> output = ops.trace(input)
>>> print(output)
18.0
>>> input = Tensor(np.arange(12, 0, -1).reshape(4, 3), mindspore.float32)
>>> output = ops.trace(input)
>>> print(output)
24.0
tinyms.primitives.transpose(input, input_perm)[source]

Permutes the dimensions of the input tensor according to input permutation.

For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, please refer to the class mindspore.ops.ExpandDims. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape is \((i[0], i[1], ... i[n-2], i[n-1])\), then a.transpose().shape is \((i[n-1], i[n-2], ... i[1], i[0])\).

Note

On GPU and CPU, if the value of input_perm is negative, its actual value is input_perm[i] + rank(input). Negative value of input_perm is not supported on Ascend.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_perm (tuple[int]) – The permutation to be converted. The elements in input_perm are composed of the indexes of each dimension of input. The length of input_perm and the shape of input must be the same. Only constant value is allowed. Must be in the range [-rank(input), rank(input)).

Returns:

Tensor, the type of output tensor is the same as input and the shape of output tensor is decided by the shape of input and the value of input_perm.

Raises:
  • TypeError – If input_perm is not a tuple.

  • ValueError – If length of shape of input is not equal to length of shape of input_perm.

  • ValueError – If the same element exists in input_perm.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> output = ops.transpose(input, input_perm)
>>> print(output)
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
tinyms.primitives.trapz(y, x=None, *, dx=1.0, dim=-1)[source]

Integrates y(x) along the given dim using the trapezoidal rule. By default, the spacing between sample points is 1.0; alternatively it can be provided with the x array or with the dx scalar.

\[\int y(x)\, dx\]
Parameters:
  • y (Tensor) – Input tensor to integrate.

  • x (Tensor, optional) – The sample points corresponding to the y values. If x is None, the sample points are assumed to be evenly spaced dx apart. Default: None. If x is not None, then after the size of the axis specified by dim is reduced by 1, the shape of x should be the same as y or broadcastable to y.

Keyword Arguments:
  • dx (float, optional) – The spacing between sample points when x is None. If x is specified, dx does not take effect. Default: 1.0.

  • dim (int, optional) – The dim along which to integrate. Default: -1.

Returns:

Tensor of float, the definite integral as approximated by the trapezoidal rule. If y is a one-dimensional array, the result is a floating-point scalar. If y is an n-dimensional array, the result is an (n-1)-dimensional array.

Raises:
  • RuntimeError – If dim of x is 1, and x.shape[0] is not equal to y.shape[dim].

  • ValueError – If dim is out of range of \([-y.ndim, y.ndim)\).

  • TypeError – If y is not a Tensor.

  • TypeError – If x is not None and is not a Tensor.

  • TypeError – If dx is not a float number.

  • TypeError – If dim is not an integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> y = Tensor(np.array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]).astype(np.float32))
>>> x = Tensor(np.array([[1, 2, 3], [1, 3, 5], [1, 4, 7]]).astype(np.float32))
>>> output = ops.trapz(y, x)
>>> print(output)
[2. 4. 6.]
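When x is omitted, the spacing between sample points is taken from dx. A minimal sketch reusing y from the example: each row integrates three ones with spacing 2.0, giving \(2 \cdot 1 + 2 \cdot 1 = 4\):

>>> output = ops.trapz(y, dx=2.0)
>>> print(output)
[4. 4. 4.]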
tinyms.primitives.tril(input, diagonal=0)[source]

Returns the lower triangle part of input (elements that contain the diagonal and below), and sets the other elements to zero.

Parameters:
  • input (Tensor) – A Tensor with shape \((x_1, x_2, ..., x_R)\). The rank must be at least 2. Supporting all number types including bool.

  • diagonal (int, optional) – The diagonal to consider. Default: 0, indicating the main diagonal.

Returns:

Tensor, the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If diagonal is not an int.

  • TypeError – If the type of input is neither number nor bool.

  • ValueError – If the rank of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.tril(x)
>>> print(result)
[[ 1  0  0  0]
 [ 5  6  0  0]
 [10 11 12  0]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.tril(x, diagonal=1)
>>> print(result)
[[ 1  2  0  0]
 [ 5  6  7  0]
 [10 11 12 13]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.tril(x, diagonal=-1)
>>> print(result)
[[ 0  0  0  0]
 [ 5  0  0  0]
 [10 11  0  0]
 [14 15 16  0]]
tinyms.primitives.tril_indices(row, col, offset=0, *, dtype=mindspore.int64)[source]

Calculates the indices of the lower triangular elements in a row * col matrix and returns them as a 2-by-N Tensor. The first row of the Tensor contains row coordinates, and the second row contains column coordinates. The coordinates are sorted by row and then by column.

The lower triangular part of the matrix consists of all elements on and below the diagonal.

Note

When running on CUDA, row * col must be less than 2^59 to prevent overflow during calculation.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Returns:

  • y (Tensor) - indices of the elements in lower triangular part of matrix. The type is specified by dtype. The shape of output is \((2, tril\_size)\), where \(tril\_size\) is the number of elements in the lower triangular matrix.

Raises:
  • TypeError – If row, col or offset is not an int.

  • TypeError – If dtype is neither int32 nor int64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> output = ops.tril_indices(4, 3, -1, dtype=mindspore.int64)
>>> print(output)
[[1 2 2 3 3 3]
 [0 0 1 0 1 2]]
>>> print(output.dtype)
Int64
tinyms.primitives.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, reduction='mean')[source]

TripletMarginLoss operation. See mindspore.nn.TripletMarginLoss for details.

Parameters:
  • anchor (Tensor) – A sample randomly selected from the training set. Data type must be BasicType.

  • positive (Tensor) – A sample belonging to the same category as anchor, with the same type and shape as anchor.

  • negative (Tensor) – A sample belonging to the different class from anchor, with the same type and shape as anchor.

  • margin (float, optional) – Make a margin between the positive pair and the negative pair. Default: 1.0.

  • p (int, optional) – The degree of norm for pairwise distance. Default: 2.

  • eps (float, optional) – Add small value to avoid division by zero. Default: 1e-06.

  • swap (bool, optional) – If True, swap the distance: use the distance between the positive sample and the negative sample in place of the negative distance. Default: False.

  • reduction (str, optional) – Apply specific reduction method to the output: ‘none’, ‘mean’ or ‘sum’. Default: ‘mean’.

Returns:

Tensor. If reduction is “none”, its shape is \((N)\). Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If anchor, positive or negative is not a Tensor.

  • TypeError – If dtype of anchor, positive and negative is not the same.

  • TypeError – If margin is not a float.

  • TypeError – If p is not an int.

  • TypeError – If eps is not a float.

  • TypeError – If swap is not a bool.

  • ValueError – If dimensions of input anchor, positive and negative are less than or equal to 1 at the same time.

  • ValueError – If the dimension of input anchor or positive or negative is bigger than or equal to 8.

  • ValueError – If shape of anchor, positive and negative cannot broadcast.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> anchor = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> positive = Tensor(np.array([[0.4, 0.6], [0.4, 0.6]]), mindspore.float32)
>>> negative = Tensor(np.array([[0.2, 0.9], [0.3, 0.7]]), mindspore.float32)
>>> output = ops.triplet_margin_loss(anchor, positive, negative)
>>> print(output)
0.8881968
tinyms.primitives.triu(input, diagonal=0)[source]

Returns the upper triangle part of input (elements that contain the diagonal and above), and sets the other elements to zero.

Parameters:
  • input (Tensor) – The input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • diagonal (int, optional) – An optional attribute indicates the diagonal to consider, default: 0, indicating the main diagonal.

Returns:

Tensor, a tensor has the same shape and data type as input.

Raises:
  • TypeError – If diagonal is not an int.

  • TypeError – If input is not a Tensor.

  • ValueError – If length of shape of input is less than 1.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.triu(x)
>>> print(result)
[[ 1  2  3  4]
 [ 0  6  7  8]
 [ 0  0 12 13]
 [ 0  0  0 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.triu(x, diagonal=1)
>>> print(result)
[[ 0  2  3  4]
 [ 0  0  7  8]
 [ 0  0  0 13]
 [ 0  0  0  0]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.triu(x, diagonal=-1)
>>> print(result)
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 0 11 12 13]
 [ 0  0 16 17]]
tinyms.primitives.triu_indices(row, col, offset=0, *, dtype=mindspore.int64)[source]

Calculates the indices of the upper triangular elements in a row * col matrix and returns them as a 2-by-N Tensor. The first row of the Tensor contains row coordinates, and the second row contains column coordinates. The coordinates are sorted by row and then by column.

The upper triangular part of the matrix consists of all elements on and above the diagonal.

Note

When running on CUDA, row * col must be less than 2^59 to prevent overflow during calculation.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Returns:

  • y (Tensor) - indices of the elements in upper triangular part of matrix. The type is specified by dtype. The shape of output is \((2, triu\_size)\), where \(triu\_size\) is the number of elements in the upper triangular matrix.

Raises:
  • TypeError – If row, col or offset is not an int.

  • TypeError – If dtype is neither int32 nor int64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> output = ops.triu_indices(4, 4, 2, dtype=mindspore.int64)
>>> print(output)
[[0 0 1]
 [2 3 3]]
>>> print(output.dtype)
Int64
tinyms.primitives.true_divide(dividend, divisor)[source]

Alias for mindspore.ops.div() with \(rounding\_mode=None\).

Supported Platforms:

Ascend GPU CPU
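
Examples

Since true_divide is a plain alias for division without rounding, a minimal sketch (the printed values are the expected element-wise quotients):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([4.0, 6.0, 1.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 2.0]), mindspore.float32)
>>> output = ops.true_divide(x, y)
>>> print(output)
[2.  1.5 0.5]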

tinyms.primitives.trunc(input)[source]

Returns a new tensor with the truncated integer values of the elements of the input tensor.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, the same shape and data type as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]), mindspore.float32)
>>> output = ops.trunc(x)
>>> print(output)
[3. 0. 0. -3.]
tinyms.primitives.truncate_div(x, y)[source]

Divides the first input tensor by the second input tensor element-wise and rounds the results of division towards zero. Equivalent to C-style integer division.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

Broadcasting is supported.

Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.truncate_div(x, y)
>>> print(output)
[0 1 0]
tinyms.primitives.truncate_mod(x, y)[source]

Returns the remainder of division element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • The input data does not support 0.

  • When the elements of the input exceed 2048, the accuracy of the operator cannot be guaranteed within a relative tolerance of two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as \((D1, D2, ..., Dn)\), then \(D1*D2*...*Dn \le 1000000\) and \(n \le 8\).

Parameters:
  • x (Union[Tensor, numbers.Number, bool]) – The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, numbers.Number, bool]) – The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision among the two inputs.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, number, bool.

  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If the shapes of x and y cannot be broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.truncate_mod(x, y)
>>> print(output)
[ 2  1 -1]
tinyms.primitives.tuple_to_array(input_x)[source]

Converts a tuple to a tensor.

If the type of the first number in the tuple is integer, the data type of the output tensor is int. Otherwise, the data type of the output tensor is float.

Parameters:

input_x (tuple) – A tuple of numbers. These numbers have the same type. Only constant value is allowed. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

Returns:

Tensor, if the input tuple contains N numbers, then the shape of the output tensor is (N,).

Raises:
  • TypeError – If input_x is not a tuple.

  • ValueError – If length of input_x is less than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> input_x = (1, 2, 3)
>>> print(type(input_x))
<class 'tuple'>
>>> output = ops.tuple_to_array(input_x)
>>> print(type(output))
<class 'mindspore.common.tensor.Tensor'>
>>> print(output)
[1 2 3]
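A float first element yields a float tensor, per the rule above. A minimal sketch, assuming the default float32 conversion:

>>> output = ops.tuple_to_array((1.0, 2.5))
>>> print(output)
[1.  2.5]
>>> print(output.dtype)
Float32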
tinyms.primitives.unbind(input, dim=0)[source]

Removes a tensor dimension at the specified dim.

Unstacks a tensor of rank R along the dim dimension; the output tensors will have rank (R-1).

Given a tensor of shape \((n_1, n_2, ..., n_R)\) and a specified dim, shape of the output tensors is \((n_1, n_2, ..., n_{dim}, n_{dim+2}, ..., n_R)\).

Parameters:
  • input (Tensor) – The shape is \((n_1, n_2, ..., n_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

  • dim (int) – Dimension along which to unpack. Negative values wrap around. The range is [-R, R). Default: 0.

Returns:

A tuple of tensors, each with the same shape.

Raises:

ValueError – If dim is out of the range [-R, R).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
>>> output = ops.unbind(x, dim=0)
>>> print(output)
(Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]),
Tensor(shape=[3], dtype=Int64, value=[7, 8, 9]))
tinyms.primitives.unfold(input, kernel_size, dilation=1, padding=0, stride=1)[source]

Reshapes a tensor of format (N, C, H, W) by extracting sliding local blocks from the input Tensor and concatenating them along a new dimension.

Warning

  • Currently, only 4-D input tensors (batched image-like tensors) are supported.

Parameters:
  • input (Tensor) – 4-D Tensor. Support all real number data type.

  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two int for height and width. If type is int, it means that height equal with width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]], optional) – The dilation of the window, should be two int for height and width. If type is int, it means that height equal with width. Default: 1.

  • padding (Union[int, tuple[int], list[int]], optional) – The pad of the window, that must be a tuple/list of one or two int for height and width. If one int, pad_height = pad_width. If two int, pad_height = padding[0], pad_width = padding[1]. Default: 0.

  • stride (Union[int, tuple[int], list[int]], optional) – The stride of the window, should be two int for height and width. If type is int, it means that height equal with width. Default: 1.

Returns:

A Tensor, with same type as input.

Raises:
  • TypeError – If the data type of kernel_size, stride, dilation or padding is not int, tuple or list.

  • ValueError – If kernel_size, dilation or stride contains a value not greater than zero, or has more than 2 elements.

  • ValueError – If padding value is less than zero.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.rand(4, 4, 32, 32), mindspore.float64)
>>> output = ops.unfold(x, kernel_size=3, dilation=1, stride=1)
>>> print(output.shape)
(4, 4, 9, 900)
tinyms.primitives.uniform(shape, minval, maxval, seed=None, dtype=mindspore.float32)[source]

Generates random numbers according to the Uniform random number distribution.

Note

The number in tensor minval should be strictly less than maxval at any position after broadcasting.

Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated.

  • minval (Tensor) – The distribution parameter a. It defines the minimum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • maxval (Tensor) – The distribution parameter b. It defines the maximum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – Type of the Uniform distribution. If it is int32, it generates numbers from discrete uniform distribution; if it is float32, it generates numbers from continuous uniform distribution. It only supports these two data types. Default: mindspore.float32.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of minval and maxval. The dtype is designated as the input dtype.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • TypeError – If the dtype of minval or maxval is neither int32 nor float32, or if the dtype of minval is not the same as that of maxval.

  • TypeError – If seed is not an int.

  • TypeError – If ‘dtype’ is neither int32 nor float32.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> # For discrete uniform distribution, only one number is allowed for both minval and maxval:
>>> shape = (4, 2)
>>> minval = Tensor(1, mindspore.int32)
>>> maxval = Tensor(2, mindspore.int32)
>>> output = ops.uniform(shape, minval, maxval, seed=5, dtype=mindspore.int32)
>>>
>>> # For continuous uniform distribution, minval and maxval can be multi-dimensional:
>>> shape = (3, 1, 2)
>>> minval = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> maxval = Tensor([8.0, 10.0], mindspore.float32)
>>> output = ops.uniform(shape, minval, maxval, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=0, remove_accidental_hits=False)[source]

Uniform candidate sampler.

This function samples a set of classes (sampled_candidates) from [0, range_max-1] based on a uniform distribution. If unique=True, candidates are drawn without replacement; otherwise, they are drawn with replacement.

Parameters:
  • true_classes (Tensor) – A Tensor. The target classes with a Tensor shape of \((batch\_size, num\_true)\) .

  • num_true (int) – The number of target classes in each training example.

  • num_sampled (int) – The number of classes to randomly sample. The sampled_candidates will have a shape of num_sampled. If unique=True, num_sampled must be less than or equal to range_max.

  • unique (bool) – Whether all sampled classes in a batch are unique.

  • range_max (int) – The number of possible classes, must be positive.

  • seed (int) – Used for random number generation, must be non-negative. If seed has a value of 0, the seed will be replaced with a randomly generated value. Default: 0.

  • remove_accidental_hits (bool) – Whether accidental hit is removed. Default: False.

Returns:

  • sampled_candidates (Tensor) - The sampled candidates, independent of the true classes. Shape: \((num\_sampled,)\) .

  • true_expected_count (Tensor) - The expected counts under the sampling distribution of each of true_classes. Shape: \((batch\_size, num\_true)\) .

  • sampled_expected_count (Tensor) - The expected counts under the sampling distribution of each of sampled_candidates. Shape: \((num\_sampled,)\) .

Raises:
  • TypeError – If num_true or num_sampled is not an int.

  • TypeError – If unique or remove_accidental_hits is not a bool.

  • TypeError – If range_max or seed is not an int.

  • TypeError – If true_classes is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> data = Tensor(np.array([[1], [3], [4], [6], [3]], dtype=np.int64))
>>> output1, output2, output3 = ops.uniform_candidate_sampler(data, 1, 3, False, 4, 1)
>>> print(output1.shape)
(3,)
>>> print(output2.shape)
(5, 1)
>>> print(output3.shape)
(3,)
tinyms.primitives.unique(input)[source]

Returns the unique elements of the input tensor, and a tensor containing the index in the unique output for each element of the input.

The output contains Tensor y and Tensor idx, in the format (y, idx). The shapes of y and idx differ in most cases, because y is deduplicated while idx keeps the same shape as the input.

To get the same shape for idx and y, please refer to the mindspore.ops.UniqueWithPad operator.

Parameters:

input (Tensor) – The input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Warning

This is an experimental API that is subject to change or deletion.

Returns:

Tuple, containing Tensor objects (y, idx). y is a tensor with the same type as input that contains the unique elements of input. idx is a tensor containing the index of each element of input in the output y, and has the same shape as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> output = ops.unique(x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
>>> y = output[0]
>>> print(y)
[1 2 5]
>>> idx = output[1]
>>> print(idx)
[0 1 2 1]
tinyms.primitives.unique_consecutive(input, return_idx=False, return_counts=False, axis=None)[source]

Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.

Parameters:
  • input (Tensor) – The input tensor.

  • return_idx (bool, optional) – Whether to return the index of where the element in the original input maps to the position in the output. Default: False.

  • return_counts (bool, optional) – Whether to return the counts of each unique element. Default: False.

  • axis (int, optional) – The dimension to apply unique. If None, the unique of the flattened input is returned. If specified, it must be int32 or int64. Default: None.

Returns:

A tensor or a tuple of tensors containing tensor objects (output, idx, counts). output has the same type as input and is used to represent the output list of unique scalar elements. If return_idx is True, there will be an additional returned tensor, idx, which has the same shape as input and represents the index of where the element in the original input maps to the position in the output. If return_counts is True, there will be an additional returned tensor, counts, which represents the number of occurrences for each unique value or tensor.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not supported.

  • TypeError – If return_idx is not a bool.

  • TypeError – If return_counts is not a bool.

  • TypeError – If axis is not an int.

  • ValueError – If axis is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor(np.array([1, 1, 2, 2, 3, 1, 1, 2]), mstype.int32)
>>> output, idx, counts = ops.unique_consecutive(x, True, True, None)
>>> print(output)
[1 2 3 1 2]
>>> print(idx)
[0 0 1 1 2 3 3 4]
>>> print(counts)
[2 2 1 2 1]
tinyms.primitives.unique_with_pad(x, pad_num)[source]

Returns unique elements and relative indexes in 1-D tensor, filled with padding num.

The basic function is the same as the Unique operator, but the UniqueWithPad operator adds a pad function. After the input Tensor x is processed by the unique operator, a tuple (y, idx) is returned in which the shapes of y and idx are usually not equal. To resolve this, the UniqueWithPad operator fills the Tensor y with the pad_num specified by the user so that it has the same shape as the Tensor idx.

Parameters:
  • x (Tensor) – The tensor need to be unique. Must be 1-D vector with types: int32, int64.

  • pad_num (int) – Pad num. The data type is an int.

Returns:

tuple(Tensor), tuple of 2 tensors, y and idx.

  • y (Tensor) - The unique elements filled with pad_num, the shape and data type same as x.

  • idx (Tensor) - The index of each value of x in the unique output y, the shape and data type same as x.

Raises:
  • TypeError – If dtype of x is neither int32 nor int64.

  • ValueError – If length of shape of x is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> x = Tensor(np.array([1, 2, 2, 3, 5, 5]), mindspore.int32)
>>> output = ops.unique_with_pad(x, 0)
>>> print(output)
(Tensor(shape=[6], dtype=Int32, value= [1, 2, 3, 5, 0, 0]),
 Tensor(shape=[6], dtype=Int32, value= [0, 1, 1, 2, 3, 3]))
>>> y = output[0]
>>> print(y)
[1 2 3 5 0 0]
>>> idx = output[1]
>>> print(idx)
[0 1 1 2 3 3]
tinyms.primitives.unsorted_segment_max(x, segment_ids, num_segments)[source]

Computes the maximum along segments of a tensor.

The following figure shows the calculation process of unsorted_segment_max:

[figure: UnsortedSegmentMax]
\[\text { output }_i=\text{max}_{j \ldots} \text { data }[j \ldots]\]

where the \(max\) is over tuples \(j...\) such that \(segment\_ids[j...] == i\).

Note

  • If the segment_id i is absent in segment_ids, then output[i] will be filled with the minimum value of x’s type.

  • segment_ids must be a non-negative tensor.

Parameters:
  • x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) – A 1-D tensor whose shape is \((x_1)\); the values must be non-negative. The data type must be int32.

  • num_segments (int) – The value specifies the number of distinct segment_ids.

Returns:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> output = ops.unsorted_segment_max(x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 5. 6.]]
tinyms.primitives.unsorted_segment_min(x, segment_ids, num_segments)[source]

Computes the minimum of a tensor along segments.

The following figure shows the calculation process of unsorted_segment_min:

[figure: UnsortedSegmentMin]
\[\text { output }_i=\text{min}_{j \ldots} \text { data }[j \ldots]\]

where the \(min\) is over tuples \(j...\) such that \(segment\_ids[j...] == i\).

Note

  • If the segment_id i is absent in segment_ids, then output[i] will be filled with the maximum value of x’s type.

  • segment_ids must be a non-negative tensor.

Parameters:
  • x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) – A 1-D tensor whose shape is \((x_1)\); the values must be non-negative. The data type must be int32.

  • num_segments (int) – The value specifies the number of distinct segment_ids.

Returns:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> output = ops.unsorted_segment_min(x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 2. 1.]]
tinyms.primitives.unsorted_segment_prod(x, segment_ids, num_segments)[source]

Computes the product of a tensor along segments.

The following figure shows the calculation process of unsorted_segment_prod:

[figure: UnsortedSegmentProd]

Note

  • If the segment_id i is absent in segment_ids, then output[i] will be filled with 1.

  • segment_ids must be a non-negative tensor.

Parameters:
  • x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) – A 1-D tensor whose shape is \((x_1)\); the values must be non-negative. The data type must be int32.

  • num_segments (int) – The value specifies the number of distinct segment_ids.

Returns:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
>>> num_segments = 2
>>> output = ops.unsorted_segment_prod(x, segment_ids, num_segments)
>>> print(output)
[[4. 4. 3.]
 [4. 5. 6.]]
tinyms.primitives.unsorted_segment_sum(input_x, segment_ids, num_segments)[source]

Computes the sum of a tensor along segments.

Calculates a tensor such that \(\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]\), where \(j,...\) is a tuple describing the index of element in data. segment_ids selects which elements in data to sum up. Segment_ids does not need to be sorted, and it does not need to cover all values in the entire valid value range.

The following figure shows the calculation process of unsorted_segment_sum:

[figure: UnsortedSegmentSum]

Note

  • If the segment_id i is absent in the segment_ids, then output[i] will be filled with 0.

  • On Ascend, if the value of segment_id is less than 0 or greater than the length of the input data shape, an execution error will occur.

If the sum for a given segment \(i\) is empty, then \(\text{output}[i] = 0\). If a given segment_id is negative, the value will be ignored. num_segments must be equal to the number of different segment ids.

Parameters:
  • input_x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\).

  • segment_ids (Tensor) – Set the shape as \((x_1, x_2, ..., x_N)\), where 0 < N <= R.

  • num_segments (Union[int, Tensor], optional) – The number of segments, denoted \(z\) in the output shape.

Returns:

Tensor, the shape is \((z, x_{N+1}, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int or 0-D Tensor.

  • ValueError – If length of shape of segment_ids is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import mindspore
>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> output = ops.unsorted_segment_sum(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 0.]
>>> input_x = Tensor([1, 2, 3, 4, 2, 5], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2, 3, 4], mindspore.int32)
>>> num_segments = 6
>>> output = ops.unsorted_segment_sum(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 2. 5. 0.]
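The note above states that negative segment ids are ignored (except on Ascend, where they raise an error). A minimal sketch of that behavior, with the expected output derived from the documented semantics:

>>> segment_ids = Tensor([0, -1, 1, 1], mindspore.int32)
>>> output = ops.unsorted_segment_sum(Tensor([1, 2, 3, 4], mindspore.float32), segment_ids, 2)
>>> print(output)
[1. 7.]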
tinyms.primitives.unsqueeze(input, dim)[source]

Adds an additional dimension to input at the given dim.

Parameters:
  • input (Tensor) – The shape of tensor is \((n_1, n_2, ..., n_R)\).

  • dim (int) – Specifies the dimension index at which to expand the shape of input. The value of dim must be in the range [-input.ndim-1, input.ndim]. Only constant value is allowed.

Returns:

Tensor, the shape of tensor is \((1, n_1, n_2, ..., n_R)\) if the value of dim is 0. It has the same data type as input.

Raises:
  • TypeError – If dim is not an int.

  • ValueError – If dim is not in the valid range \([-input.ndim-1, input.ndim]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.unsqueeze(input_tensor, dim=0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
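Because dim may range over [-input.ndim-1, input.ndim], a negative dim counts from the end; a minimal sketch appending a trailing axis, with the expected shape derived from the documented semantics:

>>> output = ops.unsqueeze(input_tensor, dim=-1)
>>> print(output.shape)
(2, 2, 1)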
tinyms.primitives.unstack(input_x, axis=0)[source]

Unstacks tensor in specified axis.

Unstacks a tensor of rank R along the axis dimension; each output tensor has rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\). If \(0 \le axis\), the shape of tensor in output is \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of pack.

Parameters:
  • input_x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

  • axis (int) – Dimension along which to unpack. Default: 0. Negative values wrap around. The range is [-R, R).

Returns:

A tuple of tensors, each with the same shape.

Raises:

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = ops.unstack(input_x, 0)
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))
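Unstacking along a different axis follows the same shape rule; a minimal sketch with axis=1, where each of the four outputs has shape \((2,)\) (expected output derived from the documented semantics):

>>> output = ops.unstack(input_x, 1)
>>> print(len(output))
4
>>> print(output[0])
[1 2]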
tinyms.primitives.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)[source]

Alias for mindspore.ops.interpolate() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.value_and_grad(fn, grad_position=0, weights=None, has_aux=False)[source]

A wrapper function to generate the function to calculate forward output and gradient for the input function.

As for gradient, three typical cases are included:

  1. gradient with respect to inputs. In this case, grad_position is not None while weights is None.

  2. gradient with respect to weights. In this case, grad_position is None while weights is not None.

  3. gradient with respect to inputs and weights. In this case, grad_position and weights are not None.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[NoneType, int, tuple[int]]) – Index to specify which inputs to be differentiated. If int, get the gradient with respect to a single input. If tuple, get the gradients with respect to the selected inputs. grad_position begins with 0. If None, no derivative of any input will be computed, and in this case weights is required. Default: 0.

  • weights (Union[ParameterTuple, Parameter, list[Parameter]]) – The parameters of the training network that need to calculate the gradient. weights can be obtained through weights = net.trainable_params() . Default: None.

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs will be returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

Function, returns the gradient function to calculate forward output and gradient for the input function or cell. For example, as for out1, out2 = fn(*args) , gradient function will return outputs like ((out1, out2), gradient) . When has_aux is set True, only out1 contributes to the differentiation.

Raises:
  • ValueError – If both grad_position and weights are None.

  • TypeError – If the type of an argument does not belong to the required ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops, nn
>>> from mindspore import value_and_grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> grad_fn = value_and_grad(net, grad_position=1)
>>> output, inputs_gradient = grad_fn(x, y, z)
>>> print(output)
[-0. 18.]
>>> print(inputs_gradient)
[0. 6.]
>>>
>>> # Function object to be differentiated
>>> def fn(x, y, z):
...     res = x * ops.exp(y) * ops.pow(z, 2)
...     return res, z
>>> x = Tensor(np.array([3, 3]).astype(np.float32))
>>> y = Tensor(np.array([0, 0]).astype(np.float32))
>>> z = Tensor(np.array([5, 5]).astype(np.float32))
>>> output, inputs_gradient = value_and_grad(fn, grad_position=(1, 2), weights=None, has_aux=True)(x, y, z)
>>> print(output)
(Tensor(shape=[2], dtype=Float32, value= [ 7.50000000e+01,  7.50000000e+01]),
 Tensor(shape=[2], dtype=Float32, value= [ 5.00000000e+00,  5.00000000e+00]))
>>> print(inputs_gradient)
(Tensor(shape=[2], dtype=Float32, value= [ 7.50000000e+01,  7.50000000e+01]),
 Tensor(shape=[2], dtype=Float32, value= [ 3.00000000e+01,  3.00000000e+01]))
>>>
>>> # For given network to be differentiated with both inputs and weights, there are 3 cases.
>>> net = nn.Dense(10, 1)
>>> loss_fn = nn.MSELoss()
>>> def forward(inputs, labels):
...     logits = net(inputs)
...     loss = loss_fn(logits, labels)
...     return loss, logits
>>> inputs = Tensor(np.random.randn(16, 10).astype(np.float32))
>>> labels = Tensor(np.random.randn(16, 1).astype(np.float32))
>>> weights = net.trainable_params()
>>>
>>> # Case 1: gradient with respect to inputs.
>>> # For has_aux is set True, only loss contributes to the gradient.
>>> grad_fn = value_and_grad(forward, grad_position=0, weights=None, has_aux=True)
>>> (loss, logits), inputs_gradient = grad_fn(inputs, labels)
>>> print(logits.shape)
(16, 1)
>>> print(inputs.shape, inputs_gradient.shape)
(16, 10) (16, 10)
>>>
>>> # Case 2: gradient with respect to weights.
>>> # For has_aux is set True, only loss contributes to the gradient.
>>> grad_fn = value_and_grad(forward, grad_position=None, weights=weights, has_aux=True)
>>> (loss, logits), params_gradient = grad_fn(inputs, labels)
>>> print(logits.shape)
(16, 1)
>>> print(len(weights), len(params_gradient))
2 2
>>>
>>> # Case 3: gradient with respect to inputs and weights.
>>> # For has_aux is set False, both loss and logits contribute to the gradient.
>>> grad_fn = value_and_grad(forward, grad_position=0, weights=weights, has_aux=False)
>>> (loss, logits), (inputs_gradient, params_gradient) = grad_fn(inputs, labels)
>>> print(logits.shape)
(16, 1)
>>> print(inputs.shape, inputs_gradient.shape)
(16, 10) (16, 10)
>>> print(len(weights), len(params_gradient))
2 2
tinyms.primitives.var(input, axis=None, ddof=0, keepdims=False)[source]

Returns the variance of each row of the input Tensor by default, or it can calculate them in specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the unbiased estimate with Bessel's correction is used. If ddof is False, the biased estimate is used to calculate the variance. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, the variance. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns a 0-D Tensor, indicating the variance of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensor has shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2), and keepdims is set to False, then the returned Tensor has shape \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output = ms.ops.var(input, 1, 2, True)
>>> print(output)
[[ 2.5]
 [54.5]]
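The effect of ddof on the divisor \(N - ddof\) can be seen on a flat input; a minimal sketch (the sum of squared deviations for this data is 10, so the expected values are 10/5 and 10/4):

>>> input = ms.Tensor([1., 2., 3., 4., 5.], ms.float32)
>>> print(ms.ops.var(input))
2.0
>>> print(ms.ops.var(input, ddof=1))
2.5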
tinyms.primitives.var_mean(input, axis=None, ddof=0, keepdims=False)[source]

Returns the variance and mean of each row of the input Tensor by default, or it can calculate them in specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the unbiased estimate with Bessel's correction is used. If ddof is False, the biased estimate is used to calculate the variance. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

A tuple containing the variance and mean. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns two 0-D Tensors, indicating the variance and mean of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensors have shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2), and keepdims is set to False, then the returned Tensors have shape \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output_var, output_mean = ms.ops.var_mean(input, 1, 2, True)
>>> print(output_var)
[[ 2.5]
 [54.5]]
>>> print(output_mean)
[[ 2.5]
 [-1.5]]
tinyms.primitives.view_as_real(input)[source]

View a complex Tensor as a real Tensor. The size of the last dimension of the returned real Tensor is 2, and the last dimension is composed of the real and imaginary components of the complex numbers.

Parameters:

input (Tensor) – the input must be a complex Tensor.

Returns:

A real Tensor.

Raises:

TypeError – If the input Tensor is not a complex Tensor.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor([2+1j,2+3j,2-1j,2], mstype.complex64)
>>> print(ops.view_as_real(x))
[[ 2.  1.]
 [ 2.  3.]
 [ 2. -1.]
 [ 2.  0.]]
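Since the real and imaginary parts become a trailing dimension of size 2, a complex tensor of shape \((N,)\) maps to \((N, 2)\); a minimal sketch reusing x from the example above:

>>> print(ops.view_as_real(x).shape)
(4, 2)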
tinyms.primitives.vjp(fn, *inputs, has_aux=False)[source]

Computes the vector-Jacobian product of the given network. vjp matches reverse-mode differentiation.

Parameters:
  • fn (Union[Function, Cell]) – The function or net that takes Tensor inputs and returns single Tensor or tuple of Tensors.

  • inputs (Union[Tensor, tuple[Tensor], list[Tensor]]) – The inputs to fn .

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs will be returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

Forward outputs and function to calculate vjp.

  • net_output (Union[Tensor, tuple[Tensor]]) - The output of fn(inputs). In particular, when has_aux is set True, net_output is the first output of fn(inputs).

  • vjp_fn (Function) - The function to calculate the vector-Jacobian product. Its inputs are the vectors whose shape and type should be the same as net_output.

  • aux_value (Union[Tensor, tuple[Tensor]], optional) - Only returned when has_aux is True; it holds the second to the last outputs of fn(inputs). In particular, aux_value does not contribute to the gradient.

Raises:

TypeError – If inputs or v does not belong to the required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import vjp
>>> from mindspore import Tensor
>>> class Net(nn.Cell):
...     def construct(self, x, y):
...         return x**3 + y
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> v = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> outputs, vjp_fn = vjp(Net(), x, y)
>>> print(outputs)
[[ 2. 10.]
 [30. 68.]]
>>> gradient = vjp_fn(v)
>>> print(gradient)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.00000000e+00,  1.20000000e+01],
 [ 2.70000000e+01,  4.80000000e+01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
>>> def fn(x, y):
...     return 2 * x + y, y ** 3
>>> outputs, vjp_fn, aux = vjp(fn, x, y, has_aux=True)
>>> gradient = vjp_fn(v)
>>> print(outputs)
[[ 3.  6.]
 [ 9. 12.]]
>>> print(aux)
[[ 1.  8.]
 [27. 64.]]
>>> print(gradient)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.00000000e+00,  2.00000000e+00],
 [ 2.00000000e+00,  2.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
tinyms.primitives.vmap(fn, in_axes=0, out_axes=0)[source]

Vectorizing map (vmap) is a kind of higher-order function to map fn along the parameter axes.

Vmap was pioneered by JAX; it removes the restriction of the batch dimension on the operator and provides a more convenient and unified operator expression. Moreover, it allows users to composite it with other functional modules such as mindspore.grad() to improve development efficiency. In addition, the vectorizing map does not execute loops outside the function, but sinks the loop into the primitive operations of the function for better performance. When combined with Graph Kernel Fusion, operational efficiency is further improved.

Warning

This is an experimental API that is subject to change or deletion.

Note

  1. The power of vmap comes from the implementation of the VmapRules of primitives. Although we have designed a generalized rule for user custom operators, we can not guarantee that it works well for all operators, so please be aware of the risk of use. If you want to achieve better performance, please refer to the tutorial to implement the specific VmapRule for the custom operator, which won't take too much time.

  2. When calling the random number generation methods within the scope of vmap, the same random number is generated among the vector functions each time. If you expect each vector branch to use different random numbers, you need to generate the batch random numbers externally in advance and then transfer them to vmap.

Parameters:
  • fn (Union[Cell, Function, CellList]) – Function to be mapped along the parameter axes, which takes at least one argument and returns one or more Tensors or the type of data supported by the MindSpore Tensor. When it is a CellList, the model ensembling scenario, please make sure that the structure of each cell is the same and the number of cells is consistent with the sizes of the mapped axes (axis_size).

  • in_axes (Union[int, list, tuple]) – Specifies which dimensions (axes) of the inputs should be mapped over. If in_axes is an integer, all arguments of fn are mapped over according to this axis index. If in_axes is a tuple or list, it can be composed only of integers and Nones, its length should be equal to the number of positional arguments to fn, and it indicates which axis to map for each corresponding positional argument. Note that axis integers must be in range \([-ndim, ndim)\) for each argument, where ndim is the number of dimensions of the corresponding argument. None means not mapping along any axis. At least one entry of in_axes must be a non-None mapping axis. The sizes of the mapped axes (axis_size) for all arguments must be equal. Default: 0.

  • out_axes (Union[int, list, tuple]) – Specifies where the mapped dimensions (axes) should appear in the outputs. If out_axes is an integer, all outputs of fn are specified according to this axis. If out_axes is a tuple or list, it can be composed only of integers and Nones, and its length should be equal to the number of outputs of fn. Note that axis integers must be in range \([-ndim, ndim)\) for each output, where ndim is the dimension of the output of the vmap-mapped function. All outputs with a non-None mapped axis must specify a non-None out_axes, and if outputs with a None mapped axis specify a non-None out_axes, the result broadcasts across the mapped axis. Default: 0.

Returns:

Function, returns the Vectorized/Batched version function of fn. The arguments and outputs of this function correspond to those of fn, but it adds an extra batch dimension at positions specified by in_axes and out_axes.

Raises:

  • RuntimeError – If base elements in in_axes or out_axes are neither None nor an integer.

  • RuntimeError – If all base elements in in_axes or out_axes are None.

  • RuntimeError – If in_axes is not a single integer and the length of in_axes is not equal to the number of arguments.

  • RuntimeError – If out_axes is not a single integer and the length of out_axes is not equal to the number of outputs.

  • RuntimeError – If the axis_size of the arguments in the scope of vmap are not equal.

  • RuntimeError – If an axis in in_axes or out_axes is out of bounds.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import vmap
>>> def test_vmap(x, y, z):                                              # ([a],[a],[a]) -> [a]
...     return x + y + z
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]).astype(np.float32))    # [b, a]
>>> y = Tensor(np.array([[-3, -2, -1], [3, 2, 1]]).astype(np.float32))   # [a, b]
>>> z = Tensor(np.array([0, 3]).astype(np.float32))                      # [a]
>>> output = vmap(test_vmap, in_axes=(0, 1, None), out_axes=1)(x, y, z)  # ([b, a],[a, b],[a]) -> [a, b]
>>> print(output)
[[-2.  1.  4.]
 [ 8.  9. 10.]]
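An in_axes entry of None broadcasts that argument to every vector branch instead of mapping it; a minimal sketch assuming the in_axes semantics described above (expected output derived from those semantics):

>>> import numpy as np
>>> from mindspore import Tensor, vmap
>>> def add(x, y):
...     return x + y
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))  # mapped over axis 0
>>> y = Tensor(np.array([10, 20]).astype(np.float32))          # not mapped, broadcast
>>> print(vmap(add, in_axes=(0, None))(x, y))
[[11. 22.]
 [13. 24.]]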
tinyms.primitives.vsplit(input, indices_or_sections)[source]

Splits a tensor with two or more dimensions into multiple sub-tensors vertically, according to indices_or_sections.

It is equivalent to ops.tensor_split with \(axis=0\) .

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – See argument in mindspore.ops.tensor_split().

Returns:

A list of sub-tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = np.arange(9).reshape((3, 3)).astype('float32')
>>> output = ops.vsplit(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[1, 3], dtype=Float32, value=[[ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]]),
 Tensor(shape=[1, 3], dtype=Float32, value=[[ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]]),
 Tensor(shape=[1, 3], dtype=Float32, value=[[ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]]))
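When indices_or_sections is a list of indices, the split points follow mindspore.ops.tensor_split; a sketch showing only the resulting shapes, with the expected output derived from the documented semantics:

>>> output = ops.vsplit(Tensor(input_x), [1])
>>> print([t.shape for t in output])
[(1, 3), (2, 3)]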
tinyms.primitives.vstack(inputs)[source]

Stacks tensors in sequence vertically.

This is equivalent to concatenation along the first axis. 1-D tensors of shape \((N,)\) are first reshaped to \((1, N)\), and then concatenated along the first axis.

Parameters:

inputs (Union(List[tensor], Tuple[tensor])) – A sequence of 1-D or 2-D tensors. The tensors must have the same shape along all but the first axis. 1-D tensors must have the same shape.

Returns:

Tensor, formed by stacking the given tensors, will be at least 2-D. The output shape is similar to the output of the numpy.vstack() function.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import ops
>>> x1 = np.array([3, 1, 4])
>>> x2 = np.array([1, 5, 9])
>>> out = ops.vstack([x1, x2])
>>> print(out)
[[3 1 4]
 [1 5 9]]
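2-D inputs are concatenated along the first axis directly, so the row counts may differ while the remaining axes must match; a minimal sketch (expected output derived from the documented semantics):

>>> x1 = np.array([[3, 1, 4]])
>>> x2 = np.array([[1, 5, 9], [2, 6, 5]])
>>> print(ops.vstack([x1, x2]))
[[3 1 4]
 [1 5 9]
 [2 6 5]]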
tinyms.primitives.where(condition, x, y)[source]

Selects elements from x or y based on condition and returns a tensor.

\[\begin{split}output_i = \begin{cases} x_i,\quad &if\ condition_i \\ y_i,\quad &otherwise \end{cases}\end{split}\]
Parameters:
  • condition (Tensor[bool]) – If True, yield x, otherwise yield y.

  • x (Union[Tensor, Scalar]) – When condition is True, values to select from.

  • y (Union[Tensor, Scalar]) – When condition is False, values to select from.

Returns:

Tensor, elements are selected from x and y.

Raises:
  • TypeError – If condition is not a Tensor.

  • TypeError – If both x and y are scalars.

  • ValueError – If condition, x and y can not broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> a = Tensor(np.arange(4).reshape((2, 2)), mstype.float32)
>>> b = Tensor(np.ones((2, 2)), mstype.float32)
>>> condition = a < 3
>>> output = ops.where(condition, a, b)
>>> print(output)
[[0. 1.]
 [2. 1.]]
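Since x or y may be a scalar (but not both), a scalar fallback value is a common pattern; a minimal sketch reusing condition and a from the example above (expected output derived from the documented semantics):

>>> output = ops.where(condition, a, 10.0)
>>> print(output)
[[ 0.  1.]
 [ 2. 10.]]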
tinyms.primitives.xdivy(x, y)[source]

Divides the first input tensor by the second input tensor element-wise. Returns zero when x is zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

When x and y are both of datatype complex, they should be both complex64 or complex128 at the same time.

Parameters:
  • x (Union[Tensor, Number, bool]) – Tensor of datatype number.Number or bool, or it can be a bool or number.

  • y (Union[Tensor, Number, bool]) – Tensor of datatype number.Number or bool, or it can be a bool or number. x and y can not both be bool at the same time.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y is not one of the following: Tensor, Number, bool.

  • TypeError – If dtype of x and y is not in [float16, float32, float64, complex64, complex128, bool].

  • ValueError – If x could not be broadcast to a tensor with shape of y.

  • RuntimeError – If x or y is a Parameter whose data type requires conversion, but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, -1]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> output = ops.xdivy(x, y)
>>> print(output)
[ 1.   2.  -0.5]
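The defining property of xdivy is that a zero numerator yields zero even where plain division would be undefined; a minimal sketch, assuming the documented returns-zero-when-x-is-zero rule also applies where y is zero:

>>> x = Tensor(np.array([0, 4, 0]), mindspore.float32)
>>> y = Tensor(np.array([0, 2, 2]), mindspore.float32)
>>> print(ops.xdivy(x, y))
[0. 2. 0.]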
tinyms.primitives.xlogy(input, other)[source]

Computes the first input tensor multiplied by the logarithm of second input tensor element-wise. Returns zero when input is zero.

\[out_i = input_{i}\ln{other_{i}}\]

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • On Ascend, the data type of input and other must be float16 or float32.

Parameters:
  • input (Union[Tensor, number.Number, bool]) – The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input is a number.Number or a bool when the first input is a tensor, or a tensor whose data type is number or bool_. When the first input is a Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and other are not a number.Number, a bool, or a Tensor.

  • TypeError – If dtype of input and other is not in [float16, float32, float64, complex64, complex128].

  • ValueError – If input could not be broadcast to a tensor with shape of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([-5, 0, 4]), mindspore.float32)
>>> other = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> output = ops.xlogy(input, other)
>>> print(output)
[-3.465736   0.        2.7725887]
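Likewise, the returns-zero-when-input-is-zero rule makes xlogy(0, 0) evaluate to 0 rather than \(0 \cdot \ln 0\); a minimal sketch, assuming the documented rule also applies where other is zero:

>>> input = Tensor(np.array([0, 2]), mindspore.float32)
>>> other = Tensor(np.array([0, 2]), mindspore.float32)
>>> print(ops.xlogy(input, other))
[0.        1.3862944]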
tinyms.primitives.zeros(size, dtype=None)[source]

Creates a tensor of the shape described by size, filled with value 0 of type dtype.

Parameters:
  • size (Union[tuple[int], int]) – The specified shape of output tensor. Only constant positive int is allowed.

  • dtype (mindspore.dtype, optional) – The specified type of output tensor. If dtype is None, mindspore.float32 will be used. Default: None.

Returns:

Tensor, whose shape and dtype are defined by the size and dtype arguments, filled with 0.

Raises:

TypeError – If size is neither a tuple of int nor an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> output = ops.zeros((2, 2), mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
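size may also be a single int, and dtype overrides the float32 default; a minimal sketch (expected output derived from the documented semantics):

>>> output = ops.zeros(3, mindspore.int32)
>>> print(output)
[0 0 0]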
tinyms.primitives.zeros_like(input, *, dtype=None)[source]

Creates a tensor filled with 0, with the same size as input, and the given dtype.

If dtype = None, the tensor will have the same dtype as input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, filled with 0.

Raises:

TypeError – If dtype is not a MindSpore dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(4).reshape(2, 2))
>>> output = ops.zeros_like(x, dtype=mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
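With dtype left as None, the output keeps the input's dtype; a minimal sketch reusing x from the example above (x holds integers, so the zeros are printed without a decimal point):

>>> output = ops.zeros_like(x)
>>> print(output)
[[0 0]
 [0 0]]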