tinyms

Top-level reference to the dtype of the common module. This module also provides NumPy-like interfaces in TinyMS.

Examples

>>> import tinyms as ts
>>>
>>> print(ts.ones([2, 3]))
[[1. 1. 1.]
 [1. 1. 1.]]
tinyms.dtype_to_nptype(type_)[source]

Convert MindSpore dtype to numpy data type.

Parameters

type_ (mindspore.dtype) – MindSpore’s dtype.

Returns

The corresponding NumPy data type.

tinyms.issubclass_(type_, dtype)[source]

Determine whether type_ is a subclass of dtype.

Parameters
  • type_ (mindspore.dtype) – Target MindSpore dtype.

  • dtype (mindspore.dtype) – The MindSpore dtype to compare against.

Returns

bool, True or False.

tinyms.dtype_to_pytype(type_)[source]

Convert MindSpore dtype to python data type.

Parameters

type_ (mindspore.dtype) – MindSpore’s dtype.

Returns

The corresponding Python type.

tinyms.pytype_to_dtype(obj)[source]

Convert a Python type to a MindSpore type.

Parameters

obj (type) – A python type object.

Returns

The corresponding MindSpore type.

tinyms.get_py_obj_dtype(obj)[source]

Get the MindSpore data type that corresponds to a Python type or variable.

Parameters

obj – A Python type object, or a variable of a Python type.

Returns

The corresponding MindSpore type.
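
As a quick illustration, here is a minimal, hedged sketch of chaining these conversion helpers. It assumes the dtype constants (e.g. ts.float32) are re-exported at the tinyms top level, as the module description above suggests; exact reprs may vary by version.

>>> import tinyms as ts
>>> np_type = ts.dtype_to_nptype(ts.float32)   # numpy.float32
>>> py_type = ts.dtype_to_pytype(ts.int32)     # Python's built-in int
>>> ms_type = ts.pytype_to_dtype(bool)         # MindSpore's bool dtype
>>> inferred = ts.get_py_obj_dtype(3.14)       # MindSpore float dtype inferred from a Python float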

class tinyms.MetaTensor(dtype, shape, init=None)[source]

The base class of the MetaTensor. It initializes a tensor's basic attributes and model weight values.

Returns

Array, an array after being initialized.

property dtype

Get the MetaTensor’s dtype.

property shape

Get the MetaTensor’s shape.

to_tensor(slice_index=None, shape=None, opt_shard_group=None)[source]

Get the tensor format data of this MetaTensor.

Parameters
  • slice_index (int) – Slice index of a parameter’s slices. It is used when initializing a slice of a parameter; it guarantees that devices using the same slice can generate the same tensor.

  • shape (list[int]) – Shape of the slice. It is used when initializing a slice of the parameter.

  • opt_shard_group (str) – Optimizer shard group which is used in auto or semi auto parallel mode to get one shard of a parameter’s slice.
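
A brief, hedged sketch of constructing a MetaTensor and reading its attributes; it assumes MetaTensor and the dtype constants are re-exported at the tinyms top level as documented here. to_tensor is typically used only after an init has been supplied.

>>> import tinyms as ts
>>> meta = ts.MetaTensor(ts.float32, (2, 3))
>>> meta.dtype    # MindSpore float32
>>> meta.shape    # (2, 3)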

class tinyms.Tensor(input_data, dtype=None)[source]

Tensor is used for data storage.

Tensor inherits the tensor object in C++. Some functions are implemented in C++ and some are implemented in Python.

Parameters
  • input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray) – Input data of the tensor.

  • dtype (mindspore.dtype) – Input data should be None, bool or a numeric type defined in mindspore.dtype. The argument is used to define the data type of the output tensor. If it is None, the data type of the output tensor will be the same as the input_data. Default: None.

Outputs:

Tensor, with the same shape as input_data.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> # initialize a tensor with input data
>>> t1 = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> assert isinstance(t1, Tensor)
>>> assert t1.shape == (1, 2, 3)
>>> assert t1.dtype == mindspore.float32
...
>>> # initialize a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> assert isinstance(t2, Tensor)
>>> assert t2.dtype == mindspore.float64
abs()[source]

Return the absolute value element-wise.

Returns

Tensor, has the same data type as x.

all(axis=(), keep_dims=False)[source]

Check whether all array elements along a given axis evaluate to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction; when axis is None or an empty tuple, reduce all dimensions. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, don’t keep these reduced dimensions.

Returns

Tensor, has the same data type as x.

any(axis=(), keep_dims=False)[source]

Check whether any array element along a given axis evaluates to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction; when axis is None or an empty tuple, reduce all dimensions. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, don’t keep these reduced dimensions.

Returns

Tensor, has the same data type as x.
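
For illustration, a hedged sketch of both reductions on a boolean tensor (imports assumed; the results are returned as tensors):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.array([[True, False], [True, True]]))
>>> t.all()          # scalar Tensor, False: one element is False
>>> t.any(axis=0)    # per-column reduction, shape (2,)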

asnumpy()[source]

Convert tensor to numpy array.
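
A minimal round-trip sketch (imports assumed):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.ones((2, 3), np.float32))
>>> arr = t.asnumpy()
>>> arr.shape
(2, 3)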

assign_value(self: mindspore._c_expression.Tensor, arg0: mindspore._c_expression.Tensor) → mindspore._c_expression.Tensor

Assign another tensor’s value to this one.

Arg:

value (mindspore.tensor): The value tensor.

Examples

>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data2 = mindspore.Tensor(np.ones((2, 2), np.float32))
>>> data.assign_value(data2)
>>> data.shape
(2, 2)
data_sync(self: mindspore._c_expression.Tensor, arg0: bool) → None
dim(self: mindspore._c_expression.Tensor) → int

Get tensor’s data dimension.

Returns

int, the dimension of tensor.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.dim()
2
property dtype

The dtype of tensor is a mindspore type.

expand_as(x)[source]

Expand the dimension of target tensor to the dimension of input tensor.

Parameters

x (Tensor) – The input tensor. The shape of the input tensor must obey the broadcasting rule.

Returns

Tensor, has the same dimension as input tensor.
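
A short, hedged sketch of broadcasting a 1-D tensor to the shape of a larger tensor (imports assumed):

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3], np.float32))
>>> y = Tensor(np.ones((2, 3), np.float32))
>>> out = x.expand_as(y)
>>> out.shape
(2, 3)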

static from_numpy(array)[source]

Convert a numpy array to a Tensor without copying data.
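
A minimal sketch (imports assumed; the returned Tensor shares the numpy buffer rather than copying it):

>>> import numpy as np
>>> from mindspore import Tensor
>>> arr = np.array([1, 2, 3], dtype=np.float32)
>>> t = Tensor.from_numpy(arr)
>>> t.shape
(3,)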

is_init(self: mindspore._c_expression.Tensor) → bool

Get tensor init_flag.

Returns

bool, whether the tensor has been initialized.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.is_init()
False
mean(axis=(), keep_dims=False)[source]

Reduces a dimension of a tensor by averaging all elements in the dimension.

Parameters
  • axis (Union[None, int, tuple(int), list(int)]) – Dimensions of reduction, when axis is None or empty tuple, reduce all dimensions. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, don’t keep these reduced dimensions.

Returns

Tensor, has the same data type as x.
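
For example, a hedged sketch reducing over all dimensions and over a single axis (imports assumed):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.array([[1., 2.], [3., 4.]], np.float32))
>>> t.mean()          # scalar Tensor with value 2.5
>>> t.mean(axis=1)    # per-row means, shape (2,)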

property ndim

The ndim of tensor is an integer.

set_cast_dtype(self: mindspore._c_expression.Tensor, dtype: mindspore::Type = None) → None
set_dtype(self: mindspore._c_expression.Tensor, arg0: mindspore::Type) → mindspore::Type

Set the tensor’s data type.

Arg:

dtype (mindspore.dtype): The type of output tensor.

Examples

>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data.set_dtype(mindspore.int32)
mindspore.int32
set_init_flag(self: mindspore._c_expression.Tensor, arg0: bool) → None

Set tensor init_flag.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.set_init_flag(True)
property shape

The shape of tensor is a tuple.

property size

The size reflects the total number of elements in tensor.

view(*shape)[source]

Reshape the tensor according to the input shape.

Parameters

shape (Union(tuple[int], *int)) – Dimension of the output tensor.

Returns

Tensor, has the same dimension as the input shape.
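
A brief sketch of reshaping with view, using either separate integers or a tuple (imports assumed):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.arange(6).astype(np.float32))
>>> t.view(2, 3).shape
(2, 3)
>>> t.view((3, 2)).shape
(3, 2)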

property virtual_flag

Mark whether the tensor is virtual.

class tinyms.RowTensor(indices, values, dense_shape)[source]

A sparse representation of a set of tensor slices at given indices.

A RowTensor is typically used to represent a subset of a larger tensor dense of shape [L0, D1, ..., DN], where L0 >> D0.

The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by a RowTensor slices has dense[slices.indices[i], :, :, :, …] = slices.values[i, :, :, :, …].

RowTensor can only be used in the Cell’s construct method.

It is not supported in pynative mode at the moment.

Parameters
  • indices (Tensor) – A 1-D integer Tensor of shape [D0].

  • values (Tensor) – A Tensor of any dtype of shape [D0, D1, …, Dn].

  • dense_shape (tuple) – An integer tuple which contains the shape of the corresponding dense tensor.

Returns

RowTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, RowTensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = RowTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([0])
>>> values = Tensor([[1, 2]], dtype=ms.float32)
>>> out = Net((3, 2))(indices, values)
>>> print(out[0])
[[1. 2.]]
>>> print(out[1])
[0]
>>> print(out[2])
(3, 2)
class tinyms.SparseTensor(indices, values, dense_shape)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

SparseTensor can only be used in the Cell’s construct method.

It is not supported in PyNative mode at the moment.

For a tensor dense, its SparseTensor(indices, values, dense_shape) has dense[indices[i]] = values[i].

Parameters
  • indices (Tensor) – A 2-D integer Tensor of shape [N, ndims], where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively.

  • values (Tensor) – A 1-D tensor of any type and shape [N], which supplies the values for each element in indices.

  • dense_shape (tuple) – An integer tuple of size ndims, which specifies the dense_shape of the sparse tensor.

Returns

SparseTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, SparseTensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = SparseTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> out = Net((3, 4))(indices, values)
>>> print(out[0])
[1. 2.]
>>> print(out[1])
[[0 1]
 [1 2]]
>>> print(out[2])
(3, 4)
tinyms.ms_function(fn=None, obj=None, input_signature=None)[source]

Create a callable MindSpore graph from a python function.

This allows the MindSpore runtime to apply optimizations based on the graph.

Parameters
  • fn (Function) – The Python function that will be run as a graph. Default: None.

  • obj (Object) – The Python object that provides the information for identifying the compiled function. Default: None.

  • input_signature (MetaTensor) – The MetaTensor which describes the input arguments. The MetaTensor specifies the shape and dtype of the Tensor and they will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor. And the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs must be the same as the input_signature. Otherwise, TypeError will be raised. Default: None.

Returns

Function, if fn is not None, returns a callable function that will execute the compiled function; If fn is None, returns a decorator and when this decorator invokes with a single fn argument, the callable function is equal to the case when fn is not None.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ms_function
>>> from mindspore.ops import functional as F
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling ms_function
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function
>>> @ms_function
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function with input_signature parameter
>>> @ms_function(input_signature=(MetaTensor(mindspore.float32, (1, 1, 3, 3)),
...                               MetaTensor(mindspore.float32, (1, 1, 3, 3))))
... def tensor_add_with_sig(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_sig(x, y)
class tinyms.Parameter(default_input, name=None, requires_grad=True, layerwise_parallel=False)[source]

Parameter types of cell models.

After being initialized, a Parameter is a subtype of Tensor.

In the auto_parallel modes of “semi_auto_parallel” and “auto_parallel”, if a Parameter is initialized by a MetaTensor, the type of the Parameter will be MetaTensor, not Tensor. MetaTensor only saves the shape and type info of a tensor with no memory usage. The shape can be changed while compiling for auto-parallel. Calling init_data will return a Tensor Parameter with initialized data.

Note

Each parameter of a Cell is represented by the Parameter class. A Parameter has to belong to a Cell. If there is an operator in the network that requires part of the inputs to be Parameters, then the Parameters used as that part of the inputs are not allowed to be cast. It is recommended to use the default value of name when initializing a parameter as an attribute of a cell, otherwise the parameter name may be different from what is expected.

Parameters
  • default_input (Union[Tensor, MetaTensor, Number]) – Parameter data to be initialized.

  • name (str) – Name of the child parameter. Default: None.

  • requires_grad (bool) – True if the parameter requires gradient. Default: True.

  • layerwise_parallel (bool) – A kind of model parallel mode. When layerwise_parallel is true in parallel mode, broadcast and gradients communication would not be applied to parameters. Default: False.

Example

>>> from mindspore import Parameter, Tensor
>>> from mindspore.common import initializer as init
>>> from mindspore.ops import operations as P
>>> from mindspore.nn import Cell
>>> import mindspore
>>> import numpy as np
>>> from mindspore import context
>>>
>>> class Net(Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = P.MatMul()
...         self.weight = Parameter(Tensor(np.ones((1,2))), name="w", requires_grad=True)
...
...     def construct(self, x):
...         out = self.matmul(self.weight, x)
...         return out
>>> context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> x = Tensor(np.ones((2,1)))
>>> print(net(x))
[[2.]]
>>> net.weight.set_data(Tensor(np.zeros((1,2))))
Parameter (name=w)
>>> print(net(x))
[[0.]]
clone(init='same')[source]

Clone the parameter.

Parameters

init (Union[Tensor, str, MetaTensor, numbers.Number]) – Initialize the shape of the parameter. Default: ‘same’.

Returns

Parameter, a new parameter.
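
As a hedged sketch, cloning a parameter with the same data or with a named initializer; the 'zeros' initializer string is an assumption here, not confirmed by this page (imports assumed):

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> w = Parameter(Tensor(np.ones((2, 2))), name="w")
>>> w_same = w.clone()              # same shape and values as w
>>> w_zero = w.clone(init='zeros')  # same shape, assumed zero-initialized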

property dtype

Get the MetaTensor’s dtype.

init_data(layout=None, set_sliced=False)[source]

Initialize the parameter data.

Parameters
  • layout (list[list[int]]) –

    Parameter slice layout [dev_mat, tensor_map, slice_shape].

    • dev_mat (list[int]): Device matrix.

    • tensor_map (list[int]): Tensor map.

    • slice_shape (list[int]): Shape of slice.

  • set_sliced (bool) – True if the parameter is set sliced after initializing the data. Default: False.

Raises

RuntimeError – If it is from Initializer, and parallel mode has changed after the Initializer created.

Returns

Parameter, the Parameter after initializing data. If current Parameter was already initialized before, returns the same initialized Parameter.

property inited_param

Get the new parameter after call the init_data.

Default is None. If self is a Parameter without data, the initialized Parameter with data will be recorded here after init_data is called.

property is_init

Get the initialization status of the parameter.

In the GE backend, the Parameter needs an “init graph” to sync the data from host to device. This flag indicates whether the data has been synced to the device.

This flag only works in GE, and it will be set to False in other backends.

property name

Get the name of the parameter.

property requires_grad

Return whether the parameter requires gradient.

set_data(data, slice_shape=False)[source]

Set the data of the current Parameter.

Parameters
  • data (Union[Tensor, MetaTensor, int, float]) – new data.

  • slice_shape (bool) – If set to True, the data is a slice of the parameter and the shape is not checked for consistency. Default: False.

Returns

Parameter, the parameter after set data.
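
A short sketch mirroring the class-level example above (imports assumed):

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> w = Parameter(Tensor(np.ones((1, 2))), name="w")
>>> w.set_data(Tensor(np.zeros((1, 2))))
Parameter (name=w)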

set_param_ps(init_in_server=False)[source]

Set whether the trainable parameter is updated by parameter server and whether the trainable parameter is initialized on server.

Note

It only works when a running task is in the parameter server mode.

Parameters

init_in_server (bool) – Whether trainable parameter updated by parameter server is initialized on server. Default: False.

property shape

Get the MetaTensor’s shape.

property sliced

Get slice status of the parameter.

property unique

Whether the parameter is already unique or not.

class tinyms.ParameterTuple[source]

Class for storing a tuple of parameters.

Note

It is used to store the parameters of the network into the parameter tuple collection.

clone(prefix, init='same')[source]

Clone the parameter.

Parameters
  • prefix (str) – Namespace of parameter.

  • init (str) – Initialize the shape of the parameter. Default: ‘same’.

Returns

Tuple, the new Parameter tuple.

count()

Return number of occurrences of value.

index()

Return first index of value.

Raises ValueError if the value is not present.
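
A hedged sketch of building a ParameterTuple and cloning it with a prefix (imports assumed; the exact renaming scheme of the cloned parameters may vary):

>>> import numpy as np
>>> from mindspore import Parameter, ParameterTuple, Tensor
>>> p1 = Parameter(Tensor(np.ones((1,))), name="p1")
>>> p2 = Parameter(Tensor(np.zeros((1,))), name="p2")
>>> params = ParameterTuple((p1, p2))
>>> cloned = params.clone(prefix="copy")   # cloned parameter names gain the "copy" prefix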

tinyms.set_seed(seed)[source]

Set global random seed.

Note

The global seed is used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.

If the global seed is not set, these packages will use their own default seeds independently: numpy.random and mindspore.common.Initializer will choose a random seed, while mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution will use zero.

A seed set by numpy.random.seed() is only used by numpy.random, while a seed set by this API will also be used by numpy.random, so it is recommended to set all seeds with this API.

Parameters

seed (int) – The seed to be set.

Raises

ValueError – If seed is invalid (seed is not a non-negative integer).

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor, Parameter, set_seed
>>> from mindspore.common import dtype as mstype
>>> from mindspore.common.initializer import initializer
>>> from mindspore.ops import composite as C
>>>
>>> # Note: (1) Please make sure the code is running in PYNATIVE MODE;
>>> # (2) Because Composite-level ops need parameters to be Tensors, for below examples,
>>> # when using C.uniform operator, minval and maxval are initialised as:
>>> minval = Tensor(1.0, mstype.float32)
>>> maxval = Tensor(2.0, mstype.float32)
>>>
>>> # 1. If global seed is not set, numpy.random and initializer will choose a random seed:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get different results:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
>>>
>>> # 2. If global seed is set, numpy.random and initializer will use it:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>>
>>> # 3. If neither global seed nor op seed is set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will choose a random seed:
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get different results:
>>> c1 = C.uniform((1, 4), minval, maxval) # C3
>>> c2 = C.uniform((1, 4), minval, maxval) # C4
>>>
>>> # 4. If global seed is set, but op seed is not set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # default op seed. Each call will change the default op seed, thus each call gets different
>>> # results.
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>>
>>> # 5. If both global seed and op seed are set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # op seed counter. Each call will change the op seed counter, thus each call gets different
>>> # results.
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 6. If op seed is set but global seed is not set, 0 will be used as global seed. Then
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution act as in
>>> # condition 5.
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 7. Recall set_seed() in the program will reset numpy seed and op seed counter of
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> set_seed(1234)
>>> np_2 = np.random.normal(0, 1, [1]).astype(np.float32) # still get A1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # still get C1
tinyms.get_seed()[source]

Get global random seed.
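
A minimal sketch, assuming the tinyms top-level re-exports documented in this reference:

>>> import tinyms as ts
>>> ts.set_seed(42)
>>> ts.get_seed()
42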

tinyms.arange(*args, **kwargs)[source]

Returns evenly spaced values within a given interval.

Returns num evenly spaced samples, calculated over the interval [start, stop]. The endpoint of the interval can optionally be excluded. The current implementation is a direct wrapper on top of numpy.arange, except that the default dtypes are float32 and int32, compared to float64 and int64 for the numpy implementation.

Parameters
  • start (Union[int, float]) – Start of interval. The interval includes this value. When stop is provided as a positional argument, start must be given; when stop is a keyword argument, start is optional and defaults to 0. Please see the additional examples below.

  • stop (Union[int, float], optional) – End of interval. The interval does not include this value, except in some cases where step is not an integer and floating point round-off affects the length of out.

  • step (Union[int, float], optional) – Spacing between values. For any output out, this is the distance between two adjacent values, out[i+1] - out[i]. The default step size is 1. If step is specified as a positional argument, start must also be given.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. If dtype is None, the data type of the new tensor will be inferred from start, stop and step. Default is None.

Returns

Tensor of evenly spaced values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.arange(0, 5, 1))
[0 1 2 3 4]
>>> print(np.arange(3))
[0 1 2]
>>> print(np.arange(start=0, stop=3))
[0 1 2]
>>> print(np.arange(0, stop=3, step=0.5))
[0.  0.5 1.  1.5 2.  2.5]
>>> print(np.arange(stop=3)) # This will lead to TypeError
tinyms.array(obj, dtype=None, copy=True, ndmin=0)[source]

Creates a tensor.

This function creates tensors from an array-like object.

Parameters
  • obj (Union[int, float, bool, list, tuple, numpy.ndarray]) – Input data, in any form that can be converted to a tensor. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and numpy.ndarray.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or int32. If dtype is None, the data type of the new tensor will be inferred from obj. Default is None.

  • copy (bool) – If true, then the object is copied. Otherwise, a copy will only be made if necessary. Default: True.

  • ndmin (int) – Specifies the minimum number of dimensions that the resulting tensor should have. Ones will be pre-pended to the shape as needed to meet this requirement. Default: 0

Returns

Tensor, generated tensor with the specified dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.array([1,2,3]))
[1 2 3]
tinyms.asarray(a, dtype=None)[source]

Converts the input to tensor.

This function converts tensors from an array-like object.

Parameters
  • a (Union[int, float, bool, list, tuple, numpy.ndarray]) – Input data, in any form that can be converted to a tensor. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or int32. If dtype is None, the data type of the new tensor will be inferred from a. Default is None.

Returns

Tensor, generated tensor with the specified dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.asarray([1,2,3]))
[1 2 3]
tinyms.asfarray(a, dtype=mindspore.float32)[source]

Similar to asarray, converts the input to a float tensor.

If non-float dtype is defined, this function will return a float32 tensor instead.

Parameters
  • a (Union[int, float, bool, list, tuple, numpy.ndarray]) – Input data, in any form that can be converted to a tensor. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and numpy.ndarray.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

Tensor, generated tensor with the specified float dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.asfarray([1,2,3]))
[1. 2. 3.]
tinyms.concatenate(arrays, axis=0)[source]

Joins a sequence of tensors along an existing axis.

Parameters
  • arrays (Union[Tensor, tuple(Tensor), list(Tensor)]) – a tensor or a list of tensors to be concatenated.

  • axis (int, optional) – The axis along which the tensors will be joined, if axis is None, tensors are flattened before use. Default is 0.

Returns

Tensor, a tensor concatenated from a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.ones((1,2,3))
>>> x2 = np.ones((1,2,1))
>>> x = np.concatenate((x1, x2), axis=-1)
>>> print(x.shape)
(1, 2, 4)
tinyms.copy(a)

Returns a tensor copy of the given object.

Parameters

a (Tensor) – Input tensor.

Returns

Tensor, has the same data as a.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,2))
>>> print(np.copy(x))
[[1. 1.]
 [1. 1.]]
tinyms.expand_dims(a, axis)[source]

Expands the shape of a tensor.

Inserts a new axis that will appear at the axis position in the expanded tensor shape.

Parameters
  • a (Tensor) – Input tensor array.

  • axis (Union[int, list(int), tuple(int)]) – Position in the expanded axes where the new axis is placed.

Returns

Tensor, view of a tensor with the number of dimensions increased.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,2))
>>> x = np.expand_dims(x,0)
>>> print(x.shape)
(1, 2, 2)
tinyms.eye(N, M=None, k=0, dtype=mindspore.float32)[source]

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

Parameters
  • N (int) – Number of rows in the output, must be larger than 0.

  • M (int, optional) – Number of columns in the output. If None, defaults to N; if defined, must be larger than 0. Default is None.

  • k (int, optional) – Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. Default is 0.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

A tensor of shape (N,M). A tensor where all elements are equal to zero, except for the k-th diagonal, whose values are equal to one.

Return type

result (Tensor)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.eye(2, 2))
[[1. 0.]
 [0. 1.]]
tinyms.identity(n, dtype=mindspore.float32)[source]

Returns the identity tensor.

Parameters
  • n (int) – Number of rows and columns in the output, must be larger than 0.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

A tensor of shape (n,n). A tensor where all elements are equal to zero, except for the diagonal, whose values are equal to one.

Return type

result (Tensor)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.identity(2))
[[1. 0.]
 [0. 1.]]
tinyms.inner(a, b)[source]

Inner product of two tensors.

Ordinary inner product of vectors for 1-D tensors (without complex conjugation), in higher dimensions a sum product over the last axes.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are mstype.float16, and mstype.float32. On CPU, the supported dtype is mstype.float32.

Parameters
  • a (Tensor) – input tensor. If a and b are nonscalar, their last dimensions must match.

  • b (Tensor) – input tensor. If a and b are nonscalar, their last dimensions must match.

Returns

Tensor or scalar, out.shape = a.shape[:-1] + b.shape[:-1].

Raises

ValueError – if x1.shape[-1] != x2.shape[-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((5, 3))
>>> b = np.ones((2, 7, 3))
>>> output = np.inner(a, b)
>>> print(output)
[[[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]]

tinyms.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0)[source]

Returns evenly spaced values within a given interval.

The current implementation is a direct wrapper on top of numpy.linspace, except that the default dtype is float32, compared to float64 for numpy.

Parameters
  • start (Union[int, list(int), tuple(int), tensor]) – The starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • retstep (bool, optional) – If True, return (samples, step), where step is the spacing between samples.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32.If dtype is None, infer the data type from other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns

There are num equally spaced samples in the closed interval [start, stop] or the half-open interval [start, stop) (depending on whether endpoint is True or False).

step (float, optional): Only returned if retstep is True. Size of spacing between samples.

Return type

samples (Tensor)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.linspace(0, 5, 6))
[0. 1. 2. 3. 4. 5.]
tinyms.logspace(start, stop, num=50, endpoint=True, base=10.0, dtype=None, axis=0)[source]

Returns numbers spaced evenly on a log scale.

In linear space, the sequence starts at base ** start (base to the power of start) and ends with base ** stop (see endpoint below). The current implementation is a direct wrapper on top of numpy.logspace, except that the default dtype is float32, compared to float64 for numpy.

Parameters
  • start (Union[int, list(int), tuple(int), tensor]) – The starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • base (Union[int, float], optional) – The base of the log space. The step size between the elements in ln(samples) / ln(base) (or log_base(samples)) is uniform. Default is 10.0.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32.If dtype is None, infer the data type from other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop is array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns

num samples, equally spaced on a log scale.

Return type

samples (Tensor)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.logspace(0, 5, 6, base=2.0))
[ 1.  2.  4.  8. 16. 32.]
tinyms.mean(a, axis=None, keepdims=False)[source]

Computes the arithmetic mean along the specified axis.

Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype and out are not supported. On GPU, the supported dtypes are mstype.float16, and mstype.float32. On CPU, the supported dtypes are mstype.float16, and mstype.float32.

Parameters
  • a (Tensor) – input tensor containing numbers whose mean is desired. If a is not an array, a conversion is attempted.

  • axis (None or int or tuple of ints) – optional. Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. If this is a tuple of ints, a mean is performed over multiple axes.

  • keepdims (bool) – optional. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.

Returns

Tensor or scalar, an array containing the mean values.

Raises
  • ValueError – if axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(6, dtype='float32')
>>> output = np.mean(a, 0)
>>> print(output)
2.5
tinyms.ones(shape, dtype=mindspore.float32)[source]

Returns a new tensor of given shape and type, filled with ones.

Parameters
  • shape (Union[int, tuple, list]) – the shape of the new tensor.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

Tensor, with the designated shape and dtype, filled with ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.ones((2,2)))
[[1. 1.]
 [1. 1.]]
tinyms.ravel(x)[source]

Returns a contiguous flattened tensor.

A 1-D tensor, containing the elements of the input, is returned.

Parameters

x (Tensor) – A tensor to be flattened.

Returns

Flattened tensor, has the same data type as the original tensor x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.ravel(x)
>>> print(output.shape)
(24,)
tinyms.reshape(x, new_shape)[source]

Reshapes a tensor without changing its data.

Parameters
  • x (Tensor) – A tensor to be reshaped.

  • new_shape (Union[int, list(int), tuple(int)]) – The new shape should be compatible with the original shape. If the tuple has only one element, the result will be a 1-D tensor of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the tensor and remaining dimensions.

Returns

Reshaped Tensor. Has the same data type as the original tensor x.

Raises
  • TypeError – If new_shape is not integer, list or tuple.

  • ValueError – If new_shape is not compatible with the original shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> output = np.reshape(x, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
>>> output = np.reshape(x, (3, -1))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
>>> output = np.reshape(x, (6, ))
>>> print(output)
[-0.1  0.3  3.6  0.4  0.5 -3.2]
tinyms.rollaxis(x, axis, start=0)[source]

Rolls the specified axis backwards, until it lies in the given position. The positions of the other axes do not change relative to one another.

Parameters
  • x (Tensor) – A Tensor to be transposed.

  • axis (int) – The axis to be rolled.

  • start (int) –

    • When start >= 0:
      • When start <= axis: the axis is rolled back until it lies in this position (start).

      • When start > axis: the axis is rolled until it lies before this position (start).

    • When start < 0: the start will be normalized as follows:

      start          Normalized start
      -(x.ndim+1)    raise ValueError
      -x.ndim        0
      …              …
      -1             x.ndim-1
      0              0
      …              …
      x.ndim         x.ndim
      x.ndim+1       raise ValueError

Returns

Transposed Tensor. Has the same data type as the original tensor x.

Supported Platforms:

Ascend GPU CPU

Raises
  • TypeError – If axis or start is not integer.

  • ValueError – If axis is not in the range from -ndim to ndim-1 or start is not in the range from -ndim to ndim.

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.rollaxis(x, 0, 2)
>>> print(output.shape)
(3, 2, 4)
tinyms.squeeze(a, axis=None)[source]

Removes single-dimensional entries from the shape of a tensor.

This is a temporary solution to support CPU backend. Will be changed once CPU backend supports P.Squeeze().

Parameters
  • a (Tensor) – Input tensor array.

  • axis (Union[None, int, list(int), tuple(int)]) – Default is None.

Returns

Tensor, with all or a subset of the dimensions of length 1 removed.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((1,2,2,1))
>>> x = np.squeeze(x)
>>> print(x.shape)
(2, 2)
tinyms.swapaxes(x, axis1, axis2)[source]

Interchanges two axes of a tensor.

Parameters
  • x (Tensor) – A tensor to be transposed.

  • axis1 (int) – First axis.

  • axis2 (int) – Second axis.

Returns

Transposed tensor, has the same data type as the original tensor x.

Raises
  • TypeError – If axis1 or axis2 is not integer.

  • ValueError – If axis1 or axis2 is not in the range from -ndim to ndim-1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.swapaxes(x, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.transpose(a, axes=None)[source]

Reverses or permutes the axes of a tensor; returns the modified tensor.

Parameters
  • a (Tensor) – a tensor to be transposed

  • axes (Union[None, tuple, list]) – the axes order; if axes is None, transpose the entire tensor. Default is None.

Returns

Tensor, the transposed tensor array.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((1,2,3))
>>> x = np.transpose(x)
>>> print(x.shape)
(3, 2, 1)
tinyms.zeros(shape, dtype=mindspore.float32)[source]

Returns a new tensor of given shape and type, filled with zeros.

Parameters
  • shape (Union[int, tuple, list]) – the shape of the new tensor.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

Tensor, with the designated shape and dtype, filled with zeros.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.zeros((2,2)))
[[0. 0.]
 [0. 0.]]