tinyms

Top-level reference to the dtype definitions of the common module. This module also provides NumPy-like interfaces in TinyMS.

Examples

>>> import tinyms as ts
>>>
>>> print(ts.ones([2, 3]))
[[1. 1. 1.]
 [1. 1. 1.]]
tinyms.dtype_to_nptype(type_)[source]

Convert MindSpore dtype to numpy data type.

Parameters

type_ (mindspore.dtype) – MindSpore's dtype.

Returns

The data type of numpy.
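
A minimal usage sketch (not part of the upstream docstring; it assumes the MindSpore dtype members such as float32 are re-exported at the tinyms top level, as the module description above suggests):

>>> import numpy as np
>>> import tinyms as ts
>>> # assumed: ts.float32 is the re-exported mindspore.float32
>>> ts.dtype_to_nptype(ts.float32) == np.float32
True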

tinyms.issubclass_(type_, dtype)[source]

Determine whether type_ is a subclass of dtype.

Parameters
  • type_ (mindspore.dtype) – Target MindSpore dtype.

  • dtype (mindspore.dtype) – MindSpore dtype to compare against.

Returns

bool, True or False.
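
A minimal usage sketch (not part of the upstream docstring; it assumes the generic dtypes float_ and int_ are also re-exported at the tinyms top level):

>>> import tinyms as ts
>>> # float32 is a concrete floating-point dtype, so it is a subclass of the generic float_
>>> ts.issubclass_(ts.float32, ts.float_)
True
>>> ts.issubclass_(ts.float32, ts.int_)
False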

tinyms.dtype_to_pytype(type_)[source]

Convert MindSpore dtype to python data type.

Parameters

type_ (mindspore.dtype) – MindSpore's dtype.

Returns

The python type.
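
A minimal usage sketch (not part of the upstream docstring; it assumes the dtype members are re-exported at the tinyms top level):

>>> import tinyms as ts
>>> ts.dtype_to_pytype(ts.bool_) is bool
True
>>> ts.dtype_to_pytype(ts.int32) is int
True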

tinyms.pytype_to_dtype(obj)[source]

Convert python type to MindSpore type.

Parameters

obj (type) – A python type object.

Returns

The corresponding MindSpore type.
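
A minimal usage sketch (not part of the upstream docstring; it assumes the dtype members are re-exported at the tinyms top level):

>>> import tinyms as ts
>>> ts.pytype_to_dtype(bool) == ts.bool_
True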

tinyms.get_py_obj_dtype(obj)[source]

Get the MindSpore data type which corresponds to a python type or variable.

Parameters

obj – An object of python type, or a variable of python type.

Returns

The corresponding MindSpore type.
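
A minimal usage sketch (not part of the upstream docstring; it assumes the dtype members are re-exported at the tinyms top level):

>>> import tinyms as ts
>>> # a python bool variable maps to the MindSpore bool_ dtype
>>> ts.get_py_obj_dtype(True) == ts.bool_
True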

class tinyms.MetaTensor(dtype, shape, init=None)[source]

The base class of the MetaTensor. Initialization of tensor basic attributes and model weight values.

Returns

Array, an array after being initialized.

property dtype

Get the MetaTensor’s dtype.

property shape

Get the MetaTensor’s shape.

to_tensor(slice_index=None, shape=None, opt_shard_group=None)[source]

Get the tensor format data of this MetaTensor.

Parameters
  • slice_index (int) – Slice index of a parameter's slices. It is used when initializing a slice of a parameter; it guarantees that devices using the same slice can generate the same tensor.

  • shape (list[int]) – Shape of the slice. It is used when initializing a slice of the parameter.

  • opt_shard_group (str) – Optimizer shard group which is used in auto or semi-auto parallel mode to get one shard of a parameter's slice.

class tinyms.Tensor(input_data, dtype=None)[source]

Tensor is used for data storage.

Tensor inherits the tensor object in C++. Some functions are implemented in C++ and some functions are implemented in Python.

Parameters
  • input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray) – Input data of the tensor.

  • dtype (mindspore.dtype) – Input data should be None, bool or numeric type defined in mindspore.dtype. The argument is used to define the data type of the output tensor. If it is None, the data type of the output tensor will be the same as the input_data. Default: None.

Outputs:

Tensor, with the same shape as input_data.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> # initialize a tensor with input data
>>> t1 = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> assert isinstance(t1, Tensor)
>>> assert t1.shape == (1, 2, 3)
>>> assert t1.dtype == mindspore.float32
...
>>> # initialize a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> assert isinstance(t2, Tensor)
>>> assert t2.dtype == mindspore.float64
abs()[source]

Return absolute value element-wise.

Returns

Tensor, has the same data type as x.
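
A minimal usage sketch (not part of the upstream docstring; it assumes eager execution, e.g. PyNative mode):

>>> import numpy as np
>>> from tinyms import Tensor
>>> t = Tensor(np.array([-1.0, 2.0, -3.0], np.float32))
>>> print(t.abs().asnumpy())
[1. 2. 3.]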

all(axis=(), keep_dims=False)[source]

Check whether all array elements along a given axis evaluate to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction. When axis is None or an empty tuple, reduce all dimensions. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, do not keep these reduced dimensions.

Returns

Tensor, has the same data type as x.

any(axis=(), keep_dims=False)[source]

Check whether any array element along a given axis evaluates to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction. When axis is None or an empty tuple, reduce all dimensions. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, do not keep these reduced dimensions.

Returns

Tensor, has the same data type as x.
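
A minimal usage sketch covering both all() and any() (not part of the upstream docstring; it assumes eager execution, e.g. PyNative mode):

>>> import numpy as np
>>> from tinyms import Tensor
>>> t = Tensor(np.array([True, True, False]))
>>> bool(t.all().asnumpy())
False
>>> bool(t.any().asnumpy())
True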

asnumpy()[source]

Convert tensor to numpy array.
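
A minimal usage sketch (not part of the upstream docstring):

>>> import numpy as np
>>> from tinyms import Tensor
>>> t = Tensor(np.ones((2, 3), np.float32))
>>> a = t.asnumpy()
>>> type(a)
<class 'numpy.ndarray'>
>>> a.shape
(2, 3)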

assign_value(self: mindspore._c_expression.Tensor, arg0: mindspore._c_expression.Tensor) → mindspore._c_expression.Tensor

Assign another tensor value to this.

Arg:

value (mindspore.tensor): The value tensor.

Examples

>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data2 = mindspore.Tensor(np.ones((2, 2), np.float32))
>>> data.assign_value(data2)
>>> data.shape
(2, 2)
data_sync(self: mindspore._c_expression.Tensor, arg0: bool) → None
dim(self: mindspore._c_expression.Tensor) → int

Get tensor’s data dimension.

Returns

int, the dimension of tensor.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.dim()
2
property dtype

The dtype of tensor is a mindspore type.

expand_as(x)[source]

Expand the dimension of the target tensor to the dimension of the input tensor.

Parameters

x (Tensor) – The input tensor. The shape of the input tensor must obey the broadcasting rule.

Returns

Tensor, has the same dimension as the input tensor.
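
A minimal usage sketch (not part of the upstream docstring; it assumes eager execution, e.g. PyNative mode):

>>> import numpy as np
>>> from tinyms import Tensor
>>> a = Tensor(np.array([1.0, 2.0, 3.0], np.float32))
>>> b = Tensor(np.ones((2, 3), np.float32))
>>> print(a.expand_as(b).shape)
(2, 3)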

static from_numpy(array)[source]

Convert numpy array to Tensor without copying data.
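
A minimal usage sketch (not part of the upstream docstring):

>>> import numpy as np
>>> from tinyms import Tensor
>>> arr = np.array([1, 2, 3], np.float32)
>>> t = Tensor.from_numpy(arr)
>>> isinstance(t, Tensor)
True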

is_init(self: mindspore._c_expression.Tensor) → bool

Get tensor init_flag.

Returns

bool, whether the tensor has been initialized.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.is_init()
False
mean(axis=(), keep_dims=False)[source]

Reduces a dimension of a tensor by averaging all elements in the dimension.

Parameters
  • axis (Union[None, int, tuple(int), list(int)]) – Dimensions of reduction. When axis is None or an empty tuple, reduce all dimensions. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, do not keep these reduced dimensions.

Returns

Tensor, has the same data type as x.
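
A minimal usage sketch (not part of the upstream docstring; it assumes eager execution, e.g. PyNative mode):

>>> import numpy as np
>>> from tinyms import Tensor
>>> t = Tensor(np.arange(6).reshape(2, 3).astype(np.float32))
>>> print(t.mean().asnumpy())
2.5
>>> print(t.mean(axis=1).asnumpy())
[1. 4.]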

property ndim

The ndim of tensor is an integer.

set_cast_dtype(self: mindspore._c_expression.Tensor, dtype: mindspore::Type = None) → None
set_dtype(self: mindspore._c_expression.Tensor, arg0: mindspore::Type) → mindspore::Type

Set the tensor’s data type.

Arg:

dtype (mindspore.dtype): The type of output tensor.

Examples

>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data.set_dtype(mindspore.int32)
mindspore.int32
set_init_flag(self: mindspore._c_expression.Tensor, arg0: bool) → None

Set tensor init_flag.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.set_init_flag(True)
property shape

The shape of tensor is a tuple.

property size

The size reflects the total number of elements in tensor.

view(*shape)[source]

Reshape the tensor according to the input shape.

Parameters

shape (Union(tuple[int], *int)) – Dimension of the output tensor.

Returns

Tensor, has the same dimension as the input shape.
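
A minimal usage sketch (not part of the upstream docstring; it assumes eager execution, e.g. PyNative mode):

>>> import numpy as np
>>> from tinyms import Tensor
>>> t = Tensor(np.arange(6).astype(np.float32))
>>> print(t.view(2, 3).shape)
(2, 3)
>>> print(t.view((3, 2)).shape)
(3, 2)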

property virtual_flag

Mark whether the tensor is virtual.

class tinyms.RowTensor(indices, values, dense_shape)[source]

A sparse representation of a set of tensor slices at given indices.

A RowTensor is typically used to represent a subset of a larger tensor dense of shape [L0, D1, .., DN], where L0 >> D0.

The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by a RowTensor slices has dense[slices.indices[i], :, :, :, …] = slices.values[i, :, :, :, …].

RowTensor can only be used in the Cell's construct method.

It is not supported in pynative mode at the moment.

Parameters
  • indices (Tensor) – A 1-D integer Tensor of shape [D0].

  • values (Tensor) – A Tensor of any dtype of shape [D0, D1, …, Dn].

  • dense_shape (tuple) – An integer tuple which contains the shape of the corresponding dense tensor.

Returns

RowTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, RowTensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = RowTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([0])
>>> values = Tensor([[1, 2]], dtype=ms.float32)
>>> out = Net((3, 2))(indices, values)
>>> print(out[0])
[[1. 2.]]
>>> print(out[1])
[0]
>>> print(out[2])
(3, 2)
class tinyms.SparseTensor(indices, values, dense_shape)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

SparseTensor can only be used in the Cell's construct method.

It is not supported in pynative mode at the moment.

For a tensor dense, its SparseTensor(indices, values, dense_shape) has dense[indices[i]] = values[i].

Parameters
  • indices (Tensor) – A 2-D integer Tensor of shape [N, ndims], where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively.

  • values (Tensor) – A 1-D tensor of any type and shape [N], which supplies the values for each element in indices.

  • dense_shape (tuple) – An integer tuple of size ndims, which specifies the dense_shape of the sparse tensor.

Returns

SparseTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, SparseTensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = SparseTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> out = Net((3, 4))(indices, values)
>>> print(out[0])
[1. 2.]
>>> print(out[1])
[[0 1]
 [1 2]]
>>> print(out[2])
(3, 4)
tinyms.ms_function(fn=None, obj=None, input_signature=None)[source]

Create a callable MindSpore graph from a python function.

This allows the MindSpore runtime to apply optimizations based on the graph.

Parameters
  • fn (Function) – The Python function that will be run as a graph. Default: None.

  • obj (Object) – The Python object that provides the information for identifying the compiled function. Default: None.

  • input_signature (MetaTensor) – The MetaTensor which describes the input arguments. The MetaTensor specifies the shape and dtype of the Tensor and they will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor. And the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs should keep the same as input_signature. Otherwise, TypeError will be raised. Default: None.

Returns

Function, if fn is not None, returns a callable function that will execute the compiled function; if fn is None, returns a decorator and when this decorator is invoked with a single fn argument, the callable function is equal to the case when fn is not None.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ms_function
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling ms_function
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function
>>> @ms_function
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function with input_signature parameter
>>> @ms_function(input_signature=(MetaTensor(mindspore.float32, (1, 1, 3, 3)),
...                               MetaTensor(mindspore.float32, (1, 1, 3, 3))))
... def tensor_add_with_sig(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_sig(x, y)
class tinyms.Parameter(default_input, name=None, requires_grad=True, layerwise_parallel=False)[source]

Parameter types of cell models.

After being initialized, a Parameter is a subtype of Tensor.

In auto_parallel mode of "semi_auto_parallel" and "auto_parallel", if a Parameter is initialized by a MetaTensor, the type of the Parameter will be MetaTensor, not Tensor. MetaTensor only saves the shape and type info of a tensor with no memory usage. The shape can be changed while compiling for auto-parallel. Calling init_data will return a Tensor Parameter with initialized data.

Note

Each parameter of a Cell is represented by the Parameter class. A Parameter has to belong to a Cell. If there is an operator in the network that requires part of the inputs to be Parameter, then the Parameters as this part of the inputs are not allowed to be cast. It is recommended to use the default value of name when initializing a parameter as an attribute of a cell, otherwise, the parameter name may be different from what is expected.

Parameters
  • default_input (Union[Tensor, MetaTensor, Number]) – Parameter data, to be initialized.

  • name (str) – Name of the child parameter. Default: None.

  • requires_grad (bool) – True if the parameter requires gradient. Default: True.

  • layerwise_parallel (bool) – A kind of model parallel mode. When layerwise_parallel is true in parallel mode, broadcast and gradients communication would not be applied to parameters. Default: False.

Examples

>>> from mindspore import Parameter, Tensor
>>> from mindspore.common import initializer as init
>>> from mindspore.ops import operations as P
>>> from mindspore.nn import Cell
>>> import mindspore
>>> import numpy as np
>>> from mindspore import context
>>>
>>> class Net(Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = P.MatMul()
...         self.weight = Parameter(Tensor(np.ones((1,2))), name="w", requires_grad=True)
...
...     def construct(self, x):
...         out = self.matmul(self.weight, x)
...         return out
>>> context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> x = Tensor(np.ones((2,1)))
>>> print(net(x))
[[2.]]
>>> net.weight.set_data(Tensor(np.zeros((1,2))))
Parameter (name=w)
>>> print(net(x))
[[0.]]
clone(init='same')[source]

Clone the parameter.

Parameters

init (Union[Tensor, str, MetaTensor, numbers.Number]) – Initialize the shape of the parameter. Default: 'same'.

Returns

Parameter, a new parameter.
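
A minimal usage sketch (not part of the upstream docstring; the 'zeros' initializer string is assumed to be a registered initializer alias):

>>> import numpy as np
>>> from tinyms import Tensor, Parameter
>>> w = Parameter(Tensor(np.ones((2, 3), np.float32)), name="w")
>>> w_same = w.clone()                # keeps the original values
>>> w_zero = w.clone(init='zeros')    # same shape and dtype, zero-initialized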

property dtype

Get the MetaTensor’s dtype.

init_data(layout=None, set_sliced=False)[source]

Initialize the parameter data.

Parameters
  • layout (list[list[int]]) –

    Parameter slice layout [dev_mat, tensor_map, slice_shape].

    • dev_mat (list[int]): Device matrix.

    • tensor_map (list[int]): Tensor map.

    • slice_shape (list[int]): Shape of slice.

  • set_sliced (bool) – True if the parameter is set sliced after initializing the data. Default: False.

Raises

RuntimeError – If it is from Initializer, and the parallel mode has changed after the Initializer was created.

Returns

Parameter, the Parameter after initializing data. If the current Parameter was already initialized before, returns the same initialized Parameter.

property inited_param

Get the new parameter after init_data is called.

Default is None. If self is a Parameter without data, the initialized Parameter with data will be recorded here after init_data is called.

property is_init

Get the initialization status of the parameter.

In the GE backend, the Parameter needs an "init graph" to sync the data from host to device. This flag indicates whether the data has been synced to the device.

This flag only works in GE, and it will be set to False in other backends.

property name

Get the name of the parameter.

property requires_grad

Return whether the parameter requires gradient.

set_data(data, slice_shape=False)[source]

Set the data of the current Parameter.

Parameters
  • data (Union[Tensor, MetaTensor, int, float]) – New data.

  • slice_shape (bool) – If slice_shape is set to True, the shape is not checked for consistency. Default: False.

Returns

Parameter, the parameter after the data is set.
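
A minimal usage sketch (not part of the upstream docstring):

>>> import numpy as np
>>> from tinyms import Tensor, Parameter
>>> w = Parameter(Tensor(np.zeros((1, 2), np.float32)), name="w")
>>> w.set_data(Tensor(np.ones((1, 2), np.float32)))
Parameter (name=w)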

set_param_ps(init_in_server=False)[source]

Set whether the trainable parameter is updated by the parameter server and whether the trainable parameter is initialized on the server.

Note

It only works when a running task is in the parameter server mode.

Parameters

init_in_server (bool) – Whether the trainable parameter updated by the parameter server is initialized on the server. Default: False.

property shape

Get the MetaTensor’s shape.

property sliced

Get slice status of the parameter.

property unique

Whether the parameter is already unique or not.

class tinyms.ParameterTuple[source]

Class for storing a tuple of parameters.

Note

It is used to store the parameters of the network into the parameter tuple collection.

clone(prefix, init='same')[source]

Clone the parameters in the tuple.

Parameters
  • prefix (str) – Namespace of the parameters.

  • init (str) – Initialize the shape of the parameters. Default: 'same'.

Returns

Tuple, the new Parameter tuple.
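
A minimal usage sketch (not part of the upstream docstring; the prefix "ema" is just an illustrative namespace):

>>> import numpy as np
>>> from tinyms import Tensor, Parameter, ParameterTuple
>>> params = ParameterTuple((Parameter(Tensor(np.ones((1, 2), np.float32)), name="w1"),
...                          Parameter(Tensor(np.ones((2, 2), np.float32)), name="w2")))
>>> cloned = params.clone(prefix="ema")   # each cloned parameter name gets the "ema" prefix
>>> len(cloned)
2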

count()

Return number of occurrences of value.

index()

Return first index of value.

Raises ValueError if the value is not present.

tinyms.set_seed(seed)[source]

Set the global random seed.

Note

The global seed is used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.

If the global seed is not set, these packages will use their own default seed independently: numpy.random and mindspore.common.Initializer will choose a random seed, while mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution will use zero.

A seed set by numpy.random.seed() is only used by numpy.random, while a seed set by this API will also be used by numpy.random, so setting all seeds through this API is recommended.

Parameters

seed (int) – The seed to be set.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, Parameter, set_seed
>>> from mindspore.common.initializer import initializer
>>> from mindspore.ops import composite as C
>>>
>>> # Note: (1) Please make sure the code is running in PYNATIVE MODE;
>>> # (2) Because Composite-level ops need parameters to be Tensors, for below examples,
>>> # when using C.uniform operator, minval and maxval are initialised as:
>>> minval = Tensor(1.0, mstype.float32)
>>> maxval = Tensor(2.0, mstype.float32)
>>>
>>> # 1. If global seed is not set, numpy.random and initializer will choose a random seed:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get different results:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
>>>
>>> # 2. If global seed is set, numpy.random and initializer will use it:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>>
>>> # 3. If neither global seed nor op seed is set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will choose a random seed:
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get different results:
>>> c1 = C.uniform((1, 4), minval, maxval) # C3
>>> c2 = C.uniform((1, 4), minval, maxval) # C4
>>>
>>> # 4. If global seed is set, but op seed is not set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # default op seed. Each call will change the default op seed, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>>
>>> # 5. If both global seed and op seed are set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # op seed counter. Each call will change the op seed counter, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 6. If op seed is set but global seed is not set, 0 will be used as global seed. Then
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution act as in
>>> # condition 5.
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 7. Recall set_seed() in the program will reset numpy seed and op seed counter of
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> set_seed(1234)
>>> np_2 = np.random.normal(0, 1, [1]).astype(np.float32) # still get A1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # still get C1
tinyms.get_seed()[source]

Get global random seed.
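
A minimal usage sketch (not part of the upstream docstring):

>>> import tinyms as ts
>>> ts.set_seed(1234)
>>> ts.get_seed()
1234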

tinyms.arange(*args, **kwargs)[source]

Returns evenly spaced values within a given interval.

Returns num evenly spaced samples, calculated over the interval [start, stop]. The endpoint of the interval can optionally be excluded. The current implementation is a direct wrapper on top of numpy.arange, except that the default dtypes are float32 and int32, compared to float64 and int64 for the numpy implementation.

Parameters
  • start (Union[int, float]) – Start of interval. The interval includes this value. When stop is provided as a positional argument, start must be given; when stop is given as a keyword argument, start is optional and defaults to 0. Please see the additional examples below.

  • stop (Union[int, float], optional) – End of interval. The interval does not include this value, except in some cases where step is not an integer and floating point round-off affects the length of out.

  • step (Union[int, float], optional) – Spacing between values. For any output out, this is the distance between two adjacent values, out[i+1] - out[i]. The default step size is 1. If step is specified as a positional argument, start must also be given.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. If dtype is None, the data type of the new tensor will be inferred from start, stop and step. Default is None.

Returns

Tensor, an array of evenly spaced values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.arange(0, 5, 1))
[0 1 2 3 4]
>>> print(np.arange(3))
[0 1 2]
>>> print(np.arange(start=0, stop=3))
[0 1 2]
>>> print(np.arange(0, stop=3, step=0.5))
[0.  0.5 1.  1.5 2.  2.5]
>>> print(np.arange(stop=3)) # This will lead to TypeError
tinyms.array(obj, dtype=None, copy=True, ndmin=0)[source]

Creates a tensor.

This function creates tensors from an array-like object.

Parameters
  • obj (Union[int, float, bool, list, tuple, numpy.ndarray]) – Input data, in any form that can be converted to a tensor. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and numpy.ndarray.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or int32. If dtype is None, the data type of the new tensor will be inferred from obj. Default is None.

  • copy (bool) – If true, then the object is copied. Otherwise, a copy will only be made if necessary. Default: True.

  • ndmin (int) – Specifies the minimum number of dimensions that the resulting tensor should have. Ones will be pre-pended to the shape as needed to meet this requirement. Default: 0.

Returns

Tensor, generated tensor with the specified dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.array([1,2,3]))
[1 2 3]
tinyms.asarray(a, dtype=None)[source]

Converts the input to tensor.

This function converts tensors from an array-like object.

Parameters
  • a (Union[int, float, bool, list, tuple, numpy.ndarray]) – Input data, in any form that can be converted to a tensor. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or int32. If dtype is None, the data type of the new tensor will be inferred from a. Default is None.

Returns

Tensor, generated tensor with the specified dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.asarray([1,2,3]))
[1 2 3]
tinyms.asfarray(a, dtype=mindspore.float32)[source]

Similar to asarray, converts the input to a float tensor.

If a non-float dtype is defined, this function will return a float32 tensor instead.

Parameters
  • a (Union[int, float, bool, list, tuple, numpy.ndarray]) – Input data, in any form that can be converted to a tensor. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and numpy.ndarray.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

Tensor, generated tensor with the specified float dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.asfarray([1,2,3]))
[1. 2. 3.]
tinyms.concatenate(arrays, axis=0)[source]

Joins a sequence of tensors along an existing axis.

Parameters
  • arrays (Union[Tensor, tuple(Tensor), list(Tensor)]) – A tensor or a list of tensors to be concatenated.

  • axis (int, optional) – The axis along which the tensors will be joined. If axis is None, tensors are flattened before use. Default is 0.

Returns

Tensor, a tensor concatenated from a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.ones((1,2,3))
>>> x2 = np.ones((1,2,1))
>>> x = np.concatenate((x1, x2), axis=-1)
>>> print(x.shape)
(1, 2, 4)
tinyms.copy(a)

Returns a tensor copy of the given object.

Parameters

a (Tensor) – Input tensor.

Returns

Tensor, has the same data as a.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,2))
>>> print(np.copy(x))
[[1. 1.]
 [1. 1.]]
tinyms.expand_dims(a, axis)[source]

Expands the shape of a tensor.

Inserts a new axis that will appear at the axis position in the expanded tensor shape.

Parameters
  • a (Tensor) – Input tensor array.

  • axis (Union[int, list(int), tuple(int)]) – Position in the expanded axes where the new axis is placed.

Returns

Tensor, view of a tensor with the number of dimensions increased.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,2))
>>> x = np.expand_dims(x,0)
>>> print(x.shape)
(1, 2, 2)
tinyms.eye(N, M=None, k=0, dtype=mindspore.float32)[source]

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

Parameters
  • N (int) – Number of rows in the output, must be larger than 0.

  • M (int, optional) – Number of columns in the output. If None, defaults to N; if defined, must be larger than 0. Default is None.

  • k (int, optional) – Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. Default is 0.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

A tensor of shape (N, M). A tensor where all elements are equal to zero, except for the k-th diagonal, whose values are equal to one.

Return type

result (Tensor)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.eye(2, 2))
[[1. 0.]
 [0. 1.]]
tinyms.identity(n, dtype=mindspore.float32)[source]

Returns the identity tensor.

Parameters
  • n (int) – Number of rows and columns in the output, must be larger than 0.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

A tensor of shape (n, n). A tensor where all elements are equal to zero, except for the diagonal, whose values are equal to one.

Return type

result (Tensor)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.identity(2))
[[1. 0.]
 [0. 1.]]
tinyms.inner(a, b)[source]

Inner product of two tensors.

Ordinary inner product of vectors for 1-D tensors (without complex conjugation), in higher dimensions a sum product over the last axes.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are mstype.float16 and mstype.float32. On CPU, the supported dtype is mstype.float32.

Parameters
  • a (Tensor) – Input tensor. If a and b are nonscalar, their last dimensions must match.

  • b (Tensor) – Input tensor. If a and b are nonscalar, their last dimensions must match.

Returns

Tensor or scalar, out.shape = a.shape[:-1] + b.shape[:-1].

Raises

ValueError – if x1.shape[-1] != x2.shape[-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((5, 3))
>>> b = np.ones((2, 7, 3))
>>> output = np.inner(a, b)
>>> print(output)
[[[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]

 [[3. 3. 3. 3. 3. 3. 3.]
  [3. 3. 3. 3. 3. 3. 3.]]]

tinyms.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0)[source]

Returns evenly spaced values within a given interval.

The current implementation is a direct wrapper on top of numpy.linspace, except that the default dtype is float32, compared to float64 for numpy.

Parameters
  • start (Union[int, list(int), tuple(int), tensor]) – The starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • retstep (bool, optional) – If True, return (samples, step), where step is the spacing between samples.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. If dtype is None, infer the data type from other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns

samples (Tensor): There are num equally spaced samples in the closed interval [start, stop] or the half-open interval [start, stop) (depending on whether endpoint is True or False).

step (float, optional): Only returned if retstep is True. Size of spacing between samples.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.linspace(0, 5, 6))
[0. 1. 2. 3. 4. 5.]
tinyms.logspace(start, stop, num=50, endpoint=True, base=10.0, dtype=None, axis=0)[source]

Returns numbers spaced evenly on a log scale.

In linear space, the sequence starts at base ** start (base to the power of start) and ends with base ** stop (see endpoint below). The current implementation is a direct wrapper on top of numpy.logspace, except that the default dtype is float32, compared to float64 for numpy.

Parameters
  • start (Union[int, list(int), tuple(int), tensor]) – The starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • base (Union[int, float], optional) – The base of the log space. The step size between the elements in ln(samples) / ln(base) (or log_base(samples)) is uniform. Default is 10.0.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. If dtype is None, infer the data type from other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop is array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns

samples (Tensor): num samples, equally spaced on a log scale.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.logspace(0, 5, 6, base=2.0))
[ 1.  2.  4.  8. 16. 32.]
tinyms.mean(a, axis=None, keepdims=False)[source]

Computes the arithmetic mean along the specified axis.

Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype and out are not supported. On GPU, the supported dtypes are mstype.float16 and mstype.float32. On CPU, the supported dtypes are mstype.float16 and mstype.float32.

Parameters
  • a (Tensor) – Input tensor containing numbers whose mean is desired. If a is not an array, a conversion is attempted.

  • axis (None or int or tuple of ints, optional) – Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. If this is a tuple of ints, a mean is performed over multiple axes.

  • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.

Returns

Tensor or scalar, an array containing the mean values.

Raises

ValueError – if axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(6, dtype='float32')
>>> output = np.mean(a, 0)
>>> print(output)
2.5
tinyms.ones(shape, dtype=mindspore.float32)[source]

Returns a new tensor of given shape and type, filled with ones.

Parameters
  • shape (Union[int, tuple, list]) – The shape of the new tensor.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

Tensor, with the designated shape and dtype, filled with ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.ones((2,2)))
[[1. 1.]
 [1. 1.]]
tinyms.ravel(x)[source]

Returns a contiguous flattened tensor.

A 1-D tensor, containing the elements of the input, is returned.

Parameters

x (Tensor) – A tensor to be flattened.

Returns

Flattened tensor, has the same data type as the original tensor x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.ravel(x)
>>> print(output.shape)
(24,)
tinyms.reshape(x, new_shape)[source]

Reshapes a tensor without changing its data.

Parameters
  • x (Tensor) – A tensor to be reshaped.

  • new_shape (Union[int, list(int), tuple(int)]) – The new shape should be compatible with the original shape. If the tuple has only one element, the result will be a 1-D tensor of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the tensor and remaining dimensions.

Returns

Reshaped Tensor. Has the same data type as the original tensor x.

Raises
  • TypeError – If new_shape is not an integer, list or tuple.

  • ValueError – If new_shape is not compatible with the original shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> output = np.reshape(x, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
>>> output = np.reshape(x, (3, -1))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
>>> output = np.reshape(x, (6, ))
>>> print(output)
[-0.1  0.3  3.6  0.4  0.5 -3.2]
tinyms.rollaxis(x, axis, start=0)[source]

Rolls the specified axis backwards, until it lies in the given position. The positions of the other axes do not change relative to one another.

Parameters
  • x (Tensor) – A Tensor to be transposed.

  • axis (int) – The axis to be rolled.

  • start (int) –

    • When start >= 0:
      • When start <= axis: the axis is rolled back until it lies in this position (start).

      • When start > axis: the axis is rolled until it lies before this position (start).

    • When start < 0: the start will be normalized as follows:

      start            Normalized start
      -(x.ndim+1)      raise ValueError
      -x.ndim          0
      ...              ...
      -1               x.ndim-1
      0                0
      ...              ...
      x.ndim           x.ndim
      x.ndim+1         raise ValueError

Returns

Transposed Tensor. Has the same data type as the original tensor x.

Supported Platforms:

Ascend GPU CPU

Raises
  • TypeError – If axis or start is not an integer.

  • ValueError – If axis is not in the range from -ndim to ndim-1 or start is not in the range from -ndim to ndim.

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.rollaxis(x, 0, 2)
>>> print(output.shape)
(3, 2, 4)
tinyms.squeeze(a, axis=None)[source]

Removes single-dimensional entries from the shape of a tensor.

This is a temporary solution to support CPU backend. Will be changed once CPU backend supports P.Squeeze().

Parameters
  • a (Tensor) – Input tensor array.

  • axis (Union[None, int, list(int), tuple(int)]) – Default is None.

Returns

Tensor, with all or a subset of the dimensions of length 1 removed.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((1,2,2,1))
>>> x = np.squeeze(x)
>>> print(x.shape)
(2, 2)
tinyms.swapaxes(x, axis1, axis2)[source]

Interchanges two axes of a tensor.

Parameters
  • x (Tensor) – A tensor to be transposed.

  • axis1 (int) – First axis.

  • axis2 (int) – Second axis.

Returns

Transposed tensor, has the same data type as the original tensor x.

Raises
  • TypeError – If axis1 or axis2 is not an integer.

  • ValueError – If axis1 or axis2 is not in the range from -ndim to ndim-1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.swapaxes(x, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.transpose(a, axes=None)[source]

Reverses or permutes the axes of a tensor; returns the modified tensor.

Parameters
  • a (Tensor) – A tensor to be transposed.

  • axes (Union[None, tuple, list]) – The axes order. If axes is None, transpose the entire tensor. Default is None.

Returns

Tensor, the transposed tensor array.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((1,2,3))
>>> x = np.transpose(x)
>>> print(x.shape)
(3, 2, 1)
tinyms.zeros(shape, dtype=mindspore.float32)[source]

Returns a new tensor of given shape and type, filled with zeros.

Parameters
  • shape (Union[int, tuple, list]) – The shape of the new tensor.

  • dtype (Union[mstype.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32, or float32. Default is mstype.float32.

Returns

Tensor, with the designated shape and dtype, filled with zeros.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.zeros((2,2)))
[[0. 0.]
[0. 0.]]