Welcome to TinyMS’s documentation!

TinyMS is an easy-to-use deep learning development toolkit based on MindSpore, designed to provide quick-start guidelines for machine learning beginners.

What is TinyMS

TinyMS Project

Introduction

  • TinyMS is an open source deep learning development kit written in Python. It runs on top of deep learning frameworks like MindSpore, and provides high-level APIs that cover the entire lifecycle and workflow of AI development, ranging from data preparation to model deployment.

  • TinyMS is composed of several modules including data, model and serving. It provides transform-based data processing operators for different scenarios and reuses MindSpore datasets like cifar-10.

  • TinyMS is mainly designed to offer a crash course to user groups like deep learning beginners, researchers who conduct studies related to deep learning, and AI application developers.

  • Combined with video tutorials (available in Chinese on the Bilibili site, with an English version coming soon to the YouTube channel), the TinyMS project offers the best deep learning crash course and entry-level development experience to date.

TinyMS vs Keras

To quote from Keras’ website:

Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result as fast as possible is key to doing good research.

Keras is known for its completeness. Keras is composed of modules such as dataset, layer, model and backend. It provides commonly used datasets and data processing functions for different scenarios. The layer module provides all-encompassing functionalities such as convolution, embedding and pooling, while the backend module supports multiple engines (TensorFlow, CNTK and Theano). The model module provides functionalities such as model types (sequential or functional), network construction (input, output, pooling, etc.), model compilation, model training, model verification and inference.

In comparison, TinyMS designs a set of more abstract high-level APIs and is therefore less complex than Keras. For example, one can complete dataset preprocessing with just one line of code in TinyMS, as shown in the sketch below. TinyMS also provides several individual tools and a quick model deployment module, which Keras has not yet offered.
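
A minimal sketch of that one-liner, reusing the mnist_transform helper that appears in the quickstart tutorial below (the dataset path is just an illustrative assumption):

from tinyms.data import MnistDataset
from tinyms.vision import mnist_transform

train_ds = MnistDataset('/root/mnist/train', shuffle=True)
# dataset preprocessing in one line
train_ds = mnist_transform.apply_ds(train_ds)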

TinyMS vs Fastai

To quote from fastai’s README:

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches.

Thanks to PyTorch’s flexibility, Fastai provides an out-of-the-box development experience for model types like vision, text, tabular and collab. However, unlike Keras’s multi-backend support, Fastai’s backend is tightly coupled with PyTorch and its versioning.

Fastai is known for its “petiteness”, offering a lightweight and easy-to-understand structure. Fastai consists of three major modules: data, models and learner. The data module provides transform-based data preprocessing operations, which is convenient for developers. The model module provides many predefined networks like unet for quick model construction. The learner module defines the relationship between data and model with a set of callback functions to help developers quickly grasp the most common deep learning architectures. It also provides a rich set of tools such as data downloading, decompression, figure verification and file processing.

In comparison, while sharing similar design concepts for high-level APIs, TinyMS offers predefined MindSpore datasets, which help developers enormously with dataset processing (see the sketch below), as well as a quick model deployment module, both of which Fastai has not yet provided.
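
A minimal sketch of those predefined datasets, mirroring the calls used in the tutorials below (the /root download location is just the default used there):

from tinyms.data import MnistDataset, download_dataset

# fetch the predefined MNIST dataset to /root/mnist
download_dataset('mnist', '/root')
# wrap it with the ready-made MindSpore dataset class
train_ds = MnistDataset('/root/mnist/train', shuffle=True)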

TinyMS Community

Other than the TinyMS project, the community at large also includes many other projects and activities:

  • Specification project: an attempt to standardize the format of model training scripts. Due to the abstract nature of TinyMS APIs, we found it necessary to have a standard or guideline for ModelZoo.

  • https://tinyms-ai.github.io: a simple website built in the open, based on the GitHub Pages mechanism.

  • RustedAI Team: only visible to org members at the moment, RustedAI is an initiative through which TinyMS tries to drive more adoption of Rust-lang in the field of deep learning to meet the goal of a low runtime footprint.

  • Community Activities: We will organize TinyMS model competitions and many other activities, including meetups, webinars, etc.

TinyMS and Developers

As a new open source project, TinyMS stands on the shoulders of giants like Keras and Fastai. Although we hope to achieve many innovations in our design, it will still take a vibrant community and ecosystem for TinyMS to reach the depth and breadth of its predecessors and better serve academia, industry and developers in general.

Install TinyMS

Installation For Beginners

Pypi

For users with a clean environment, it is recommended to use pypi to install TinyMS, given that the following requirements are met. For those without one, Anaconda is a good choice for setting up the Python environment.

Prerequisites

  • OS: Ubuntu 18.04 or Windows 10

  • Python: 3.7.5

For users based in China, it is recommended to run the following commands for faster downloads:

mkdir -pv /root/.pip \
&& echo "[global]" > /root/.pip/pip.conf \
&& echo "trusted-host=mirrors.aliyun.com" >> /root/.pip/pip.conf \
&& echo "index-url=http://mirrors.aliyun.com/pypi/simple/" >> /root/.pip/pip.conf
pip install tinyms==0.3.1

Note: There may be some problems during the installation process. The following possible situations are for reference only. If you encounter other problems during installation, we welcome you to submit your issues and pull requests in our community, and we will reply to you as soon as possible.

  1. Error 1: If you use a mirror source to execute the installation command, it may report Could not find a version that satisfies the requirement tinyms==0.3.1

    Solution:

    • You can try the default official source by directly appending -i https://pypi.python.org/simple/ to the end of the command. The download speed of the default official source may be slower, please be patient :smile:

  2. Error 2: If you are a Windows user, please make sure that Microsoft VC++ 14.0 is installed. If not, ERROR: Microsoft Visual C++ 14.0 or greater is required. Get it with “Microsoft C++ Build Tools” may be reported during the installation process: https://visualstudio.microsoft.com/visual-cpp-build-tools/

    Solution:

    • TinyMS depends on the Python 3.7.5 environment, and Python 3 is compiled with VC++ 14.0. According to the error prompt, download Microsoft C++ Build Tools from the provided link. Note that during the installation process, the two components Windows 10 SDK and C++ CMake Tools for Windows need to be checked under the Desktop development with C++ module. For installation details, please refer to Visual Studio Build Tool Installation.

Docker

For those who don’t want to affect their local development environment, or who have difficulty meeting the prerequisites, installing with Docker is recommended:

  • docker: v18.06.1-ce

If you want to try the tutorials that are written in .ipynb files, please pull the jupyter version of TinyMS, in which jupyter components are installed by default.

If you want to experience the image inference service in a visual WEB UI, please pull the nginx version of TinyMS, in which nginx components are installed by default.

  • Default version

docker pull tinyms/tinyms:0.3.1
docker run -it tinyms/tinyms:0.3.1
  • Jupyter version

If you want to try jupyter, run the following commands:

docker pull tinyms/tinyms:0.3.1-jupyter
docker run -it --net=host tinyms/tinyms:0.3.1-jupyter

Open a browser on the local machine, type in

<Your_external_IP_address>:8888

Example: 188.8.8.88:8888. The default password is tinyms; then you can log in to jupyter.

  • Nginx version

If you want to experience the image inference service in a visual WEB UI, run the following commands:

docker pull tinyms/tinyms:0.3.1-nginx
docker run -itd --name=tinyms-nginx -p 80:80 tinyms/tinyms:0.3.1-nginx /bin/bash

docker exec -it tinyms-nginx /bin/bash
entrypoint.sh <Your_host_public_IP_address_not_docker_IP_address>

Open a browser on the local machine, type in

<Your_host_public_IP_address_not_docker_IP_address>:80

Installation For Experienced Developers

For developers who want to develop based on TinyMS, install from source:

sudo apt-get install -y libssl-dev
git clone https://github.com/tinyms-ai/tinyms.git
cd tinyms
pip install -r requirements.txt
python setup.py install

Validate installation

Create a python, jupyter or nginx kernel and input the following code:

import tinyms as ts
from tinyms.primitives import tensor_add

x = ts.ones([2, 3])
y = ts.ones([2, 3])
print(tensor_add(x, y))

If the output is similar to the following, then the installation is valid:

[[2. 2. 2.]
 [2. 2. 2.]]

Notes

When using TinyMS 0.3.1, the following error may be reported:

Error Details:

[ERROR] ME(24148:23792,MainProcess):2022-01-25-21:59:25.562.448 [mindspore\_extends\parse\parser.py:565] When eval 'P.tensor_add(identity, x)' by using Fallback feature, an error occurred: name 'identity' is not defined. You can try to turn off the Fallback feature by 'export MS_DEV_ENABLE_FALLBACK=0'.

Solution:

According to the error prompt, we can turn off the Fallback feature with the following command.

For general users, execute the following commands in the command line tool:

export MS_DEV_ENABLE_FALLBACK=0

For users using jupyter, execute the following command in the cell:

!export MS_DEV_ENABLE_FALLBACK=0
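
Note that a ! command in jupyter runs in a separate subshell, so the exported variable may not persist for the kernel process itself. A minimal alternative sketch is to set the variable from Python, before importing tinyms:

import os
os.environ['MS_DEV_ENABLE_FALLBACK'] = '0'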

If you encounter other errors while using TinyMS 0.3.1 and there is still a problem after you try to solve them, we welcome you to submit your issues and pull requests in our community, and we will reply to you as soon as possible.

Implementing an Image Classification App in One Minute

In this tutorial, constructing a LeNet5 model, downloading the dataset, training the model, starting the server and making predictions with the model using the TinyMS 0.3.1 API will be demonstrated.

Prerequisite

  • Ubuntu: 18.04

  • docker: v18.06.1-ce

  • Jupyter: Using tinyms 0.3.1-Jupyter; you can refer to Quick Install tinyms to deploy the environment.

Introduction

TinyMS is a high-level API designed for beginners in deep learning. It minimizes the number of actions users need to take to construct, train, evaluate and serve a model. TinyMS also provides tutorials and documentation for developers.

This tutorial consists of six parts: constructing the model, downloading the dataset, training, defining the servable json, starting the server and making predictions, in which the server will be run in a subprocess.

[1]:
import os
import json
from PIL import Image

from tinyms import context
from tinyms.data import MnistDataset, download_dataset
from tinyms.vision import mnist_transform, ImageViewer
from tinyms.model import Model, lenet5
import tinyms.optimizers as opt
from tinyms.serving import Server, Client
from tinyms.metrics import Accuracy
from tinyms.losses import SoftmaxCrossEntropyWithLogits
from tinyms.callbacks import ModelCheckpoint, CheckpointConfig, LossMonitor
[WARNING] ME(12716:140477914756928,MainProcess):2021-03-19-15:58:38.621.652 [mindspore/ops/operations/array_ops.py:2302] WARN_DEPRECATED: The usage of Pack is deprecated. Please use Stack.
WARNING: 'ControlDepend' is deprecated from version 1.1 and will be removed in a future version, use 'Depend' instead.

1. Construct the model

TinyMS encapsulates the init and construct of the LeNet5 model, so the code needed to construct the LeNet5 model is reduced to a few lines:

[2]:
# build the network
net = lenet5(class_num=10)
model = Model(net)

2. Download dataset

The MNIST dataset will be downloaded if the mnist folder doesn’t exist in the /home/jovyan folder. If the mnist folder already exists, this step will not be performed.

[3]:
# download the dataset
mnist_path = '/home/jovyan/mnist'
if not os.path.exists(mnist_path):
    download_dataset('mnist', '/home/jovyan')
    print('************Download complete*************')
else:
    print('************Dataset already exists.**************')
************Dataset already exists.**************

3. Train the model & evaluation

The dataset for both training and evaluation will be defined here, and the parameters for training are also set in this block. A trained ckpt file will be saved to the /home/jovyan/tinyms/serving/lenet5 folder for later use; meanwhile the evaluation will be performed and the Accuracy can be checked.

[ ]:
# check lenet folder exists or not
ckpt_folder = '/home/jovyan/tinyms/serving/lenet5'
ckpt_path = '/home/jovyan/tinyms/serving/lenet5/lenet5.ckpt'
if not os.path.exists(ckpt_folder):
    !mkdir -p  /home/jovyan/tinyms/serving/lenet5
else:
    print('lenet5 ckpt folder already exists')

# set environment parameters
device_target = "CPU"
context.set_context(mode=context.GRAPH_MODE, device_target=device_target)
dataset_sink_mode = False

# define the training and evaluation dataset
train_dataset = MnistDataset(os.path.join(mnist_path, "train"), shuffle=True)
train_dataset = mnist_transform.apply_ds(train_dataset)
eval_dataset = MnistDataset(os.path.join(mnist_path, "test"), shuffle=True)
eval_dataset = mnist_transform.apply_ds(eval_dataset)

# parameters for training
lr = 0.01
momentum = 0.9
epoch_size = 1
batch_size = 32

# define the loss function
net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')

# define the optimizer
net_opt = opt.Momentum(net.trainable_params(), lr, momentum)
net_metrics={"Accuracy": Accuracy()}
model.compile(loss_fn=net_loss, optimizer=net_opt, metrics=net_metrics)

print('************************Start training*************************')
ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet", config=CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10))
model.train(epoch_size, train_dataset, callbacks=[ckpoint_cb, LossMonitor()],dataset_sink_mode=dataset_sink_mode)
print('************************Finished training*************************')
model.save_checkpoint(ckpt_path)


model.load_checkpoint(ckpt_path)
print('************************Start evaluation*************************')
acc = model.eval(eval_dataset, dataset_sink_mode=dataset_sink_mode)
print("============== Accuracy:{} ==============".format(acc))

4. Define servable.json

Define the lenet5 servable json file: the servable json file defines the servable name, model name, model format and number of classes for subsequent inference.

[5]:
servable_json = [{'name': 'lenet5',
                  'description': 'This servable hosts a lenet5 model predicting numbers',
                  'model': {
                      "name": "lenet5",
                      "format": "ckpt",
                      "class_num": 10}}]
os.chdir("/home/jovyan/tinyms/serving")
json_data = json.dumps(servable_json, indent=4)

with open('servable.json', 'w') as json_file:
    json_file.write(json_data)
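
For reference, the servable.json written by the cell above contains exactly:

[
    {
        "name": "lenet5",
        "description": "This servable hosts a lenet5 model predicting numbers",
        "model": {
            "name": "lenet5",
            "format": "ckpt",
            "class_num": 10
        }
    }
]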

5. Start server

5.1 Introduction

TinyMS Serving has a client/server (C/S) architecture. TinyMS uses Flask, a micro web framework written in Python, as the C/S communication tool. In order to serve a model, the user must start the server first. If successfully started, the server runs in a subprocess, listens to POST requests sent by the client to 127.0.0.1 on port 5000, and handles the requests using the MindSpore backend, which constructs the model, runs the prediction and sends the result back to the client.

5.2 Start server

Run the following code block to start the server.

[ ]:
server = Server(serving_path="/home/jovyan/tinyms/serving/")
server.start_server()
 * Serving Flask app 'tinyms.serving.server.server' (lazy loading)

 * Environment: production

   WARNING: This is a development server. Do not use it in a production deployment.

   Use a production WSGI server instead.

 * Debug mode: off

The above prompt indicates that the server is started and running.

After the server is started, we need to go to the menu bar, click File => New Notebook to create a new jupyter file, and then continue to complete the operations of the client.

[ ]:
import os
from PIL import Image

from tinyms.vision import ImageViewer
from tinyms.serving import Server, Client

6. Make predictions

6.1 Upload the pic

A picture of a single-digit number is required as the input. The picture we use in this tutorial can be found HERE. Save the picture to the /home/jovyan folder and rename it to 7.png (or any other name you like).

Or run the following code to download the pic for this tutorial:

[7]:
if not os.path.exists('/home/jovyan/7.png'):
    !wget -P /home/jovyan https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/numbers/7.png
else:
    print('7.png already exists')
--2022-01-26 18:42:14--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/numbers/7.png
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 49.4.112.90, 121.36.121.44
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34970 (34K) [image/png]
Saving to: ‘/home/jovyan/7.png’

7.png               100%[===================>]  34.15K  --.-KB/s    in 0.07s

2022-01-26 18:42:14 (470 KB/s) - ‘/home/jovyan/7.png’ saved [34970/34970]

6.2 List servables

Use the list_servables function to check which model is being served right now.

[8]:
client=Client()
client.list_servables()
[8]:
[{'description': 'This servable hosts a lenet5 model predicting numbers',
  'model': {'class_num': 10, 'format': 'ckpt', 'name': 'lenet5'},
  'name': 'lenet5'}]

If the output description shows it is a lenet5 model, then we can continue to the next step and send our request.

6.3 Sending the request and getting the result

Run the predict function to send the request, selecting between TOP1_CLASS and TOP5_CLASS:

[9]:
# set the picture path and output strategy (you can choose between TOP1_CLASS and TOP5_CLASS)
image_path = "/home/jovyan/7.png"
strategy = "TOP1_CLASS"

# predict(image_path, servable_name, dataset='mnist', strategy='TOP1_CLASS')
img_viewer = ImageViewer(Image.open(image_path), image_path)
img_viewer.show()
print(client.predict(image_path,'lenet5', 'mnist', strategy))
TOP1: 7, score: 0.9997

If the user sees output similar to this:

TOP1: 7, score: 0.99934917688369750977

that means the prediction is successfully performed.

Shutdown server

[10]:
server.shutdown()
[10]:
'Server shutting down...'

TinyMS Inference Visualization Experience

Combined with the OpenCV image vision library, TinyMS v0.3.1 focuses on visualization features. Through simple and intuitive image visualization, it helps users understand the effect of model inference more quickly.

For users who do not want to write code, TinyMS provides a WEB UI visual interface. Users only need to upload the image to be inferred on the browser page for an easy experience. Currently, the LeNet5, CycleGan and SSD300 models are supported.

WEB UI inference visualization

Users need to deploy the visual server first; for details please see TinyMS Nginx Version Installation. After the server is successfully deployed, the home page and inference result page (taking the CycleGan model as an example) presented in the browser are as follows:

[Image: Index Page]

[Image: Predict Page]

For users who want to run code, TinyMS provides a model inference visualization module, which needs only 5 steps of code for a quick experience. Currently, only the SSD300 object detection model is supported.

Model inference visualization module application

If users are experiencing the model inference visualization module application for the first time, they can download the code from the TinyMS Official Repo and then perform the following operations:

  • Static image detection

  1. Environmental preparation

    • An operating system with a visual desktop, such as Windows x64 or Ubuntu 18.04

  2. Experience the module application

    # Download the TinyMS project
    git clone https://github.com/tinyms-ai/tinyms.git
    cd tinyms/tests/st/app/object_detection/
    # Run static image detection
    python opencv_image_app.py
    

    The image to be detected and the image after inference are shown as follows:

    [Image: Input Image]

    [Image: Inference Image]

  • Real-time dynamic detection of video images collected by the computer camera

  1. Environmental preparation:

    • An operating system with a visual desktop, such as Windows x64 or Ubuntu 18.04

    • Make sure the operating system can access the camera normally

      Note:

      Generally speaking, for a host operating system such as Windows x64 or Ubuntu 18.04, the camera can be accessed normally, but for an operating system inside a virtual machine, please make sure that the relevant virtual machine services are enabled and the camera driver is connected. In the following, we take the VMware Workstation virtual machine under Windows as an example:

      1. First of all, enter the command ls /dev/v* in the terminal to check whether there is a /dev/video0 driver. If there is, it means the camera is accessible, so please ignore the following operations; if not, perform the following operations.

      2. Secondly, enable the relevant virtual machine services. Enable the service VMware USB Arbitration Service in the Windows host: press Win+R, enter services.msc, find the corresponding service and enable it. After it is turned on, the virtual machine needs to be restarted.

      3. Then, connect the camera driver. On the menu bar of VMware Workstation, click Virtual Machine (M) => Removable Device => Camera Name => Host Connection, and click Virtual Machine (M) => Settings (S) => USB, then select USB 3.0.

      4. Finally, you can use cheese to test whether the camera can be accessed normally.

  2. Experience the module application

    For the different choices of operating systems and testing methods, we provide the following specific environments. Please verify in the corresponding environment, and note that each of the following environments must satisfy the two conditions of the environmental preparation, which is very important.

    • If your host operating system is Windows

      • If you test on the host

        Environmental Requirements:

        • Operating system: Windows x64

        • Environmental dependency: Git + Python 3.7.5 + TinyMS 0.3.1 + Microsoft Visual C++ 14.0 or greater

        • Command line tool: Git Bash

          Note: For details about the environment dependency on VC++ 14.0, please refer to the notes under Pypi install TinyMS

        Execute the following commands after the environment requirements are satisfied:

        # 1. Download the TinyMS project
        git clone https://github.com/tinyms-ai/tinyms.git
        cd tinyms/tests/st/app/object_detection/
        # 2.Run dynamic video image detection collected by camera
        python opencv_camera_app.py
        
      • If you test on a virtual machine

        Let’s take the VMware Workstation virtual machine under Windows x64 as an example. Please refer to the notes in the environment preparation for connecting the VM to the camera:

        Environmental Requirements:

        • Operating system: Ubuntu 18.04 LTS Desktop

        • Environmental dependency: Docker 18.06.1-ce

        • Command line tool: Terminal

        Execute the following commands after the environment requirements are satisfied:

        # 1.Install xServer on the host and set permissions
        apt install x11-xserver-utils
        # 2.Allow all users to access the display interface
        xhost +
        # 3.Run container
        docker run -it --rm --device=/dev/video0 -e DISPLAY=unix$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix tinyms/tinyms:0.3.1 /bin/bash
        # 4.Download the TinyMS project in the container
        git clone https://github.com/tinyms-ai/tinyms.git
        cd tinyms/tests/st/app/object_detection/
        # 5.Run dynamic video image detection collected by camera
        python opencv_camera_app.py
        
    • If your host operating system is Ubuntu

      • If you test on the host

        Environmental Requirements:

        • Operating system: Ubuntu 18.04 LTS Desktop

        • Environmental dependency: Git + Python 3.7.5 + TinyMS 0.3.1

        • Command line tool: Terminal

        Execute the following commands after the environment requirements are satisfied:

        # 1. Download the TinyMS project
        git clone https://github.com/tinyms-ai/tinyms.git
        cd tinyms/tests/st/app/object_detection/
        # 2.Run dynamic video image detection collected by camera
        python opencv_camera_app.py
        
      • If you test using Docker

        Environmental Requirements:

        • Operating system: Ubuntu 18.04 LTS Desktop

        • Environmental dependency: Docker 18.06.1-ce

        • Command line tool: Terminal

        Execute the following commands after the environment requirements are satisfied:

        # 1.Install xServer on the host and set permissions
        apt install x11-xserver-utils
        # 2.Allow all users to access the display interface
        xhost +
        # 3.Run container
        docker run -it --rm --device=/dev/video0 -e DISPLAY=unix$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix tinyms/tinyms:0.3.1 /bin/bash
        # 4.Download the TinyMS project in the container
        git clone https://github.com/tinyms-ai/tinyms.git
        cd tinyms/tests/st/app/object_detection/
        # 5.Run dynamic video image detection collected by camera
        python opencv_camera_app.py
        

    Currently, the document is still being improved :smile:. If your environment is not among the references above and you still have problems after trying, we sincerely invite you to submit your issues and pull requests in our community, and we will reply to you as soon as possible.

TinyMS ResNet50 Tutorial

In this tutorial, using the TinyMS API to train/serve a ResNet50 model will be demonstrated.

Prerequisite

  • Ubuntu: 18.04

  • Python: 3.7.x

  • Flask: 1.1.2

  • MindSpore: CPU-1.1.1

  • TinyMS: 0.1.0

  • numpy: 1.17.5

  • Pillow: 8.1.0

  • pip: 21.0.1

  • requests: 2.18.4

Introduction

TinyMS is a high-level API designed for beginners in deep learning. It minimizes the number of actions users need to take to construct, train, evaluate and serve a model. TinyMS also provides tutorials and documentation for developers.

This tutorial consists of six parts: constructing the model, downloading the dataset, training, defining the servable json, starting the server and making predictions, in which the server will be run in a subprocess.

[1]:
import os
import json

from PIL import Image
from tinyms import context
from tinyms.serving import start_server, predict, list_servables, shutdown, server_started
from tinyms.data import Cifar10Dataset, download_dataset, ImageFolderDataset
from tinyms.vision import cifar10_transform, ImageViewer, imagefolder_transform
from tinyms.model import Model, resnet50
from tinyms.callbacks import ModelCheckpoint, CheckpointConfig, LossMonitor
from tinyms.metrics import Accuracy
from tinyms.optimizers import Momentum
from tinyms.losses import SoftmaxCrossEntropyWithLogits
[WARNING] ME(12569:140321685239616,MainProcess):2021-03-19-15:21:33.633.399 [mindspore/ops/operations/array_ops.py:2302] WARN_DEPRECATED: The usage of Pack is deprecated. Please use Stack.
WARNING: 'ControlDepend' is deprecated from version 1.1 and will be removed in a future version, use 'Depend' instead.

1. Construct the model

TinyMS encapsulates the init and construct of the ResNet50 model, so the code needed to construct the model is reduced to a few lines:

[2]:
# build the network
net = resnet50(class_num=10)
model = Model(net)

2. Download dataset

Training the ResNet50 model with the cifar10 dataset will be demonstrated here, while we provide two pre-trained ckpt files for users to download: one trained with the cifar10 dataset and the other trained with the ImageNet2012 dataset.

[3]:
# download the cifar10 dataset
cifar10_path = '/root/cifar10/cifar-10-batches-bin'
if not os.path.exists(cifar10_path):
    download_dataset('cifar10', '/root')
    print('************Download complete*************')
else:
    print('************Dataset already exists.**************')
************** Downloading the Cifar10 dataset **************
[███████████████████████████████████████████████████████████████████████████████████████████████████ ] 99.81%************Download complete*************

3. Train the model & evaluation

The dataset for both training and evaluation will be defined here, and the parameters for training are also set in this block. A trained ckpt file will be saved to the /etc/tinyms/serving/resnet50_cifar10 folder for later use; meanwhile the evaluation will be performed and the Accuracy can be checked.

Notice: Since training ResNet50 on CPU is time consuming, we recommend skipping training and using the provided ckpt files to run.
[ ]:
# check ckpt folder exists or not
cifar10_ckpt_folder = '/etc/tinyms/serving/resnet50_cifar10'
cifar10_ckpt_path = '/etc/tinyms/serving/resnet50_cifar10/resnet50.ckpt'
if not os.path.exists(cifar10_ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/resnet50_cifar10
else:
    print('resnet50_cifar10 ckpt folder already exists')

epoch_size = 90 # default is 90
batch_size = 32

# set environment parameters
dataset_sink_mode = False
device_target = "CPU"
context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

# set dataset parameters
train_dataset = Cifar10Dataset(cifar10_path, num_parallel_workers=4, shuffle=True)
train_dataset = cifar10_transform.apply_ds(train_dataset, repeat_size=1, batch_size=32, is_training=True)
eval_dataset = Cifar10Dataset(cifar10_path, num_parallel_workers=4, shuffle=True)
eval_dataset = cifar10_transform.apply_ds(eval_dataset, repeat_size=1, batch_size=32, is_training=False)
step_size = train_dataset.get_dataset_size()

save_checkpoint_epochs = 5
ckpoint_cb = ModelCheckpoint(prefix="resnet_cifar10", config=CheckpointConfig(
            save_checkpoint_steps=save_checkpoint_epochs * train_dataset.get_dataset_size(),
            keep_checkpoint_max=10))

# define the loss function
net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")

# define the optimizer
net_opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
model.compile(loss_fn=net_loss, optimizer=net_opt, metrics={"Accuracy": Accuracy()})


print('************************Start training*************************')
model.train(epoch_size, train_dataset, callbacks=[ckpoint_cb, LossMonitor()],dataset_sink_mode=dataset_sink_mode)
model.save_checkpoint(cifar10_ckpt_path)
print('************************Finished training*************************')

model.load_checkpoint(cifar10_ckpt_path)
print('************************Start evaluation*************************')
acc = model.eval(eval_dataset, dataset_sink_mode=dataset_sink_mode)
print("============== Accuracy:{} ==============".format(acc))
Notice: If you skipped the training process, download the pretrained ckpt files and continue to serving

Click resnet_imagenet to download the resnet-imagenet ckpt file and click resnet_cifar to download the resnet-cifar ckpt file. Save the file to /etc/tinyms/serving/resnet50_<dataset_name>/resnet50.ckpt.

Or run the following code to download the resnet_imagenet and resnet_cifar ckpt files:

[4]:
# check whether the resnet50_imagenet2012 ckpt folder exists or not, and download resnet50_imagenet
imagenet2012_ckpt_folder = '/etc/tinyms/serving/resnet50_imagenet2012'
imagenet2012_ckpt_path = '/etc/tinyms/serving/resnet50_imagenet2012/resnet50.ckpt'
if not os.path.exists(imagenet2012_ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/resnet50_imagenet2012
    !wget -P /etc/tinyms/serving/resnet50_imagenet2012 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/imagenet2012/resnet50.ckpt
else:
    print('imagenet2012 ckpt folder already exists')
    if not os.path.exists(imagenet2012_ckpt_path):
        !wget -P /etc/tinyms/serving/resnet50_imagenet2012 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/imagenet2012/resnet50.ckpt
    else:
        print('imagenet2012 ckpt file already exists')


# check whether the resnet50_cifar10 ckpt folder exists or not
cifar10_ckpt_folder = '/etc/tinyms/serving/resnet50_cifar10'
cifar10_ckpt_path = '/etc/tinyms/serving/resnet50_cifar10/resnet50.ckpt'
if not os.path.exists(cifar10_ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/resnet50_cifar10
    !wget -P /etc/tinyms/serving/resnet50_cifar10 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cifar10/resnet50.ckpt
else:
    print('cifar10 ckpt folder already exists')
    if not os.path.exists(cifar10_ckpt_path):
        !wget -P /etc/tinyms/serving/resnet50_cifar10 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cifar10/resnet50.ckpt
    else:
        print('cifar10 ckpt file already exists')
imagenet2012 ckpt folder already exists
--2021-03-19 15:23:45--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/imagenet2012/resnet50.ckpt
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 121.36.121.44, 49.4.112.5, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 188521005 (180M) [binary/octet-stream]
Saving to: ‘/etc/tinyms/serving/resnet50_imagenet2012/resnet50.ckpt’

resnet50.ckpt       100%[===================>] 179.79M  36.7MB/s    in 5.9s

2021-03-19 15:23:52 (30.4 MB/s) - ‘/etc/tinyms/serving/resnet50_imagenet2012/resnet50.ckpt’ saved [188521005/188521005]

cifar10 ckpt folder already exists
--2021-03-19 15:23:52--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cifar10/resnet50.ckpt
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 121.36.121.44, 49.4.112.5, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 188462121 (180M) [binary/octet-stream]
Saving to: ‘/etc/tinyms/serving/resnet50_cifar10/resnet50.ckpt’

resnet50.ckpt       100%[===================>] 179.73M  35.7MB/s    in 5.6s

2021-03-19 15:23:58 (32.3 MB/s) - ‘/etc/tinyms/serving/resnet50_cifar10/resnet50.ckpt’ saved [188462121/188462121]

4. Define servable.json

Choose only one of the following two code blocks to define the servable json file which will be used later.

Run this code to define the servable json file for ResNet50_imagenet2012 model:

[5]:
servable_json = [{'name': 'resnet50_imagenet2012',
                  'description': 'This servable hosts a resnet50 model predicting mushrooms',
                  'model': {
                      "name": "resnet50",
                      "format": "ckpt",
                      "class_num": 9}}]
os.chdir("/etc/tinyms/serving")
json_data = json.dumps(servable_json, indent=4)

with open('servable.json', 'w') as json_file:
    json_file.write(json_data)

Or run this code to define the servable json file for ResNet50_cifar10 model:

[ ]:
servable_json = [{'name': 'resnet50_cifar10',
                  'description': 'This servable hosts a resnet50 model predicting 10 classes of objects',
                  'model': {
                      "name": "resnet50",
                      "format": "ckpt",
                      "class_num": 10}}]
os.chdir("/etc/tinyms/serving")
json_data = json.dumps(servable_json, indent=4)

with open('servable.json', 'w') as json_file:
    json_file.write(json_data)

5. Start server

5.1 Introduction

TinyMS Serving has a client/server (C/S) architecture, with a server and a client. TinyMS uses Flask, a micro web framework written in Python, as the C/S communication tool. In order to serve a model, the user must start the server first. If successfully started, the server runs in a subprocess, listens to POST requests sent by the client to 127.0.0.1 on port 5000, and handles the requests using the MindSpore backend, which constructs the model, runs the prediction and sends the result back to the client.

5.2 Start server

Run the following code block to start the server:

[6]:
start_server()
Server starts at host 127.0.0.1, port 5000

6. Make predictions

6.1 Upload the pic

A picture is required as the input. The resnet_imagenet ckpt requires a mushroom picture, while the resnet_cifar ckpt requires a picture of one of the following:

['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

Click mushroom (used in this tutorial for resnet_imagenet) or airplane (for resnet-cifar). Upload the picture: if using a terminal, either scp or wget will do; if running in Jupyter, click the Upload button at the top right and select the picture. Save the picture in the root folder and rename it to mushroom.jpeg (for resnet-imagenet) or airplane.jpg (for resnet-cifar).

Or run this code to download mushroom pic for resnet_imagenet and airplane for resnet_cifar:

[7]:
# download mushroom pic
if not os.path.exists('/root/mushroom.jpeg'):
    !wget -P /root/ https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/mushrooms/mushroom.jpeg
else:
    print('mushroom.jpeg already exists')

# download airplane pic
if not os.path.exists('/root/airplane.jpg'):
    !wget -P /root/ https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/objects/airplane.jpg
else:
    print('airplane.jpg already exists')
--2021-03-19 15:24:11--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/mushrooms/mushroom.jpeg
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 121.36.121.44, 49.4.112.5, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 76020 (74K) [image/jpeg]
Saving to: ‘/root/mushroom.jpeg’

mushroom.jpeg       100%[===================>]  74.24K   370KB/s    in 0.2s

2021-03-19 15:24:12 (370 KB/s) - ‘/root/mushroom.jpeg’ saved [76020/76020]

--2021-03-19 15:24:12--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/objects/airplane.jpg
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 121.36.121.44, 49.4.112.5, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 151188 (148K) [image/jpeg]
Saving to: ‘/root/airplane.jpg’

airplane.jpg        100%[===================>] 147.64K   561KB/s    in 0.3s

2021-03-19 15:24:12 (561 KB/s) - ‘/root/airplane.jpg’ saved [151188/151188]

6.2 List servables

Now, use the list_servables function to check which model is servable right now.

[8]:
list_servables()
[8]:
[{'description': 'This servable hosts a resnet50 model predicting mushrooms',
  'model': {'class_num': 9, 'format': 'ckpt', 'name': 'resnet50'},
  'name': 'resnet50_imagenet2012'}]

If the output description shows it is a resnet50 model, run the following code, which will automatically detect whether it is an imagenet model or a cifar model.

6.3 Sending the request and getting the result

Run the predict function to send the request, selecting between TOP1_CLASS and TOP5_CLASS to check the output:

[9]:
# set image_path and output strategy
imagenet_image_path = "/root/mushroom.jpeg"
cifar_image_path = "/root/airplane.jpg"
strategy = "TOP1_CLASS"

# predict(image_path, servable_name, dataset_name, strategy='TOP1_CLASS')
if server_started() is True:
    servable_name = list_servables()[0]['name']
    if servable_name == 'resnet50_imagenet2012':
        img_viewer = ImageViewer(Image.open(imagenet_image_path), imagenet_image_path)
        img_viewer.show()
        print(predict(imagenet_image_path, "resnet50_imagenet2012", "imagenet2012", strategy))
    else:
        img_viewer = ImageViewer(Image.open(cifar_image_path), cifar_image_path)
        img_viewer.show()
        print(predict(cifar_image_path, "resnet50_cifar10", 'cifar10', strategy))
else:
    print('Server not started')
TOP1: Amanita毒蝇伞,伞菌目,鹅膏菌科,鹅膏菌属,主要分布于我国黑龙江、吉林、四川、西藏、云南等地,有毒, score: 0.99750286340713500977

Check output

If the user runs resnet_imagenet and sees output similar to this:

TOP1: Amanita毒蝇伞,伞菌目,鹅膏菌科,鹅膏菌属,主要分布于我国黑龙江、吉林、四川、西藏、云南等地,有毒, score: 0.99119007587432861328

that means the prediction result is returned and the inference is completed.

Or if the user runs resnet_cifar, the output is expected to be like this:

TOP1: airplane, score: 0.99997282028198242188

Change model

Run the following code to train a ResNet50 model with the ImageNet2012 mushroom dataset, then run another servable_json code block to define the servable. The dataset will be downloaded here.

[ ]:
# download the imagenet2012 mushroom dataset
imagenet_path = '/root/mushrooms'
if not os.path.exists(imagenet_path):
    !wget -P /root/ https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/resnet-50/mushrooms/mushrooms.zip
    !mkdir /root/mushrooms/
    !unzip /root/mushrooms.zip -d /root/mushrooms/
    print('************Download complete*************')
else:
    print('************Dataset already exists.**************')


# check ckpt folder exists or not
imagenet_ckpt_folder = '/etc/tinyms/serving/resnet50_imagenet2012'
imagenet_ckpt_path = '/etc/tinyms/serving/resnet50_imagenet2012/resnet50.ckpt'
if not os.path.exists(imagenet_ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/resnet50_imagenet2012
else:
    print('resnet50_imagenet2012 ckpt folder already exists')


epoch_size = 90 # default is 90
batch_size = 32

# set environment parameters
dataset_sink_mode = False
device_target = "CPU"
context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

# set dataset parameters
imagenet_train_path = '/root/mushrooms/train'
train_dataset = ImageFolderDataset(imagenet_train_path, num_parallel_workers=4, shuffle=True)
train_dataset = imagefolder_transform.apply_ds(train_dataset, repeat_size=1, batch_size=32, is_training=True)
imagenet_eval_path = '/root/mushrooms/eval'
eval_dataset = ImageFolderDataset(imagenet_eval_path, num_parallel_workers=4, shuffle=True)
eval_dataset = imagefolder_transform.apply_ds(eval_dataset, repeat_size=1, batch_size=32, is_training=False)
step_size = train_dataset.get_dataset_size()

save_checkpoint_epochs = 5
ckpoint_cb = ModelCheckpoint(prefix="resnet_imagenet2012", config=CheckpointConfig(
            save_checkpoint_steps=save_checkpoint_epochs * train_dataset.get_dataset_size(),
            keep_checkpoint_max=10))

# define the loss function
net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")

# define the optimizer
net_opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
model.compile(loss_fn=net_loss, optimizer=net_opt, metrics={"Accuracy": Accuracy()})


print('************************Start training*************************')
model.train(epoch_size, train_dataset, callbacks=[ckpoint_cb, LossMonitor()],dataset_sink_mode=dataset_sink_mode)
model.save_checkpoint(imagenet_ckpt_path)
print('************************Finished training*************************')

model.load_checkpoint(imagenet_ckpt_path)
print('************************Start evaluation*************************')
acc = model.eval(eval_dataset, dataset_sink_mode=dataset_sink_mode)
print("============== Accuracy:{} ==============".format(acc))

Shutdown server

[10]:
shutdown()
[10]:
'Server shutting down...'

TinyMS MobileNetV2 Tutorial

In this tutorial, using the TinyMS API to train/serve a MobileNetV2 model will be demonstrated.

Prerequisite

  • Ubuntu: 18.04

  • Python: 3.7.x

  • Flask: 1.1.2

  • MindSpore: CPU-1.1.1

  • TinyMS: 0.1.0

  • numpy: 1.17.5

  • Pillow: 8.1.0

  • pip: 21.0.1

  • requests: 2.18.4

Introduction

TinyMS is a high-level API designed for beginners in deep learning. It minimizes the number of actions users need to take to construct, train, evaluate and serve a model. TinyMS also provides tutorials and documentation for developers.

This tutorial consists of six parts: constructing the model, downloading the dataset, training, defining the servable json, starting the server and making predictions, in which the server will be run in a subprocess.

[1]:
import os
import json

from PIL import Image
from tinyms import context
from tinyms.serving import start_server, predict, list_servables, shutdown, server_started
from tinyms.model import Model, mobilenetv2
from tinyms.data import Cifar10Dataset, download_dataset
from tinyms.vision import cifar10_transform, ImageViewer
from tinyms.metrics import Accuracy
from tinyms.optimizers import Momentum
from tinyms.losses import CrossEntropyWithLabelSmooth
from tinyms.utils.train.loss_manager import FixedLossScaleManager
from tinyms.utils.train.lr_generator import mobilenetv2_lr
from tinyms.utils.train.cb_config import mobilenetv2_cb
[WARNING] ME(8895:140407127349056,MainProcess):2021-03-19-15:03:40.515.970 [mindspore/ops/operations/array_ops.py:2302] WARN_DEPRECATED: The usage of Pack is deprecated. Please use Stack.
WARNING: 'ControlDepend' is deprecated from version 1.1 and will be removed in a future version, use 'Depend' instead.

1. Construct the model

TinyMS encapsulates the init and construct of the MobileNetV2 model, so the code needed to construct the model is reduced to a few lines:

[2]:
# build the model
net = mobilenetv2(class_num=10, is_training=True)
model = Model(net)

2. Download dataset

The cifar10 dataset will be downloaded if the cifar10 folder doesn’t exist at the root. If the cifar10 folder already exists, this step will not be performed.

[3]:
# download the dataset
cifar10_path = '/root/cifar10/cifar-10-batches-bin'
if not os.path.exists(cifar10_path):
    download_dataset('cifar10', '/root')
    print('************Download complete*************')
else:
    print('************Dataset already exists.**************')
************** Downloading the Cifar10 dataset **************
[████████████████████████████████████████████████████████████████████████████████████████████████████] 100.00%
============== /root/cifar10/cifar-10-binary.tar.gz is ready ==============
************Download complete*************

3. Train the model & evaluation

The dataset for both training and evaluation will be defined here, and the parameters for training are also set in this block. A trained ckpt file will be saved to the /etc/tinyms/serving/mobilenetv2 folder for later use; meanwhile the evaluation will be performed and the Accuracy can be checked.

Notice: Since training MobileNetV2 on CPU is time consuming, we recommend skipping training and using the provided ckpt files to run.
[ ]:
# check ckpt folder exists or not
ckpt_folder = '/etc/tinyms/serving/mobilenetv2'
ckpt_path = '/etc/tinyms/serving/mobilenetv2/mobilenetv2.ckpt'
if not os.path.exists(ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/mobilenetv2
else:
    print('mobilenetv2 ckpt folder already exists')

# Declare common variables
epoch_size = 60 # default is 60
batch_size = 32
class_num = 10

# set runtime environment
device_target="CPU"
dataset_sink_mode = False
context.set_context(mode=context.GRAPH_MODE, device_target=device_target)

# create cifar10 dataset
train_dataset = Cifar10Dataset(cifar10_path, num_parallel_workers=4, shuffle=True)
train_dataset = cifar10_transform.apply_ds(train_dataset, repeat_size=1, batch_size=32, is_training=True)
eval_dataset = Cifar10Dataset(cifar10_path, num_parallel_workers=4, shuffle=True)
eval_dataset = cifar10_transform.apply_ds(eval_dataset, repeat_size=1, batch_size=32, is_training=False)
step_size = train_dataset.get_dataset_size()

# define the loss function
label_smooth = 0.1
loss = CrossEntropyWithLabelSmooth(smooth_factor=label_smooth,num_classes=class_num)

# get learning rate
lr_max = 0.001
lr_init_scale = 0.01
lr_end_scale = 0.01
lr = mobilenetv2_lr(global_step=0,
                    lr_init=lr_max*lr_init_scale,
                    lr_end=lr_max*lr_end_scale,
                    lr_max=lr_max,
                    warmup_epochs=2,
                    total_epochs=epoch_size,
                    steps_per_epoch=step_size)

# define the optimizer
loss_scale = FixedLossScaleManager(1024, drop_overflow_update=False)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()),lr, 0.9, 4e-5, 1024)
model.compile(loss_fn=loss, optimizer=opt, metrics={"Accuracy": Accuracy()},loss_scale_manager=loss_scale)

# configure checkpoint to save weights and do training job
save_checkpoint_epochs = 10
ckpoint_cb = mobilenetv2_cb(device_target=device_target,
                            lr=lr,
                            is_saving_checkpoint=True,
                            save_checkpoint_epochs=save_checkpoint_epochs,
                            step_size=step_size)


print('************************Start training*************************')
model.train(epoch_size, train_dataset, callbacks=ckpoint_cb, dataset_sink_mode=dataset_sink_mode)
model.save_checkpoint(ckpt_path)
print('************************Finished training*************************')

model.load_checkpoint(ckpt_path)
print('************************Start evaluation*************************')
acc = model.eval(eval_dataset, dataset_sink_mode=dataset_sink_mode)
print("============== Accuracy:{} ==============".format(acc))
Notice: If you skipped the training process, download the pretrained ckpt file and continue to serving

Click HERE to download MobileNetV2 ckpt file and save this file to /etc/tinyms/serving/mobilenetv2/mobilenetv2.ckpt.

Or run the following code to download and store the ckpt file:

[4]:
mobilenetv2_ckpt_folder = '/etc/tinyms/serving/mobilenetv2'
mobilenetv2_ckpt_path = '/etc/tinyms/serving/mobilenetv2/mobilenetv2.ckpt'

if not os.path.exists(mobilenetv2_ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/mobilenetv2
    !wget -P /etc/tinyms/serving/mobilenetv2 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cifar10/mobilenetv2.ckpt
else:
    print('mobilenetv2 ckpt folder already exists')
    if not os.path.exists(mobilenetv2_ckpt_path):
        !wget -P /etc/tinyms/serving/mobilenetv2 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cifar10/mobilenetv2.ckpt
    else:
        print('mobilenetv2 ckpt file already exists')
mobilenetv2 ckpt folder already exists
--2021-03-19 15:06:26--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cifar10/mobilenetv2.ckpt
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.5, 49.4.112.90, 49.4.112.113, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18509001 (18M) [binary/octet-stream]
Saving to: ‘/etc/tinyms/serving/mobilenetv2/mobilenetv2.ckpt’

mobilenetv2.ckpt    100%[===================>]  17.65M  16.1MB/s    in 1.1s

2021-03-19 15:06:29 (16.1 MB/s) - ‘/etc/tinyms/serving/mobilenetv2/mobilenetv2.ckpt’ saved [18509001/18509001]

4. Define servable.json

Define the MobileNetV2 servable json file, specifying the model name, format and number of classes for later use.

[5]:
servable_json = [{'name': 'mobilenetv2',
                  'description': 'This servable hosts a mobilenetv2 model predicting 10 classes of objects',
                  'model': {
                      "name": "mobilenetv2",
                      "format": "ckpt",
                      "class_num": 10}}]
os.chdir("/etc/tinyms/serving")
json_data = json.dumps(servable_json, indent=4)

with open('servable.json', 'w') as json_file:
    json_file.write(json_data)

5. Start server

5.1 Introduction

TinyMS Serving has a client/server (C/S) architecture, with a server and a client. TinyMS uses Flask, a micro web framework written in Python, as the C/S communication tool. In order to serve a model, the user must start the server first. If successfully started, the server runs in a subprocess, listens to POST requests sent by the client to 127.0.0.1 on port 5000, and handles the requests using the MindSpore backend, which constructs the model, runs the prediction and sends the result back to the client.

5.2 Start server

Run the following code block to start the server:

[6]:
start_server()
Server starts at host 127.0.0.1, port 5000

6. Make predictions

6.1 Upload the pic

A picture is required as the input. This ckpt requires a picture containing one of the following objects:

['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

Click airplane to download the picture used in this tutorial. Upload the picture: if using a terminal, either scp or wget will do; if running in Jupyter, click the Upload button at the top right and select the picture.

Save the picture in the root folder and rename it to airplane.jpg (or any name you want).

Or run the following code to download the picture used in this tutorial:

[7]:
# download the airplane pic
if not os.path.exists('/root/airplane.jpg'):
    !wget -P /root/ https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/objects/airplane.jpg
else:
    print('airplane.jpg already exists')
--2021-03-19 15:06:37--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/objects/airplane.jpg
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.5, 49.4.112.90, 49.4.112.113, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 151188 (148K) [image/jpeg]
Saving to: ‘/root/airplane.jpg’

airplane.jpg        100%[===================>] 147.64K   560KB/s    in 0.3s

2021-03-19 15:06:38 (560 KB/s) - ‘/root/airplane.jpg’ saved [151188/151188]

6.2 List servables

Now, we can use the list_servables function to check which model is servable right now.

[8]:
list_servables()
[8]:
[{'description': 'This servable hosts a mobilenetv2 model predicting 10 classes of objects',
  'model': {'class_num': 10, 'format': 'ckpt', 'name': 'mobilenetv2'},
  'name': 'mobilenetv2'}]

If the output description shows it is a MobileNetV2 model, then we can continue to the next step and send our request.

6.3 Sending the request and getting the result

Run the predict function to send the request, selecting between TOP1_CLASS and TOP5_CLASS to check the output:

[9]:
# set image path and output strategy (TOP1_CLASS or TOP5_CLASS)
image_path = "/root/airplane.jpg"
strategy = "TOP1_CLASS"

# predict(image_path, servable_name, dataset_name, strategy)
if server_started() is True:
    img_viewer = ImageViewer(Image.open(image_path), image_path)
    img_viewer.show()
    print(predict(image_path, 'mobilenetv2', 'cifar10', strategy))
else:
    print("Server not started")
TOP1: airplane, score: 0.22268821299076080322

Check output

If the user sees output similar to this:

TOP1: airplane, score: 0.22268821299076080322

that means the prediction result is returned and the inference is completed.

Shutdown server

To restart and try another checkpoint file, click Kernel at the top, then Restart & Clear Output, and replace the servable_json code block and the predict() function.

Run the following code to shutdown Flask server:

[10]:
shutdown()
[10]:
'Server shutting down...'

TinyMS SSD300 Tutorial

In this tutorial, using the TinyMS API to train/serve an SSD300 model will be demonstrated.

Prerequisite

  • Ubuntu: 18.04

  • Python: 3.7.x

  • Flask: 1.1.2

  • MindSpore: CPU-1.1.1

  • TinyMS: 0.1.0

  • numpy: 1.17.5

  • Pillow: 8.1.0

  • pip: 21.0.1

  • requests: 2.18.4

Introduction

TinyMS is a high-level API designed for beginners in deep learning. It minimizes the number of actions users need to take to construct, train, evaluate and serve a model. TinyMS also provides tutorials and documentation for developers.

This tutorial consists of six parts: constructing the model, downloading the dataset, training, defining the servable JSON, starting the server and making predictions. The server will run in a subprocess.

[1]:
import os
import json
import time
import tinyms as ts
import xml.etree.ElementTree as et

from PIL import Image
from tinyms import context, layers, primitives as P, Tensor
from tinyms.serving import start_server, predict, list_servables, shutdown, server_started
from tinyms.data import VOCDataset, download_dataset
from tinyms.vision import voc_transform, coco_eval, ImageViewer
from tinyms.model import Model, ssd300_mobilenetv2, ssd300_infer
from tinyms.losses import net_with_loss
from tinyms.optimizers import Momentum
from tinyms.callbacks import ModelCheckpoint, CheckpointConfig, LossMonitor, TimeMonitor
from tinyms.utils.train.lr_generator import mobilenetv2_lr as ssd300_lr
from tinyms.initializers import initializer, TruncatedNormal
[WARNING] ME(14174:140126738016064,MainProcess):2021-03-19-15:33:42.136.028 [mindspore/ops/operations/array_ops.py:2302] WARN_DEPRECATED: The usage of Pack is deprecated. Please use Stack.
WARNING: 'ControlDepend' is deprecated from version 1.1 and will be removed in a future version, use 'Depend' instead.

1. Construct the model

[2]:
# build network
net = ssd300_mobilenetv2(class_num=21)

2. Download dataset

The VOC dataset will be downloaded if the voc folder does not exist at the root. If the voc folder already exists, this step will be skipped.

[3]:
# download the dataset
voc_path = '/root/voc'

if not os.path.exists(voc_path):
    download_dataset('voc', '/root')
    print('************Download complete*************')
else:
    print('************Dataset already exists.**************')
************** Downloading the VOC2007 dataset **************
[████████████████████████████████████████████████████████████████████████████████████████████████████] 100.00%
============== /root/voc/VOCtrainval_06-Nov-2007.tar is ready ==============
************Download complete*************

3. Train the model & evaluation

The datasets for both training and evaluation are defined here, and the training parameters are also set in this block. A trained ckpt file will be saved to the /etc/tinyms/serving/ssd300 folder for later use; meanwhile the evaluation will be performed and the accuracy can be checked.

Notice: Since training SSD300 on CPU is time consuming, we recommend skipping training and running with the provided ckpt file.
[ ]:
class TrainingWrapper(layers.Layer):
    """
    Encapsulation class of SSD300 network training.

    Append an optimizer to the training network after that the construct
    function can be called to create the backward graph.

    Args:
        network (Layer): The training network. Note that loss function should have been added.
        optimizer (Optimizer): Optimizer for updating the weights.
        sens (float): The adjust parameter. Default: 1.0.
    """

    def __init__(self, network, optimizer, sens=1.0):
        super(TrainingWrapper, self).__init__(auto_prefix=False)
        self.network = network
        self.network.set_grad()
        self.weights = ts.ParameterTuple(network.trainable_params())
        self.optimizer = optimizer
        self.grad = P.GradOperation(get_by_list=True, sens_param=True)
        self.sens = sens
        self.hyper_map = P.HyperMap()

    def construct(self, *args):
        weights = self.weights
        loss = self.network(*args)
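        # create a tensor filled with self.sens (the gradient scale), matching the loss's dtype and shape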
        sens = P.Fill()(P.DType()(loss), P.Shape()(loss), self.sens)
        grads = self.grad(self.network, weights)(*args, sens)
        return P.depend(loss, self.optimizer(grads))

def create_voc_label(voc_dir, voc_cls, usage='val'):
    """Get image path and annotation from VOC."""
    if not os.path.isdir(voc_dir):
        raise ValueError(f'Cannot find {voc_dir} dataset path.')
    anno_dir = voc_dir
    if os.path.isdir(os.path.join(voc_dir, 'Annotations')):
        anno_dir = os.path.join(voc_dir, 'Annotations')

    cls_map = {name: i for i, name in enumerate(voc_cls)}
    # Fetch the specific xml files path
    xml_files = []
    with open(os.path.join(voc_dir, 'ImageSets', 'Main', usage+'.txt'), 'r') as f:
        for line in f:
            xml_files.append(line.strip('\n')+'.xml')

    json_dict = {"images": [], "type": "instances", "annotations": [],
                 "categories": []}
    bnd_id = 1
    for xml_file in xml_files:
        img_id = xml_files.index(xml_file)
        tree = et.parse(os.path.join(anno_dir, xml_file))
        root_node = tree.getroot()
        file_name = root_node.find('filename').text

        for obj in root_node.iter('object'):
            cls_name = obj.find('name').text
            if cls_name not in cls_map:
                print(f'Label "{cls_name}" not in "{cls_map}"')
                continue

            bnd_box = obj.find('bndbox')
            x_min = int(float(bnd_box.find('xmin').text)) - 1
            y_min = int(float(bnd_box.find('ymin').text)) - 1
            x_max = int(float(bnd_box.find('xmax').text)) - 1
            y_max = int(float(bnd_box.find('ymax').text)) - 1
            o_width = abs(x_max - x_min)
            o_height = abs(y_max - y_min)
            ann = {'area': o_width * o_height, 'iscrowd': 0,
                   'image_id': img_id,
                   'bbox': [x_min, y_min, o_width, o_height],
                   'category_id': cls_map[cls_name], 'id': bnd_id,
                   'ignore': 0,
                   'segmentation': []}
            json_dict['annotations'].append(ann)
            bnd_id = bnd_id + 1

        size = root_node.find("size")
        width = int(size.find('width').text)
        height = int(size.find('height').text)
        image = {'file_name': file_name, 'height': height, 'width': width,
                 'id': img_id}
        json_dict['images'].append(image)

    for cls_name, cid in cls_map.items():
        cat = {'supercategory': 'none', 'id': cid, 'name': cls_name}
        json_dict['categories'].append(cat)

    anno_file = os.path.join(anno_dir, 'annotation.json')
    with open(anno_file, 'w') as f:
        json.dump(json_dict, f)
    return anno_file


# check ckpt folder exists or not
ckpt_folder = '/etc/tinyms/serving/ssd300'
ckpt_path = '/etc/tinyms/serving/ssd300/ssd300.ckpt'
if not os.path.exists(ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/ssd300
else:
    print('ssd300 ckpt folder already exists')

# set parameters
epoch_size = 800 # default is 800
batch_size = 32
voc_path = '/root/voc/VOCdevkit/VOC2007'

# set environment parameters
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
dataset_sink_mode = False

# set dataset parameters
train_dataset = VOCDataset(voc_path, task='Detection', usage='trainval', num_parallel_workers=4, shuffle=True, decode=True)
train_dataset = voc_transform.apply_ds(train_dataset, repeat_size=1, batch_size=batch_size, num_parallel_workers=4, is_training=True)
eval_dataset = VOCDataset(voc_path, task='Detection', usage='val', num_parallel_workers=4, shuffle=True, decode=True)
eval_dataset = voc_transform.apply_ds(eval_dataset, repeat_size=1, batch_size=batch_size, num_parallel_workers=4, is_training=False)
dataset_size = train_dataset.get_dataset_size()
total = eval_dataset.get_dataset_size()

# define the loss function
net = net_with_loss(net)
params = net.trainable_params()
for p in params:
    if 'beta' not in p.name and 'gamma' not in p.name and 'bias' not in p.name:
        p.set_data(initializer(TruncatedNormal(0.02), p.data.shape, p.data.dtype))


# define the optimizer
pre_trained_epoch_size = 0
save_checkpoint_epochs = 10
lr = 0.01
lr = ssd300_lr(global_step=pre_trained_epoch_size * dataset_size,
                lr_init=0.001, lr_end=0.001 * lr, lr_max=lr,
                warmup_epochs=2, total_epochs=epoch_size,
                steps_per_epoch=dataset_size)
loss_scale = 1.0
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), lr, 0.9, 1.5e-4, loss_scale)
model = Model(TrainingWrapper(net, opt, loss_scale))
model.compile()
ckpoint_cb = ModelCheckpoint(prefix="ssd300", config=CheckpointConfig(
    save_checkpoint_steps=save_checkpoint_epochs * dataset_size,
    keep_checkpoint_max=10))

print('************************Start training*************************')
model.train(epoch_size, train_dataset, callbacks=[ckpoint_cb, LossMonitor(), TimeMonitor(data_size=dataset_size)],
            dataset_sink_mode=dataset_sink_mode)
model.save_checkpoint(ckpt_path)
print('************************Finished training*************************')

eval_net = ssd300_infer(class_num=21)
model = Model(eval_net)
model.load_checkpoint(ckpt_path)
# perform the model predict operation
print("\n========================================\n")
print("total images num: ", total)
print("Processing, please wait a moment...")
start = time.time()
pred_data = []
id_iter = 0

for data in eval_dataset.create_dict_iterator(output_numpy=True):
    image_np = data['image']
    image_shape = data['image_shape']

    output = model.predict(Tensor(image_np))
    for batch_idx in range(image_np.shape[0]):
        pred_data.append({"boxes": output[0].asnumpy()[batch_idx],
                          "box_scores": output[1].asnumpy()[batch_idx],
                          "img_id": id_iter,
                          "image_shape": image_shape[batch_idx]})
        id_iter += 1
cost_time = int((time.time() - start) * 1000)
print(f'    100% [{total}/{total}] cost {cost_time} ms')

# calculate mAP for the predict data
voc_cls = ['background',
            'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
            'bus', 'car', 'cat', 'chair', 'cow',
            'diningtable', 'dog', 'horse', 'motorbike', 'person',
            'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
anno_file = create_voc_label(voc_path, voc_cls)
mAP = coco_eval(pred_data, anno_file)
print("\n========================================\n")
print(f"mAP: {mAP}")
Notice: If you skipped the training process, download the pretrained ckpt file and continue to serving.

Click HERE to download the ckpt file and save this file to /etc/tinyms/serving/ssd300/ssd300.ckpt.

Or run the following code to download and store the ckpt file:

[4]:
ssd300_ckpt_folder = '/etc/tinyms/serving/ssd300'
ssd300_ckpt_path = '/etc/tinyms/serving/ssd300/ssd300.ckpt'

if not os.path.exists(ssd300_ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/ssd300
    !wget -P /etc/tinyms/serving/ssd300 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/voc/ssd300.ckpt
else:
    print('ssd300 ckpt folder already exists')
    if not os.path.exists(ssd300_ckpt_path):
        !wget -P /etc/tinyms/serving/ssd300 https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/voc/ssd300.ckpt
    else:
        print('ssd300 ckpt file already exists')
ssd300 ckpt folder already exists
--2021-03-19 15:38:53--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/voc/ssd300.ckpt
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 49.4.112.90, 121.36.121.44, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28056511 (27M) [binary/octet-stream]
Saving to: ‘/etc/tinyms/serving/ssd300/ssd300.ckpt’

ssd300.ckpt         100%[===================>]  26.76M  20.7MB/s    in 1.3s

2021-03-19 15:38:55 (20.7 MB/s) - ‘/etc/tinyms/serving/ssd300/ssd300.ckpt’ saved [28056511/28056511]

4. Define servable.json

Run this code to define the servable JSON file for later use:

[5]:
servable_json = [{'name': 'ssd300',
                  'description': 'This servable hosts an ssd300 model predicting bounding boxes',
                  'model': {
                      "name": "ssd300",
                      "format": "ckpt",
                      "class_num": 21}}]
os.chdir("/etc/tinyms/serving")
json_data = json.dumps(servable_json, indent=4)

with open('servable.json', 'w') as json_file:
    json_file.write(json_data)

5. Start server

5.1 Introduction

TinyMS Serving uses a client/server (C/S) architecture. TinyMS uses Flask, a micro web framework written in Python, as the C/S communication tool. In order to serve a model, the user must start the server first. If successfully started, the server runs in a subprocess, listens for POST requests sent by the client to 127.0.0.1 port 5000, and handles them using the MindSpore backend, which constructs the model, runs the prediction and sends the result back to the client.

5.2 Start server

Run the following code block to start the server:

[6]:
start_server()
Server starts at host 127.0.0.1, port 5000
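Since the server runs in a subprocess, it may take a moment before it accepts requests. A minimal polling sketch using the server_started helper and the time module (both imported at the top of this tutorial):

import time

# wait until the Flask subprocess is ready to accept requests
while not server_started():
    time.sleep(1)
print('Server is ready')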

6. Make predictions

6.1 Upload the pic

A picture is required as the input. In this tutorial, the SSD300 checkpoint requires a picture containing objects from

['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']

Click HERE to download the picture used in this tutorial. Upload the picture: if using a terminal, either scp or wget will do; if running in Jupyter, click the Upload button at the top right and select the picture. Save the picture in the root folder and rename it to ssd300_test.jpeg (or any name you want).

Or run the following code to download the picture used in this tutorial:

[7]:
# download the test pic
if not os.path.exists('/root/ssd300_test.jpeg'):
    !wget -P /root/ https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/ssd300_test/ssd300_test.jpeg
else:
    print('ssd300_test.jpeg already exists')
--2021-03-19 15:38:59--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/tinyms-test-pics/ssd300_test/ssd300_test.jpeg
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 49.4.112.90, 121.36.121.44, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 70412 (69K) [image/jpeg]
Saving to: ‘/root/ssd300_test.jpeg’

ssd300_test.jpeg    100%[===================>]  68.76K   338KB/s    in 0.2s

2021-03-19 15:39:00 (338 KB/s) - ‘/root/ssd300_test.jpeg’ saved [70412/70412]

6.2 List servables

Now, use the list_servables function to check which models are currently servable.

[8]:
list_servables()
[8]:
[{'description': 'This servable hosts an ssd300 model predicting bounding boxes',
  'model': {'class_num': 21, 'format': 'ckpt', 'name': 'ssd300'},
  'name': 'ssd300'}]

If the output description shows it is an ssd300 model, run the following code to predict the bounding boxes.

6.3 Send the request and get the result

Run the predict function to get the result; right now only the TOP1_CLASS strategy is supported for SSD300. Call ImageViewer.draw to draw the bounding boxes.

[9]:
# set image path and output strategy(only TOP1_CLASS)
image_path = "/root/ssd300_test.jpeg"
strategy = "TOP1_CLASS"

labels = ['background',
          'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
          'bus', 'car', 'cat', 'chair', 'cow',
          'diningtable', 'dog', 'horse', 'motorbike', 'person',
          'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']

# predict(image_path, servable_name, dataset_name, strategy)
# ImageViewer(img, title)
# ImageViewer.draw(predict_result, labels)
if server_started() is True:
    res = predict(image_path, 'ssd300', 'voc', strategy)
    img_viewer = ImageViewer(Image.open(image_path))
    img_viewer.draw(res, labels)
else:
    print("Server not started")
[Output image: the test picture with predicted bounding boxes drawn by ImageViewer]

Check output

If the input picture is shown with bounding boxes labeled with object classes and scores, the prediction was performed successfully.

Shutdown server

[10]:
shutdown()
[10]:
'Server shutting down...'

TinyMS CycleGAN Tutorial

This tutorial demonstrates how to train and serve a CycleGAN model using the TinyMS API.

Prerequisite

  • Ubuntu: 18.04

  • Python: 3.7.x

  • Flask: 1.1.2

  • MindSpore: CPU-1.1.1

  • TinyMS: 0.1.0

  • numpy: 1.17.5

  • Pillow: 8.1.0

  • pip: 21.0.1

  • requests: 2.18.4

  • matplotlib: 3.3.4

Introduction

TinyMS is a high-level API designed for deep learning beginners. It minimizes the number of actions users need to take to construct, train, evaluate and serve a model. TinyMS also provides tutorials and documentation for developers.

This tutorial consists of five parts: downloading the dataset, training, defining the servable JSON, starting the server and making predictions. The server will run in a subprocess.

[1]:
import os
import argparse
import json
import tinyms as ts
import numpy as np
import matplotlib.pyplot as plt

from PIL import Image
from tinyms import context, Tensor
from tinyms.serving import start_server, predict, list_servables, shutdown, server_started
from tinyms.data import GeneratorDataset, UnalignedDataset, GanImageFolderDataset, DistributedSampler
from tinyms.vision import cyclegan_transform
from tinyms.model.cycle_gan.cycle_gan import get_generator_discriminator, cycle_gan, TrainOneStepG, TrainOneStepD
from tinyms.losses import CycleGANDiscriminatorLoss, CycleGANGeneratorLoss
from tinyms.optimizers import Adam
from tinyms.data.utils import save_image, generate_image_list
from tinyms.utils.common_utils import GanReporter, gan_load_ckpt, GanImagePool
from tinyms.utils.train import cyclegan_lr
from tinyms.utils.eval import CityScapes, fast_hist, get_scores
[WARNING] ME(25552:140556571072320,MainProcess):2021-03-21-15:01:26.554.568 [mindspore/ops/operations/array_ops.py:2302] WARN_DEPRECATED: The usage of Pack is deprecated. Please use Stack.
WARNING: 'ControlDepend' is deprecated from version 1.1 and will be removed in a future version, use 'Depend' instead.

1. Download dataset

In this tutorial, the cityscapes dataset is used and processed. Before proceeding, click the link to visit the official website, where you can apply for and download the dataset.
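After downloading, unpack the dataset so that it sits at /root/dataset/cityscapes, the default dataset_path used in the training code below. A small sketch to verify the expected folder layout; trainA/trainB are an assumption based on the usual CycleGAN unaligned layout, while testA/testB are referenced later in this tutorial:

import os

# check the unpacked dataset layout (trainA/trainB assumed; testA/testB used below)
dataset_root = '/root/dataset/cityscapes'
for sub in ('trainA', 'trainB', 'testA', 'testB'):
    print(sub, 'exists:', os.path.isdir(os.path.join(dataset_root, sub)))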

2. Train the model & evaluation

Define parameters and training process

Notice: Training on CPU is time consuming; we recommend skipping training and running with the provided ckpt files.
[ ]:
def parse_args():
    parser = argparse.ArgumentParser(description='MindSpore Cycle GAN Example')
    parser.add_argument('--device_target', type=str, default="CPU", choices=['Ascend', 'GPU', 'CPU'],
                        help='device where the code will be implemented (default: CPU)')
    parser.add_argument('--dataset_path', type=str, default="/root/dataset/cityscapes", help='cityscape dataset path.')
    parser.add_argument('--phase', type=str, default="train", help='train, eval or predict.')
    parser.add_argument('--model', type=str, default="resnet", choices=("resnet", "unet"),
                        help='generator model, should be in [resnet, unet].')
    parser.add_argument('--max_epoch', type=int, default=200, help='epoch size for training, default is 200.')
    parser.add_argument('--n_epoch', type=int, default=100,
                        help='number of epochs with the initial learning rate, default is 100')
    parser.add_argument('--batch_size', type=int, default=1, help='Batch size.')
    parser.add_argument("--save_checkpoint_epochs", type=int, default=10,
                        help="Save checkpoint epochs, default is 10.")
    parser.add_argument("--G_A_ckpt", type=str, default="/etc/tinyms/serving/cyclegan_cityscape/G_A.ckpt", help="pretrained checkpoint file path of G_A.")
    parser.add_argument("--G_B_ckpt", type=str, default="/etc/tinyms/serving/cyclegan_cityscape/G_B.ckpt", help="pretrained checkpoint file path of G_B.")
    parser.add_argument("--D_A_ckpt", type=str, default=None, help="pretrained checkpoint file path of D_A.")
    parser.add_argument("--D_B_ckpt", type=str, default=None, help="pretrained checkpoint file path of D_B.")
    parser.add_argument('--outputs_dir', type=str, default='/root/',
                        help='models are saved here, default is ./outputs.')
    parser.add_argument('--save_imgs', type=bool, default=True,
                        help='whether save imgs when epoch end, default is True.')
    parser.add_argument("--cityscapes_dir", type=str, default="/root/dataset/cityscapes/testA", help="Path to the original cityscapes dataset")
    parser.add_argument("--result_dir", type=str, default="/root/dataset/cityscapes/testB", help="Path to the generated images to be evaluated")
    args_opt = parser.parse_args(args=[])
    return args_opt


args_opt = parse_args()

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

dataset_path = args_opt.dataset_path
phase = args_opt.phase
G_A_ckpt = args_opt.G_A_ckpt
G_B_ckpt = args_opt.G_B_ckpt
repeat_size = 1


model = args_opt.model
batch_size = args_opt.batch_size
max_dataset_size = float("inf")
outputs_dir = args_opt.outputs_dir

max_epoch = args_opt.max_epoch
n_epoch = args_opt.n_epoch
n_epoch = min(max_epoch, n_epoch)


def create_dataset(dataset_path, batch_size=1, repeat_size=1, max_dataset_size=None,
                   shuffle=True, num_parallel_workers=1, phase='train', data_dir='testA'):
    """ create Mnist dataset for train or eval.
    Args:
        data_path: Data path
        batch_size: The number of data records in each group
        repeat_size: The number of replicated data records
        num_parallel_workers: The number of parallel workers
    """
    # define dataset and apply the transform func
    if phase == 'train':
        ds = UnalignedDataset(dataset_path, phase, max_dataset_size=max_dataset_size, shuffle=True)

        device_num = 1
        distributed_sampler = DistributedSampler(len(ds), num_replicas=device_num, rank=0, shuffle=shuffle)
        gan_generator_ds = GeneratorDataset(ds, column_names=["image_A", "image_B"], sampler=distributed_sampler,
                                            num_parallel_workers=num_parallel_workers)
    else:
        datadir = os.path.join(dataset_path, data_dir)
        ds = GanImageFolderDataset(datadir, max_dataset_size=max_dataset_size)
        gan_generator_ds = GeneratorDataset(ds, column_names=["image", "image_name"],
                                            num_parallel_workers=num_parallel_workers)

    gan_generator_ds = cyclegan_transform.apply_ds(gan_generator_ds,
                                                   repeat_size=repeat_size,
                                                   batch_size=batch_size,
                                                   num_parallel_workers=num_parallel_workers,
                                                   shuffle=shuffle,
                                                   phase=phase)
    dataset_size = len(ds)
    return gan_generator_ds, dataset_size


# create dataset
dataset, args_opt.dataset_size = create_dataset(dataset_path, batch_size=batch_size, repeat_size=1,
                                                max_dataset_size=max_dataset_size, shuffle=True,
                                                num_parallel_workers=1,
                                                phase="train",
                                                data_dir=None)


G_A, G_B, D_A, D_B = get_generator_discriminator(model)
gan_load_ckpt(args_opt.G_A_ckpt, args_opt.G_B_ckpt, args_opt.D_A_ckpt, args_opt.D_B_ckpt,
              G_A, G_B, D_A, D_B)
generator_net = cycle_gan(G_A, G_B)

# define loss function and optimizer
loss_D = CycleGANDiscriminatorLoss(D_A, D_B)
loss_G = CycleGANGeneratorLoss(generator_net, D_A, D_B)
lr = cyclegan_lr(max_epoch, n_epoch, args_opt.dataset_size)

optimizer_G = Adam(generator_net.trainable_params(),
                   cyclegan_lr(max_epoch, n_epoch, args_opt.dataset_size), beta1=0.5)
optimizer_D = Adam(loss_D.trainable_params(),
                   cyclegan_lr(max_epoch, n_epoch, args_opt.dataset_size), beta1=0.5)

# build two nets: generator net and discriminator net
net_G = TrainOneStepG(loss_G, generator_net, optimizer_G)
net_D = TrainOneStepD(loss_D, optimizer_D)

# train process
image_pool_A = GanImagePool(pool_size=50)
image_pool_B = GanImagePool(pool_size=50)


def train_process(args_opt, data_loader, net_G, net_D, image_pool_A, image_pool_B):
    reporter = GanReporter(args_opt)
    reporter.info('==========start training===============')
    for _ in range(max_epoch):
        reporter.epoch_start()
        for data in data_loader:
            img_A = data["image_A"]
            img_B = data["image_B"]
            res_G = net_G(img_A, img_B)
            fake_A = res_G[0]
            fake_B = res_G[1]
            res_D = net_D(img_A, img_B, image_pool_A.query(fake_A), image_pool_B.query(fake_B))
            reporter.step_end(res_G, res_D)
            reporter.visualizer(img_A, img_B, fake_A, fake_B)
        reporter.epoch_end(net_G)

    reporter.info('==========end training===============')


data_loader = dataset.create_dict_iterator()
train_process(args_opt, data_loader, net_G, net_D, image_pool_A, image_pool_B)

# eval
# original image dir
cityscapes_dir = args_opt.cityscapes_dir

# fake image dir generated after predict
result_dir = args_opt.result_dir


def eval_process(args_opt, cityscapes_dir, result_dir):
    CS = CityScapes()
    hist_perframe = ts.zeros((CS.class_num, CS.class_num)).asnumpy()
    cityscapes = generate_image_list(cityscapes_dir)
    args_opt.dataset_size = len(cityscapes)
    reporter = GanReporter(args_opt)
    reporter.start_eval()
    for i, img_path in enumerate(cityscapes):
        if i % 100 == 0:
            reporter.info('Evaluating: %d/%d' % (i, len(cityscapes)))
        img_name = os.path.split(img_path)[1]
        ids1 = CS.get_id(os.path.join(cityscapes_dir, img_name))
        ids2 = CS.get_id(os.path.join(result_dir, img_name))
        hist_perframe += fast_hist(ids1.flatten(), ids2.flatten(), CS.class_num)

    mean_pixel_acc, mean_class_acc, mean_class_iou, per_class_acc, per_class_iou = get_scores(hist_perframe)
    reporter.info("mean_pixel_acc:{}, mean_class_acc:{}, mean_class_iou: {}".format(mean_pixel_acc,
                                                                                    mean_class_acc,
                                                                                    mean_class_iou))
    reporter.info("************ Per class numbers below ************")
    for i, cl in enumerate(CS.classes):
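        # pad the class name with trailing spaces to width 15 so the report columns align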
        while len(cl) < 15:
            cl = cl + ' '
        reporter.info("{}: acc = {}, iou = {}".format(cl, per_class_acc[i], per_class_iou[i]))
    reporter.end_eval()


# Compare the similarity between the original image and the fake image
eval_process(args_opt, cityscapes_dir, result_dir)
Notice: If you skipped the training process, download the pretrained ckpt files and continue to serving.

Click G_A to download the G_A.ckpt file and G_B to download the G_B.ckpt file. Save them to /etc/tinyms/serving/cyclegan_cityscape/.

Or run the following code to download and store the ckpt files:

[2]:
ckpt_folder = '/etc/tinyms/serving/cyclegan_cityscape'
G_A_ckpt_path = '/etc/tinyms/serving/cyclegan_cityscape/G_A.ckpt'
G_B_ckpt_path = '/etc/tinyms/serving/cyclegan_cityscape/G_B.ckpt'

if not os.path.exists(ckpt_folder):
    !mkdir -p  /etc/tinyms/serving/cyclegan_cityscape
    !wget -P /etc/tinyms/serving/cyclegan_cityscape https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cityscapes/G_A.ckpt
    !wget -P /etc/tinyms/serving/cyclegan_cityscape https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cityscapes/G_B.ckpt
else:
    print('ckpt folder already exists')
    if not os.path.exists(G_A_ckpt_path):
        !wget -P /etc/tinyms/serving/cyclegan_cityscape https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cityscapes/G_A.ckpt
    if not os.path.exists(G_B_ckpt_path):
        !wget -P /etc/tinyms/serving/cyclegan_cityscape https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cityscapes/G_B.ckpt
ckpt folder already exists
--2021-03-21 15:01:30--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cityscapes/G_A.ckpt
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 121.36.121.44, 49.4.112.5, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7785480 (7.4M) [binary/octet-stream]
Saving to: ‘/etc/tinyms/serving/cyclegan_cityscape/G_A.ckpt’

G_A.ckpt            100%[===================>]   7.42M  3.58MB/s    in 2.1s

2021-03-21 15:01:33 (3.58 MB/s) - ‘/etc/tinyms/serving/cyclegan_cityscape/G_A.ckpt’ saved [7785480/7785480]

--2021-03-21 15:01:34--  https://ascend-tutorials.obs.cn-north-4.myhuaweicloud.com/ckpt_files/cityscapes/G_B.ckpt
Resolving ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)... 49.4.112.113, 121.36.121.44, 49.4.112.5, ...
Connecting to ascend-tutorials.obs.cn-north-4.myhuaweicloud.com (ascend-tutorials.obs.cn-north-4.myhuaweicloud.com)|49.4.112.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7785480 (7.4M) [binary/octet-stream]
Saving to: ‘/etc/tinyms/serving/cyclegan_cityscape/G_B.ckpt’

G_B.ckpt            100%[===================>]   7.42M  9.62MB/s    in 0.8s

2021-03-21 15:01:36 (9.62 MB/s) - ‘/etc/tinyms/serving/cyclegan_cityscape/G_B.ckpt’ saved [7785480/7785480]

3. Define servable.json

Define the servable JSON file, specifying the model name, format and generator model, for later use.

[3]:
servable_json = [{'name': 'cyclegan_cityscape',
                  'description': 'This servable hosts a Cycle GAN model predicting for cityscape dataset',
                  'model': {
                      "name": "cycle_gan",
                      "format": "ckpt",
                      "g_model": "resnet"}}]
os.chdir("/etc/tinyms/serving")
json_data = json.dumps(servable_json, indent=4)

with open('servable.json', 'w') as json_file:
    json_file.write(json_data)

4. Start server

4.1 Introduction

TinyMS Serving uses a client/server (C/S) architecture. TinyMS uses Flask, a micro web framework written in Python, as the C/S communication tool. In order to serve a model, the user must start the server first. If successfully started, the server runs in a subprocess, listens for POST requests sent by the client to 127.0.0.1 port 5000, and handles them using the MindSpore backend, which constructs the model, runs the prediction and sends the result back to the client.

4.2 Start server

Run the following code block to start the server:

[4]:
start_server()
Server starts at host 127.0.0.1, port 5000

5. Make predictions

5.1 List servables

Now, we can use the list_servables function to check which models are currently servable.

[5]:
list_servables()
[5]:
[{'description': 'This servable hosts a Cycle GAN model predicting for cityscape dataset',
  'model': {'format': 'ckpt', 'g_model': 'resnet', 'name': 'cycle_gan'},
  'name': 'cyclegan_cityscape'}]

If the output description shows it is a CycleGAN model, we can continue to the next step and send our request.

5.2 Sending request and get the result

Run the predict function to send the request. In this tutorial, both gray-to-color and color-to-gray conversion will be demonstrated. We recommend using pictures from cityscapes/test to run the prediction. If you use your own pictures, resize them to 256*256 first (a resize sketch follows the prediction output below).

[6]:
servable_name = 'cyclegan_cityscape'
dataset_name = 'cityscape'

if server_started() is True:
    # gray to color
    testA_path = '/root/dataset/cityscapes/testA/1.jpg'
    strategy = 'gray2color'
    fakeB_data = predict(testA_path, servable_name, dataset_name, strategy)

    # color to gray
    testB_path = '/root/dataset/cityscapes/testB/1.jpg'
    strategy = 'color2gray'
    fakeA_data = predict(testB_path, servable_name, dataset_name, strategy)

    # draw the plot
    plt.figure(dpi=160, figsize=(10, 10))

    plt.subplot(221)
    plt.imshow(Image.open(testA_path))
    plt.axis('off')
    plt.title(testA_path)

    plt.subplot(222)
    plt.imshow(Image.fromarray(np.uint8(fakeB_data)))
    plt.axis('off')
    plt.title("fakeB.jpg")

    plt.subplot(223)
    plt.imshow(Image.open(testB_path))
    plt.axis('off')
    plt.title(testB_path)

    plt.subplot(224)
    plt.imshow(Image.fromarray(np.uint8(fakeA_data)))
    plt.axis('off')
    plt.title("fakeA.jpg")

    plt.show()
else:
    print("Server not started")
[Output image: a 2x2 grid with the original testA/testB pictures in the left column and the generated fakeB/fakeA pictures in the right column]

Check output

If you can see 4 pictures, the two in the left column are from the original test dataset, while the two in the right column are the generated pictures.
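If you want to try your own picture, resize it to 256*256 before sending the request; a minimal Pillow sketch (the file names are just examples):

from PIL import Image

# resize a custom picture to 256x256 with Pillow (example file names)
img = Image.open('/root/my_pic.jpg')
img = img.resize((256, 256))
img.save('/root/my_pic_256.jpg')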

Shutdown server

To restart and try another checkpoint file, click Kernel at the top, then Restart & Clear Output, and replace the servable_json definition and the predict() call.

Run the following code to shutdown Flask server:

[7]:
shutdown()
[7]:
'Server shutting down...'

TinyMS Online Video Tutorial

In order to facilitate the learning of machine learning beginners, TinyMS provides a full set of online video tutorials, which will be updated simultaneously with the iteration of each version. We update and archive according to the content of each episode, including but not limited to video URLs, learning documents and Q&A. If you encounter any questions during the learning process, you are welcome to add the assistant on WeChat (WeChat ID: mindspore0328, note: TinyMS). The assistant will add you to the TinyMS course learning group, where you can communicate with developers in time. Of course, you are also welcome to open an Issue directly when you encounter a problem, which can also help other developers who run into the same problem.

Quick Learn Shell in 30 mins

Quick Learn Python in 30 mins

Quick Learn Mathematics in Deep Learning

Design Concept

Background

In recent years, with the rapid development of AI technology, deep learning frameworks such as TensorFlow, PyTorch, Apache MXNet and MindSpore have emerged. These frameworks are very good at solving problems for academic research or commercial production; however, for first-time beginners or application developers with limited deep learning knowledge, much simpler APIs are desired. Alongside existing efforts like Keras for TensorFlow and Fastai for PyTorch to address the issue, the TinyMS project is a new addition to this field, providing simple high-level APIs, a tiny runtime footprint, modular development and agile deployment. TinyMS begins with an initial focus on MindSpore integration and looks forward to more framework adaptations in the long run.

Interestingly enough, MindSpore’s high-level and mid-level Python APIs have already implemented most of the functions of Keras, and are also on par with Fastai’s design for PyTorch’s flexibility. Therefore, unlike Keras and Fastai, which were developed largely to compensate for their underlying frameworks, TinyMS is designed to further enhance the experience of the framework, especially for all-scenario development in the case of MindSpore.

With the help of TinyMS, the following goals should be achieved:

  • Quicker to learn: Get started with AI application development in one minute

  • Easier to develop: Complete the task of changing AI models and dataset from one to the other in one hour

Architecture

Design goals of TinyMS:

  • High-level APIs that are extremely simple to learn and use.

  • Support complete AI development workflow from data preparation to model training/inference and finally deployment.

  • Decoupled modules that could be easily extended.

  • Small runtime footprint that could be used on mobile, edge or cloud.

  • Standardizing spec for model training script format.

[Figure: TinyMS Architecture]

Workflow analysis

Typical model development workflow:

  • Data Acquisition: Dataset download, decompression, loading, etc.

  • Data Processing: Data preprocessing (enhancement) operations performed on the original dataset for better model performance.

  • Model Construction: Construction of the network, and also the definition of loss function, optimizer, etc.

  • Model Training: The process of model training, including the definition of callbacks

  • Accuracy Verification: The process of model accuracy verification, including the definition of metrics

  • Model Deployment: Model application services via an inference server

[Figure: TinyMS Workflow]

Module design

TinyMS has the following modules:

  • app: Support OpenCV to Achieve Model Inference Visualization (from tinyms.app import object_detection)

  • data: Dataset Loading and Downloading (from tinyms.data import MnistDataset, download_dataset)

  • hub: Pre-trained Model Hub for Inference and Transfer Learning (from tinyms import hub)

  • model: Model High Level API and Predefined Networks (from tinyms.model import Model, lenet5)

  • serving: Model Serving (from tinyms.serving import predict)

  • vision: Computer Vision Related Data Processing (from tinyms.vision import mnist_transform, Resize)

  • text: Natural Language Processing Related Data Processing (from tinyms.text import Lookup)

  • callbacks: Callbacks During Model Training (from tinyms.callbacks import ModelCheckpoint)

  • common: Basic Components Including Tensor and NumPy-Style Functions (from tinyms import Tensor, array)

  • context: Global Context (from tinyms import context)

  • initializers: Ops Weight Initialization (from tinyms.initializers import Normal)

  • layers: Neural Network Layers (from tinyms.layers import Layer, Conv2d)

  • losses: Loss Functions (from tinyms.losses import SoftmaxCrossEntropyWithLogits)

  • metrics: Metrics for Model Verification (from tinyms.metrics import Accuracy)

  • optimizers: Optimizers (from tinyms.optimizers import Momentum)

  • primitives: Basic Ops (from tinyms.primitives import Add, tensor_add)

Implementation

Data loading (data)

The data loading module is mainly divided into two parts: dataset download and loading. Through TinyMS’s data loading API, developers can complete the entire process of downloading, decompressing, formatting, and loading common datasets with just two lines of code.

Most AI frameworks do not provide an interface for dataset download. Users need to prepare the dataset in advance and, at the same time, adjust its format (training/validation split, etc.) to match the data loading API provided by the framework itself. To make dataset API usage much easier, TinyMS provides the download_dataset interface, which lets users complete the download, decompression and format adjustment of a dataset with one click; take the Mnist dataset as an example:

from tinyms.data import download_dataset

mnist_path = download_dataset('mnist', local_path='./')

For data loading operations, TinyMS completely inherits MindSpore’s native Data Loading API. Users can use the xxxDataset interface to instantiate different data sets very conveniently. Take MnistDataset as an example:

from tinyms.data import MnistDataset

mnist_ds = MnistDataset(mnist_path, shuffle=True)

Data preprocessing (vision, text)

Usually in the model development workflow, data processing presents a big challenge: insufficient data, heavy manual labeling tasks, irregular data formats and many other issues. Any of them could affect the network accuracy after training. Most frameworks provide data processing modules. Take MindSpore as an example: it currently provides data processing functions for common scenarios such as CV and NLP (for the relevant interface definitions, please refer to mindspore.dataset.vision and mindspore.dataset.text). The user can directly call the preset data processing operators to process pictures or text, and then construct a data processing pipeline to efficiently parallelize massive data (see here).

TinyMS adds further abstraction and encapsulation on top of MindSpore: the DatasetTransform interface corresponds directly to the processing of the dataset itself, allowing users to preprocess a single piece of data or an entire dataset with just one line of code; take MnistTransform as an example:

from PIL import Image
from tinyms.vision import mnist_transform

# Preprocessing a single one picture
img = mnist_transform(Image.open('picture.jpg'))
# Apply preprocessing to MnistDataset class instance
mnist_ds = mnist_transform.apply_ds(mnist_ds)

Model construction (model)

As the core of deep learning model development, the framework’s main responsibility is to provide complete operator expressions to build different network structures. Therefore, the interfaces at the framework level focus more on functional completeness and flexibility, whereas ModelZoo is provided for application development. TinyMS encapsulates the relevant network call API on the ModelZoo script; take the LeNet5 network as an example:

from tinyms.model import lenet5

net = lenet5(class_num=10)

In addition to encapsulating the commonly used network structures, TinyMS also provides a Model high-level API interface (based on the MindSpore Model interface), which, by drawing on the design ideas of the Keras Model interface, not only improves the original API functionalities but also provides a consistent development experience for Keras users who want to try TinyMS:

from tinyms.model import Model

model = Model(net)
model.compile(loss_fn=net_loss, optimizer=net_opt)

Model training (losses, optimizers, callbacks)

For the model training phase, the most important factors are the definitions of loss functions, optimizers, and callback functions. For beginners, it is not difficult to understand the basic principles of loss functions and optimizers, but a strong mathematical background is required to understand the principles of implementation. Therefore, the TinyMS high-level API encapsulates the loss function and optimizer at the network level, so that users can complete the initialization work with one line of code whether they are training simple or complex networks; take the LeNet5 network as an example:

from tinyms.losses import SoftmaxCrossEntropyWithLogits
from tinyms.optimizers import Momentum

lr = 0.01
momentum = 0.9
net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_opt = Momentum(net.trainable_params(), lr, momentum)

Regarding the definition of callback functions, in addition to commonly used callback functions (such as TimeMonitor, LossMonitor, etc.), MindSpore itself provides Callback interface to facilitate user-defined callback functions. The TinyMS high-level API also provides network-level encapsulation, so that users can complete the initialization of the callback function with one line of code; take the MobileNetV2 network as an example:

from tinyms.callbacks import mobilenetv2_cb

net_cb = mobilenetv2_cb(device_target, lr, is_saving_checkpoint, save_checkpoint_epochs, step_size)

Model evaluating (metrics)

Model accuracy verification is an indispensable process to verify whether the model accuracy meets the SOTA criteria. MindSpore natively provides measurement interfaces for indicators such as Accuracy and Precision (see here), while providing users with a custom measurement interface Metric. In terms of metric measurement, TinyMS directly inherits the native MindSpore API:

from tinyms.model import Model
from tinyms.metrics import Accuracy

model = Model(net)
model.compile(metrics={"Accuracy": Accuracy()})
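Once compiled with metrics, accuracy can be computed over an evaluation dataset. A brief sketch, assuming the MindSpore-style eval method that the TinyMS Model interface wraps and an already-prepared evaluation dataset eval_ds:

# evaluate the compiled metrics over a prepared dataset (eval_ds is a placeholder)
acc = model.eval(eval_ds)
print(acc)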

Pre-trained model loading (hub)

TinyMS Hub is a pre-trained model application tool, serving as a channel for model developers and application developers.

  • Provide model developers with a convenient and fast channel for model release and submission.

  • Provide application developers with high-quality pre-trained models, and complete the work of model migration to deployment quickly using model loading and fine-tuning APIs.

Current pre-trained models in TinyMS Hub mainly cover four mainstream task scenarios including image classification, object detection, semantic segmentation and recommendation.

There are several scenarios where users can leverage hub to easily load a pre-trained model:

  • Load pre-trained model

    from PIL import Image
    from tinyms import hub
    from tinyms.vision import mnist_transform
    from tinyms.model import Model
    
    img = Image.open(img_path)
    img = mnist_transform(img)
    
    # load LeNet5 pre-trained model
    net = hub.load('tinyms/0.2/lenet5_v1_mnist', class_num=10)
    model = Model(net)
    
    res = model.predict(ts.expand_dims(ts.array(img), 0)).asnumpy()
    print("The label is:", mnist_transform.postprocess(res))
    
  • Load model checkpoint

    from tinyms import hub
    from tinyms.model import lenet5
    from tinyms.utils.train import load_checkpoint
    
    ckpt_dist_file = '/tmp/lenet5.ckpt'
    hub.load_checkpoint('tinyms/0.2/lenet5_v1_mnist', ckpt_dist_file)
    net = lenet5()
    load_checkpoint(ckpt_dist_file, net=net)
    
  • Load model weights

    from tinyms import hub
    from tinyms.model import lenet5
    from tinyms.utils.train import load_param_into_net
    
    param_dict = hub.load_weights('tinyms/0.2/lenet5_v1_mnist')
    net = lenet5()
    load_param_into_net(net, param_dict)
    

Model deployment (serving)

Model deployment refers to the process of servicing pre-trained models so that they can quickly and efficiently process user input and return results. MindSpore provides the predict function for inference. On top of Flask, TinyMS provides a complete set of functions to start the server (start_server), check the backend (list_servables), check the start status (server_started) and shut down the server (shutdown); take the LeNet5 network as an example:

from tinyms.serving import Server, Client

server = Server()
# Start prediction server
server.start_server()

client = Client()
# List all servables available
client.list_servables()
# Call predict interface
client.predict(image_path, 'lenet5', dataset_name='mnist')
# Shutdown the prediction server
server.shutdown()

In addition, TinyMS provides a web visualization interface, so users can directly upload a picture on the web page for inference; currently the LeNet5, CycleGAN and SSD300 networks are supported. The developer only needs to start the backend inference server and then deploy the front-end server through the Nginx web server. The front-end project is stored in the tinyms/serving/web directory of the TinyMS project. If you want a quick try, visit the Install TinyMS Nginx version section:

# Start web backend server
from tinyms.serving import Server

server = Server()
server.start_server()

Model inference visualization application (app)

OpenCV is a library for computer vision, and TinyMS is a high-level API library for deep learning frameworks. Usually, after training, when we load a pre-trained model to verify its effect, the result is just a bunch of numbers. This data is boring and unintuitive for beginners, and it is very difficult to understand what it represents. Therefore, TinyMS makes model inference visualization a main feature of version 0.3.0, combining OpenCV to realize real-time detection and visualization on images, helping users see the effect of inference more intuitively. At present, the visual inference module only supports the object detection model SSD300, and more image processing models will be added in the future.

Below, we demonstrate how to use a trained model to detect objects in static images and in live video captured by a computer camera, in just 5 steps:

  • Static image object detection

import cv2

from tinyms.app.object_detection.utils.config_util import load_and_parse_config
from tinyms.app.object_detection.object_detector import ObjectDetector, object_detection_predict
from tinyms.app.object_detection.utils.view_util import visualize_boxes_on_image

# 1.Load and parse the config json file
config_path = '**/ssd300_shanshui.json'
config = load_and_parse_config(config_path=config_path)

# 2.Generate the instance of ObjectDetector
detector = ObjectDetector(config=config)

# 3.Read the input image using OpenCV
img_path = './pic/test.jpeg'
image_np = cv2.imread(img_path)
input = image_np.copy()

# 4.Detect the input image
detection_bbox_data = object_detection_predict(input, detector, is_training=False)

# 5.Draw the box for the input image and view it using OpenCV.
detection_image_np = visualize_boxes_on_image(image_np, detection_bbox_data, box_color=(0, 255, 0),
                                              box_thickness=3, text_font=cv2.FONT_HERSHEY_PLAIN,
                                              font_scale=2, text_color=(0, 0, 255), font_size=3, show_scores=True)
cv2.imshow('object detection image', cv2.resize(detection_image_np, (600, 1000)))
cv2.waitKey(0)
  • Real-time dynamic detection of video images collected by computer camera

import cv2

from tinyms.app.object_detection.utils.config_util import load_and_parse_config
from tinyms.app.object_detection.object_detector import ObjectDetector, object_detection_predict
from tinyms.app.object_detection.utils.view_util import visualize_boxes_on_image

# 1.Load and parse the config json file
config_path = "**/tinyms/app/object_detection/configs/tinyms/0.3/ssd300_voc.json"
config = load_and_parse_config(config_path=config_path)

# 2.Generate the instance of ObjectDetector
detector = ObjectDetector(config=config)

cap = cv2.VideoCapture(0)
while True:
    # 3.Read the frame image from the camera using OpenCV
    ret, image_np = cap.read()
    input = image_np.copy()

    # 4.Detect the input frame image
    detection_bbox_data = object_detection_predict(input, detector, is_training=False)

    # 5.Draw the box for the input frame image and view it using OpenCV.
    detection_image_np = visualize_boxes_on_image(image_np, detection_bbox_data, box_color=(0, 255, 0),
                                                  box_thickness=3, text_font=cv2.FONT_HERSHEY_PLAIN,
                                                  font_scale=2, text_color=(0, 0, 255), font_size=3, show_scores=True)
    cv2.imshow('object detection camera', cv2.resize(detection_image_np, (800, 600)))

    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

tinyms

Top-level reference to the dtypes of the common module. This module also provides NumPy-like interfaces in TinyMS.

Examples

>>> import tinyms as ts
>>>
>>> print(ts.ones([2, 3]))
[[1. 1. 1.]
[1. 1. 1.]]
class tinyms.QuantDtype[source]

An enum for quant datatypes, containing INT1 ~ INT16 and UINT1 ~ UINT16.

QuantDtype is defined in mindspore/common/dtype.py, use command below to import:

from mindspore import QuantDtype
tinyms.dtype_to_nptype(type_)[source]

Convert MindSpore dtype to numpy data type.

Parameters:

type_ (mindspore.dtype) – MindSpore’s dtype.

Returns:

The data type of numpy.
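Examples

A brief usage sketch, assuming the dtype constants are re-exported at the tinyms top level as the module description above indicates:

>>> import tinyms as ts
>>> ts.dtype_to_nptype(ts.float32)
<class 'numpy.float32'>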

tinyms.dtype_to_pytype(type_)[source]

Convert MindSpore dtype to python data type.

Parameters:

type_ (mindspore.dtype) – MindSpore’s dtype.

Returns:

Type of python.

tinyms.pytype_to_dtype(obj)[source]

Convert python type to MindSpore type.

Parameters:

obj (type) – A python type object.

Returns:

Type of MindSpore type.

Raises:

NotImplementedError – If the python type cannot be converted to MindSpore type.

tinyms.get_py_obj_dtype(obj)[source]

Get the MindSpore data type, which corresponds to python type or variable.

Parameters:

obj (type) – An object of python type, or a variable of python type.

Returns:

Type of MindSpore type.

class tinyms.Tensor(input_data=None, dtype=None, shape=None, init=None, internal=False, const_arg=False)[source]

Tensor is a data structure that stores an n-dimensional array.

Parameters:
  • input_data (Union[Tensor, float, int, bool, tuple, list, numpy.ndarray]) – The data to be stored. It can be another Tensor, Python number or NumPy ndarray. Default: None.

  • dtype (mindspore.dtype) – Used to indicate the data type of the output Tensor. The argument should be defined in mindspore.dtype. If it is None, the data type of the output Tensor will be the same as the input_data. Default: None.

  • shape (Union[tuple, list, int]) – Used to indicate the shape of the output Tensor. The argument should be a list of integers, a tuple of integers or an integer. If input_data is available, shape doesn’t need to be set. If None in shape, a tensor of dynamic shape is created, input_data doesn’t need to be set; if None not in shape, a tensor of static shape is created, input_data or init must be set. Default: None.

  • init (Initializer) – The information of init data. ‘init’ is used for delayed initialization in parallel mode. Usually, it is not recommended to use ‘init’ interface to initialize Tensor in the other conditions. If ‘init’ interface is used to initialize Tensor, the Tensor.init_data API needs to be called to convert Tensor to the actual data. Default: None.

  • internal (bool) – Whether it is created by the framework. ‘True’ means that the tensor is created by framework. ‘False’ means that the tensor is created by user. Default: False

  • const_arg (bool) – Whether the tensor is a constant when it is used for the argument of a network. Default: False.

Outputs:

Tensor.

Note

The default value None of input_data works as a placeholder; it does not mean that we can create a NoneType Tensor.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.common.initializer import One
>>> # initialize a tensor with numpy.ndarray
>>> t1 = Tensor(np.zeros([1, 2, 3]), ms.float32)
>>> print(t1)
[[[0. 0. 0.]
[0. 0. 0.]]]
>>> print(type(t1))
<class 'mindspore.common.tensor.Tensor'>
>>> print(t1.shape)
(1, 2, 3)
>>> print(t1.dtype)
Float32
>>>
>>> # initialize a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> print(t2)
0.1
>>> print(type(t2))
<class 'mindspore.common.tensor.Tensor'>
>>> print(t2.shape)
()
>>> print(t2.dtype)
Float32
>>>
>>> # initialize a tensor with a tuple
>>> t3 = Tensor((1, 2))
>>> print(t3)
[1 2]
>>> print(type(t3))
<class 'mindspore.common.tensor.Tensor'>
>>> print(t3.shape)
(2,)
>>> print(t3.dtype)
Int64
...
>>> # initialize a tensor with init
>>> t4 = Tensor(shape = (1, 3), dtype=ms.float32, init=One())
>>> print(t4)
[[1. 1. 1.]]
>>> print(type(t4))
<class 'mindspore.common.tensor.Tensor'>
>>> print(t4.shape)
(1, 3)
>>> print(t4.dtype)
Float32
property H

Returns a view of a matrix (2-D tensor) conjugated and transposed. x.H is equivalent to mindspore.Tensor.swapaxes(0, 1).conj() for complex matrices and mindspore.Tensor.swapaxes(0, 1) for real matrices.

property T

Return the transposed tensor.

abs()[source]

For details, please refer to mindspore.ops.abs().

absolute()[source]

Alias for mindspore.Tensor.abs().

acos()[source]

For details, please refer to mindspore.ops.acos().

acosh()[source]

For details, please refer to mindspore.ops.acosh().

add(other)[source]

For details, please refer to mindspore.ops.add().

addbmm(batch1, batch2, *, beta=1, alpha=1)[source]

For details, please refer to mindspore.ops.addbmm().

addcdiv(tensor1, tensor2, value=1)[source]

For details, please refer to mindspore.ops.addcdiv().

addcmul(tensor1, tensor2, value=1)[source]

For details, please refer to mindspore.ops.addcmul().

addmm(mat1, mat2, *, beta=1, alpha=1)[source]

For details, please refer to mindspore.ops.addmm().

addmv(mat, vec, beta=1, alpha=1)[source]

For details, please refer to mindspore.ops.addmv().

addr(vec1, vec2, beta=1, alpha=1)[source]

For details, please refer to mindspore.ops.addr().

adjoint()[source]

For details, please refer to mindspore.ops.adjoint().

all(axis=None, keep_dims=False)[source]

For details, please refer to mindspore.ops.all().

amax(axis=None, keepdims=False, *, initial=None, where=None)[source]

For details, please refer to mindspore.ops.amax().

amin(axis=None, keepdims=False, *, initial=None, where=None)[source]

For details, please refer to mindspore.ops.amin().

angle()[source]

For details, please refer to mindspore.ops.angle().

any(axis=None, keep_dims=False)[source]

For details, please refer to mindspore.ops.any().

approximate_equal(other, tolerance=1e-05)[source]

For details, please refer to mindspore.ops.approximate_equal().

arccos()[source]

Alias for mindspore.Tensor.acos().

arccosh()[source]

For details, please refer to mindspore.ops.arccosh().

arcsin()[source]

For details, please refer to mindspore.ops.arcsin().

arcsinh()[source]

Alias for mindspore.Tensor.asinh().

arctan()[source]

For details, please refer to mindspore.ops.arctan().

arctan2(other)[source]

For details, please refer to mindspore.ops.arctan2().

arctanh()[source]

Alias for mindspore.Tensor.atanh().

argmax(axis=None, keepdims=False)[source]

For details, please refer to mindspore.ops.argmax().

argmax_with_value(axis=0, keep_dims=False)[source]

Returns the maximum value with corresponding index.

Compute the max value of input Tensor on the specified axis, and return the max value and index.

Note

In auto_parallel and semi_auto_parallel mode, the first output index cannot be used.

Warning

  • If there are multiple maximum values, the index of the first maximum value is used.

  • The value range of axis is [-dims, dims - 1]. dims is the dimension length of this tensor.

Parameters:
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Returns:

tuple (Tensor), tuple of 2 tensors, containing the maximum value of the input tensor and the corresponding index.

  • value (Tensor) - The maximum value of the input tensor, with the same shape as index.

  • index (Tensor) - The index for the maximum value of the input tensor. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output, index = x.argmax_with_value()
>>> print(output, index)
0.7 3
>>> output, index = x.argmax_with_value(keep_dims=True)
>>> print(output, index)
[0.7] [3]
argmin(axis=None, keepdims=False)[source]

For details, please refer to mindspore.ops.argmin().

argmin_with_value(axis=0, keep_dims=False)[source]

Returns the minimum value with corresponding index.

Note

In auto_parallel and semi_auto_parallel mode, the first output index cannot be used.

Warning

  • If there are multiple minimum values, the index of the first minimum value is used.

  • The value range of axis is [-dims, dims - 1]. dims is the dimension length of this tensor.

Parameters:
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Returns:

tuple (Tensor), tuple of 2 tensors, containing the minimum value of the input tensor and the corresponding index.

  • value (Tensor) - The minimum value of the input tensor, with the same shape as index.

  • index (Tensor) - The index for the minimum value of the input tensor. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output, index = x.argmin_with_value()
>>> print(output, index)
0.0 0
>>> output, index = x.argmin_with_value(keep_dims=True)
>>> print(output, index)
[0.0] [0]
argsort(axis=-1, descending=False)[source]

For details, please refer to mindspore.ops.argsort().

argwhere()[source]

For details, please refer to mindspore.ops.argwhere().

asin()[source]

For details, please refer to mindspore.ops.asin().

asinh()[source]

For details, please refer to mindspore.ops.asinh().

asnumpy()[source]

Convert tensor to numpy array. Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray.

Returns:

A numpy ndarray which shares the same underlying storage with the tensor.

Examples

>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.array([1, 2], dtype=np.float32))
>>> y = x.asnumpy()
>>> y[0] = 11
>>> print(x)
[11.  2.]
>>> print(y)
[11.  2.]
asnumpy_of_slice_persistent_data(param_key, slice_index)[source]

Convert a slice of tensor data to a numpy array. A slice is part of the tensor's data. The returned ndarray and the slice share the same underlying storage, so changes to the tensor will be reflected in the ndarray.

Returns:

A numpy ndarray which shares the same underlying storage with the slice of tensor data.

assign_value(value)[source]

Assign another tensor value to this tensor.

Parameters:

value (Tensor) – Tensor for assignment.

Returns:

Tensor, Tensor that’s been assigned.

assign_value_cpp(self: mindspore._c_expression.Tensor, arg0: mindspore._c_expression.Tensor) → mindspore._c_expression.Tensor

Assign another tensor value to this tensor.

Arg:

value (mindspore.tensor): The value tensor.

Examples

>>> import numpy as np
>>> import mindspore
>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data2 = mindspore.Tensor(np.ones((2, 2), np.float32))
>>> data.assign_value(data2)
>>> data.shape
(2, 2)
astype(dtype, copy=True)[source]

Return a copy of the tensor, cast to a specified type.

Parameters:
  • dtype (Union[mindspore.dtype, numpy.dtype, str]) – Designated tensor dtype, can be in format of mindspore.dtype.float32 or numpy.float32 or float32.

  • copy (bool, optional) – By default, astype always returns a newly allocated tensor. If this is set to false, the input tensor is returned instead of a copy. Default: True.

Returns:

Tensor, with the designated dtype.

Raises:

TypeError – If the specified dtype cannot be understood.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((1,2,2,1), dtype=np.float32))
>>> x = x.astype("int32")
>>> print(x.dtype)
Int32
atan()[source]

For details, please refer to mindspore.ops.atan().

atan2(other)[source]

For details, please refer to mindspore.ops.atan2().

atanh()[source]

For details, please refer to mindspore.ops.atanh().

baddbmm(batch1, batch2, beta=1, alpha=1)[source]

For details, please refer to mindspore.ops.baddbmm().

bernoulli(p=0.5, seed=None)[source]

For details, please refer to mindspore.ops.bernoulli().

bincount(weights=None, minlength=0)[source]

For details, please refer to mindspore.ops.bincount().

bitwise_and(other)[source]

For details, please refer to mindspore.ops.bitwise_and().

bitwise_left_shift(other)[source]

For details, please refer to mindspore.ops.bitwise_left_shift().

bitwise_or(other)[source]

For details, please refer to mindspore.ops.bitwise_or().

bitwise_right_shift(other)[source]

For details, please refer to mindspore.ops.bitwise_right_shift().

bitwise_xor(other)[source]

For details, please refer to mindspore.ops.bitwise_xor().

bmm(mat2)[source]

For details, please refer to mindspore.ops.bmm().

bool()[source]

Converts input tensor dtype to bool. If the value in tensor is zero, it will be False, otherwise it will be True.

Returns:

Tensor, converted to the bool dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> input_x = Tensor(np.ones([2,2]), mindspore.float32)
>>> output = input_x.bool()
>>> print(output.dtype)
Bool
broadcast_to(shape)[source]

For details, please refer to mindspore.ops.broadcast_to().

ceil()[source]

For details, please refer to mindspore.ops.ceil().

cholesky(upper=False)[source]

For details, please refer to mindspore.ops.cholesky().

cholesky_inverse(upper=False)[source]

For details, please refer to mindspore.ops.cholesky_inverse().

choose(choices, mode='clip')[source]

Construct a tensor from an index tensor and a list of tensors to choose from.

Parameters:
  • choices (Union[tuple, list, Tensor]) – Choice tensors. The input tensor and all of the choices must be broadcastable to the same shape. If choices is itself a tensor, then its outermost dimension (i.e., the one corresponding to choices.shape[0]) is taken as defining the “sequence”.

  • mode ('raise', 'wrap', 'clip', optional) –

    Specifies how indices outside [0, n-1] will be treated:

    • raise: Raises an error;

    • wrap: Wraps around;

    • clip: Clips to the range. ‘clip’ mode means that values greater than n-1 are mapped to n-1. Note that this disables indexing with negative numbers.

    Default: ‘clip’.

Returns:

Tensor, the merged result.

Raises:

ValueError – If the input tensor and any of the choices cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33]]
>>> x = Tensor(np.array([2, 3, 1, 0]))
>>> print(x.choose(choices))
[20 31 12  3]
chunk(chunks, axis=0)[source]

For details, please refer to mindspore.ops.chunk().

clamp(min=None, max=None)[source]

For details, please refer to mindspore.ops.clamp().

clip(min=None, max=None)[source]

Alias for mindspore.Tensor.clamp().

col2im(output_size, kernel_size, dilation, padding_value, stride)[source]

For details, please refer to mindspore.ops.col2im().

conj()[source]

For details, please refer to mindspore.ops.conj().

copy()[source]

Return a copy of the tensor.

Note

The current implementation does not support the order argument.

Returns:

Copied tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.ones((3,3)).astype("float32"))
>>> output = a.copy()
>>> print(output)
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
copysign(other)[source]

For details, please refer to mindspore.ops.copysign().

cos()[source]

For details, please refer to mindspore.ops.cos().

cosh()[source]

For details, please refer to mindspore.ops.cosh().

cov(*, correction=1, fweights=None, aweights=None)[source]

For details, please refer to mindspore.ops.cov().

cross(other, dim=None)[source]

For details, please refer to mindspore.ops.cross().

cummax(axis)[source]

For details, please refer to mindspore.ops.cummax().

cummin(axis)[source]

For details, please refer to mindspore.ops.cummin().

cumprod(dim, dtype=None)[source]

For details, please refer to mindspore.ops.cumprod().

cumsum(axis=None, dtype=None)[source]

For details, please refer to mindspore.ops.cumsum().

data_sync(self: mindspore._c_expression.Tensor, arg0: bool) → None

deg2rad()[source]

For details, please refer to mindspore.ops.deg2rad().

det()[source]

For details, please refer to mindspore.ops.det().

diag()[source]

For details, please refer to mindspore.ops.diag().

diagflat(offset=0)[source]

For details, please refer to mindspore.ops.diagflat().

diagonal(offset=0, axis1=0, axis2=1)[source]

For details, please refer to mindspore.ops.diagonal().

diff(n=1, axis=-1, prepend=None, append=None)[source]

For details, please refer to mindspore.ops.diff().

digamma()[source]

For details, please refer to mindspore.ops.digamma().

dim(self: mindspore._c_expression.Tensor) → int

Get tensor’s data dimension.

Returns:

int, the dimension of tensor.

Examples

>>> import numpy as np
>>> import mindspore
>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.dim()
2
div(value, *, rounding_mode=None)[source]

For details, please refer to mindspore.ops.div().

divide(value, *, rounding_mode=None)[source]

Alias for mindspore.Tensor.div().

dot(other)[source]

For details, please refer to mindspore.ops.dot().

dsplit(indices_or_sections)[source]

For details, please refer to mindspore.ops.dsplit().

property dtype

Return the dtype of the tensor (mindspore.dtype).
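
Examples

A minimal sketch:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((1, 2)), ms.float32)
>>> print(x.dtype)
Float32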

equal(other)[source]

For details, please refer to mindspore.ops.equal().

erf()[source]

For details, please refer to mindspore.ops.erf().

erfc()[source]

For details, please refer to mindspore.ops.erfc().

erfinv()[source]

For details, please refer to mindspore.ops.erfinv().

exp()[source]

For details, please refer to mindspore.ops.exp().

expand(size)[source]

For details, please refer to mindspore.ops.expand().

expand_as(x)[source]

Expand the tensor to the same shape as the input tensor x, following the broadcasting rule.

Parameters:

x (Tensor) – The input tensor. The shape of the input tensor must obey the broadcasting rule.

Returns:

Tensor, has the same shape as the input tensor x.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> x = Tensor([1, 2, 3], dtype=mstype.float32)
>>> y = Tensor(np.ones((2, 3)), dtype=mstype.float32)
>>> output = x.expand_as(y)
>>> print(output)
[[1. 2. 3.]
[1. 2. 3.]]
expand_dims(axis)[source]

For details, please refer to mindspore.ops.expand_dims().

expm1()[source]

For details, please refer to mindspore.ops.expm1().

fill(value)[source]

Tensor.fill is deprecated, please use ops.fill instead.

fills(value)[source]

Tensor.fills is deprecated, please use ops.fill instead.

flatten(order='C', *, start_dim=0, end_dim=-1)[source]

For details, please refer to mindspore.ops.flatten().

flip(dims)[source]

For details, please refer to mindspore.ops.flip().

fliplr()[source]

For details, please refer to mindspore.ops.fliplr().

flipud()[source]

For details, please refer to mindspore.ops.flipud().

float()[source]

Converts input tensor dtype to float32.

Returns:

Tensor, converted to the float32 dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> input_x = Tensor(np.ones([2,2]), mindspore.int32)
>>> output = input_x.float()
>>> print(output.dtype)
Float32
float_power(other)[source]

For details, please refer to mindspore.ops.float_power().

floor()[source]

For details, please refer to mindspore.ops.floor().

flush_from_cache()[source]

Flush cache data to host if the tensor is cache enabled.

Examples

>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.array([1, 2], dtype=np.float32))
>>> y = x.flush_from_cache()
>>> print(y)
None
fmod(other)[source]

For details, please refer to mindspore.ops.fmod().

fold(output_size, kernel_size, dilation=1, padding=0, stride=1)[source]

For details, please refer to mindspore.ops.fold().

frac()[source]

For details, please refer to mindspore.ops.frac().

static from_numpy(array)[source]

Convert numpy array to Tensor. If the data is not C contiguous, it will be copied to a C-contiguous array in order to construct the tensor. Otherwise, the tensor will be constructed using this numpy array without copy.

Parameters:

array (numpy.array) – The input array.

Returns:

Tensor, has the same data type as input array.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = np.array([1, 2])
>>> output = Tensor.from_numpy(x)
>>> print(output)
[1 2]
gather(input_indices, axis, batch_dims=0)[source]

For details, please refer to mindspore.ops.gather().

gather_elements(dim, index)[source]

For details, please refer to mindspore.ops.gather_elements().

gather_nd(indices)[source]

For details, please refer to mindspore.ops.gather_nd().

ge(x)[source]

For details, please refer to mindspore.ops.ge().

geqrf()[source]

For details, please refer to mindspore.ops.geqrf().

ger(vec2)[source]

For details, please refer to mindspore.ops.ger().

getitem_index_info(self: object, arg0: object, arg1: bool_) → object

greater(other)[source]

For details, please refer to mindspore.ops.greater().

greater_equal(other)[source]

For details, please refer to mindspore.ops.greater_equal().

gt(x)[source]

For details, please refer to mindspore.ops.gt().

half()[source]

Converts input tensor dtype to float16.

Returns:

Tensor, converted to the float16 dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> input_x = Tensor(np.ones([2,2]), mindspore.int32)
>>> output = input_x.half()
>>> print(output.dtype)
Float16
hardshrink(lambd=0.5)[source]

For details, please refer to mindspore.ops.hardshrink().

property has_init

Whether tensor is initialized.

heaviside(values)[source]

For details, please refer to mindspore.ops.heaviside().

histc(bins=100, min=0.0, max=0.0)[source]

For details, please refer to mindspore.ops.histc().

hsplit(indices_or_sections)[source]

For details, please refer to mindspore.ops.hsplit().

hypot(other)[source]

For details, please refer to mindspore.ops.hypot().

i0()[source]

For details, please refer to mindspore.ops.i0().

igamma(other)[source]

For details, please refer to mindspore.ops.igamma().

igammac(other)[source]

For details, please refer to mindspore.ops.igammac().

imag()[source]

Returns a new tensor containing the imaginary values of the input tensor. If the input tensor is real, it returns zeros.

Returns:

Tensor, the shape is the same as the input tensor.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.asarray(1.3 + 0.4j), mindspore.complex64)
>>> output = x.imag()
>>> print(output)
0.4
index_add(dim, index, source, *, alpha=1)[source]

For details, please refer to mindspore.ops.index_add().

index_fill(axis, index, value)[source]

For details, please refer to mindspore.ops.index_fill().

index_select(axis, index)[source]

For details, please refer to mindspore.ops.index_select().

init_data(slice_index=None, shape=None, opt_shard_group=None)[source]

Get the tensor format data of this Tensor.

Note

The init_data function can be called only once for the same tensor.

Parameters:
  • slice_index (int) – Slice index of a parameter’s slices. It is used when initialize a slice of a parameter, it guarantees that devices using the same slice can generate the same tensor. Default: None.

  • shape (list[int]) – Shape of the slice, it is used when initialize a slice of the parameter. Default: None.

  • opt_shard_group (str) – Optimizer shard group which is used in auto or semi auto parallel mode to get one shard of a parameter’s slice. Default: None.

Returns:

Initialized Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore.common.initializer import initializer, Constant
>>> x = initializer(Constant(1), [2, 2], ms.float32)
>>> out = x.init_data()
>>> print(out)
[[1. 1.]
 [1. 1.]]
inner(other)[source]

For details, please refer to mindspore.ops.inner().

inplace_update(v, indices)[source]

For details, please refer to mindspore.ops.inplace_update().

int()[source]

Converts input tensor dtype to int32. If the value in tensor is float or half, the decimal will be discarded.

Returns:

Tensor, converted to the int32 dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> input_x = Tensor(np.ones([2,2]), mindspore.float32)
>>> output = input_x.int()
>>> print(output.dtype)
Int32
inv()[source]

For details, please refer to mindspore.ops.inv().

inverse()[source]

For details, please refer to mindspore.ops.inverse().

invert()[source]

For details, please refer to mindspore.ops.invert().

is_complex()[source]

For details, please refer to mindspore.ops.is_complex().

is_floating_point()[source]

For details, please refer to mindspore.ops.is_floating_point().

is_init(self: mindspore._c_expression.Tensor) → bool

Get tensor init_flag.

Returns:

bool, whether the tensor init.

Examples

>>> import numpy as np
>>> import mindspore
>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.is_init()
False
is_persistent_data()[source]

Check whether the size of the tensor is huge enough to require saving its data to persistent storage. If the size of the tensor is bigger than MS_EMBEDDING_REMOTE_CACHE_MEMORY_SIZE, persistent storage is used to save the tensor data, and the data is split into slices.

Returns:

True or False

is_signed()[source]

Judge whether the data type of tensor is a signed data type.

Returns:

Bool. If the dtype of self is a signed data type, return True. Otherwise, return False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> x = ms.Tensor([1, 2, 3], ms.int64)
>>> y = ms.Tensor([1, 2, 3], ms.uint64)
>>> output = x.is_signed()
>>> output2 = y.is_signed()
>>> print(output)
True
>>> print(output2)
False
isclose(x2, rtol=1e-05, atol=1e-08, equal_nan=False)[source]

For details, please refer to mindspore.ops.isclose().

isfinite()[source]

For details, please refer to mindspore.ops.isfinite().

isinf()[source]

For details, please refer to mindspore.ops.isinf().

isnan()[source]

For details, please refer to mindspore.ops.isnan().

isneginf()[source]

For details, please refer to mindspore.ops.isneginf().

isposinf()[source]

For details, please refer to mindspore.ops.isposinf().

isreal()[source]

For details, please refer to mindspore.ops.isreal().

item(index=None)[source]

Get the item at the specified index of the tensor.

Note

Tensor.item returns a Tensor scalar instead of a Python scalar.

Parameters:

index (Union[None, int, tuple(int)]) – The index in Tensor. Default: None.

Returns:

A Tensor scalar, dtype is the same with the original Tensor.

Raises:

ValueError – If the length of the index is not equal to self.ndim.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,2,3],[4,5,6]], dtype=np.float32))
>>> x = x.item((0,1))
>>> print(x)
2.0
itemset(*args)[source]

Insert scalar into a tensor (scalar is cast to tensor’s dtype, if possible).

There must be at least 1 argument, and the last argument is defined as item. Then, tensor.itemset(*args) is equivalent to \(Tensor[args] = item\).

Parameters:

args (Union[(numbers.Number), (int/tuple(int), numbers.Number)]) – The arguments that specify the index and value. If args contain one argument (a scalar), it is only used in case tensor is of size 1. If args contain two arguments, the last argument is the value to be set and must be a scalar, the first argument specifies a single tensor element location. It is either an int or a tuple.

Returns:

A new tensor that doesn’t affect the original tensor, with value set by \(Tensor[args] = item\).

Raises:
  • ValueError – If the length of the first argument is not equal to self.ndim.

  • IndexError – If only one argument is provided, and the original Tensor is not scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,2,3],[4,5,6]], dtype=np.float32))
>>> print(x.itemset((0,1), 4))
[[1. 4. 3.]
[4. 5. 6.]]
>>> print(x)
[[1. 2. 3.]
[4. 5. 6.]]
property itemsize

Return the length of one tensor element in bytes.
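
Examples

A minimal sketch (each float32 element occupies 4 bytes):

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2, 2), dtype=np.float32))
>>> print(x.itemsize)
4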

lcm(other)[source]

For details, please refer to mindspore.ops.lcm().

ldexp(other)[source]

For details, please refer to mindspore.ops.ldexp().

le(other)[source]

For details, please refer to mindspore.ops.le().

lerp(end, weight)[source]

For details, please refer to mindspore.ops.lerp().

less(other)[source]

For details, please refer to mindspore.ops.less().

less_equal(other)[source]

For details, please refer to mindspore.ops.less_equal().

lgamma()[source]

For details, please refer to mindspore.ops.lgamma().

log()[source]

For details, please refer to mindspore.ops.log().

log10()[source]

For details, please refer to mindspore.ops.log10().

log1p()[source]

For details, please refer to mindspore.ops.log1p().

log2()[source]

For details, please refer to mindspore.ops.log2().

log_matrix_determinant()[source]

For details, please refer to mindspore.ops.log_matrix_determinant().

logaddexp(other)[source]

For details, please refer to mindspore.ops.logaddexp().

logaddexp2(other)[source]

For details, please refer to mindspore.ops.logaddexp2().

logdet()[source]

For details, please refer to mindspore.ops.logdet().

logical_and(other)[source]

For details, please refer to mindspore.ops.logical_and().

logical_not()[source]

For details, please refer to mindspore.ops.logical_not().

logical_or(other)[source]

For details, please refer to mindspore.ops.logical_or().

logical_xor(other)[source]

For details, please refer to mindspore.ops.logical_xor().

logit(eps=None)[source]

For details, please refer to mindspore.ops.logit().

logsumexp(axis, keepdims=False)[source]

For details, please refer to mindspore.ops.logsumexp().

long()[source]

Converts input tensor dtype to int64. If the value in tensor is float or half, the decimal will be discarded.

Returns:

Tensor, converted to the int64 dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> input_x = Tensor(np.ones([2,2]), mindspore.int32)
>>> output = input_x.long()
>>> print(output.dtype)
Int64
lstsq(A)[source]

For details, please refer to mindspore.ops.lstsq().

lt(other)[source]

Alias for mindspore.Tensor.less().

property mH

Accessing this property is equivalent to calling self.adjoint(). For details, please refer to mindspore.ops.adjoint().

property mT

Returns the Tensor that exchanges the last two dimensions. Accessing the attribute x.mT is equivalent to calling the method x.swapaxes(-2, -1). For details, please refer to mindspore.Tensor.swapaxes().
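
Examples

A minimal sketch (only the last two dimensions are exchanged):

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2, 3, 4), dtype=np.float32))
>>> print(x.mT.shape)
(2, 4, 3)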

masked_fill(mask, value)[source]

For details, please refer to mindspore.ops.masked_fill().

masked_select(mask)[source]

For details, please refer to mindspore.ops.masked_select().

matmul(tensor2)[source]

For details, please refer to mindspore.ops.matmul().

matrix_determinant()[source]

For details, please refer to mindspore.ops.matrix_determinant().

matrix_power(n)[source]

For details, please refer to mindspore.ops.matrix_power().

max(axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)[source]

Return the maximum of a tensor or maximum along an axis.

Parameters:
  • axis (Union[None, int, list, tuple of ints], optional) – Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before. Default: None.

  • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A boolean tensor which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If non-default value is passed, initial must also be provided. Default: True.

  • return_indices (bool, optional) – Whether to return the index of the maximum value. Default: False. If axis is a list or tuple of ints, it must be False.

Returns:

Tensor or scalar, maximum of input tensor. If axis is None, the result is a scalar value. If axis is given, the result is a tensor of dimension self.ndim - 1.

Raises:

TypeError – If arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.argmin(): Return the indices of the minimum values along an axis.

mindspore.Tensor.argmax(): Return the indices of the maximum values along an axis.

mindspore.Tensor.min(): Return the minimum of a tensor or minimum along an axis.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape((2, 2)).astype('float32'))
>>> output = a.max()
>>> print(output)
3.0
>>> value, indices = a.max(axis=0, return_indices=True)
>>> print(value)
[2. 3.]
>>> print(indices)
[1 1]
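
Combined with initial and where, elements can be masked out of the reduction (a sketch assuming numpy-like semantics, mirroring the min example below):

>>> output = a.max(axis=0, initial=-1, where=Tensor([False, True]))
>>> print(output)
[-1.  3.]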
maximum(other)[source]

For details, please refer to mindspore.ops.maximum().

mean(axis=None, keep_dims=False)[source]

For details, please refer to mindspore.ops.mean().

median(axis=-1, keepdims=False)[source]

For details, please refer to mindspore.ops.median().

min(axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)[source]

Return the minimum of a tensor or minimum along an axis.

Parameters:
  • axis (Union[None, int, list, tuple of ints], optional) – An axis or axes along which to operate. By default, flattened input is used. If axis is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before. Default: None.

  • keepdims (bool, optional) – If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (bool Tensor, optional) – A boolean tensor which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If non-default value is passed, initial must also be provided. Default: True.

  • return_indices (bool, optional) – Whether to return the index of the minimum value. Default: False. If axis is a list or tuple of ints, it must be False.

Returns:

Tensor or scalar, minimum of input tensor. If axis is None, the result is a scalar value. If axis is given, the result is a tensor of dimension self.ndim - 1.

Raises:

TypeError – If arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.argmin(): Return the indices of the minimum values along an axis.

mindspore.Tensor.argmax(): Return the indices of the maximum values along an axis.

mindspore.Tensor.max(): Return the maximum of a tensor or maximum along an axis.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape((2, 2)).astype('float32'))
>>> output = a.min()
>>> print(output)
0.0
>>> output = a.min(axis=0)
>>> print(output)
[0. 1.]
>>> output = a.min(axis=0, initial=9, where=Tensor([False]))
>>> print(output)
[9. 9.]
>>> output = a.min(axis=0, initial=9, where=Tensor([False, True]))
>>> print(output)
[9. 1.]
>>> value, indices = a.min(axis=0, return_indices=True)
>>> print(value)
[0. 1.]
>>> print(indices)
[0 0]
minimum(other)[source]

For details, please refer to mindspore.ops.minimum().

mm(mat2)[source]

For details, please refer to mindspore.ops.mm().

moveaxis(source, destination)[source]

For details, please refer to mindspore.ops.moveaxis().

movedim(source, destination)[source]

For details, please refer to mindspore.ops.movedim().

msort()[source]

For details, please refer to mindspore.ops.msort().

mul(value)[source]

For details, please refer to mindspore.ops.mul().

multinomial(num_samples, replacement=True, seed=None)[source]

For details, please refer to mindspore.ops.multinomial().

multiply(value)[source]

For details, please refer to mindspore.ops.multiply().

mvlgamma(p)[source]

For details, please refer to mindspore.ops.mvlgamma().

nan_to_num(nan=0.0, posinf=None, neginf=None)[source]

For details, please refer to mindspore.ops.nan_to_num().

nansum(axis=None, keepdims=False, dtype=None)[source]

For details, please refer to mindspore.ops.nansum().

narrow(axis, start, length)[source]

For details, please refer to mindspore.ops.narrow().

property nbytes

Return the total number of bytes taken by the tensor.
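
Examples

A minimal sketch (4 float32 elements occupy 16 bytes in total):

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2, 2), dtype=np.float32))
>>> print(x.nbytes)
16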

property ndim

Return the number of tensor dimensions.
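
Examples

A minimal sketch:

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2, 3, 4), dtype=np.float32))
>>> print(x.ndim)
3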

ndimension()[source]

Alias for mindspore.Tensor.ndim.

ne(other)[source]

For details, please refer to mindspore.ops.ne().

neg()[source]

For details, please refer to mindspore.ops.neg().

negative()[source]

For details, please refer to mindspore.ops.negative().

nelement()[source]

Alias for mindspore.Tensor.numel().

new_ones(size, *, dtype=None)[source]

Return a tensor of size filled with ones.

Parameters:

size (Union[int, tuple, list]) – An int, list or tuple of integers defining the output shape.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The desired dtype of the output tensor. If None, the returned tensor has the same dtype as self. Default: None.

Returns:

Tensor, the shape and dtype is defined above and filled with ones.

Raises:

TypeError – If size is not an int, list or tuple of integers.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> output = x.new_ones((2, 2))
>>> print(output)
[[1. 1.]
 [1. 1.]]
new_zeros(size, *, dtype=None)[source]

Return a tensor of size filled with zeros.

Parameters:

size (Union[int, tuple, list]) – An int, list or tuple of integers defining the output shape.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The desired dtype of the output tensor. If None, the returned tensor has the same dtype as self. Default: None.

Returns:

Tensor, the shape and dtype is defined above and filled with zeros.

Raises:

TypeError – If size is not an int, list or tuple of integers.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> output = x.new_zeros((2, 2))
>>> print(output)
[[0. 0.]
 [0. 0.]]
nextafter(other)[source]

For details, please refer to mindspore.ops.nextafter().

nonzero()[source]

For details, please refer to mindspore.ops.nonzero().

norm(ord=None, dim=None, keepdim=False, *, dtype=None)[source]

For details, please refer to mindspore.ops.norm().

not_equal(other)[source]

For details, please refer to mindspore.ops.not_equal().

numel()[source]

For details, please refer to mindspore.ops.numel().

numpy()[source]

Alias for mindspore.Tensor.asnumpy().

permute(*axis)[source]

For details, please refer to mindspore.ops.permute().

persistent_data_from_numpy(self: array, arg0: int_) → mindspore._c_expression.Tensor

Creates a Tensor from a numpy.ndarray without copy. The resulting tensor uses persistent data storage.

Arg:

array (numpy.ndarray): The input ndarray.

slice_num (int): The slice num of the persistent data tensor.

Returns:

Tensor, tensor with shared data to input ndarray.

Examples

>>> import numpy as np
>>> import mindspore
>>> a = np.ones((2, 3))
>>> t = mindspore.Tensor.persistent_data_from_numpy(a, 1)
positive()[source]

For details, please refer to mindspore.ops.positive().

pow(exponent)[source]

For details, please refer to mindspore.ops.pow().

prod(axis=None, keep_dims=False)[source]

For details, please refer to mindspore.ops.prod().

ptp(axis=None, keepdims=False)[source]

The name of the function comes from the acronym for “peak to peak”. Calculate the difference between the maximum value and the minimum value along the axis.

Note

Numpy argument out is not supported.

Parameters:
  • axis (Union[None, int, tuple(int)]) – Axis or axes along which the range is computed. The default is to compute the range of the flattened tensor. Default: None.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the tensor. Default: False.

Returns:

Tensor.

Raises:

TypeError – If self is not a tensor, or axis and keepdims have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> x = Tensor([[4.0, 9.0, 2.0, 10.0], [6.0, 9.0, 7.0, 12.0]]).astype("float32")
>>> print(x.ptp(axis=1))
[8. 6.]
>>> print(x.ptp(axis=0))
[2. 0. 5. 2.]
qr(some=True)[source]

For details, please refer to mindspore.ops.qr().

rad2deg()[source]

For details, please refer to mindspore.ops.rad2deg().

random_categorical(num_sample, seed=0, dtype=mindspore.int64)[source]

For details, please refer to mindspore.ops.random_categorical().

ravel()[source]

Return a contiguous flattened tensor.

Returns:

Tensor, a 1-D tensor, containing the same elements of the input.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.reshape(): Give a new shape to a tensor without changing its data.

mindspore.Tensor.flatten(): Return a copy of the tensor collapsed into one dimension.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = x.ravel()
>>> print(output.shape)
(24,)
real()[source]

For details, please refer to mindspore.ops.real().

reciprocal()[source]

For details, please refer to mindspore.ops.reciprocal().

remainder(divisor)[source]

For details, please refer to mindspore.ops.remainder().

renorm(p, axis, maxnorm)[source]

For details, please refer to mindspore.ops.renorm().

repeat(repeats, axis=None)[source]

Repeat elements of a tensor.

Parameters:
  • repeats (Union[int, tuple, list]) – The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.

  • axis (int, optional) – The axis along which to repeat values. By default, use the flattened input tensor, and return a flat output tensor. Default: None.

Returns:

Tensor, has the same shape as input tensor except along the given axis.

Raises:
  • ValueError – If the axis is out of range.

  • TypeError – If arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.reshape(): Give a new shape to a tensor without changing its data.

mindspore.Tensor.resize(): Changes shape and size of tensor in-place.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array(3))
>>> print(x.repeat(4))
[3 3 3 3]
>>> x = Tensor(np.array([[1, 2],[3, 4]]))
>>> print(x.repeat(2))
[1 1 2 2 3 3 4 4]
>>> print(x.repeat(3, axis=1))
[[1 1 1 2 2 2]
[3 3 3 4 4 4]]
>>> print(x.repeat([1,2], axis=0))
[[1 2]
[3 4]
[3 4]]
repeat_interleave(repeats, dim=None)[source]

For details, please refer to mindspore.ops.repeat_interleave().

reshape(*shape)[source]

For details, please refer to mindspore.ops.reshape().

reshape_as(other)[source]

Change the shape of the Tensor to the shape of other without changing the data.

Parameters:

other (Tensor) – The result tensor has the same shape as other.

Returns:

Tensor, has the same shape as other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=ms.float32)
>>> y = Tensor(np.arange(6).reshape(3,2))
>>> output = x.reshape_as(y)
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
resize(*new_shape)[source]

Changes shape and size of tensor in-place.

If the shape of the new tensor is larger than the shape of the original tensor, the new tensor will be filled with 0. And if the shape of the new tensor is smaller than the shape of the original tensor, the new tensor is filled with the elements of the original tensor in order.

Note

Unlike numpy.ndarray.resize, which changes the size of the tensor in place and returns nothing, this method returns a new Tensor with the given shape. The numpy argument refcheck is not supported.

Parameters:

new_shape (Union[ints, tuple of ints]) – Shape of resized tensor.

Returns:

Tensor.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.reshape(): Give a new shape to a tensor without changing its data.

mindspore.Tensor.repeat(): Repeat elements of a tensor.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32))
>>> y = x.resize(3, 3)
>>> print(y)
[[1. 2. 3.]
[4. 5. 6.]
[0. 0. 0.]]
>>> y = x.resize(2, 2)
>>> print(y)
[[1. 2.]
[3. 4.]]
reverse(axis)[source]

For details, please refer to mindspore.ops.reverse().

reverse_sequence(seq_lengths, seq_dim=0, batch_dim=0)[source]

For details, please refer to mindspore.ops.reverse_sequence().

roll(shifts, dims)[source]

For details, please refer to mindspore.ops.roll().

rot90(k, dims)[source]

For details, please refer to mindspore.ops.rot90().

round()[source]

For details, please refer to mindspore.ops.round().

rsqrt()[source]

For details, please refer to mindspore.ops.rsqrt().

scatter(axis, index, src)[source]

For details, please refer to mindspore.ops.scatter().

scatter_add(indices, updates)[source]

For details, please refer to mindspore.ops.scatter_add().

scatter_div(indices, updates)[source]

For details, please refer to mindspore.ops.scatter_div().

scatter_max(indices, updates)[source]

For details, please refer to mindspore.ops.scatter_max().

scatter_min(indices, updates)[source]

For details, please refer to mindspore.ops.scatter_min().

scatter_mul(indices, updates)[source]

For details, please refer to mindspore.ops.scatter_mul().

scatter_sub(indices, updates)[source]

Creates a new tensor by subtracting the values from the positions in self tensor indicated by indices, with values from updates. When multiple values are provided for the same index, each of them is subtracted in turn. This operation is almost equivalent to using mindspore.ops.ScatterNdSub, except that the updates are applied to the output Tensor instead of the input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of self[indices]. For more details, see use cases.

Note

On GPU, if some values of the indices are out of bounds, the corresponding updates are simply not applied to self tensor instead of raising an index error. On CPU, an index error is raised if some values of the indices are out of bounds. On Ascend, out-of-bounds checking is not supported; out-of-bounds indices may cause unknown errors.

Parameters:
  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + self.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as self tensor.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of self tensor is less than the last dimension of shape of indices.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]).astype('float32'))
>>> indices = Tensor(np.array([[0, 0], [0, 0]]).astype('int32'))
>>> updates = Tensor(np.array([1.0, 2.2]).astype('float32'))
>>> output = x.scatter_sub(indices, updates)
>>> print(output)
[[-3.3000002  0.3        3.6      ]
[ 0.4        0.5       -3.2      ]]
searchsorted(v, side='left', sorter=None)[source]

Finds indices where elements should be inserted to maintain order.

Parameters:
  • v (Union[int, float, bool, list, tuple, Tensor]) – Values to insert into the tensor.

  • side (str, optional) – If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of the tensor). Default: ‘left’.

  • sorter (Union[int, float, bool, list, tuple, Tensor]) – 1-D optional tensor of integer indices that sort the tensor into ascending order. They are typically the result of argsort. Default: None.

Returns:

Tensor, array of insertion points with the same shape as v.

Raises:

ValueError – If argument for side or sorter is invalid.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3, 4, 5]))
>>> print(x.searchsorted(3))
2
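
With side='right', the index after any existing entries equal to v is returned (a quick sketch continuing the example above):

>>> print(x.searchsorted(3, side='right'))
3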
select(condition, y)[source]

For details, please refer to mindspore.ops.select().

set_cast_dtype(self: mindspore._c_expression.Tensor, dtype: mindspore._c_expression.typing.Type = None) → None

set_const_arg(const_arg=True)[source]

Specify whether the tensor is a constant when it is used for the argument of a network.

Parameters:

const_arg (bool) – Whether the tensor is a constant when it is used for the argument of a network. Default: True.

Returns:

Tensor, has been specified whether to be a const network argument.

Raises:

TypeError – If const_arg is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,2,3],[4,5,6]], dtype=np.float32))
>>> x.set_const_arg(True)
set_dtype(self: mindspore._c_expression.Tensor, arg0: mindspore._c_expression.typing.Type) → mindspore._c_expression.typing.Type

Set the tensor’s data type.

Arg:

dtype (mindspore.dtype): The type of output tensor.

Examples

>>> import numpy as np
>>> import mindspore
>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data.set_dtype(mindspore.int32)
mindspore.int32
set_init_flag(self: mindspore._c_expression.Tensor, arg0: bool) → None

Set tensor init_flag.

Examples

>>> import numpy as np
>>> import mindspore
>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.set_init_flag(True)
setitem_index_info(self: object, arg0: object, arg1: object, arg2: bool_) → object

sgn()[source]

For details, please refer to mindspore.ops.sgn().

property shape

For details, please refer to mindspore.ops.shape().

short()[source]

Return a copy of the tensor, cast to int16 type, equivalent to self.astype(mstype.int16). If the value in tensor is float or half, the decimal will be discarded. For details, please refer to mindspore.Tensor.astype().

Returns:

Tensor, converted to the int16 dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> x = ms.Tensor(np.array([1,2,3,4,5]), ms.int32)
>>> output = x.short()
>>> output
Tensor(shape=[5], dtype=Int16, value= [1, 2, 3, 4, 5])
sigmoid()[source]

For details, please refer to mindspore.ops.sigmoid().

sign()[source]

For details, please refer to mindspore.ops.sign().

signbit()[source]

For details, please refer to mindspore.ops.signbit().

sin()[source]

For details, please refer to mindspore.ops.sin().

sinc()[source]

For details, please refer to mindspore.ops.sinc().

sinh()[source]

For details, please refer to mindspore.ops.sinh().

property size

For details, please refer to mindspore.ops.size().

slice_num_of_persistent_data()[source]

Get slice num of a tensor which use persistent storage.

Returns:

Num of slice.

slice_shape_of_persistent_data()[source]

Get slice shape of tensor after cut to slice size.

Returns:

The slice shape of tensor.

slogdet()[source]

For details, please refer to mindspore.ops.slogdet().

soft_shrink(lambd=0.5)[source]

For details, please refer to mindspore.ops.soft_shrink().

sort(axis=-1, descending=False)[source]

For details, please refer to mindspore.ops.sort().

split(split_size_or_sections, axis=0)[source]

For details, please refer to mindspore.ops.split().

sqrt()[source]

For details, please refer to mindspore.ops.sqrt().

square()[source]

For details, please refer to mindspore.ops.square().

squeeze(axis=None)[source]

For details, please refer to mindspore.ops.squeeze().

std(axis=None, ddof=0, keepdims=False)[source]

For details, please refer to mindspore.ops.std().

property strides

Return the tuple of bytes to step in each dimension when traversing a tensor.
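
Examples

A minimal sketch (for a C-contiguous (2, 3) float32 tensor, stepping one row is 3 * 4 = 12 bytes and stepping one column is 4 bytes):

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2, 3), dtype=np.float32))
>>> print(x.strides)
(12, 4)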

sub(y)[source]

For details, please refer to mindspore.ops.sub().

subtract(other, *, alpha=1)[source]

For details, please refer to mindspore.ops.subtract().

sum(axis=None, dtype=None, keepdims=False, initial=None)[source]

Return sum of tensor elements over a given axis.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • axis (Union[None, int, tuple(int)]) – Axis or axes along which a sum is performed. Default: None. If None, sum all the elements of the input tensor. If the axis is negative, it counts from the last to the first axis. If the axis is a tuple of ints, a sum is performed on all the axes specified in the tuple instead of a single axis or all the axes as before.

  • dtype (mindspore.dtype, optional) – defaults to None. Overrides the dtype of the output Tensor.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the sum method of sub-classes of ndarray, however any non-default value will be. If the sub-class method does not implement keepdims any exceptions will be raised. Default: False.

  • initial (scalar) – Starting value for the sum. Default: None.

Returns:

Tensor. A tensor with the same shape as input, with the specified axis removed. If the input tensor is a 0-d array, or if the axis is None, a scalar is returned.

Raises:
  • TypeError – If input is not array_like, or axis is not int or tuple of ints, or keepdims is not integer, or initial is not scalar.

  • ValueError – If any axis is out of range or duplicate axes exist.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.cumsum(): Return the cumulative sum of the elements along a given axis.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([-1, 0, 1]).astype(np.float32))
>>> print(input_x.sum())
0.0
>>> input_x = Tensor(np.arange(10).reshape(2, 5).astype(np.float32))
>>> print(input_x.sum(axis=1))
[10. 35.]
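
With initial, the reduction starts from the given value (a sketch assuming numpy-like semantics for initial):

>>> print(input_x.sum(axis=1, initial=1))
[11. 36.]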
sum_to_size(*size)[source]

Sum self Tensor down to the given size. size must be expandable to the shape of self Tensor.

Parameters:

size (Union[tuple(int), int]) – The expected shape of output Tensor.

Returns:

Tensor, the sum result of self Tensor according to the size.

Raises:

ValueError – If size is not expandable to the size of self Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.random.randn(3, 3, 3, 3, 3, 3), mindspore.float32)
>>> output = x.sum_to_size((1, 3, 1, 3))
>>> print(output.shape)
(1, 3, 1, 3)
svd(full_matrices=False, compute_uv=True)[source]

For details, please refer to mindspore.ops.svd().

swapaxes(axis0, axis1)[source]

For details, please refer to mindspore.ops.swapaxes().

swapdims(dim0, dim1)[source]

For details, please refer to mindspore.ops.swapdims().

t()[source]

For details, please refer to mindspore.ops.t().

take(indices, axis=None, mode='clip')[source]

Takes elements from a tensor along an axis.

Parameters:
  • indices (Tensor) – The indices with shape (Nj…) of the values to extract.

  • axis (int, optional) – The axis over which to select values. By default, the flattened input tensor is used. Default: None.

  • mode ('raise', 'wrap', 'clip', optional) –

    • raise: Raises an error;

    • wrap: Wraps around;

    • clip: Clips to the range. ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers.

    Default: ‘clip’.

Returns:

Tensor, the indexed result.

Raises:

ValueError – If axis is out of range, or mode has values other than (‘raise’, ‘wrap’, ‘clip’)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.array([4, 3, 5, 7, 6, 8]))
>>> indices = Tensor(np.array([0, 1, 4]))
>>> output = a.take(indices)
>>> print(output)
[4 3 6]
tan()[source]

For details, please refer to mindspore.ops.tan().

tanh()[source]

For details, please refer to mindspore.ops.tanh().

tensor_split(indices_or_sections, axis=0)[source]

For details, please refer to mindspore.ops.tensor_split().

tile(multiples)[source]

For details, please refer to mindspore.ops.tile().

to(dtype)[source]

Performs tensor dtype conversion.

Parameters:

dtype (Number) – The valid data type of the output tensor. Only constant value is allowed.

Returns:

Tensor, converted to the specified dtype.

Raises:

TypeError – If dtype is not a Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input_x = Tensor(input_np)
>>> dtype = mindspore.int32
>>> output = input_x.to(dtype)
>>> print(output.dtype)
Int32
to_coo()[source]

Convert a Tensor to COOTensor.

Note

Only 2-D tensor is supported for now.

Returns:

COOTensor, a sparse representation of the original dense tensor, containing the following parts.

  • indices (Tensor): 2-D integer tensor, indicates the positions of values of the dense tensor.

  • values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.

  • shape (tuple(int)): the shape of the COOTensor, is the same as the original dense tensor.

Raises:

ValueError – If input tensor is not 2-D.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,  0], [-5, 0]]), mindspore.float32)
>>> output = x.to_coo()
>>> print(output.indices, output.values, output.shape)
[[0 0]
 [1 0]] [ 1. -5.] (2, 2)
to_csr()[source]

Convert a Tensor to CSRTensor.

Note

Only 2-D tensor is supported for now.

Returns:

CSRTensor, a sparse representation of the original dense tensor, containing the following parts.

  • indptr (Tensor): 1-D integer tensor, indicates the start and end point for values in each row.

  • indices (Tensor): 1-D integer tensor, indicates the column positions of all non-zero values of the input.

  • values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.

  • shape (tuple(int)): the shape of the CSRTensor, is the same as the original dense tensor.

Raises:

ValueError – If input tensor is not 2-D.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1,  0], [-5, 0]]), mindspore.float32)
>>> output = x.to_csr()
>>> print(output.indptr, output.indices, output.values, output.shape)
[0 1 2] [0 0] [ 1. -5.] (2, 2)
top_k(k, sorted=True)[source]

Tensor.top_k is deprecated, please use Tensor.topk instead.

topk(k, dim=None, largest=True, sorted=True)[source]

For details, please refer to mindspore.ops.topk().

trace(offset=0, axis1=0, axis2=1, dtype=None)[source]

Return the sum along diagonals of the tensor.

Parameters:
  • offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal.

  • axis1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

  • axis2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis.

  • dtype (mindspore.dtype, optional) – defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, the sum along diagonals.

Raises:

ValueError – If the input tensor has less than two dimensions.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.diagonal(): Return specified diagonals.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.eye(3, dtype=np.float32))
>>> print(x.trace())
3.0
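
With a positive offset, the diagonal above the main diagonal is summed instead (a quick sketch):

>>> x = Tensor(np.arange(9).reshape(3, 3).astype(np.float32))
>>> print(x.trace(offset=1))
6.0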
transpose(*axes)[source]

For details, please refer to mindspore.ops.transpose().

tril(diagonal=0)[source]

For details, please refer to mindspore.ops.tril().

triu(diagonal=0)[source]

For details, please refer to mindspore.ops.triu().

true_divide(value)[source]

Alias for Tensor.div() with \(rounding\_mode=None\). For details, please refer to mindspore.ops.div().

trunc()[source]

For details, please refer to mindspore.ops.trunc().

unbind(dim=0)[source]

For details, please refer to mindspore.ops.unbind().

unfold(kernel_size, dilation=1, padding=0, stride=1)[source]

For details, please refer to mindspore.ops.unfold().

unique_consecutive(return_idx=False, return_counts=False, axis=None)[source]

For details, please refer to mindspore.ops.unique_consecutive().

unique_with_pad(pad_num)[source]

For details, please refer to mindspore.ops.unique_with_pad().

unsorted_segment_max(segment_ids, num_segments)[source]

For details, please refer to mindspore.ops.unsorted_segment_max().

unsorted_segment_min(segment_ids, num_segments)[source]

For details, please refer to mindspore.ops.unsorted_segment_min().

unsorted_segment_prod(segment_ids, num_segments)[source]

For details, please refer to mindspore.ops.unsorted_segment_prod().

unsqueeze(dim)[source]

For details, please refer to mindspore.ops.unsqueeze().

value()[source]

Get the value of the tensor or the parameter.

Returns:

The value of the tensor or the parameter.

Examples

>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.array([1, 2], dtype=np.float32))
>>> x_value = x.value()
>>> print(x_value)
[1.  2.]
var(axis=None, ddof=0, keepdims=False)[source]

Compute the variance along the specified axis.

The variance is the average of the squared deviations from the mean, i.e., \(var = mean(|x - \bar{x}|^{2})\), where \(\bar{x}\) is the mean of \(x\).

Return the variance, which is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype, out and where are not supported.

Parameters:
  • axis (Union[None, int, tuple(int)]) – Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. Default: None.

  • ddof (int) – Means Delta Degrees of Freedom. Default: 0. The divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements.

  • keepdims (bool) – Default: False.

Returns:

Variance tensor.

Supported Platforms:

Ascend GPU CPU

See also

mindspore.Tensor.mean(): Reduce a dimension of a tensor by averaging all elements in the dimension.

mindspore.Tensor.std(): Compute the standard deviation along the specified axis.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([1., 2., 3., 4.], np.float32))
>>> output = input_x.var()
>>> print(output)
1.25
view(*shape)[source]

Reshape the tensor according to the input shape. It’s the same as mindspore.Tensor.reshape(), implemented by the underlying reshape operator.

Parameters:

shape (Union[tuple(int), int]) – Dimension of the output tensor.

Returns:

Tensor, whose shape is the given shape.

Examples

>>> from mindspore import Tensor
>>> import numpy as np
>>> a = Tensor(np.array([[1, 2, 3], [2, 3, 4]], dtype=np.float32))
>>> output = a.view((3, 2))
>>> print(output)
[[1. 2.]
 [3. 2.]
 [3. 4.]]
view_as(other)[source]

View this Tensor as a Tensor with the same shape as other.

Parameters:

other (Tensor) – The returned Tensor has the same shape as other.

Returns:

Tensor, has the same shape as other.

Raises:

TypeError – If other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> a = Tensor([[1, 2, 3], [2, 3, 4]], mstype.float32)
>>> b = Tensor([1, 1, 1, 1, 1, 1], mstype.float32)
>>> output = a.view_as(b)
>>> print(output)
[1. 2. 3. 2. 3. 4.]
vsplit(indices_or_sections)[source]

For details, please refer to mindspore.ops.vsplit().

where(condition, y)[source]

For details, please refer to mindspore.ops.where().

xdivy(y)[source]

For details, please refer to mindspore.ops.xdivy().

xlogy(y)[source]

For details, please refer to mindspore.ops.xlogy().

class tinyms.RowTensor(indices=None, values=None, shape=None, row_tensor=None)[source]

A sparse representation of a set of tensor slices at given indices.

A RowTensor is typically used to represent a subset of a larger tensor dense of shape \((L0, D1, ..., DN)\), where L0 >> D0.

The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by a RowTensor slices satisfies dense[slices.indices[i], :, :, :, …] = slices.values[i, :, :, :, …].

For example, if indices is [0], values is [[1, 2]], and shape is \((3, 2)\), then the dense representation of the row tensor will be:

[[1, 2],
 [0, 0],
 [0, 0]]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • indices (Tensor) – A 1-D integer Tensor of shape \((D0)\) . Default: None.

  • values (Tensor) – A Tensor of any dtype of shape \((D0, D1, ..., Dn)\) . Default: None.

  • shape (tuple(int)) – An integer tuple which contains the shape of the corresponding dense tensor. Default: None.

  • row_tensor (RowTensor) – A RowTensor object. Default: None.

Returns:

RowTensor, composed of indices, values, and shape.

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, RowTensor
>>> indices = Tensor([0])
>>> values = Tensor([[1, 2]], dtype=ms.float32)
>>> shape = (3, 2)
>>> x = RowTensor(indices, values, shape)
>>> print(x.values)
[[1. 2.]]
>>> print(x.indices)
[0]
>>> print(x.dense_shape)
(3, 2)
property dense_shape

Return RowTensor’s shape.

property indices

Return RowTensor’s indices.

property values

Return RowTensor’s non-zero values.

class tinyms.SparseTensor(indices, values, shape)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

SparseTensor can only be used in the Cell’s construct method.

For a tensor dense, its SparseTensor(indices, values, dense_shape) has dense[indices[i]] = values[i].

For example, if indices is [[0, 1], [1, 2]], values is [1, 2], dense_shape is (3, 4), then the dense representation of the sparse tensor will be:

[[0, 1, 0, 0],
 [0, 0, 2, 0],
 [0, 0, 0, 0]]

Note

The interface is deprecated from version 1.7 and will be removed in a future version. Please use ‘COOTensor’ instead.

Parameters:
  • indices (Tensor) – A 2-D integer Tensor of shape \((N, ndims)\), where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively.

  • values (Tensor) – A 1-D tensor of any type and shape \((N)\), which supplies the values for each element in indices.

  • shape (tuple(int)) – An integer tuple of size ndims, which specifies the shape of the sparse tensor.

Returns:

SparseTensor, composed of indices, values, and shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, SparseTensor
>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> shape = (3, 4)
>>> x = SparseTensor(indices, values, shape)
>>> print(x.values)
[1. 2.]
>>> print(x.indices)
[[0 1]
 [1 2]]
>>> print(x.shape)
(3, 4)
property indices

Return SparseTensor’s indices.

property shape

Return SparseTensor’s shape.

property values

Return SparseTensor’s non-zero values.

class tinyms.COOTensor(indices=None, values=None, shape=None, coo_tensor=None)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

For a tensor dense, its COOTensor(indices, values, shape) has dense[indices[i]] = values[i].

For example, if indices is [[0, 1], [1, 2]], values is [1, 2], shape is (3, 4), then the dense representation of the sparse tensor will be:

[[0, 1, 0, 0],
 [0, 0, 2, 0],
 [0, 0, 0, 0]]

Common arithmetic operations include: addition (+), subtraction (-), multiplication (*), and division (/). For details about operations supported by COOTensor, see operators.

Warning

  • This is an experimental API that is subject to change or deletion.

  • Currently, duplicate coordinates in the indices will not be coalesced. If the indices contain out-of-bound values, the result will be undefined.

Parameters:
  • indices (Tensor) – A 2-D integer Tensor of shape \((N, ndims)\), where N and ndims are the number of values and number of dimensions in the COOTensor, respectively. Currently, ndims must be 2. Please make sure that the indices are in range of the given shape.

  • values (Tensor) – A 1-D tensor of any type and shape \((N)\), which supplies the values for each element in indices.

  • shape (tuple(int)) – An integer tuple of size ndims, which specifies the dense_shape of the sparse tensor.

  • coo_tensor (COOTensor) – A COOTensor object.

Returns:

COOTensor, composed of indices, values, and shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, COOTensor
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> print(x.values)
[1. 2.]
>>> print(x.indices)
[[0 1]
 [1 2]]
>>> print(x.shape)
(3, 4)
abs() → mindspore.common.sparse_tensor.COOTensor[source]

Return absolute value element-wisely.

Returns:

COOTensor.

Supported Platforms:

Ascend GPU CPU
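
Examples

A minimal sketch reusing the constructor arguments from the class example, with one negative value so the effect is visible; the printed values follow from taking element-wise absolute values:

>>> import mindspore as ms
>>> from mindspore import Tensor, COOTensor
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([-1, 2], dtype=ms.float32)
>>> x = COOTensor(indices, values, (3, 4))
>>> print(x.abs().values)
[1. 2.]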

add(other: mindspore.common.sparse_tensor.COOTensor, thresh: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.COOTensor[source]

Return the sum with another COOTensor.

Parameters:
  • other (COOTensor) – the second COOTensor to sum.

  • thresh (Tensor) – A 0-D Tensor representing the magnitude threshold that determines whether an output value/index pair takes space. Its dtype should match that of the values if they are real. If an output value’s magnitude is less than thresh, it will vanish.

Returns:

COOTensor, representing the sum.

Raises:
  • ValueError – If any input (self/other)’s indices’ dim is not equal to 2.

  • ValueError – If any input (self/other)’s values’ dim is not equal to 1.

  • ValueError – If any input (self/other)’s shape’s dim is not equal to 1.

  • ValueError – If thresh’s dim is not equal to 0.

  • TypeError – If any input (self/other)’s indices’ type is not equal to int64.

  • TypeError – If any input (self/other)’s shape’s type is not equal to int64.

  • ValueError – If any input (self/other)’s indices’ length is not equal to its values’ length.

  • TypeError – If any input (self/other)’s values’ type is not any of (int8/int16/int32/int64/float32/float64/complex64/complex128).

  • TypeError – If thresh’s type is not any of (int8/int16/int32/int64/float32/float64).

  • TypeError – If self’s indices’ type is not equal to other’s indices’ type.

  • TypeError – If self’s values’ type is not equal to other’s values’ type.

  • TypeError – If self’s shape’s type is not equal to other’s shape’s type.

  • TypeError – If (self/other)’s values’ type does not match thresh’s type.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, COOTensor
>>> from mindspore import dtype as mstype
>>> indices0 = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values0 = Tensor([1, 2], dtype=mstype.int32)
>>> shape0 = (3, 4)
>>> input0 = COOTensor(indices0, values0, shape0)
>>> indices1 = Tensor([[0, 0], [1, 1]], dtype=mstype.int64)
>>> values1 = Tensor([3, 4], dtype=mstype.int32)
>>> shape1 = (3, 4)
>>> input1 = COOTensor(indices1, values1, shape1)
>>> thresh = Tensor(0, dtype=mstype.int32)
>>> out = input0.add(input1, thresh)
>>> print(out)
COOTensor(shape=[3, 4], dtype=Int32, indices=Tensor(shape=[4, 2], dtype=Int64, value=
[[0 0]
 [0 1]
 [1 1]
 [1 2]]), values=Tensor(shape=[4], dtype=Int32, value=[3 1 4 2]))
astype(dtype: mstype) → COOTensor[source]

Return a copy of the COOTensor, cast its values to a specified type.

Parameters:

dtype (Union[mindspore.dtype, numpy.dtype, str]) – Designated tensor dtype.

Returns:

COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, COOTensor
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> shape = (3, 4)
>>> coo_tensor = COOTensor(indices, values, shape)
>>> print(coo_tensor.astype(ms.float64).dtype)
Float64
coalesce() → mindspore.common.sparse_tensor.COOTensor[source]

Returns a coalesced copy of an uncoalesced sparse tensor.

Returns:

A COOTensor.

Supported Platforms:

GPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, COOTensor
>>> x_indices = Tensor([[0, 0, 1], [1, 1, 2]], dtype=ms.int64)
>>> x_values = Tensor([1, 5, 4], dtype=ms.float32)
>>> x_shape = (3, 3)
>>> coo_tensor = COOTensor(x_indices.transpose(), x_values, x_shape)
>>> res = coo_tensor.coalesce()
>>> print(res)
COOTensor(shape=[3, 3], dtype=Float32, indices=Tensor(shape=[2, 2], dtype=Int64,
    value=[[0 1] [1 2]]), values=Tensor(shape=[2], dtype=Float32, value=[6.00000000e+00 4.00000000e+00]))
property dtype

Return the dtype of the values of COOTensor (mindspore.dtype).

property indices

Return COOTensor’s indices.

property itemsize

Return the length of one tensor element in bytes.

property ndim

Return the number of tensor dimensions.

property shape

Return COOTensor’s shape.

property size

Return the number of non-zero values.

to_csr() → mindspore.common.sparse_tensor.CSRTensor[source]

Converts COOTensor to CSRTensor.

Note

Currently only supports CPU backend with LLVM 12.0.1 installed.

Returns:

CSRTensor.

Supported Platforms:

GPU CPU
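
Examples

A minimal conversion sketch (assuming a supported backend, per the note above); the expected indptr follows from the CSR layout of one value in row 0, one in row 1, and none in row 2:

>>> import mindspore as ms
>>> from mindspore import Tensor, COOTensor
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> coo = COOTensor(indices, values, (3, 4))
>>> csr = coo.to_csr()
>>> print(csr.indptr)
[0 1 2 2]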

to_dense() → mindspore.common.tensor.Tensor[source]

Converts COOTensor to Dense Tensor.

Returns:

Tensor.

Supported Platforms:

GPU
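
Examples

A minimal sketch (assuming a supported backend, per the platforms above); the dense output follows the correspondence dense[indices[i]] = values[i] described in the class docstring:

>>> import mindspore as ms
>>> from mindspore import Tensor, COOTensor
>>> indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> coo = COOTensor(indices, values, (3, 4))
>>> print(coo.to_dense())
[[0. 1. 0. 0.]
 [0. 0. 2. 0.]
 [0. 0. 0. 0.]]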

to_tuple() → Tuple[mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor, Tuple[int, ...]][source]

Return indices, values and shape as a tuple.

Returns:

Tuple.

Supported Platforms:

Ascend GPU CPU
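
Examples

A minimal sketch; the returned tuple is simply (indices, values, shape):

>>> import mindspore as ms
>>> from mindspore import Tensor, COOTensor
>>> coo = COOTensor(Tensor([[0, 1]], dtype=ms.int32), Tensor([1.0], dtype=ms.float32), (3, 4))
>>> indices, values, shape = coo.to_tuple()
>>> print(shape)
(3, 4)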

property values

Return COOTensor’s non-zero values.

class tinyms.CSRTensor(indptr=None, indices=None, values=None, shape=None, csr_tensor=None)[source]

Constructs a sparse tensor in CSR (Compressed Sparse Row) format, with specified values indicated by values and row and column positions indicated by indptr and indices.

For example, if indptr is [0, 1, 2, 2], indices is [1, 2], values is [1., 2.], shape is (3, 4), then the dense representation of the sparse tensor will be:

[[0., 1., 0., 0.],
 [0., 0., 2., 0.],
 [0., 0., 0., 0.]]

Common arithmetic operations include: addition (+), subtraction (-), multiplication (*), and division (/). For details about operations supported by CSRTensor, see operators.

Warning

  • This is an experimental API that is subject to change.

  • If the values given by indptr or indices are invalid, the results may be undefined. Invalid values include when the length of values or indices exceeds the range indicated by indptr, and when the columns indicated by indices are repeated on the same row.

Parameters:
  • indptr (Tensor) – 1-D Tensor of shape \((M)\), where M equals shape[0] + 1; it indicates the start and end positions of the values in each row. Default: None. If provided, must be int16, int32 or int64.

  • indices (Tensor) – 1-D Tensor of shape \((N)\), which has the same length as values. indices indicates the column in which each value should be placed. Default: None. If provided, must be int16, int32 or int64.

  • values (Tensor) – Tensor, which has the same length as indices (values.shape[0] == indices.shape[0]). values stores the data for CSRTensor. Default: None.

  • shape (tuple(int)) – A tuple indicating the shape of the CSRTensor; shape[0] must equal M - 1, the number of rows of the CSRTensor. Default: None.

  • csr_tensor (CSRTensor) – A CSRTensor object. The values’ feature dimension should match the CSRTensor’s feature dimension (values.shape[1:] == csr_tensor.shape[2:]). Default: None.

Outputs:

CSRTensor, with shape defined by shape, and dtype inferred from values.

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor
>>> # initialize a csr_tensor with indptr, indices, values and shape
>>> indptr = Tensor([0, 1, 2], dtype=ms.int32)
>>> indices = Tensor([0, 1], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> shape = (2, 4)
>>> csr_tensor = CSRTensor(indptr, indices, values, shape)
>>> # access a data member of CSRTensor
>>> print(indptr == csr_tensor.indptr)
[ True  True  True]
abs() → mindspore.common.sparse_tensor.CSRTensor[source]

Return absolute value element-wisely.

Returns:

CSRTensor, with all values being non-negative.

Supported Platforms:

Ascend GPU CPU
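
Examples

A minimal sketch reusing the constructor arguments from the class example, with one negative value so the effect is visible:

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor
>>> indptr = Tensor([0, 1, 2], dtype=ms.int32)
>>> indices = Tensor([0, 1], dtype=ms.int32)
>>> values = Tensor([-1, 2], dtype=ms.float32)
>>> csr = CSRTensor(indptr, indices, values, (2, 4))
>>> print(csr.abs().values)
[1. 2.]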

add(b: mindspore.common.sparse_tensor.CSRTensor, alpha: mindspore.common.tensor.Tensor, beta: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Addition of two CSR Tensors: C = alpha * A + beta * B.

Parameters:
  • b (CSRTensor) – Sparse CSR Tensor.

  • alpha (Tensor) – Dense Tensor, its shape must be able to broadcast to self.

  • beta (Tensor) – Dense Tensor, its shape must be able to broadcast to b.

Returns:

CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> import mindspore.common.dtype as mstype
>>> indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> indices = Tensor([0, 1], dtype=mstype.int32)
>>> values_a = Tensor([2, 1], dtype=mstype.float32)
>>> values_b = Tensor([1, 2], dtype=mstype.float32)
>>> dense_shape = (2, 4)
>>> alpha = Tensor(1, mstype.float32)
>>> beta = Tensor(1, mstype.float32)
>>> a = CSRTensor(indptr, indices, values_a, dense_shape)
>>> b = CSRTensor(indptr, indices, values_b, dense_shape)
>>> print(a.add(b, alpha, beta))
    CSRTensor(shape=[2,4], dtype=Float32,
              indptr=Tensor(shape=[3], dtype=Int32, value = [0, 1, 2]),
              indices=Tensor(shape=[2], dtype=Int32, value = [0, 1]),
              values=Tensor(shape=[2], dtype=Float32, value = [3.0, 3.0]))
astype(dtype: mstype) → CSRTensor[source]

Return a copy of the CSRTensor, cast its values to a specified type.

Parameters:

dtype (Union[mindspore.dtype, numpy.dtype, str]) – Designated tensor dtype.

Returns:

CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor
>>> indptr = Tensor([0, 1, 2], dtype=ms.int32)
>>> indices = Tensor([0, 1], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> shape = (2, 4)
>>> csr_tensor = CSRTensor(indptr, indices, values, shape)
>>> print(csr_tensor.astype(ms.float64).dtype)
Float64
property dtype

Return the dtype of the values of CSRTensor (mindspore.dtype).

property indices

Return CSRTensor’s column indices.

property indptr

Return CSRTensor’s row indices pointers.

property itemsize

Return the length of one tensor element in bytes.

mm(matrix: Union[mindspore.common.tensor.Tensor, mindspore.common.sparse_tensor.CSRTensor]) → Union[mindspore.common.tensor.Tensor, mindspore.common.sparse_tensor.CSRTensor][source]

Return the result of right-multiplying the CSRTensor by a matrix (dense Tensor or CSRTensor). A CSRTensor with shape [M, N] right-multiplied by a matrix with shape [N, K] yields a dense matrix or CSRTensor with shape [M, K].

Note

If the right matrix is a CSRTensor, currently only the GPU backend is supported. If the right matrix is a Tensor, the CPU backend (with LLVM 12.0.1 installed) and the GPU backend are supported.

Parameters:

matrix (Tensor or CSRTensor) – A dense Tensor or CSRTensor, its shape[0] should be equal to csr_tensor.shape[1]

Returns:

Tensor or CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> indices = Tensor([0, 1], dtype=mstype.int32)
>>> values = Tensor([2, 1], dtype=mstype.float32)
>>> dense_shape = (2, 4)
>>> csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
>>> dense_matrix = Tensor([[1., 2.], [1, 2.], [1, 2.], [1., 2.]], dtype=mstype.float32)
>>> print(csr_tensor.mm(dense_matrix))
[[2. 4.]
 [1. 2.]]
mv(dense_vector: mindspore.common.tensor.Tensor) → mindspore.common.tensor.Tensor[source]

Return the result of right-multiplying the CSRTensor by a dense vector. A CSRTensor with shape [M, N] right-multiplied by a dense vector with shape [N, 1] yields a dense vector with shape [M, 1].

Note

Currently only supports CPU backend with LLVM 12.0.1 installed.

Parameters:

dense_vector (Tensor) – A dense Tensor, its shape must be (csr_tensor.shape[1], 1)

Returns:

Tensor.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> indices = Tensor([0, 1], dtype=mstype.int32)
>>> values = Tensor([2, 1], dtype=mstype.float32)
>>> dense_shape = (2, 4)
>>> csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
>>> dense = Tensor([[1], [1], [1], [1]], dtype=mstype.float32)
>>> print(csr_tensor.mv(dense))
[[2.]
 [1.]]
property ndim

Return the number of tensor dimensions.

property shape

Return CSRTensor’s shape.

property size

Return the number of non-zero values.

sum(axis: int) → mindspore.common.tensor.Tensor[source]

Reduces a dimension of a CSRTensor by summing all elements in the dimension.

Note

Currently only supports CPU backend with LLVM 12.0.1 installed.

Parameters:

axis (int) – The dimensions to reduce.

Returns:

Tensor, the dtype is the same as CSRTensor.values.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> indices = Tensor([0, 1], dtype=mstype.int32)
>>> values = Tensor([2, 1], dtype=mstype.float32)
>>> dense_shape = (2, 4)
>>> csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
>>> print(csr_tensor.sum(1))
[[2.]
 [1.]]
to_coo() → mindspore.common.sparse_tensor.COOTensor[source]

Converts CSRTensor to COOTensor.

Note

Currently only supports CPU backend with LLVM 12.0.1 installed.

Returns:

COOTensor.

Supported Platforms:

GPU CPU
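
Examples

A minimal conversion sketch (assuming a supported backend, per the note above); the expected coordinates follow from the CSR layout of one value at (0, 0) and one at (1, 1):

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor
>>> indptr = Tensor([0, 1, 2], dtype=ms.int32)
>>> indices = Tensor([0, 1], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> csr = CSRTensor(indptr, indices, values, (2, 4))
>>> coo = csr.to_coo()
>>> print(coo.indices)
[[0 0]
 [1 1]]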

to_dense() → mindspore.common.tensor.Tensor[source]

Converts CSRTensor to Dense Tensor.

Returns:

Tensor.

Supported Platforms:

GPU
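
Examples

A minimal sketch (assuming a supported backend, per the platforms above); the dense output follows the CSR layout described in the class docstring:

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor
>>> indptr = Tensor([0, 1, 2], dtype=ms.int32)
>>> indices = Tensor([0, 1], dtype=ms.int32)
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> csr = CSRTensor(indptr, indices, values, (2, 4))
>>> print(csr.to_dense())
[[1. 0. 0. 0.]
 [0. 2. 0. 0.]]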

to_tuple() → Tuple[mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor, Tuple[int, ...]][source]

Return indptr, indices, values and shape as a tuple.

Returns:

Tuple.

Supported Platforms:

Ascend GPU CPU
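
Examples

A minimal sketch; the returned tuple is simply (indptr, indices, values, shape):

>>> import mindspore as ms
>>> from mindspore import Tensor, CSRTensor
>>> csr = CSRTensor(Tensor([0, 1], dtype=ms.int32), Tensor([0], dtype=ms.int32),
...                 Tensor([1.0], dtype=ms.float32), (1, 4))
>>> indptr, indices, values, shape = csr.to_tuple()
>>> print(shape)
(1, 4)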

property values

Return CSRTensor’s non-zero values.

tinyms.ms_function(fn=None, input_signature=None, hash_args=None, jit_config=None)[source]

Create a callable MindSpore graph from a Python function.

This allows the MindSpore runtime to apply optimizations based on graph.

Note

ms_function will be deprecated and removed in a future version. Please use jit instead. If input_signature is specified, each input of fn must be a Tensor, and the input arguments of fn will not accept **kwargs.

Parameters:
  • fn (Function) – The Python function that will be run as a graph. Default: None.

  • input_signature (Tensor) – The Tensor which describes the input arguments. The shape and dtype of the Tensor will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor. And the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs should keep the same as input_signature. Otherwise, TypeError will be raised. Default: None.

  • hash_args (Union[Object, List or Tuple of Objects]) – The local free variables used inside fn, such as functions or objects of classes defined outside fn. Calling fn again with changed hash_args will trigger recompilation.

  • jit_config (JitConfig) – Jit config for compile. Default: None.

Returns:

Function, if fn is not None, returns a callable function that will execute the compiled function; if fn is None, returns a decorator that, when invoked with a single fn argument, produces a callable function equivalent to the case when fn is not None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import ms_function
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling ms_function
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function
>>> @ms_function
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function with input_signature parameter
>>> @ms_function(input_signature=(Tensor(np.ones([1, 1, 3, 3]).astype(np.float32)),
...                               Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))))
... def tensor_add_with_sig(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_sig(x, y)
...
... # Set hash_args as fn, otherwise the cache of the compiled `closure_fn` will not be reused.
... # If fn differs between calls, recompilation will be triggered.
>>> def func(x):
...     return ops.exp(x)
...
>>> def closure_fn(x, fn):
...     @ms_function(hash_args=fn)
...     def inner_fn(a):
...         return fn(a)
...     return inner_fn(x)
...
>>> inputs = Tensor(np.ones([10, 10, 10]).astype(np.float32))
>>> for i in range(10):
...     closure_fn(inputs, func)
tinyms.ms_class(cls)[source]

Class decorator for user-defined classes.

This allows MindSpore to identify user-defined classes and thus obtain their attributes and methods.

Note

ms_class will be deprecated and removed in a future version. Please use jit_class instead.

Parameters:

cls (Class) – User-defined class.

Returns:

Class.

Raises:
  • TypeError – If ms_class is used for non-class types or nn.Cell.

  • AttributeError – If the private attributes or magic methods of the class decorated with ms_class are called.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> from mindspore import ms_class
...
>>> @ms_class
... class UserDefinedNet:
...     def __init__(self):
...         self.value = 10
...
...     def func(self, x):
...         return 2 * x
...
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.net = UserDefinedNet()
...
...     def construct(self, x):
...         out = self.net.value + self.net.func(x)
...         return out
...
>>> net = Net()
>>> out = net(5)
>>> print(out)
20
tinyms.jit(fn=None, input_signature=None, hash_args=None, jit_config=None)[source]

Create a callable MindSpore graph from a Python function.

This allows the MindSpore runtime to apply optimizations based on graph.

Note

If input_signature is specified, each input of fn must be a Tensor, and the input arguments of fn will not accept **kwargs.

Parameters:
  • fn (Function) – The Python function that will be run as a graph. Default: None.

  • input_signature (Tensor) – The Tensor which describes the input arguments. The shape and dtype of the Tensor will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor. And the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs should keep the same as input_signature. Otherwise, TypeError will be raised. Default: None.

  • hash_args (Union[Object, List or Tuple of Objects]) – The local free variables used inside fn, such as functions or objects of classes defined outside fn. Calling fn again with changed hash_args will trigger recompilation.

  • jit_config (JitConfig) – Jit config for compile. Default: None.

Returns:

Function, if fn is not None, returns a callable function that will execute the compiled function; if fn is None, returns a decorator that, when invoked with a single fn argument, produces a callable function equivalent to the case when fn is not None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import jit
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling decorator @jit
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = jit(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @jit
>>> @jit
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph through decorator @jit with input_signature parameter
>>> @jit(input_signature=(Tensor(np.ones([1, 1, 3, 3]).astype(np.float32)),
...                       Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))))
... def tensor_add_with_sig(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_sig(x, y)
...
... # Set hash_args as fn, otherwise the cache of the compiled `closure_fn` will not be reused.
... # If fn differs between calls, recompilation will be triggered.
>>> def func(x):
...     return ops.exp(x)
...
>>> def closure_fn(x, fn):
...     @jit(hash_args=fn)
...     def inner_fn(a):
...         return fn(a)
...     return inner_fn(x)
...
>>> inputs = Tensor(np.ones([10, 10, 10]).astype(np.float32))
>>> for i in range(10):
...     closure_fn(inputs, func)
tinyms.jit_class(cls)[source]

Class decorator for user-defined classes.

This allows MindSpore to identify user-defined classes and thus obtain their attributes and methods.

Parameters:

cls (Class) – User-defined class.

Returns:

Class.

Raises:
  • TypeError – If jit_class is used for non-class types or nn.Cell.

  • AttributeError – If the private attributes or magic methods of the class decorated with jit_class are called.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> from mindspore import jit_class
...
>>> @jit_class
... class UserDefinedNet:
...     def __init__(self):
...         self.value = 10
...
...     def func(self, x):
...         return 2 * x
...
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.net = UserDefinedNet()
...
...     def construct(self, x):
...         out = self.net.value + self.net.func(x)
...         return out
...
>>> net = Net()
>>> out = net(5)
>>> print(out)
20
class tinyms.Parameter(default_input, name=None, requires_grad=True, layerwise_parallel=False, parallel_optimizer=True)[source]

Parameter is a Tensor subclass. When Parameters are assigned as Cell attributes, they are automatically added to the Cell’s parameter list and will appear, e.g., in the cell.get_parameters() iterator.

Note

In auto_parallel mode of “semi_auto_parallel” and “auto_parallel”, if a Parameter is initialized by a Tensor, the type of the Parameter will be Tensor. Such a Tensor saves the shape and type info of a tensor with no memory usage, and the shape can be changed while compiling for auto-parallel. Calling init_data will return a Parameter with initialized data. If there is an operator in the network that requires part of the inputs to be Parameters, then the Parameters used as that part of the inputs are not allowed to be cast. Give each Parameter a unique name to facilitate subsequent operations and updates. If two or more Parameter objects with the same name exist in a network, you will be prompted to set a unique name when defining them.

Parameters:
  • default_input (Union[Tensor, int, float, numpy.ndarray, list]) – Parameter data, to initialize the parameter data.

  • name (str) –

    Name of the parameter. Default: None.

    1) If the parameter is not given a name, the default name is its variable name. For example, the name of param_a below is name_a, and the name of param_b is the variable name param_b.

    self.param_a = Parameter(Tensor([1], ms.float32), name="name_a")
    self.param_b = Parameter(Tensor([2], ms.float32))
    

    2) If a parameter in a list or tuple is not given a name, a unique name will be generated for it. For example, the names of the parameters below are Parameter$1 and Parameter$2.

    self.param_list = [Parameter(Tensor([3], ms.float32)),
                       Parameter(Tensor([4], ms.float32))]
    

    3) If a parameter is given a name that another parameter already uses, an exception will be thrown. For example, “its name ‘name_a’ already exists.” will be thrown.

    self.param_a = Parameter(Tensor([1], ms.float32), name="name_a")
    self.param_tuple = (Parameter(Tensor([5], ms.float32), name="name_a"),
                        Parameter(Tensor([6], ms.float32)))
    

    4) If the same parameter object appears multiple times in a list or tuple, its name is checked only once. For example, the following example will not throw an exception.

    self.param_a = Parameter(Tensor([1], ms.float32), name="name_a")
    self.param_tuple = (self.param_a, self.param_a)
    

  • requires_grad (bool) – True if the parameter requires gradient. Default: True.

  • layerwise_parallel (bool) – When layerwise_parallel is true in data/hybrid parallel mode, broadcast and gradients communication would not be applied to parameters. Default: False.

  • parallel_optimizer (bool) – It is used to filter the weight shard operation in semi auto or auto parallel mode. It works only when enable parallel optimizer in mindspore.set_auto_parallel_context(). Default: True.

Examples

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> import mindspore
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...         self.weight = Parameter(Tensor(np.ones((1, 2)), mindspore.float32), name="w", requires_grad=True)
...
...     def construct(self, x):
...         out = self.matmul(self.weight, x)
...         return out
>>> net = Net()
>>> x = Tensor(np.ones((2, 1)), mindspore.float32)
>>> print(net(x))
[[2.]]
>>> net.weight.set_data(Tensor(np.zeros((1, 2)), mindspore.float32))
>>> print(net(x))
[[0.]]
asnumpy(self: mindspore._c_expression.Tensor) → array

Convert tensor to numpy.ndarray.

Returns:

numpy.ndarray.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> array = data.asnumpy()
>>> array
array([[1., 1., 1.],
       [1., 1., 1.]])
asnumpy_of_slice_persistent_data(self: mindspore._c_expression.Tensor, arg0: int, arg1: int) → array

Convert tensor to numpy.ndarray of a slice.

Returns:

numpy.ndarray.

Examples

>>> data = mindspore.Tensor(np.ones((2000000000, 256)))
>>> data.asnumpy_of_slice_persistent_data(0, 1)
assign_value_cpp(self: mindspore._c_expression.Tensor, arg0: mindspore._c_expression.Tensor) → mindspore._c_expression.Tensor

Assign another tensor’s value to this tensor.

Arg:

value (mindspore.tensor): The value tensor.

Examples

>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data2 = mindspore.Tensor(np.ones((2, 2), np.float32))
>>> data.assign_value(data2)
>>> data.shape
(2, 2)
property cache_enable

Return whether the parameter is cache enable.

property cache_shape

Return the cache shape corresponding to the parameter if use cache.

clone(init='same')[source]

Clone the parameter.

Parameters:

init (Union[Tensor, str, numbers.Number]) – Initialize the shape and dtype of the parameter. If init is a Tensor or numbers.Number, clone a new parameter with the same shape and dtype, and the data of the new parameter will be set according to init. If init is a str, the init should be the alias of the class inheriting from Initializer. For example, if init is ‘same’, clone a new parameter with the same data, shape, and dtype. Default: ‘same’.

Returns:

Parameter, a new parameter.
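
Examples

A minimal sketch using the default init=’same’, which clones the data, shape, and dtype:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Parameter, Tensor
>>> p = Parameter(Tensor(np.array([1., 2.], np.float32)), name="p")
>>> q = p.clone()
>>> print(q.asnumpy())
[1. 2.]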

property comm_fusion

Get the fusion type (int) for communication operators corresponding to this parameter.

In AUTO_PARALLEL and SEMI_AUTO_PARALLEL mode, some communication operators used for parameters or gradients aggregation are inserted automatically. The value of fusion must be greater than or equal to 0. When the value of fusion is 0, operators will not be fused together.

copy()[source]

Copy the parameter.

Returns:

Parameter, a new parameter.

property data

Return the parameter object.

data_sync(self: mindspore._c_expression.Tensor, arg0: bool) → None
dim(self: mindspore._c_expression.Tensor) → int

Get tensor’s data dimension.

Returns:

int, the dimension of tensor.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.dim()
2
property dtype

Get the MetaTensor’s dtype.

from_numpy(self: array) → mindspore._c_expression.Tensor

Creates a Tensor from a numpy.ndarray without copy.

Arg:

array (numpy.ndarray): The input ndarray.

Returns:

Tensor, tensor with shared data to input ndarray.

Examples

>>> a = np.ones((2, 3))
>>> t = mindspore.Tensor.from_numpy(a)
getitem_index_info(self: object, arg0: object, arg1: bool_) → object
init_data(layout=None, set_sliced=False)[source]

Initialize the parameter’s data.

Parameters:
  • layout (Union[None, tuple]) –

    The parameter’s layout info. layout [dev_mat, tensor_map, slice_shape, filed_size, uniform_split, opt_shard_group]. Default: None. It’s not None only in ‘SEMI_AUTO_PARALLEL’ or ‘AUTO_PARALLEL’ mode.

    • dev_mat (list(int)): The parameter’s device matrix.

    • tensor_map (list(int)): The parameter’s tensor map.

    • slice_shape (list(int)): The parameter’s slice shape.

    • filed_size (int): The parameter’s filed size.

    • uniform_split (bool): Whether the parameter is split evenly.

    • opt_shard_group (str): The group of the parameter while running optimizer parallel.

  • set_sliced (bool) – True if the parameter is set sliced after initializing the data. Default: False.

Returns:

Parameter, the Parameter after initializing data. If current Parameter was already initialized before, returns the same initialized Parameter.

Raises:
  • RuntimeError – If it is from Initializer, and parallel mode has changed after the Initializer was created.

  • ValueError – If the length of the layout is less than 6.

  • TypeError – If layout is not tuple.
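
Examples

A minimal sketch with layout=None (the non-parallel case): the parameter is created from an initializer and its data is materialized by init_data:

>>> import mindspore as ms
>>> from mindspore import Parameter
>>> from mindspore.common.initializer import initializer
>>> p = Parameter(initializer("ones", [1, 2], ms.float32), name="w")
>>> out = p.init_data()
>>> print(out.asnumpy())
[[1. 1.]]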

property inited_param

Get the new parameter after call the init_data.

Default is None. If self is a Parameter without data, then after calling init_data the initialized Parameter with data will be recorded here.

is_init(self: mindspore._c_expression.Tensor) → bool

Get tensor init_flag.

Returns:

bool, whether the tensor has been initialized.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.is_init()
False
is_persistent_data(self: mindspore._c_expression.Tensor) → bool

Check whether the tensor has persistent data.

Returns:

Bool.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.is_persistent_data()
property key

Return the parameter unique key.

property layerwise_parallel

Get the layerwise parallel status(bool) of the parameter.

When layerwise_parallel is true in DATA_PARALLEL and HYBRID_PARALLEL parallel mode, broadcast and gradients communication would not be applied to parameters.

property name

Get the name of the parameter.

property parallel_optimizer

Get the optimizer parallel status(bool) of the parameter.

It is used to filter the weight shard operation in AUTO_PARALLEL and SEMI_AUTO_PARALLEL mode. It works only when enable parallel optimizer in mindspore.set_auto_parallel_context().

property parallel_optimizer_comm_recompute

Get the communication recompute status(bool) of optimizer parallel for the parameter.

In AUTO_PARALLEL and SEMI_AUTO_PARALLEL mode, when applying parallel optimizer, some mindspore.ops.AllGather operators used for parameters gathering are inserted automatically. It is used to control the recompute attr for those mindspore.ops.AllGather operators.

Note

  • Only Graph mode is supported.

  • It is recommended to use cell.recompute(parallel_optimizer_comm_recompute=True/False) to configure the AllGather operators introduced by the parallel optimizer rather than using this interface directly.

persistent_data_from_numpy(self: array, arg0: int_) → mindspore._c_expression.Tensor

Creates a Tensor from a numpy.ndarray without copy. Use persistent data tensor.

Arg:

array (numpy.ndarray): The input ndarray.
slice_num (int): The slice num of the persistent data tensor.

Returns:

Tensor, tensor with shared data to input ndarray.

Examples

>>> a = np.ones((2, 3))
>>> t = mindspore.Tensor.persistent_data_from_numpy(a, 1)
property requires_grad

Return whether the parameter requires gradient.

set_cast_dtype(self: mindspore._c_expression.Tensor, dtype: mindspore._c_expression.typing.Type = None) → None
set_data(data, slice_shape=False)[source]

Set Parameter’s data.

Parameters:
  • data (Union[Tensor, int, float]) – New data.

  • slice_shape (bool) – If set to True, the new data is treated as a slice of the parameter and its shape is not checked for consistency. Default: False.

Returns:

Parameter, the parameter after its data has been set.
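
Examples

A minimal sketch; with the default slice_shape=False, the new data keeps the parameter’s original shape (the pattern mirrors the Parameter class example above):

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Parameter, Tensor
>>> p = Parameter(Tensor(np.zeros((1, 2)), ms.float32), name="w")
>>> p.set_data(Tensor(np.ones((1, 2)), ms.float32))
>>> print(p.asnumpy())
[[1. 1.]]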

set_dtype(self: mindspore._c_expression.Tensor, arg0: mindspore._c_expression.typing.Type) → mindspore._c_expression.typing.Type

Set the tensor’s data type.

Arg:

dtype (mindspore.dtype): The type of output tensor.

Examples

>>> data = mindspore.Tensor(np.ones((1, 2), np.float32))
>>> data.set_dtype(mindspore.int32)
mindspore.int32
set_init_flag(self: mindspore._c_expression.Tensor, arg0: bool) → None

Set tensor init_flag.

Examples

>>> data = mindspore.Tensor(np.ones((2, 3)))
>>> data.set_init_flag(True)
set_param_fl(push_to_server=False, pull_from_server=False, requires_aggr=True)[source]

Set the way of parameter and server interaction.

Parameters:
  • push_to_server (bool) – Whether the parameter should be pushed to server. Default: False.

  • pull_from_server (bool) – Whether the parameter should be pulled from server. Default: False.

  • requires_aggr (bool) – Whether the parameter should be aggregated in the server. Default: True.

set_param_ps(init_in_server=False)[source]

Set whether the trainable parameter is updated by parameter server and whether the trainable parameter is initialized on server.

Note

It only works when a running task is in the parameter server mode. It is supported only in graph mode.

Parameters:

init_in_server (bool) – Whether trainable parameter updated by parameter server is initialized on server. Default: False.

setitem_index_info(self: object, arg0: object, arg1: object, arg2: bool_) → object
property shape

Get the MetaTensor’s shape.

property sliced

Get slice status of the parameter.

property unique

Whether the parameter is already unique or not.

value()[source]

Return the value of parameter object.

Examples

>>> from mindspore import Tensor, Parameter
>>> import numpy as np
>>> x = Parameter(Tensor(np.array([1, 2], dtype=np.float32)), name="param")
>>> x_value = x.value()
>>> print(x_value)
[1.  2.]
class tinyms.ParameterTuple[source]

Inherited from tuple, ParameterTuple is used to save multiple parameters.

Note

It is used to store the parameters of the network into the parameter tuple collection.

clone(prefix, init='same')[source]

Clone the parameters in ParameterTuple element-wisely to generate a new ParameterTuple.

Parameters:
  • prefix (str) – Namespace of the parameters; the prefix string will be added to the names of the parameters in the ParameterTuple.

  • init (Union[Tensor, str, numbers.Number]) –

    Clone the shape and dtype of Parameters in ParameterTuple and set data according to init. Default: ‘same’.

    • If init is a Tensor , set the new Parameter data to the input Tensor.

    • If init is numbers.Number , set the new Parameter data to the input number.

    • If init is a str, data will be set according to the initialization method of the same name in the Initializer.

    • If init is ‘same’, the new Parameter has the same value with the original Parameter.

Returns:

Tuple, the new Parameter tuple.
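
Examples

A minimal sketch; the cloned names are assumed to take the form prefix.name:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Parameter, ParameterTuple, Tensor
>>> params = ParameterTuple((Parameter(Tensor(np.ones((1,)), ms.float32), name="a"),
...                          Parameter(Tensor(np.ones((1,)), ms.float32), name="b")))
>>> cloned = params.clone(prefix="copy")
>>> print(cloned[0].name)
copy.a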

count()

Return number of occurrences of value.

index()

Return first index of value.

Raises ValueError if the value is not present.

tinyms.set_seed(seed)[source]

Set global seed.

Note

The global seed is used by numpy.random, mindspore.common.Initializer, mindspore.ops.function.random_func and mindspore.nn.probability.distribution.

If global seed is not set, these packages will use their own default seed independently, numpy.random and mindspore.common.Initializer will choose a random seed, mindspore.ops.function.random_func and mindspore.nn.probability.distribution will use zero.

A seed set by numpy.random.seed() is only used by numpy.random, while a seed set by this API will also be used by numpy.random, so setting all seeds through this API is recommended.

In semi_auto_parallel/auto_parallel mode, when using set_seed, weights with the same shape and the same sharding strategy on the same device will be initialized to the same result; otherwise, they will be initialized to different results.

Parameters:

seed (int) – The seed to be set.

Raises:

ValueError – If seed is invalid (seed < 0).

TypeError – If seed is not an int.

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_seed, Parameter
>>> from mindspore.common.initializer import initializer
>>> import mindspore as ms
>>> # Note: (1) Please make sure the code is running in PYNATIVE MODE;
>>> # (2) Because Composite-level ops need parameters to be Tensors, for below examples,
>>> # when using ops.uniform operator, minval and maxval are initialised as:
>>> minval = Tensor(1.0, ms.float32)
>>> maxval = Tensor(2.0, ms.float32)
>>>
>>> # 1. If global seed is not set, numpy.random and initializer will choose a random seed:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get different results:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
>>>
>>> # 2. If global seed is set, numpy.random and initializer will use it:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>>
>>> # 3. If neither global seed nor op seed is set, mindspore.ops.function.random_func and
>>> # mindspore.nn.probability.distribution will choose a random seed:
>>> c1 = ops.uniform((1, 4), minval, maxval) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get different results:
>>> c1 = ops.uniform((1, 4), minval, maxval) # C3
>>> c2 = ops.uniform((1, 4), minval, maxval) # C4
>>>
>>> # 4. If global seed is set, but op seed is not set, mindspore.ops.function.random_func and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # default op seed. Each call will change the default op seed, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval) # C2
>>>
>>> # 5. If both global seed and op seed are set, mindspore.ops.function.random_func and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # op seed counter. Each call will change the op seed counter, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 6. If op seed is set but global seed is not set, 0 will be used as global seed. Then
>>> # mindspore.ops.function.random_func and mindspore.nn.probability.distribution act as in
>>> # condition 5.
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the different results:
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 7. Recall set_seed() in the program will reset numpy seed and op seed counter of
>>> # mindspore.ops.function.random_func and mindspore.nn.probability.distribution.
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> c1 = ops.uniform((1, 4), minval, maxval, seed=2) # C1
>>> set_seed(1234)
>>> np_2 = np.random.normal(0, 1, [1]).astype(np.float32) # still get A1
>>> c2 = ops.uniform((1, 4), minval, maxval, seed=2) # still get C1
tinyms.get_seed()[source]

Get global seed.

Returns:

Integer. The global seed.
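
Examples

A minimal sketch pairing get_seed with set_seed:

>>> from mindspore import set_seed, get_seed
>>> set_seed(1234)
>>> print(get_seed())
1234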

tinyms.set_dump(target, enabled=True)[source]

Enable or disable dump for the target and its contents.

target should be an instance of mindspore.nn.Cell or mindspore.ops.Primitive . Please note that this API takes effect only when Asynchronous Dump is enabled and the dump_mode field in dump config file is “2”. See the dump document for details. The default enabled status for a mindspore.nn.Cell or mindspore.ops.Primitive is False.

Warning

This is an experimental API that is subject to change or deletion.

Note

  1. This API is only effective for GRAPH_MODE with Ascend backend.

  2. This API only supports being called before training starts. If you call this API during training, it may not be effective.

  3. After using set_dump(Cell, True) , operators in forward and backward computation (computation generated by the grad operations) of the cell will be dumped.

  4. For mindspore.nn.SoftmaxCrossEntropyWithLogits layer, the forward computation and backward computation use the same set of operators. So you can only see dump data from backward computation. Please note that mindspore.nn.SoftmaxCrossEntropyWithLogits layer will also use the above operators internally when initialized with sparse=True and reduction=”mean” .

Parameters:
  • target (Union[Cell, Primitive]) – The Cell instance or Primitive instance to which the dump flag is set.

  • enabled (bool, optional) – True means enable dump, False means disable dump. Default: True.

Supported Platforms:

Ascend

Examples

>>> # Please set the dump config file and environment variable before
>>> # running this example to actually get the dump data.
>>> # See the document of this API for details.
>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, set_dump
>>>
>>> ms.set_context(device_target="Ascend", mode=ms.GRAPH_MODE)
>>>
>>> class MyNet(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.conv1 = nn.Conv2d(5, 6, 5, pad_mode='valid')
...         self.relu1 = nn.ReLU()
...
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu1(x)
...         return x
>>>
>>> if __name__ == "__main__":
...     net = MyNet()
...     set_dump(net.conv1)
...     input_tensor = Tensor(np.ones([1, 5, 10, 10], dtype=np.float32))
...     output = net(input_tensor)
tinyms.ms_memory_recycle()[source]

Recycle memory used by MindSpore. When training multiple neural network models in one process, the memory used by MindSpore can be very large because MindSpore caches runtime memory for every model. To recycle this cached memory, users can call this function after training one model.
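
Examples

A minimal usage sketch; train_model_a and train_model_b are hypothetical stand-ins for full training routines:

>>> import mindspore as ms
>>> def train_model_a():
...     pass  # hypothetical: build and train the first model here
...
>>> def train_model_b():
...     pass  # hypothetical: build and train the second model here
...
>>> train_model_a()
>>> ms.ms_memory_recycle()  # release runtime memory cached for the first model
>>> train_model_b()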

tinyms.mutable(input_data, dynamic_len=False)[source]

Make a constant value mutable.

Currently, all the inputs of Cell except Tensor such as scalar, tuple, list and dict, are regarded as constant values. The constant values are non-differentiable and used to do constant folding in the optimization process.

Besides, currently when the network input is tuple[Tensor], list[Tensor] or dict[Tensor], even without changing the shape and dtype of the Tensors, the network will be re-compiled when calling this network repeatedly because these inputs are regarded as constant values.

To solve the above problems, we provide the api mutable to make the constant inputs of Cell ‘mutable’. A ‘mutable’ input is changed to be a variable input just like a Tensor and, most importantly, it becomes differentiable.

When the input_data is tuple or list and dynamic_len is False, mutable will return a constant length tuple or list with all mutable elements. If dynamic_len is True, the length of the return tuple or list will be dynamic.

If a dynamic length tuple or list is used as the input of the network and the network is called repeatedly with lengths that differ between runs, it does not need to be re-compiled (see the additional sketch at the end of the Examples below).

Parameters:
  • input_data (Union[int, float, Tensor, tuple, list, dict]) – The input data to be made mutable. If input_data is a list/tuple/dict, the type of each element should also be in the valid types.

  • dynamic_len (bool) – Whether to set the whole sequence to be dynamic length. In graph compilation, if dynamic_len is True, the input_data must be list or tuple and the elements of input_data must have the same type and shape. Default: False.

Warning

This is an experimental API that is subject to change or deletion.

Note

Currently this api only works in GRAPH mode.

Returns:

The original input data, which has been set mutable.

Raises:
  • TypeError – If input_data is not one of int, float, Tensor, tuple, list, dict or their nested structure.

  • TypeError – If dynamic_len is True and input_data is not tuple or list.

  • ValueError – If dynamic_len is True, input_data is tuple or list but the elements within input_data do not have the same shape and type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore.common import mutable
>>> from mindspore.common import dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore import context
>>> context.set_context(mode=context.GRAPH_MODE)
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...
...     def construct(self, z):
...         x = z[0]
...         y = z[1]
...         out = self.matmul(x, y)
...         return out
...
>>> class GradNetWrtX(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtX, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation()
...
...     def construct(self, z):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(z)
...
>>> z = mutable((Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32),
...              Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)))
>>> output = GradNetWrtX(Net())(z)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 1.41000009e+00,  1.60000002e+00,  6.59999943e+00],
 [ 1.41000009e+00,  1.60000002e+00,  6.59999943e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 1.70000005e+00,  1.70000005e+00,  1.70000005e+00],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.50000000e+00,  1.50000000e+00,  1.50000000e+00]]))
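
A short additional sketch of dynamic_len=True as described above; the returned list is treated as having dynamic length, so repeated calls with a different number of Tensors of the same shape and dtype are not expected to trigger recompilation:

>>> seq2 = mutable([Tensor([1.0], dtype=mstype.float32), Tensor([2.0], dtype=mstype.float32)],
...                dynamic_len=True)
>>> seq3 = mutable([Tensor([1.0], dtype=mstype.float32), Tensor([2.0], dtype=mstype.float32),
...                 Tensor([3.0], dtype=mstype.float32)], dynamic_len=True)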
class tinyms.JitConfig(jit_level='O1', exc_mode='auto', **kwargs)[source]

Jit config for compile.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • jit_level (str) –

    Optimization level of the compiled graph. Supports [“O0”, “O1”, “O2”, “O3”]. Default: “O1”.

    • ”O0”: Basic optimization.

    • ”O1”: Manual optimization.

    • ”O2”: Manual optimization and graph computation fusion.

    • ”O3”: Performance optimization, no generalization guaranteed.

  • exc_mode (str) –

    Mode for executing the network. Supports [“auto”, “sink”, “no_sink”]. Default: “auto”.

    • ”auto”: Automatic Policies.

    • ”sink”: Build computational graphs with the sink mode.

    • ”no_sink”: Build computational graphs with no sink mode.

  • **kwargs (dict) – A dictionary of keyword arguments that the class needs.

Examples

>>> from mindspore import JitConfig
>>>
>>> jitconfig = JitConfig(jit_level="O1")
>>> net = LeNet5()
>>>
>>> net.set_jit_config(jitconfig)
tinyms.abs(x, dtype=None)

Calculates the absolute value element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Currently the backend kernel only supports float calculation; if the input is not a float, it will be cast to mstype.float32 and cast back.

Parameters:
  • x (Tensor) – Tensor to be used for calculation.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, -5], np.float32)
>>> output = np.absolute(x)
>>> print(output)
[1. 2. 3. 4. 5.]
tinyms.absolute(x, dtype=None)[source]

Calculates the absolute value element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Currently the backend kernel only supports float calculation; if the input is not a float, it will be cast to mstype.float32 and cast back.

Parameters:
  • x (Tensor) – Tensor to be used for calculation.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, -5], np.float32)
>>> output = np.absolute(x)
>>> print(output)
[1. 2. 3. 4. 5.]
tinyms.add(x1, x2, dtype=None)[source]

Adds arguments element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input to be added.

  • x2 (Tensor) – input to be added.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the sum of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.full((3, 2), [1, 2])
>>> x2 = np.full((3, 2), [3, 4])
>>> output = np.add(x1, x2)
>>> print(output)
[[4 6]
[4 6]
[4 6]]
tinyms.amax(a, axis=None, keepdims=False, initial=None, where=True)[source]

Returns the maximum of an array or maximum along an axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Tensor) – Input data.

  • axis (None or int or tuple of integers, optional) – Defaults to None. Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of integers, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

  • initial (scalar, optional) – Defaults to None. The minimum value of an output element. Must be present to allow computation on empty slice.

  • where (boolean Tensor, optional) – Defaults to True. A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If non-default value is passed, initial must also be provided.

Returns:

Tensor or scalar, maximum of a. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension a.ndim - 1.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape((2,2)).astype('float32')
>>> output = np.amax(a)
>>> print(output)
3.0
>>> output = np.amax(a, axis=0)
>>> print(output)
[2. 3.]
>>> output = np.amax(a, axis=1)
>>> print(output)
[1. 3.]
>>> output = np.amax(a, where=np.array([False, True]), initial=-1, axis=0)
>>> print(output)
[-1.  3.]
tinyms.amin(a, axis=None, keepdims=False, initial=None, where=True)[source]

Returns the minimum of an array or minimum along an axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Tensor) – Input data.

  • axis (None or int or tuple of integers, optional) – Defaults to None. Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of integers, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.

  • keepdims (bool, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

  • initial (Number, optional) – Defaults to None. The maximum value of an output element. Must be present to allow computation on empty slice.

  • where (bool Tensor, optional) – Defaults to True. A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If non-default value is passed, initial must also be provided.

Returns:

Tensor or scalar, minimum of a. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension a.ndim - 1.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape((2,2)).astype('float32')
>>> output = np.amin(a)
>>> print(output)
0.0
>>> output = np.amin(a, axis=0)
>>> print(output)
[0. 1.]
>>> output = np.amin(a, axis=1)
>>> print(output)
[0. 2.]
>>> output = np.amin(a, where=np.array([False, True]), initial=10, axis=0)
>>> print(output)
[10.  1.]
tinyms.append(arr, values, axis=None)[source]

Appends values to the end of a tensor.

Parameters:
  • arr (Tensor) – Values are appended to a copy of this tensor.

  • values (Tensor) – These values are appended to a copy of arr. It must be of the correct shape (the same shape as arr, excluding axis). If axis is not specified, values can be any shape and will be flattened before use.

  • axis (None, int, optional) – The axis along which values are appended. If axis is not given, both arr and values are flattened before use, default is None.

Returns:

Tensor, a copy of tensor with values appended to axis.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If specified axis exceeds arr.ndim.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((2, 3))
>>> b = np.ones((2, 1))
>>> print(np.append(a, b, axis=1).shape)
(2, 4)
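When axis is None, both arr and values are flattened before appending; continuing the example above:

>>> print(np.append(a, b).shape)
(8,)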
tinyms.apply_along_axis(func1d, axis, arr, *args, **kwargs)[source]

Applies a function to 1-D slices along the given axis. Executes func1d(a, *args, **kwargs) where func1d operates on 1-D arrays and a is a 1-D slice of arr along axis.

Parameters:
  • func1d (function) – Maps (M,) -> (Nj…). This function should accept 1-D arrays. It is applied to 1-D slices of arr along the specified axis.

  • axis (int) – Axis along which arr is sliced.

  • arr (Tensor) – Input array with shape (Ni…, M, Nk…).

  • args (any) – Additional arguments to func1d.

  • kwargs (any) – Additional named arguments to func1d.

Returns:

Tensor with shape (Ni…, Nj…, Nk…), the output array. Its shape is identical to the shape of arr, except along the axis dimension. This axis is removed, and replaced with new dimensions equal to the shape of the return value of func1d. So if func1d returns a scalar, the output will have one fewer dimensions than arr.

Supported Platforms:

Ascend GPU CPU

Raises:

ValueError – If axis is out of the range.

Examples

>>> import mindspore.numpy as np
>>> b = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> print(np.apply_along_axis(np.diag, -1, b))
[[[1 0 0]
[0 2 0]
[0 0 3]]
[[4 0 0]
[0 5 0]
[0 0 6]]
[[7 0 0]
[0 8 0]
[0 0 9]]]
tinyms.apply_over_axes(func, a, axes)[source]

Applies a function repeatedly over multiple axes.

func is called as res = func(a, axis), where axis is the first element of axes. The result res of the function call must have either the same dimensions as a or one less dimension. If res has one less dimension than a, a dimension is inserted before axis. The call to func is then repeated for each axis in axes, with res as the first argument.

Parameters:
  • func (function) – This function must take two arguments, func(a, axis).

  • a (Union[int, float, bool, list, tuple, Tensor]) – Input tensor.

  • axes (Union[int, list, tuple]) – Axes over which func is applied; the elements must be integers.

Returns:

Tensor. The number of dimensions is the same as a, but the shape can be different. This depends on whether func changes the shape of its output with respect to its input.

Raises:
  • TypeError – If input a is not array_like or axes is not int or sequence of ints.

  • ValueError – If any axis is out of range or duplicate axes exist.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(10).reshape(2, 5).astype('float32')
>>> print(x)
[[0. 1. 2. 3. 4.]
 [5. 6. 7. 8. 9.]]
>>> print(np.apply_over_axes(np.sum, x, axes=0))
[[ 5.  7.  9. 11. 13.]]
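Passing several axes applies func successively while preserving the number of dimensions; continuing the example above:

>>> print(np.apply_over_axes(np.sum, x, axes=(0, 1)))
[[45.]]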
tinyms.arange(start, stop=None, step=None, dtype=None)[source]

Returns evenly spaced values within a given interval.

Parameters:
  • start (Union[int, float]) – Start of interval. The interval includes this value. When stop is provided as a positional argument, start must also be given; when stop is passed as a keyword argument, start is optional and defaults to 0. Please see the additional examples below.

  • stop (Union[int, float], optional) – End of interval. The interval does not include this value, except in some cases where step is not an integer and floating point round-off affects the length of out.

  • step (Union[int, float], optional) – Spacing between values. For any output out, this is the distance between two adjacent values, \(out[i+1] - out[i]\). The default step size is 1. If step is specified as a positional argument, start must also be given.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype. If dtype is None, the data type of the new tensor will be inferred from start, stop and step. Default is None.

Returns:

Tensor with evenly spaced values.

Raises:
  • TypeError(PyNative Mode) – If input arguments have types not specified above, or arguments are not given in the correct order specified above.

  • RuntimeError(Graph Mode) – Inputs that lead to a TypeError in PyNative Mode lead to a RuntimeError in Graph Mode.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.arange(0, 5, 1))
[0 1 2 3 4]
>>> print(np.arange(3))
[0 1 2]
>>> print(np.arange(start=0, stop=3))
[0 1 2]
>>> print(np.arange(0, stop=3, step=0.5))
[0.  0.5 1.  1.5 2.  2.5]
tinyms.arccos(input, dtype=None)[source]

Trigonometric inverse cosine, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • input (Tensor) – Input tensor. x-coordinate on the unit circle. For real arguments, the domain is \([-1, 1]\).

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input = np.asarray([1, -1], np.float32)
>>> output = np.arccos(input)
>>> print(output)
[0.        3.1415927]
tinyms.arccosh(x, dtype=None)[source]

Inverse hyperbolic cosine, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(1, 5).astype('float32')
>>> print(np.arccosh(x))
[0.        1.316958  1.7627472 2.063437 ]
tinyms.arcsin(x, dtype=None)[source]

Inverse sine, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor. y-coordinate on the unit circle.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Output Tensor.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, -1], np.float32)
>>> output = np.arcsin(x)
>>> print(output)
[ 1.5707964 -1.5707964]
tinyms.arcsinh(x, dtype=None)[source]

Inverse hyperbolic sine element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1., 2., 3., 4.], dtype=np.float32)
>>> print(np.arcsinh(x))
[0.8813736 1.4436355 1.8184465 2.0947125]
tinyms.arctan(x, dtype=None)[source]

Trigonometric inverse tangent, element-wise.

The inverse of tan, so that if \(y = tan(x)\) then \(x = arctan(y)\).

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(5).astype('float32')
>>> print(np.arctan(x))
[0.        0.7853982 1.1071488 1.2490457 1.3258177]
tinyms.arctan2(x1, x2, dtype=None)[source]

Element-wise arc tangent of \(x1/x2\) choosing the quadrant correctly.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input tensor.

  • x2 (Tensor) – input tensor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the element-wise arc tangent of \(x1/x2\), in radians. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([-1, +1, +1, -1])
>>> x2 = np.array([-1, -1, +1, +1])
>>> output = np.arctan2(x1, x2)
>>> print(output)
[-2.3561945   2.3561945   0.78539819 -0.78539819]
tinyms.arctanh(x, dtype=None)[source]

Inverse hyperbolic tangent element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([-0.99, -0.75, -0.5, 0, 0.5]).astype('float32')
>>> print(np.arctanh(x))
[-2.646653   -0.97295505 -0.54930615  0.          0.54930615]
tinyms.argmax(a, axis=None)[source]

Returns the indices of the maximum values along an axis.

Note

Numpy argument out is not supported. On Ascend, in case of multiple occurrences of the maximum values, the return indices may not necessarily correspond to the first occurrence.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input array.

  • axis (int, optional) – By default, the index is into the flattened array, otherwise along the specified axis. Default: None.

Returns:

Tensor, array of indices into the array. It has the same shape as a.shape with the dimension along axis removed.

Raises:

ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(10, 16).reshape(2, 3)
>>> print(np.argmax(a))
5
>>> print(np.argmax(a, axis=0))
[1 1 1]
>>> print(np.argmax(a, axis=1))
[2 2]
tinyms.argmin(a, axis=None)[source]

Returns the indices of the minimum values along an axis.

Note

Numpy argument out is not supported.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input array.

  • axis (int, optional) – By default, the index is into the flattened array, otherwise along the specified axis. Default: None.

Returns:

Tensor, array of indices into the array. It has the same shape as a.shape with the dimension along axis removed.

Raises:

ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(10, 16).reshape(2, 3)
>>> print(np.argmin(a))
0
>>> print(np.argmin(a, axis=0))
[0 0 0]
>>> print(np.argmin(a, axis=1))
[0 0]
tinyms.array(obj, dtype=None, copy=True, ndmin=0)[source]

Creates a tensor.

This function creates tensors from an array-like object.

Parameters:
  • obj (Union[int, float, bool, list, tuple]) – Input data, in any form that can be converted to a Tensor. This includes Tensor, list, tuple and numbers.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or ‘int32’. If dtype is None, the data type of the new tensor will be inferred from obj. Default is None.

  • copy (bool) – If True, then the object is copied. Otherwise, a copy will only be made if necessary. Default: True.

  • ndmin (int) – Specifies the minimum number of dimensions that the resulting tensor should have. Ones will be pre-pended to the shape as needed to meet this requirement. Default: 0

Returns:

Tensor, generated tensor with the specified dtype.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If input obj has different sizes at different dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.array([1,2,3]))
[1 2 3]
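The ndmin argument pads the shape with leading ones to reach the requested number of dimensions:

>>> print(np.array([1, 2, 3], ndmin=2))
[[1 2 3]]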
tinyms.array_equal(a1, a2, equal_nan=False)[source]

Returns True if input arrays have the same shape and all elements are equal.

Note

In mindspore, a bool tensor is returned instead, since in Graph mode, the value cannot be traced and computed at compile time.

Since on Ascend, nan is treated differently, currently the argument equal_nan is not supported on Ascend.

Parameters:
  • a1 (Union[int, float, bool, list, tuple, Tensor]) – Input array.

  • a2 (Union[int, float, bool, list, tuple, Tensor]) – Input array.

  • equal_nan (bool) – Whether to compare NaNs as equal. Default: False.

Returns:

Scalar bool tensor, value is True if inputs are equal, False otherwise.

Raises:

TypeError – If inputs have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = [0,1,2]
>>> b = [[0,1,2], [0,1,2]]
>>> print(np.array_equal(a,b))
False
tinyms.array_equiv(a1, a2)[source]

Returns True if input arrays are shape consistent and all elements equal.

Shape consistent means they are either the same shape, or one input array can be broadcasted to create the same shape as the other one.

Note

In mindspore, a bool tensor is returned instead, since in Graph mode, the value cannot be traced and computed at compile time.

Parameters:

a1/a2 (Union[int, float, bool, list, tuple, Tensor]) – Input arrays.

Returns:

Scalar bool tensor, value is True if inputs are equivalent, False otherwise.

Raises:

TypeError – If inputs have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = [0,1,2]
>>> b = [[0,1,2], [0,1,2]]
>>> print(np.array_equiv(a,b))
True
tinyms.array_split(x, indices_or_sections, axis=0)[source]

Splits a tensor into multiple sub-tensors.

Note

Currently, array_split only supports mindspore.float32 on CPU.

The only difference between np.split and np.array_split is that np.array_split allows indices_or_sections to be an integer that does not equally divide the axis. For a tensor of length l that should be split into n sections, it returns \(l % n\) sub-arrays of size \(l//n + 1\) and the rest of size \(l//n\).

Parameters:
  • x (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – If integer, \(N\), the tensor will be divided into \(N\) tensors along axis. If a tuple(int) or list(int) of sorted integers, the entries indicate where along axis the array is split. For example, \([2, 3]\) would, for \(axis=0\), result in three sub-tensors \(x[:2]\), \(x[2:3]\) and \(x[3:]\). If an index exceeds the dimension of the array along axis, an empty sub-array is returned correspondingly.

  • axis (int) – The axis along which to split. Default: 0.

Returns:

A list of sub-tensors.

Raises:
  • TypeError – If argument indices_or_sections is not integer, tuple(int) or list(int) or argument axis is not integer.

  • ValueError – If argument axis is out of range of \([-x.ndim, x.ndim)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.arange(9).astype("float32")
>>> output = np.array_split(input_x, 4)
>>> print(output)
(Tensor(shape=[3], dtype=Float32,
    value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
Tensor(shape=[2], dtype=Float32,
    value= [ 3.00000000e+00,  4.00000000e+00]),
Tensor(shape=[2], dtype=Float32,
    value= [ 5.00000000e+00,  6.00000000e+00]),
Tensor(shape=[2], dtype=Float32,
    value= [ 7.00000000e+00,  8.00000000e+00]))
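Passing a list of sorted indices splits at those positions along axis; a sketch continuing the example above:

>>> output = np.array_split(input_x, [2, 5])
>>> print([sub.shape for sub in output])
[(2,), (3,), (4,)]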
tinyms.array_str(a)[source]

Returns a string representation of the data in an array.

The data in the array is returned as a single string. This function is similar to array_repr, the difference being that array_repr also returns information on the kind of array and its data type.

Note

Numpy argument max_line_width, precision and suppress_small are not supported. Graph mode does not support the function.

Parameters:

a (Tensor) – Input data.

Returns:

String.

Supported Platforms:

Ascend GPU CPU

Raises:

TypeError – If input is not tensor.

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(5)
>>> np.array_str(x)
'[0 1 2 3 4]'
tinyms.asarray(a, dtype=None)[source]

Converts the input to tensor.

This function converts tensors from an array-like object.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input data, in any form that can be converted to a Tensor. This includes Tensor, list, tuple and numbers.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or ‘int32’. If dtype is None, the data type of the new tensor will be inferred from obj. Default is None.

Returns:

Tensor, generated tensor with the specified dtype.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If input a has different sizes at different dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.asarray([1,2,3]))
[1 2 3]
tinyms.asfarray(a, dtype=mindspore.float32)[source]

Similar to asarray, converts the input to a float tensor.

If non-float dtype is defined, this function will return a float32 tensor instead.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input data, in any form that can be converted to a Tensor. This includes Tensor, list, tuple and numbers.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, can be in format of np.int32, or ‘int32’. If dtype is None, the data type of the new tensor will be inferred from a. Default is mindspore.float32.

Returns:

Tensor, generated tensor with the specified float dtype.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If input a has different sizes at different dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.asfarray([1,2,3]))
[1. 2. 3.]
tinyms.atleast_1d(*arys)[source]

Converts inputs to arrays with at least one dimension.

Scalar inputs are converted to 1-dimensional arrays, whilst higher-dimensional inputs are preserved.

Note

In graph mode, returns a tuple of tensors instead of a list of tensors.

Parameters:

*arys (Tensor) – one or more input tensors.

Returns:

Tensor, or list of tensors, each with a.ndim >= 1.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((2, 3))
>>> b = np.ones(())
>>> c = np.ones(5)
>>> output = np.atleast_1d(a, b, c)
>>> print(output)
    [Tensor(shape=[2, 3], dtype=Float32, value=
    [[1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
    [1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]),
    Tensor(shape=[1], dtype=Float32, value= [1.00000000e+00]),
    Tensor(shape=[5], dtype=Float32,
    value= [1.00000000e+00, 1.00000000e+00, 1.00000000e+00,
    1.00000000e+00, 1.00000000e+00])]
tinyms.atleast_2d(*arys)[source]

Reshapes inputs as arrays with at least two dimensions.

Note

In graph mode, returns a tuple of tensors instead of a list of tensors.

Parameters:

*arys (Tensor) – one or more input tensors.

Returns:

Tensor, or list of tensors, each with a.ndim >= 2.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((2, 3))
>>> b = np.ones(())
>>> c = np.ones(5)
>>> output = np.atleast_2d(a, b, c)
>>> print(output)
    [Tensor(shape=[2, 3], dtype=Float32, value=
    [[1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
    [1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]),
    Tensor(shape=[1, 1], dtype=Float32, value= [[1.00000000e+00]]),
    Tensor(shape=[1, 5], dtype=Float32,
    value= [[1.00000000e+00, 1.00000000e+00, 1.00000000e+00,
    1.00000000e+00, 1.00000000e+00]])]
tinyms.atleast_3d(*arys)[source]

Reshapes inputs as arrays with at least three dimensions.

Note

In graph mode, returns a tuple of tensors instead of a list of tensors.

Parameters:

*arys (Tensor) – one or more input tensors.

Returns:

Tensor, or list of tensors, each with a.ndim >= 3. For example, a 1-D array of shape (N,) becomes a tensor of shape (1, N, 1), and a 2-D array of shape (M, N) becomes a tensor of shape (M, N, 1).

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((2, 3))
>>> b = np.ones(())
>>> c = np.ones(5)
>>> output = np.atleast_3d(a, b, c)
>>> print(output)
    [Tensor(shape=[2, 3, 1], dtype=Float32, value=
    [[[1.00000000e+00], [1.00000000e+00], [1.00000000e+00]],
    [[1.00000000e+00], [1.00000000e+00], [1.00000000e+00]]]),
    Tensor(shape=[1, 1, 1], dtype=Float32, value= [[[1.00000000e+00]]]),
    Tensor(shape=[1, 5, 1], dtype=Float32,
    value= [[[1.00000000e+00], [1.00000000e+00], [1.00000000e+00],
    [1.00000000e+00], [1.00000000e+00]]])]
tinyms.average(x, axis=None, weights=None, returned=False)[source]

Computes the weighted average along the specified axis.

Parameters:
  • x (Tensor) – A Tensor to be averaged.

  • axis (Union[None, int, tuple(int)]) – Axis along which to average x. Default: None. If the axis is None, it will average over all of the elements of the tensor x. If the axis is negative, it counts from the last to the first axis.

  • weights (Union[None, Tensor]) – Weights associated with the values in x. Default: None. If weights is None, all the data in x are assumed to have a weight equal to one. If weights is 1-D tensor, the length must be the same as the given axis. Otherwise, weights should have the same shape as x.

  • returned (bool) – Default: False. If True, the tuple (average, sum_of_weights) is returned. If False, only the average is returned.

Returns:

Averaged Tensor. If returned is True, return tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.array([[1., 2.], [3., 4.]])
>>> output = np.average(input_x, axis=0, weights=input_x, returned=True)
>>> print(output)
(Tensor(shape=[2], dtype=Float32, value= [ 2.50000000e+00,  3.33333325e+00]),
 Tensor(shape=[2], dtype=Float32, value= [ 4.00000000e+00,  6.00000000e+00]))
tinyms.bartlett(M)[source]

Returns the Bartlett window. The Bartlett window is very similar to a triangular window, except that the end points are at zero. It is often used in signal processing for tapering a signal, without generating too much ripple in the frequency domain.

Parameters:

M (int) – Number of points in the output window. If zero or less, an empty array is returned.

Returns:

Tensor, the triangular window, with the maximum value normalized to one (the value one appears only if the number of samples is odd), with the first and last samples equal to zero.

Raises:

TypeError – If M is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.bartlett(12))
[0.         0.18181819 0.36363637 0.5454545  0.72727275 0.9090909
0.9090909  0.72727275 0.5454545  0.36363637 0.18181819 0.        ]
tinyms.bincount(x, weights=None, minlength=0, length=None)[source]

Count number of occurrences of each value in array of non-negative ints. The number of bins (of size 1) is one larger than the largest value in x. If minlength is specified, there will be at least this number of bins in the output array (though it will be longer if necessary, depending on the contents of x). Each bin gives the number of occurrences of its index value in x. If weights is specified the input array is weighted by it, i.e. if a value n is found at position i, out[n] += weight[i] instead of out[n] += 1.

Note

The additional argument length specifies the number of bins (overriding x.max() + 1), which must be provided in graph mode. If x contains negative values, no error will be raised, and negative values are treated as zeros instead.

Parameters:
  • x (Union[list, tuple, Tensor]) – 1-d input array.

  • weights (Union[int, float, bool, list, tuple, Tensor], optional) – Weights, array of the same shape as x. Defaults to None.

  • minlength (int, optional) – A minimum number of bins for the output array. Defaults to 0.

  • length (int, optional) – Number of bins. Defaults to None.

Returns:

Tensor, the result of binning the input array. The length of out is equal to np.amax(x)+1.

Raises:

ValueError – If x is not one-dimensional, or if x and weights do not have the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.bincount(np.arange(5)))
[1. 1. 1. 1. 1.]
>>> print(np.bincount(np.array([0, 1, 1, 3, 2, 1, 7])))
[1. 3. 1. 1. 0. 0. 0. 1.]
>>> w = np.array([0.3, 0.5, 0.2, 0.7, 1., -0.6]) # weights
>>> x = np.array([0, 1, 1, 2, 2, 2])
>>> print(np.bincount(x,  weights=w))
[0.3 0.7 1.1]
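The length argument, which is required in graph mode, fixes the number of bins regardless of the maximum value in x; a sketch:

>>> print(np.bincount(np.array([0, 1, 1, 3]), length=6))
[1. 2. 0. 1. 0. 0.]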
tinyms.bitwise_and(x1, x2, dtype=None)[source]

Computes the bit-wise AND of two arrays element-wise. Computes the bit-wise AND of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator &.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. Only integer and boolean types are handled. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, this is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.bitwise_and(13, 17))
1
tinyms.bitwise_not(x, dtype=None)

Computes bit-wise inversion, or bit-wise NOT, element-wise. Computes the bit-wise NOT of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator ~. For signed integer inputs, the two’s complement is returned. In a two’s-complement system, negative numbers are represented by the two’s complement of the absolute value; this is the most common method of representing signed integers on computers [1]. An N-bit two’s-complement system can represent every integer in the range \(-2^{N-1}\) to \(+2^{N-1}-1\).

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Supported dtypes on Ascend: np.int16, np.uint16.

Parameters:
  • x (Tensor) – Only integer and boolean types are handled.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar.

Supported Platforms:

Ascend

Examples

>>> import mindspore.numpy as np
>>> print(np.invert(np.array(13, dtype=np.uint16)))
65522
tinyms.bitwise_or(x1, x2, dtype=None)[source]

Computes the bit-wise OR of two arrays element-wise. Computes the bit-wise OR of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator |.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. Only integer and boolean types are handled. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, this is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.bitwise_or(13, 16))
29
tinyms.bitwise_xor(x1, x2, dtype=None)[source]

Computes the bit-wise XOR of two arrays element-wise. Computes the bit-wise XOR of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator ^.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. Only integer and boolean types are handled. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, this is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.bitwise_xor(13, 17))
28
tinyms.blackman(M)[source]

Returns the Blackman window. The Blackman window is a taper formed by using the first three terms of a summation of cosines. It was designed to have close to the minimal leakage possible. It is close to optimal, only slightly worse than a Kaiser window.

Parameters:

M (int) – Number of points in the output window. If zero or less, an empty array is returned.

Returns:

Tensor, the window, with the maximum value normalized to one (the value one appears only if the number of samples is odd).

Raises:

TypeError – If M is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.blackman(12))
[-1.4901161e-08  3.2606430e-02  1.5990365e-01  4.1439798e-01
7.3604518e-01  9.6704674e-01  9.6704674e-01  7.3604518e-01
4.1439798e-01  1.5990365e-01  3.2606430e-02 -1.4901161e-08]
tinyms.broadcast_arrays(*args)[source]

Broadcasts any number of arrays against each other.

Note

Numpy argument subok is not supported. In graph mode, returns a tuple of tensors instead of a list of tensors.

Parameters:

*args (Tensor) – The arrays to broadcast.

Returns:

List of Tensor.

Raises:

ValueError – If arrays cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([[1,2,3]])
>>> y = np.array([[4],[5]])
>>> output = np.broadcast_arrays(x, y)
>>> print(output)
[Tensor(shape=[2, 3], dtype=Int32, value=
[[1, 2, 3],
[1, 2, 3]]), Tensor(shape=[2, 3], dtype=Int32, value=
[[4, 4, 4],
[5, 5, 5]])]
tinyms.broadcast_to(array, shape)[source]

Broadcasts an array to a new shape.

Parameters:
  • array (Tensor) – The array to broadcast.

  • shape (tuple) – The shape of the desired array.

Returns:

Tensor, original array broadcast to the given shape.

Raises:

ValueError – If array cannot be broadcast to shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1, 2, 3])
>>> output = np.broadcast_to(x, (3, 3))
>>> print(output)
[[1 2 3]
[1 2 3]
[1 2 3]]
tinyms.cbrt(x, dtype=None)[source]

Returns the cube-root of a tensor, element-wise.

Note

Numpy arguments casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.asarray([1, -1, 3, -8, 64])
>>> output = np.cbrt(a)
>>> print(output)
[ 1.        -1.         1.4422495 -2.         4.       ]
tinyms.ceil(x, dtype=None)[source]

Returns the ceiling of the input, element-wise.

The ceil of the scalar x is the smallest integer i, such that i >= x.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • x (Tensor) – input values.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the ceiling of each element in x. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> output = np.ceil(a)
>>> print(output)
[-1. -1. -0.  1.  2.  2.  2.]
tinyms.choose(a, choices, mode='clip')[source]

Construct an array from an index array and a list of arrays to choose from. Given an “index” array a of integers and a sequence of n arrays (choices), a and each choice array are first broadcast, as necessary, to arrays of a common shape; calling these Ba and Bchoices[i], i = 0,…,n-1 we have that, necessarily, Ba.shape == Bchoices[i].shape for each i. Then, a new array with shape Ba.shape is created as follows:

  • if mode='raise' (numpy's default, which is not supported here; see the note below), then, first of all, each element of a (and thus Ba) must be in the range [0, n-1]; now, suppose that i (in that range) is the value at the (j0, j1, …, jm) position in Ba - then the value at the same position in the new array is the value in Bchoices[i] at that same position;

  • if mode='wrap', values in a (and thus Ba) may be any (signed) integer; modular arithmetic is used to map integers outside the range [0, n-1] back into that range; and then the new array is constructed as above;

  • if mode='clip', values in a (and thus Ba) may be any (signed) integer; negative integers are mapped to 0; values greater than n-1 are mapped to n-1; and then the new array is constructed as above.

Note

Numpy argument out is not supported. mode = 'raise' is not supported, and the default mode is ‘clip’ instead.

Parameters:
  • a (int array) – This array must contain integers in [0, n-1], where n is the number of choices, unless mode=wrap or mode=clip, in which cases any integers are permissible.

  • choices (sequence of arrays) – Choice arrays. a and all of the choices must be broadcastable to the same shape. If choices is itself an array, then its outermost dimension (i.e., the one corresponding to choices.shape[0]) is taken as defining the “sequence”.

  • mode ('raise', 'wrap', 'clip', optional) –

    Specifies how indices outside [0, n-1] will be treated:

    ’raise’ – raise an error;

    ’wrap’ – wrap around;

    ’clip’ – clip to the range. ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers.

Returns:

Tensor, the merged result.

Raises:

ValueError – If a and any of the choices cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33]]
>>> print(np.choose([2, 3, 1, 0], choices))
[20 31 12  3]
>>> print(np.choose([2, 4, 1, 0], choices, mode='clip'))
[20 31 12  3]
>>> print(np.choose([2, 4, 1, 0], choices, mode='wrap'))
[20  1 12  3]
>>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
>>> choices = [-10, 10]
>>> print(np.choose(a, choices))
[[ 10 -10  10]
 [-10  10 -10]
 [ 10 -10  10]]
tinyms.clip(x, xmin, xmax, dtype=None)[source]

Clips (limits) the values in an array.

Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of \([0, 1]\) is specified, values smaller than 0 become 0, and values larger than 1 become 1.

Parameters:
  • x (Tensor) – Tensor containing elements to clip.

  • xmin (Tensor, scalar, None) – Minimum value. If None, clipping is not performed on lower interval edge. Not more than one of xmin and xmax may be None.

  • xmax (Tensor, scalar, None) – Maximum value. If None, clipping is not performed on upper interval edge. Not more than one of xmin and xmax may be None. If xmin or xmax are tensors, then the three tensors will be broadcasted to match their shapes.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, a tensor with the elements of x, but where values < xmin are replaced with xmin, and those > xmax with xmax.

Raises:
  • TypeError – If inputs have types not specified above.

  • ValueError – If the shapes of x, xmin and xmax cannot be broadcast together, or both xmin and xmax are None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, 0, 3, 2, 0])
>>> output = np.clip(x, 0, 2)
>>> print(output)
[1 2 2 0 0 2 2 0]
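One of the bounds may be None for one-sided clipping; continuing the example above:

>>> output = np.clip(x, None, 2)
>>> print(output)
[ 1  2  2 -4  0  2  2  0]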
tinyms.column_stack(tup)[source]

Stacks 1-D tensors as columns into a 2-D tensor. 2-D tensors are stacked as-is, like np.hstack.

Parameters:

tup (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. All of them must have the same shape except the axis to be concatenated.

Returns:

2-D Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([1, 2, 3]).astype('int32')
>>> x2 = np.array([4, 5, 6]).astype('int32')
>>> output = np.column_stack((x1, x2))
>>> print(output)
[[1 4]
 [2 5]
 [3 6]]
tinyms.concatenate(arrays, axis=0)[source]

Joins a sequence of tensors along an existing axis.

Note

To match NumPy behaviour, \(axis >= 32\) will not cause a ValueError; the axis will be treated as None instead.

Parameters:
  • arrays (Union[Tensor, tuple(Tensor), list(Tensor)]) – a tensor or a list of tensors to be concatenated.

  • axis (Union[None, int], optional) – The axis along which the tensors will be joined, if axis is None, tensors are flattened before use. Default is 0.

Returns:

A tensor concatenated from a tensor or a list of tensors.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If axis is not in the range \([-ndim, ndim-1]\) and is less than 32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.ones((1,2,3))
>>> x2 = np.ones((1,2,1))
>>> x = np.concatenate((x1, x2), axis=-1)
>>> print(x.shape)
(1, 2, 4)
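With axis=None, the inputs are flattened before joining:

>>> x = np.concatenate((x1, x2), axis=None)
>>> print(x.shape)
(8,)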
tinyms.convolve(a, v, mode='full')[source]

Returns the discrete, linear convolution of two one-dimensional sequences.

Note

If v is longer than a, the tensors are swapped before computation.

Parameters:
  • a (Union[list, tuple, Tensor]) – First one-dimensional input tensor.

  • v (Union[list, tuple, Tensor]) – Second one-dimensional input tensor.

  • mode (str, optional) – By default, mode is ‘full’. This returns the convolution at each point of overlap, with an output shape of \((N+M-1,)\). At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen. If mode is ‘same’, it returns output of length \(max(M, N)\). Boundary effects are still visible. If mode is ‘valid’, it returns output of length \(max(M, N) - min(M, N) + 1\). The convolution product is only given for points where the signals overlap completely. Values outside the signal boundary have no effect.

Returns:

Tensor, discrete, linear convolution of a and v.

Raises:
  • TypeError – If the inputs have types not specified above.

  • ValueError – If a and v are empty or have wrong dimensions.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.convolve([1., 2., 3., 4., 5.], [2., 3.], mode="valid")
>>> print(output)
[ 7. 12. 17. 22.]
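The ‘full’ and ‘same’ modes keep progressively more of the boundary overlap; a sketch with short sequences (values computed from the definition above, print spacing may vary):

>>> print(np.convolve([1., 2., 3.], [0., 1., 0.5], mode="full"))
[0.  1.  2.5 4.  1.5]
>>> print(np.convolve([1., 2., 3.], [0., 1., 0.5], mode="same"))
[1.  2.5 4. ]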
tinyms.copysign(x1, x2, dtype=None)[source]

Changes the sign of x1 to that of x2, element-wise.

If x2 is a scalar, its sign will be copied to all elements of x1.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Complex inputs are not supported now.

Parameters:
  • x1 (Union[int, float, list, tuple, Tensor]) – Values to change the sign of.

  • x2 (Union[int, float, list, tuple, Tensor]) – The sign of x2 is copied to x1. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. The values of x1 with the sign of x2. This is a scalar if both x1 and x2 are scalars.

Raises:

TypeError – If the dtype of the input is not in the given types or the input cannot be converted to a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.copysign(np.array([1, -1, -1]), np.array([-1, 1, -1]))
>>> print(output)
[-1  1 -1]
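As noted above, a scalar x2 copies its sign to every element of x1:

>>> output = np.copysign(np.array([1., 2., 3.]), -1.)
>>> print(output)
[-1. -2. -3.]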
tinyms.corrcoef(x, y=None, rowvar=True, dtype=None)[source]

Returns Pearson product-moment correlation coefficients.

Please refer to the documentation for cov for more detail. The relationship between the correlation coefficient matrix, R, and the covariance matrix, C, is \(R_{ij} = \frac{ C_{ij} } { \sqrt{ C_{ii} * C_{jj} } }\). The values of R are between -1 and 1, inclusive.

Note

Currently, complex numbers are not supported.

Parameters:
  • x (Union[int, float, bool, tuple, list, Tensor]) – A 1-D or 2-D array containing multiple variables and observations. Each row of x represents a variable, and each column a single observation of all those variables. Also see rowvar below.

  • y (Union[int, float, bool, tuple, list, Tensor], optional) – An additional set of variables and observations. Default: None.

  • rowvar (bool, optional) – If rowvar is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. Default: True.

  • dtype (mindspore.dtype, optional) – Data-type of the result. By default, the return data-type will have at least float32 precision. Default: None.

Returns:

Tensor, The correlation coefficient matrix of the variables.

Raises:
  • TypeError – If the inputs have types not specified above.

  • ValueError – If x and y have wrong dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.corrcoef([[2., 3., 4., 5.], [0., 2., 3., 4.], [7., 8., 9., 10.]])
>>> print(output)
[[1.         0.9827076  1.        ]
[0.9827077  0.99999994 0.9827077 ]
[1.         0.9827076  1.        ]]
tinyms.correlate(a, v, mode='valid')[source]

Cross-correlation of two 1-dimensional sequences.

This function computes the correlation as generally defined in signal processing texts:

\(c_{av}[k] = \sum_n a[n+k] \cdot \mathrm{conj}(v[n])\)

with a and v sequences being zero-padded where necessary and conj being the conjugate.

Note

Currently, complex numbers are not supported.

Parameters:
  • a (Union[list, tuple, Tensor]) – First input sequence.

  • v (Union[list, tuple, Tensor]) – Second input sequence.

  • mode (str, optional) – By default, mode is ‘valid’. If mode is ‘valid’, it returns output of length \(max(M, N) - min(M, N) + 1\). The convolution product is only given for points where the signals overlap completely. Values outside the signal boundary have no effect. If mode is ‘full’, it returns the convolution at each point of overlap, with an output shape of \((N + M - 1,)\). At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen. If mode is ‘same’, it returns output of length \(max(M, N)\). Boundary effects are still visible.

Returns:

Tensor. Discrete cross-correlation of a and v.

Raises:
  • TypeError – If the inputs can not be converted to tensor.

  • ValueError – If a and v are empty or have wrong dimensions.

Supported Platforms:

GPU

Examples

>>> import mindspore.numpy as np
>>> output = np.correlate([1, 2, 3], [0, 1, 0.5])
>>> print(output)
[3.5]
>>> output = np.correlate([1, 2, 3], [0, 1, 0.5], mode="same")
>>> print(output)
[2.  3.5 3. ]
>>> output = np.correlate([1, 2, 3, 4, 5], [1, 2], mode="same")
>>> print(output)
[ 2.  5.  8. 11. 14.]
tinyms.cos(x, dtype=None)[source]

Cosine element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(5).astype('float32')
>>> print(np.cos(x))
[ 1.          0.5403023  -0.41614684 -0.9899925  -0.6536436 ]
tinyms.cosh(x, dtype=None)[source]

Hyperbolic cosine, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(5).astype('float32')
>>> print(np.cosh(x))
[ 1.         1.5430807  3.7621956 10.067662  27.308233 ]
tinyms.count_nonzero(x, axis=None, keepdims=False)[source]

Counts the number of non-zero values in the tensor x.

Parameters:
  • x (Tensor) – The tensor for which to count non-zeros.

  • axis (Union[int,tuple], optional) – Axis or tuple of axes along which to count non-zeros. Default is None, meaning that non-zeros will be counted along a flattened version of x. Default: None.

  • keepdims (bool, optional) – If this is set to True, the axes that are counted are left in the result as dimensions with size one. With this option, the result will broadcast correctly against x. Default: False.

Returns:

Tensor, indicating number of non-zero values in the x along a given axis. Otherwise, the total number of non-zero values in x is returned.

Raises:
  • TypeError – If axis is not int or tuple.

  • ValueError – If axis is not in range [-x.ndim, x.ndim).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, 0, 3, 2, 0])
>>> output = np.count_nonzero(x)
>>> print(output)
6
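With axis given, the counts are taken per slice, and keepdims retains the reduced axis (output formatting may vary slightly):

>>> x = np.asarray([[1, 0, 3], [0, 5, 0]])
>>> print(np.count_nonzero(x, axis=0))
[1 1 1]
>>> print(np.count_nonzero(x, axis=1, keepdims=True))
[[2]
 [1]]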
tinyms.cov(m, y=None, rowvar=True, bias=False, ddof=None, fweights=None, aweights=None, dtype=None)[source]

Estimates a covariance matrix, given data and weights.

Covariance indicates the level to which two variables vary together. If we examine N-dimensional samples, \(X = [x_1, x_2, ... x_N]^T\), then the covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\).

Note

fweights and aweights must all be positive. In NumPy, a ValueError is raised if negative values are detected; in MindSpore, all values are converted to positive instead.

Parameters:
  • m (Union[Tensor, list, tuple]) – A 1-D or 2-D tensor containing multiple variables and observations. Each row of m represents a variable, and each column represents a single observation of all those variables. Also see rowvar below.

  • y (Union[Tensor, list, tuple], optional) – An additional set of variables and observations. y has the same form as that of m, default is None.

  • rowvar (bool, optional) – If rowvar is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations.

  • bias (bool, optional) – Default Normalization (False) is by \((N - 1)\), where \(N\) is the number of observations given (unbiased estimate). If bias is True, then Normalization is by N. These values can be overridden by using the keyword ddof.

  • ddof (int, optional) – If not None, the default value implied by bias is overridden. Note that \(ddof=1\) will return the unbiased estimate, even if both fweights and aweights are specified, and \(ddof=0\) will return the simple average. See the notes for the details. The default value is None.

  • fweights (Union[Tensor, list, tuple], optional) – 1-D tensor of integer frequency weights; the number of times each observation vector should be repeated. The default value is None.

  • aweights (Union[Tensor, list, tuple], optional) – 1-D tensor of observation vector weights. These relative weights are typically larger for observations considered more important and smaller for observations considered less important. If \(ddof=0\) the tensor of weights can be used to assign probabilities to observation vectors. The default value is None.

  • dtype (Union[mindspore.dtype, str], optional) – Data-type of the result. By default, the return data-type will have mstype.float32 precision. Default is None.

Returns:

Tensor, the covariance matrix of the variables.

Raises:
  • TypeError – If the inputs have types not specified above.

  • ValueError – If m and y have wrong dimensions.

  • RuntimeError – If aweights and fweights have dimensions > 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.cov([[2., 3., 4., 5.], [0., 2., 3., 4.], [7., 8., 9., 10.]])
>>> print(output)
[[1.6666666 2.1666667 1.6666666]
[2.1666667 2.9166667 2.1666667]
[1.6666666 2.1666667 1.6666666]]
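Setting bias=True (or ddof=0) normalizes by \(N\) instead of \(N - 1\), scaling the unbiased result above by \((N-1)/N = 3/4\); a sketch (print formatting may differ):

>>> output = np.cov([[2., 3., 4., 5.], [0., 2., 3., 4.], [7., 8., 9., 10.]], bias=True)
>>> print(output)
[[1.25   1.625  1.25  ]
 [1.625  2.1875 1.625 ]
 [1.25   1.625  1.25  ]]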
tinyms.cross(a, b, axisa=-1, axisb=-1, axisc=-1, axis=None)[source]

Returns the cross product of two (arrays of) vectors.

The cross product of a and b in \(R^3\) is a vector perpendicular to both a and b. If a and b are arrays of vectors, the vectors are defined by the last axis of a and b by default, and these axes can have dimensions 2 or 3. Where the dimension of either a or b is 2, the third component of the input vector is assumed to be zero and the cross product calculated accordingly. In cases where both input vectors have dimension 2, the z-component of the cross product is returned.

Parameters:
  • a (Union[list, tuple, Tensor]) – Components of the first vector(s).

  • b (Union[list, tuple, Tensor]) – Components of the second vector(s).

  • axisa (int, optional) – Axis of a that defines the vector(s). By default, the last axis.

  • axisb (int, optional) – Axis of b that defines the vector(s). By default, the last axis.

  • axisc (int, optional) – Axis of c containing the cross product vector(s). Ignored if both input vectors have dimension 2, as the return is scalar. By default, the last axis.

  • axis (int, optional) – If defined, the axis of a, b and c that defines the vector(s) and cross product(s). Overrides axisa, axisb and axisc. Defaults to None.

Returns:

Tensor, vector cross product(s).

Raises:

ValueError – when the dimension of the vector(s) in a and/or b does not equal 2 or 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([[1,2,3], [4,5,6]])
>>> y = np.array([[4,5,6], [1,2,3]])
>>> output = np.cross(x, y)
>>> print(output)
[[-3  6 -3]
[ 3 -6  3]]
>>> output = np.cross(x, y, axisc=0)
>>> print(output)
[[-3  3]
[ 6 -6]
[-3  3]]
tinyms.cumprod(a, axis=None, dtype=None)[source]

Returns the cumulative product of elements along a given axis.

Note

Numpy argument out is not supported.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input tensor.

  • axis (int, optional) – Axis along which the cumulative product is computed. By default the input is flattened. Default: None.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Raises:
  • TypeError – If the input can not be converted to tensor or axis is not integer.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1, 2, 3])
>>> print(np.cumprod(x))
[1 2 6]
tinyms.cumsum(a, axis=None, dtype=None)[source]

Returns the cumulative sum of the elements along a given axis.

Note

If a.dtype is int8, int16 or bool, the result dtype will be elevated to int32.

Parameters:
  • a (Tensor) – Input tensor.

  • axis (int, optional) – Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

  • dtype (mindspore.dtype, optional) – If not specified, stay the same as a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used. Default: None.

Returns:

Tensor.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.cumsum(np.ones((3,3)), axis=0)
>>> print(output)
[[1. 1. 1.]
 [2. 2. 2.]
 [3. 3. 3.]]
tinyms.deg2rad(x, dtype=None)[source]

Converts angles from degrees to radians.

Parameters:
  • x (Tensor) – Angles in degrees.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, the corresponding angle in radians. This is a tensor scalar if x is a tensor scalar.

Raises:

TypeError – If x is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, -5])
>>> output = np.deg2rad(x)
>>> print(output)
[ 0.01745329  0.03490658  0.05235988 -0.06981317 -0.08726647]
tinyms.diag(v, k=0)[source]

Extracts a diagonal or construct a diagonal array.

Parameters:
  • v (Tensor) – If v is a 2-D array, return a copy of its k-th diagonal. If v is a 1-D array, return a 2-D array with v on the k-th diagonal.

  • k (int, optional) – Diagonal in question. The default is 0. Use k>0 for diagonals above the main diagonal, and k<0 for diagonals below the main diagonal.

Returns:

Tensor, the extracted diagonal or constructed diagonal array.

Raises:

ValueError – If input is not 1-D or 2-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(9).reshape((3,3))
>>> print(x)
[[0 1 2]
[3 4 5]
[6 7 8]]
>>> output = np.diag(x)
>>> print(output)
[0 4 8]
>>> output = np.diag(x, k=1)
>>> print(output)
[1 5]
>>> output = np.diag(x, k=-1)
>>> print(output)
[3 7]
tinyms.diag_indices(n, ndim=2)[source]

Returns the indices to access the main diagonal of an array.

This returns a tuple of indices that can be used to access the main diagonal of an array a with a.ndim >= 2 dimensions and shape (n, n, …, n). For a.ndim = 2 this is the usual diagonal, for a.ndim > 2 this is the set of indices to access a[i, i, ..., i] for i = [0..n-1].

Parameters:
  • n (int) – The size, along each dimension, of the arrays for which the returned indices can be used.

  • ndim (int, optional) – The number of dimensions.

Returns:

Tuple of Tensor.

Raises:

TypeError – If the inputs are not integers.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.diag_indices(5, 3)
>>> print(output)
(Tensor(shape=[5], dtype=Int32, value= [0, 1, 2, 3, 4]),
Tensor(shape=[5], dtype=Int32, value= [0, 1, 2, 3, 4]),
Tensor(shape=[5], dtype=Int32, value= [0, 1, 2, 3, 4]))
tinyms.diagflat(v, k=0)[source]

Creates a two-dimensional array with the flattened input as a diagonal.

Note

On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • v (Tensor) – Input data, which is flattened and set as the k-th diagonal of the output.

  • k (int, optional) – Diagonal to set; 0, the default, corresponds to the “main” diagonal, a positive (negative) k giving the number of the diagonal above (below) the main.

Returns:

Tensor, the 2-D output array.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.diagflat(np.asarray([[1,2], [3,4]]))
>>> print(output)
[[1 0 0 0]
[0 2 0 0]
[0 0 3 0]
[0 0 0 4]]
>>> output = np.diagflat(np.asarray([1,2]), 1)
>>> print(output)
[[0 1 0]
[0 0 2]
[0 0 0]]
tinyms.diagonal(a, offset=0, axis1=0, axis2=1)[source]

Returns specified diagonals.

If a is 2-D, returns the diagonal of a with the given offset, i.e., the collection of elements of the form a[i, i+offset]. If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-array whose diagonal is returned. The shape of the resulting array can be determined by removing axis1 and axis2 and appending an index to the right equal to the size of the resulting diagonals.

Parameters:
  • a (Tensor) – Array from which the diagonals are taken.

  • offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal.

  • axis1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

  • axis2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis.

Returns:

Tensor, if a is 2-D, then a 1-D array containing the diagonal. If a.ndim > 2, then the dimensions specified by axis1 and axis2 are removed, and a new axis inserted at the end corresponding to the diagonal.

Raises:

ValueError – If the input tensor has less than two dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape(2,2).astype(np.float32)
>>> print(a)
[[0. 1.]
[2. 3.]]
>>> output = np.diagonal(a)
>>> print(output)
[0. 3.]
>>> output = np.diagonal(a, 1)
>>> print(output)
[1.]
>>> a = np.arange(8).reshape(2, 2, 2).astype(np.float32)
>>> print(a)
[[[0. 1.]
[2. 3.]]
[[4. 5.]
[6. 7.]]]
>>> output = np.diagonal(a, 0, 0, 1)
>>> print(output)
[[0. 6.]
[1. 7.]]
tinyms.diff(a, n=1, axis=-1, prepend=None, append=None)[source]

Calculates the n-th discrete difference along the given axis.

The first difference is given by \(out[i] = a[i+1] - a[i]\) along the given axis, higher differences are calculated by using diff iteratively.

Note

Since zero-shaped Tensor is not supported in MindSpore, a value error is raised if an empty Tensor is encountered.

Parameters:
  • a (Tensor) – Input tensor.

  • n (int, optional) – The number of times values are differenced. If zero, the input is returned as-is. Default: 1.

  • axis (int, optional) – The axis along which the difference is taken, default is the last axis. Default: -1.

  • prepend/append (Tensor, optional) – Values to prepend or append to a along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axes. Otherwise the dimension and shape must match a except along axis. Default: None.

Returns:

The n-th differences. The shape of the output is the same as a except along axis where the dimension is smaller by n. The type of the output is the same as the type of the difference between any two elements of a. This is the same as the type of a in most cases.

Raises:

ValueError – If an empty Tensor is encountered (zero-shaped Tensors are not supported; see Note).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> arr = np.array([1, 3, -1, 0, 4])
>>> print(np.diff(arr, n=2))
[-6  5  3]
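
The prepend/append arguments extend a along the axis before differencing; an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> arr = np.array([1, 2, 4])
>>> print(np.diff(arr, append=np.array([7])))
[1 2 3]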
tinyms.digitize(x, bins, right=False)[source]

Returns the indices of the bins to which each value in input array belongs. If values in x are beyond the bounds of bins, 0 or len(bins) is returned as appropriate.

Parameters:
  • x (Union[int, float, bool, list, tuple, Tensor]) – Input array to be binned.

  • bins (Union[list, tuple, Tensor]) – Array of bins. It has to be 1-dimensional and monotonic.

  • right (boolean, optional) – Whether the intervals include the right or the left bin edge. The default (right=False) means the interval does not include the right edge; the left bin end is open in this case, i.e., bins[i-1] <= x < bins[i] is the default behavior for monotonically increasing bins.

Returns:

Tensor of ints, output array of indices, of same shape as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1.2, 10.0, 12.4, 15.5, 20.])
>>> bins = np.array([0, 5, 10, 15, 20])
>>> inds = np.digitize(x, bins)
>>> print(inds)
[1 3 3 4 5]
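
With right=True the right bin edge is closed, so values falling exactly on an edge map to the lower bin; an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> x = np.array([1.2, 5.0, 10.0])
>>> bins = np.array([0, 5, 10, 15, 20])
>>> print(np.digitize(x, bins, right=True))
[1 1 2]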
tinyms.divide(x1, x2, dtype=None)[source]

Returns a true division of the inputs, element-wise.

Unlike Python's traditional “floor division”, this returns a true division.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – the dividend.

  • x2 (Tensor) – the divisor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, this is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.full((3, 2), [1, 2])
>>> x2 = np.full((3, 2), [3, 4])
>>> output = np.divide(x1, x2)
>>> print(output)
[[0.33333334 0.5       ]
[0.33333334 0.5       ]
[0.33333334 0.5       ]]
tinyms.divmod(x1, x2, dtype=None)

Returns element-wise quotient and remainder simultaneously.

Parameters:
  • x1 (Union[Tensor]) – Dividend tensor.

  • x2 (Union[Tensor, int, float, bool]) – Divisor. If x1.shape != x2.shape, they must be broadcastable to a common shape.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Element-wise quotient and remainder from floor division, as a tuple (quotient, remainder).

Raises:

TypeError – If x1 and x2 are not Tensor or scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([1, 2, 3, 4, 5])
>>> print(np.divmod(a, 1.5))
(Tensor(shape=[5], dtype=Float32,
 value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00,  2.00000000e+00,  3.00000000e+00]),
 Tensor(shape=[5], dtype=Float32,
 value= [ 1.00000000e+00,  5.00000000e-01,  0.00000000e+00,  1.00000000e+00,  5.00000000e-01]))
tinyms.dot(a, b)[source]

Returns the dot product of two arrays.

Specifically: if both a and b are 1-D arrays, it is the inner product of vectors (without complex conjugation); if both a and b are 2-D arrays, it is matrix multiplication; if either a or b is 0-D (scalar), it is equivalent to multiply; if a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b; if a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last axis of a and the second-to-last axis of b: dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • a (Tensor) – First input tensor.

  • b (Tensor) – Second input tensor.

Returns:

Tensor or scalar, the dot product of a and b. If a and b are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned

Raises:

ValueError – If the last dimension of a is not the same size as the second-to-last dimension of b.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.full((1, 3), 7).astype('float32')
>>> b = np.full((2, 3, 4), 5).astype('float32')
>>> output = np.dot(a, b)
>>> print(output)
[[[105. 105. 105. 105.]
[105. 105. 105. 105.]]]
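
For two 2-D arrays, dot is ordinary matrix multiplication; an added sketch (exact print formatting may differ slightly):

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape(2, 2).astype('float32')
>>> print(np.dot(a, a))
[[ 2.  3.]
 [ 6. 11.]]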
tinyms.dsplit(x, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors along the 3rd axis (depth). It is equivalent to split with axis=2; the array is always split along the third axis regardless of the array dimension.

Parameters:
  • x (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – If an integer N, the tensor will be divided into N equal tensors along the axis. If a tuple(int) or list(int) of sorted integers, the entries indicate where along the axis the array is split. For example, [2, 3] would, for axis=0, result in three sub-tensors x[:2], x[2:3] and x[3:]. If an index exceeds the dimension of the array along the axis, an empty sub-array is returned correspondingly.

Returns:

A list of sub-tensors.

Raises:

TypeError – If argument indices_or_sections is not an integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.arange(6).reshape((1, 2, 3)).astype('float32')
>>> output = np.dsplit(input_x, 3)
>>> print(output)
(Tensor(shape=[1, 2, 1], dtype=Float32,
value=[[[ 0.00000000e+00],
        [ 3.00000000e+00]]]),
Tensor(shape=[1, 2, 1], dtype=Float32,
value=[[[ 1.00000000e+00],
        [ 4.00000000e+00]]]),
Tensor(shape=[1, 2, 1], dtype=Float32,
value=[[[ 2.00000000e+00],
        [ 5.00000000e+00]]]))
tinyms.dstack(tup)[source]

Stacks tensors in sequence depth-wise (along the third axis). This is equivalent to concatenation along the third axis. 1-D tensors of shape (N,) should be reshaped to (1,N,1), and 2-D tensors of shape (M,N) to (M,N,1), before concatenation.

Parameters:

tup (Union[Tensor, tuple, list]) – A sequence of tensors. The tensors must have the same shape along all but the third axis. 1-D or 2-D tensors must have the same shape.

Returns:

Stacked Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([1, 2, 3]).astype('float32')
>>> x2 = np.array([4, 5, 6]).astype('float32')
>>> output = np.dstack((x1, x2))
>>> print(output)
[[[1. 4.]
  [2. 5.]
  [3. 6.]]]
tinyms.ediff1d(ary, to_end=None, to_begin=None)[source]

The differences between consecutive elements of a tensor.

Parameters:
  • ary (Tensor) – If necessary, will be flattened before the differences are taken.

  • to_end (Tensor or scalar, optional) – Number(s) to append at the end of the returned differences.

  • to_begin (Tensor or scalar, optional) – Number(s) to prepend at the beginning of the returned differences.

Returns:

The differences.

Raises:

TypeError – If inputs have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> arr = np.array([1, 3, -1, 0, 4])
>>> print(np.ediff1d(arr))
[ 2 -4  1  4]
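
to_begin and to_end pad the result with extra values before and after the differences; an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> arr = np.array([1, 3, -1, 0, 4])
>>> print(np.ediff1d(arr, to_begin=np.array([0]), to_end=np.array([9])))
[ 0  2 -4  1  4  9]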
tinyms.empty(shape, dtype=mindspore.float32)[source]

Returns a new array of given shape and type, without initializing entries.

Note

Numpy argument order is not supported. Object arrays are not supported.

Parameters:
  • shape (Union[int, tuple(int)]) – Shape of the empty array, e.g., (2, 3) or 2.

  • dtype (mindspore.dtype, optional) – Desired output data-type for the array, e.g, mstype.int8. Default is mstype.float32.

Returns:

Tensor, array of uninitialized (arbitrary) data of the given shape and dtype.

Raises:

TypeError – If the input shape or dtype is invalid.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.empty((2, 3))
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]]
tinyms.empty_like(prototype, dtype=None, shape=None)[source]

Returns a new array with the same shape and type as a given array.

Note

Input array must have the same size across a dimension. If prototype is not a Tensor, dtype is float32 by default if not provided.

Parameters:
  • prototype (Union[Tensor, list, tuple]) – The shape and data-type of prototype define these same attributes of the returned array.

  • dtype (mindspore.dtype, optional) – Overrides the data type of the result.

  • shape (int or sequence of ints, optional) – Overrides the shape of the result.

Returns:

Tensor, array of uninitialized (arbitrary) data with the same shape and type as prototype.

Raises:

ValueError – If prototype is not a Tensor, list or tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((4,1,2))
>>> output = np.empty_like(a)
>>> print(output)
[[[0. 0.]]
 [[0. 0.]]
 [[0. 0.]]
 [[0. 0.]]]
tinyms.equal(x1, x2, dtype=None)[source]

Returns the truth value of (x1 == x2) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype is passed. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.equal(np.array([0, 1, 3]), np.arange(3))
>>> print(output)
[ True  True False]
tinyms.exp(x, dtype=None)[source]

Calculates the exponential of all elements in the input array.

Note

Numpy arguments casting, order, subok, signature, and extobj are not supported. When where is provided, out must have a tensor value. out is not supported for storing the result, however it can be used in combination with where to set the value at indices for which where is set to False. On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, np.float32, np.float64.

Parameters:
  • x (Tensor) – input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise exponential of x. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.exp(np.arange(5).astype(np.float32))
>>> print(output)
[ 1.         2.718282   7.3890557 20.085537  54.598145 ]
tinyms.exp2(x, dtype=None)[source]

Calculates 2**p for all p in the input array.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • x (Tensor) – input values.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise 2 to the power x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([2, 3]).astype(np.float32)
>>> output = np.exp2(x)
>>> print(output)
[4. 8.]
tinyms.expand_dims(a, axis)[source]

Expands the shape of a tensor.

Inserts a new axis that will appear at the axis position in the expanded tensor shape.

Parameters:
  • a (Tensor) – Input tensor array.

  • axis (Union[int, list(int), tuple(int)]) – Position in the expanded axes where the new axis is placed,

Returns:

Tensor, with the number of dimensions increased at specified axis.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If axis exceeds a.ndim.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,2))
>>> x = np.expand_dims(x,0)
>>> print(x.shape)
(1, 2, 2)
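
Negative axis values count from the end, as in NumPy; an added sketch:

>>> import mindspore.numpy as np
>>> x = np.ones((2,2))
>>> print(np.expand_dims(x, -1).shape)
(2, 2, 1)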
tinyms.expm1(x, dtype=None)[source]

Calculates exp(x) - 1 for all elements in the array.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • x (Tensor) – input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise exponential minus one, out = exp(x) - 1. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.expm1(np.arange(5).astype(np.float32))
>>> print(output)
[ 0.         1.7182819  6.389056  19.085537  53.59815  ]
tinyms.eye(N, M=None, k=0, dtype=mindspore.float32)[source]

Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.

Parameters:
  • N (int) – Number of rows in the output, must be larger than 0.

  • M (int, optional) – Number of columns in the output. If None, defaults to N; if defined, must be larger than 0. Default is None.

  • k (int, optional) – Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. Default is 0.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype. Default is mstype.float32.

Returns:

A tensor of shape (N, M) where all elements are equal to zero, except for the k-th diagonal, whose values are equal to one.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.eye(2, 2))
[[1. 0.]
[0. 1.]]
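
M and k together yield rectangular outputs with a shifted diagonal; an added sketch assuming NumPy-compatible output:

>>> import mindspore.numpy as np
>>> print(np.eye(2, 3, k=1))
[[0. 1. 0.]
 [0. 0. 1.]]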
tinyms.fabs(x, dtype=None)

Calculates the absolute value element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Currently the backend kernel only supports float calculation; if the input is not a float, it will be cast to mstype.float32 and cast back.

Parameters:
  • x (Tensor) – Tensor to be used for calculation.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, -5], np.float32)
>>> output = np.fabs(x)
>>> print(output)
[1. 2. 3. 4. 5.]
tinyms.fix(x)[source]

Rounds to nearest integer towards zero.

Rounds an array of floats element-wise to the nearest integer towards zero. The rounded values are returned as floats.

Note

Numpy argument out is not supported.

Parameters:

x (Tensor) – An array of floats to be rounded.

Returns:

Tensor.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.fix(np.array([2.1, 2.9, -2.1, -2.9]))
>>> print(output)
[ 2.  2. -2. -2.]
tinyms.flip(m, axis=None)[source]

Reverses the order of elements in an array along the given axis.

The shape of the array is preserved, but the elements are reordered.

Parameters:
  • m (Tensor) – Input array.

  • axis (None or int or tuple of integers, optional) – Axis or axes along which to flip over. The default, axis=None, will flip over all of the axes of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of integers, flipping is performed on all of the axes specified in the tuple.

Returns:

Tensor, with the entries of axis reversed.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> A = np.arange(8.0).reshape((2,2,2))
>>> output = np.flip(A)
>>> print(output)
[[[7. 6.]
[5. 4.]]
[[3. 2.]
[1. 0.]]]
>>> output = np.flip(A, (0, 2))
>>> print(output)
[[[5. 4.]
[7. 6.]]
[[1. 0.]
[3. 2.]]]
tinyms.fliplr(m)[source]

Flips the entries in each row in the left/right direction. Columns are preserved, but appear in a different order than before.

Parameters:

m (Tensor) – Input array.

Returns:

Tensor.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> A = np.arange(8.0).reshape((2,2,2))
>>> output = np.fliplr(A)
>>> print(output)
[[[2. 3.]
[0. 1.]]
[[6. 7.]
[4. 5.]]]
tinyms.flipud(m)[source]

Flips the entries in each column in the up/down direction. Rows are preserved, but appear in a different order than before.

Parameters:

m (Tensor) – Input array.

Returns:

Tensor.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> A = np.arange(8.0).reshape((2,2,2))
>>> output = np.flipud(A)
>>> print(output)
[[[4. 5.]
[6. 7.]]
[[0. 1.]
[2. 3.]]]
tinyms.float_power(x1, x2, dtype=None)[source]

First array elements raised to powers from second array, element-wise.

Raise each base in x1 to the positionally-corresponding power in x2. x1 and x2 must be broadcastable to the same shape. This differs from the power function in that integers, float16, and float64 are promoted to floats with a minimum precision of float32 so that the result is always inexact. The intent is that the function will return a usable result for negative powers and seldom overflow for positive powers.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Integers and floats are promoted to float32 instead of float64.

Parameters:
  • x1 (Tensor) – the bases.

  • x2 (Tensor) – the exponents.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the bases in x1 raised to the exponents in x2. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.arange(6)
>>> x2 = np.array(3)
>>> output = np.float_power(x1, x2)
>>> print(output)
[  0.   1.   8.  27.  64. 125.]
tinyms.floor(x, dtype=None)[source]

Returns the floor of the input, element-wise.

The floor of the scalar x is the largest integer i, such that i <= x.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16 and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • x (Tensor) – input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the floor of each element in x. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.floor(np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]))
>>> print(output)
[-2. -2. -1.  0.  1.  1.  2.]
tinyms.floor_divide(x1, x2, dtype=None)[source]

Returns the largest integer smaller than or equal to the division of the inputs. It is equivalent to the Python // operator and pairs with the Python % (remainder) function so that a = a % b + b * (a // b) up to roundoff.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.floor_divide(np.array([1., 2., 3., 4.]), np.array(2.5))
>>> print(output)
[0. 0. 1. 1.]
tinyms.fmod(x1, x2, dtype=None)[source]

Returns the element-wise remainder of division.

This is the NumPy implementation of the C library function fmod, the remainder has the same sign as the dividend x1. It is equivalent to the Matlab(TM) rem function and should not be confused with the Python modulus operator x1 % x2.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – the first input array.

  • x2 (Tensor) – the second input array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the remainder of the division of x1 by x2. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.fmod(np.array([-3, -2, -1, 1, 2, 3]), np.array(2))
>>> print(output)
[-1  0 -1  1  0  1]
tinyms.full(shape, fill_value, dtype=None)[source]

Returns a new tensor of given shape and type, filled with fill_value.

Parameters:
  • shape (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g., \((2, 3)\) or \(2\).

  • fill_value (Union[int, float, bool, list, tuple]) – Scalar or array_like fill value.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, if dtype is None, the data type of the new tensor will be inferred from fill_value. Default is None.

Returns:

Tensor, with the designated shape and dtype, filled with fill_value.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If shape has entries < 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.full((2,2), True))
[[True True]
[True True]]
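
fill_value may also be array_like, in which case it is broadcast across the requested shape; an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> print(np.full((2, 2), [1, 2]))
[[1 2]
 [1 2]]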
tinyms.full_like(a, fill_value, dtype=None, shape=None)[source]

Returns a full array with the same shape and type as a given array.

Note

Input array must have the same size across a dimension. If a is not a Tensor, dtype is float32 by default if not provided.

Parameters:
  • a (Union[Tensor, list, tuple]) – The shape and data-type of a define these same attributes of the returned array.

  • fill_value (scalar) – Fill value.

  • dtype (mindspore.dtype, optional) – Overrides the data type of the result.

  • shape (int or sequence of ints, optional) – Overrides the shape of the result.

Returns:

Tensor, array of fill_value with the same shape and type as a.

Raises:

ValueError – If a is not a Tensor, list or tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((4,1,2))
>>> output = np.full_like(a, 0.5)
>>> print(output)
[[[0.5 0.5]]
[[0.5 0.5]]
[[0.5 0.5]]
[[0.5 0.5]]]
tinyms.gcd(x1, x2, dtype=None)[source]

Returns the greatest common divisor of |x1| and |x2|.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input data.

  • x2 (Tensor) – input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the greatest common divisor of the absolute value of the inputs. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.gcd(np.arange(6), np.array(20))
>>> print(output)
[20  1  2  1  4  5]
tinyms.geomspace(start, stop, num=50, endpoint=True, dtype=None, axis=0)[source]

Returns numbers spaced evenly on a log scale (a geometric progression).

This is similar to logspace, but with endpoints specified directly. Each output sample is a constant multiple of the previous.

Parameters:
  • start (Union[int, list(int), tuple(int), tensor]) – The starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – The final value of the sequence, unless endpoint is False. In that case, num + 1 values are spaced over the interval in log-space, of which all but the last (a sequence of length num) are returned.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, can be in format of np.float32 or float32. If dtype is None, the data type is inferred from the other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop is array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns:

Tensor, with samples equally spaced on a log scale.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.geomspace(1, 256, num=9)
>>> print(output)
[  1.   2.   4.   8.  16.  32.  64. 128. 256.]
>>> output = np.geomspace(1, 256, num=8, endpoint=False)
>>> print(output)
[  1.   2.   4.   8.  16.  32.  64. 128.]
tinyms.gradient(f, *varargs, axis=None, edge_order=1)[source]

Returns the gradient of an N-dimensional array. The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient hence has the same shape as the input array.

Note

Currently we only support edge_order=1 and uniform spacing of varargs.

Parameters:
  • f (Union[tuple, list, Tensor]) – An N-dimensional array containing samples of a scalar function.

  • varargs (Union[tuple[number], tuple[tensor scalar]], optional) – Spacing between f values. Default is unitary spacing for all dimensions. Spacing can be specified using: 1. a single scalar to specify a sample distance for all dimensions; 2. N scalars to specify a constant sample distance for each dimension.

  • axis (Union[None, int, tuple(int), list(int)], optional) – Gradient is calculated only along the given axis or axes. The default (axis = None) is to calculate the gradient for all the axes of the input tensor. axis may be negative, in which case it counts from the last to the first axis.

  • edge_order (int) – Gradient is calculated using N-th order accurate differences at the boundaries. Default: 1.

Returns:

gradient, a list of tensors (or a single tensor if there is only one dimension to be calculated). Each derivative has the same shape as f.

Raises:
  • TypeError – If the inputs have types not specified above.

  • ValueError – If axis values are out of bounds, or the shape of f has entries < 1.

  • NotImplementedError – If edge_order != 1, or varargs contains non-scalar entries.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.gradient([[1, 2, 6], [3, 4, 5]], axis=-1)
>>> print(output)
[[1.  2.5 4. ]
[1.  1.  1. ]]
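
A scalar in varargs sets a uniform sample spacing that the differences are divided by; an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> output = np.gradient(np.array([1., 2., 4., 7.]), 0.5)
>>> print(output)
[2. 3. 5. 6.]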
tinyms.greater(x1, x2, dtype=None)[source]

Returns the truth value of (x1 > x2) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype is passed. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.greater(np.array([4, 2]), np.array([2, 2]))
>>> print(output)
[ True False]
tinyms.greater_equal(x1, x2, dtype=None)[source]

Returns the truth value of (x1 >= x2) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype is passed. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.greater_equal(np.array([4, 2, 1]), np.array([2, 2, 2]))
>>> print(output)
[ True  True False]
tinyms.hamming(M)[source]

Returns the Hamming window. The Hamming window is a taper formed by using a weighted cosine.

Parameters:

M (int) – Number of points in the output window. If zero or less, an empty array is returned.

Returns:

Tensor, the window, with the maximum value normalized to one (the value one appears only if the number of samples is odd).

Raises:

TypeError – If M is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.hamming(12))
[0.08000001 0.15302339 0.34890914 0.6054648  0.841236   0.9813669
0.9813668  0.8412359  0.6054647  0.34890908 0.15302327 0.08000001]
tinyms.hanning(M)[source]

Returns the Hanning window. The Hanning window is a taper formed by using a weighted cosine.

Parameters:

M (int) – Number of points in the output window. If zero or less, an empty array is returned.

Returns:

Tensor, the window, with the maximum value normalized to one (the value one appears only if the number of samples is odd).

Raises:

TypeError – If M is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.hanning(12))
[0.         0.07937324 0.29229254 0.5711574  0.8274304  0.9797465
0.97974646 0.82743025 0.5711573  0.29229245 0.07937312 0.        ]
tinyms.heaviside(x1, x2, dtype=None)[source]

Computes the Heaviside step function.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input values.

  • x2 (Tensor) – The value of the function when x1 is 0. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the output array, element-wise Heaviside step function of x1. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.heaviside(np.array([-1.5, 0, 2.0]), np.array(0.5))
>>> print(output)
[0.  0.5 1. ]
>>> output = np.heaviside(np.array([-1.5, 0, 2.0]), np.array(1))
>>> print(output)
[0. 1. 1.]
tinyms.histogram(a, bins=10, range=None, weights=None, density=False)[source]

Computes the histogram of a dataset.

Note

String values for bins are not supported. Deprecated numpy argument normed is not supported.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input data. The histogram is computed over the flattened array.

  • bins (Union[int, tuple, list, Tensor], optional) – If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths.

  • range ((float, float), optional) – The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()). Values outside the range are ignored. The first element of the range must be less than or equal to the second.

  • weights (Union[int, float, bool, list, tuple, Tensor], optional) – An array of weights, of the same shape as a. If density is True, the weights are normalized, so that the integral of the density over the range remains 1.

  • density (boolean, optional) – If False, the result will contain the number of samples in each bin. If True, the result is the value of the probability density function at the bin, normalized such that the integral over the range is 1. Note that the sum of the histogram values will not be equal to 1 unless bins of unity width are chosen; it is not a probability mass function.

Returns:

(Tensor, Tensor), the values of the histogram and the bin edges.

Raises:

ValueError – If x and weights do not have the same size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import numpy as np
>>> print(np.histogram([1, 2, 1], bins=[0, 1, 2, 3]))
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  2.00000000e+00,  1.00000000e+00]),
Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 3]))
>>> print(np.histogram(np.arange(4), bins=np.arange(5), density=True))
(Tensor(shape=[4], dtype=Float32, value=
[ 2.50000000e-01,  2.50000000e-01,  2.50000000e-01,  2.50000000e-01]),
Tensor(shape=[5], dtype=Int32, value= [0, 1, 2, 3, 4]))
>>> print(np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3]))
(Tensor(shape=[3], dtype=Float32, value= [ 1.00000000e+00,  4.00000000e+00,  1.00000000e+00]),
Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 3]))
tinyms.histogram2d(x, y, bins=10, range=None, weights=None, density=False)[source]

Computes the multidimensional histogram of some data.

Note

Deprecated numpy argument normed is not supported.

Parameters:
  • x (Union[list, tuple, Tensor]) – An array with shape (N,) containing the x coordinates of the points to be histogrammed.

  • y (Union[list, tuple, Tensor]) – An array with shape (N,) containing the y coordinates of the points to be histogrammed.

  • bins (Union[int, tuple, list], optional) –

    The bin specification:

    If int, the number of bins for the two dimensions (nx=ny=bins).

    If array_like, the bin edges for the two dimensions (x_edges=y_edges=bins).

    If [int, int], the number of bins in each dimension (nx, ny = bins).

    If [array, array], the bin edges in each dimension (x_edges, y_edges = bins).

    A combination [int, array] or [array, int], where int is the number of bins and array is the bin edges.

  • range (Union[list, tuple], optional) – has shape (2, 2), the leftmost and rightmost edges of the bins along each dimension (if not specified explicitly in the bins parameters): [[xmin, xmax], [ymin, ymax]]. All values outside of this range will be considered outliers and not tallied in the histogram.

  • weights (Union[list, tuple, Tensor], optional) – An array with shape (N,) of values w_i weighing each sample (x_i, y_i).

  • density (boolean, optional) – If False, the default, returns the number of samples in each bin. If True, returns the probability density function at the bin, bin_count / sample_count / bin_volume.

Returns:

(Tensor, Tensor, Tensor), the values of the bi-directional histogram and the bin edges along the first and second dimensions.

Raises:

ValueError – If range does not have the same size as the number of samples.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import numpy as np
>>> x = np.arange(5)
>>> y = np.arange(2, 7)
>>> print(np.histogram2d(x, y, bins=(2, 3)))
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 2.00000000e+00,  0.00000000e+00,  0.00000000e+00],
[ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]]),
Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  2.00000000e+00,  4.00000000e+00]),
Tensor(shape=[4], dtype=Float32, value=
[ 2.00000000e+00,  3.33333349e+00,  4.66666698e+00,  6.00000000e+00]))
tinyms.histogram_bin_edges(a, bins=10, range=None, weights=None)[source]

Function to calculate only the edges of the bins used by the histogram function.

Note

String values for bins are not supported.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input data. The histogram is computed over the flattened array.

  • bins ((Union[int, tuple, list, Tensor])) – If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths.

  • range ((float, float), optional) – The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()). Values outside the range are ignored. The first element of the range must be less than or equal to the second. Default is None.

  • weights (Union[int, float, bool, list, tuple, Tensor], optional) – An array of weights, of the same shape as a. Each value in a only contributes its associated weight towards the bin count (instead of 1). This is currently not used by any of the bin estimators, but may be in the future. Default is None.

Returns:

Tensor, the edges to pass into histogram.

Supported Platforms:

Ascend GPU CPU

Raises:

TypeError – If bins is an array and not one-dimensional.

Examples

>>> import mindspore.numpy as np
>>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5])
>>> print(np.histogram_bin_edges(arr, bins=2))
[0.  2.5 5. ]
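
When range is given, the edges are spread evenly over that interval instead of (a.min(), a.max()); an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5])
>>> print(np.histogram_bin_edges(arr, bins=4, range=(0, 4)))
[0. 1. 2. 3. 4.]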
tinyms.histogramdd(sample, bins=10, range=None, weights=None, density=False)[source]

Computes the multidimensional histogram of some data.

Note

Deprecated numpy argument normed is not supported.

Parameters:
  • sample (Union[list, tuple, Tensor]) –

    The data to be histogrammed, either (N, D) array, or (D, N) array_like. Note the unusual interpretation of sample when an array_like:

    When an array, each row is a coordinate in a D-dimensional space, such as histogramdd(np.array([p1, p2, p3])).

    When an array_like, each element is the list of values for single coordinate, such as histogramdd((X, Y, Z)).

    The first form should be preferred.

  • bins (Union[int, tuple, list], optional) –

    The bin specification:

    A sequence of arrays describing the monotonically increasing bin edges along each dimension.

    The number of bins for each dimension (nx, ny, … = bins).

    The number of bins for all dimensions (nx=ny=…=bins).

  • range (Union[list, tuple], optional) – A sequence of length D, each an optional (lower, upper) tuple giving the outer bin edges to be used if the edges are not given explicitly in bins. An entry of None in the sequence results in the minimum and maximum values being used for the corresponding dimension. The default, None, is equivalent to passing a tuple of D None values.

  • weights (Union[list, tuple, Tensor], optional) – An array with shape (N,) of values w_i weighing each sample (x_i, y_i, z_i, …).

  • density (boolean, optional) – If False, the default, returns the number of samples in each bin. If True, returns the probability density function at the bin, bin_count / sample_count / bin_volume.

Returns:

(Tensor, list of Tensor), the values of the histogram and the bin edges.

Raises:

ValueError – If range does not have the same size as the number of samples.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import numpy as np
>>> sample = np.arange(15).reshape(5, 3)
>>> print(sample)
[[ 0  1  2]
[ 3  4  5]
[ 6  7  8]
[ 9 10 11]
[12 13 14]]
>>> print(np.histogramdd(sample, bins=(2, 3, 4)))
(Tensor(shape=[2, 3, 4], dtype=Float32, value=
[[[ 1.00000000e+00,  1.00000000e+00,  0.00000000e+00,  0.00000000e+00],
[ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
[ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00]],
[[ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00],
[ 0.00000000e+00,  0.00000000e+00,  1.00000000e+00,  0.00000000e+00],
[ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  2.00000000e+00]]]),
[Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  6.00000000e+00,  1.20000000e+01]),
Tensor(shape=[4], dtype=Float32, value=
[ 1.00000000e+00,  5.00000000e+00,  9.00000000e+00,  1.30000000e+01]),
Tensor(shape=[5], dtype=Float32, value=
[ 2.00000000e+00,  5.00000000e+00,  8.00000000e+00,  1.10000000e+01,  1.40000000e+01])])
tinyms.hsplit(x, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors horizontally (column-wise). It is equivalent to split with axis=1; the array is always split along the second axis regardless of the array dimension.

Parameters:
  • x (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – If an integer N, the tensor will be divided into N equal tensors along the axis. If a tuple(int) or list(int) of sorted integers, the entries indicate where along the axis the array is split. For example, [2, 3] would, for axis=0, result in three sub-tensors x[:2], x[2:3] and x[3:]. If an index exceeds the dimension of the array along the axis, an empty sub-array is returned correspondingly.

Returns:

A list of sub-tensors.

Raises:

TypeError – If argument indices_or_sections is not an integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.arange(6).reshape((2, 3)).astype('float32')
>>> output = np.hsplit(input_x, 3)
>>> print(output)
(Tensor(shape=[2, 1], dtype=Float32,
value=[[ 0.00000000e+00],
       [ 3.00000000e+00]]),
Tensor(shape=[2, 1], dtype=Float32,
value=[[ 1.00000000e+00],
       [ 4.00000000e+00]]),
Tensor(shape=[2, 1], dtype=Float32,
value=[[ 2.00000000e+00],
       [ 5.00000000e+00]]))
tinyms.hstack(tup)[source]

Stacks tensors in sequence horizontally. This is equivalent to concatenation along the second axis, except for 1-D tensors where it concatenates along the first axis.

Parameters:

tup (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. The tensors must have the same shape along all but the second axis, except 1-D tensors which can be any length.

Returns:

Stacked Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([1, 2, 3]).astype('float32')
>>> x2 = np.array([4, 5, 6]).astype('float32')
>>> output = np.hstack((x1, x2))
>>> print(output)
[1. 2. 3. 4. 5. 6.]
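
For 2-D inputs the stacking happens along the second axis (columns); an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> x1 = np.array([[1], [2]]).astype('float32')
>>> x2 = np.array([[3], [4]]).astype('float32')
>>> print(np.hstack((x1, x2)))
[[1. 3.]
 [2. 4.]]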
tinyms.hypot(x1, x2, dtype=None)[source]

Given the “legs” of a right triangle, returns its hypotenuse.

Equivalent to sqrt(x1**2 + x2**2), element-wise. If x1 or x2 is scalar_like (i.e., unambiguously cast-able to a scalar type), it is broadcast for use with each element of the other argument. (See Examples)

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16 and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • x1 (Tensor) – Leg of the triangle(s).

  • x2 (Tensor) – Leg of the triangle(s). If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the hypotenuse of the triangle(s). This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.hypot(3*np.ones((3, 3)), 4*np.ones((3, 3)))
>>> print(output)
[[5. 5. 5.]
[5. 5. 5.]
[5. 5. 5.]]
>>> output = np.hypot(3*np.ones((3, 3)), np.array([4.0]))
>>> print(output)
[[5. 5. 5.]
[5. 5. 5.]
[5. 5. 5.]]
tinyms.identity(n, dtype=mindspore.float32)[source]

Returns the identity tensor.

Parameters:
  • n (int) – Number of rows and columns in the output, must be larger than 0.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, default is mstype.float32.

Returns:

A tensor of shape (n, n), where all elements are equal to zero, except for the diagonal, whose values are equal to one.

Supported Platforms:

Ascend GPU CPU

Raises:

TypeError – If input arguments have types not specified above.

Examples

>>> import mindspore.numpy as np
>>> print(np.identity(2))
[[1. 0.]
[0. 1.]]
tinyms.in1d(ar1, ar2, invert=False)[source]

Tests whether each element of a 1-D array is also present in a second array.

Returns a boolean array the same length as ar1 that is True where an element of ar1 is in ar2 and False otherwise.

Note

Numpy argument assume_unique is not supported since the implementation does not rely on the uniqueness of the input arrays.

Parameters:
  • ar1 (Union[int, float, bool, list, tuple, Tensor]) – Input array with shape (M,).

  • ar2 (Union[int, float, bool, list, tuple, Tensor]) – The values against which to test each value of ar1.

  • invert (boolean, optional) – If True, the values in the returned array are inverted (that is, False where an element of ar1 is in ar2 and True otherwise). Default is False.

Returns:

Tensor, with shape (M,). The values ar1[in1d] are in ar2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> test = np.array([0, 1, 2, 5, 0])
>>> states = [0, 2]
>>> mask = np.in1d(test, states)
>>> print(mask)
[ True False  True False  True]
>>> mask = np.in1d(test, states, invert=True)
>>> print(mask)
[False  True False  True False]
tinyms.indices(dimensions, dtype=mindspore.int32, sparse=False)[source]

Returns an array representing the indices of a grid.

Computes an array where the subarrays contain index values 0, 1, … varying only along the corresponding axis.

Parameters:
  • dimensions (tuple or list of ints) – The shape of the grid.

  • dtype (mindspore.dtype, optional) – Data type of the result.

  • sparse (boolean, optional) – Defaults to False. Return a sparse representation of the grid instead of a dense representation.

Returns:

Tensor or tuple of Tensor. If sparse is False, returns one array of grid indices, grid.shape = (len(dimensions),) + tuple(dimensions). If sparse is True, returns a tuple of arrays, with grid[i].shape = (1, ..., 1, dimensions[i], 1, ..., 1), with dimensions[i] in the ith place.

Raises:

TypeError – If input dimensions is not a tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> grid = np.indices((2, 3))
>>> print(grid)
[Tensor(shape=[2, 3], dtype=Int32, value=
[[0, 0, 0],
[1, 1, 1]]), Tensor(shape=[2, 3], dtype=Int32, value=
[[0, 1, 2],
[0, 1, 2]])]
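
With sparse=True each component keeps size 1 in all but its own dimension, which saves memory while still broadcasting to the dense grid; an added sketch:

>>> import mindspore.numpy as np
>>> grid = np.indices((2, 3), sparse=True)
>>> print(grid[0].shape, grid[1].shape)
(2, 1) (1, 3)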
tinyms.inner(a, b)[source]

Returns the inner product of two tensors.

Ordinary inner product of vectors for 1-D tensors (without complex conjugation), in higher dimensions a sum product over the last axes.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • a (Tensor) – input tensor. If a and b are nonscalar, their last dimensions must match.

  • b (Tensor) – input tensor. If a and b are nonscalar, their last dimensions must match.

Returns:

Tensor or scalar.

Raises:

ValueError – If a.shape[-1] != b.shape[-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((5, 3))
>>> b = np.ones((2, 7, 3))
>>> output = np.inner(a, b)
>>> print(output)
[[[3. 3. 3. 3. 3. 3. 3.]
[3. 3. 3. 3. 3. 3. 3.]]
[[3. 3. 3. 3. 3. 3. 3.]
[3. 3. 3. 3. 3. 3. 3.]]
[[3. 3. 3. 3. 3. 3. 3.]
[3. 3. 3. 3. 3. 3. 3.]]
[[3. 3. 3. 3. 3. 3. 3.]
[3. 3. 3. 3. 3. 3. 3.]]
[[3. 3. 3. 3. 3. 3. 3.]
[3. 3. 3. 3. 3. 3. 3.]]]
tinyms.interp(x, xp, fp, left=None, right=None)[source]

One-dimensional linear interpolation for monotonically increasing sample points. Returns the one-dimensional piecewise linear interpolant to a function with given discrete data points (xp, fp), evaluated at x.

Note

Numpy argument period is not supported. Complex values are not supported.

Parameters:
  • x (Union[int, float, bool, list, tuple, Tensor]) – The x-coordinates at which to evaluate the interpolated values.

  • xp (Union[int, float, bool, list, tuple, Tensor]) – 1-D sequence of floats, the x-coordinates of the data points, must be increasing.

  • fp (Union[int, float, bool, list, tuple, Tensor]) – 1-D sequence of floats, the y-coordinates of the data points, same length as xp.

  • left (float, optional) – Value to return for x < xp[0]; the default is fp[0].

  • right (float, optional) – Value to return for x > xp[-1]; the default is fp[-1].

Returns:

Tensor, the interpolated values, same shape as x.

Raises:

ValueError – If xp or fp is not one-dimensional, or if xp and fp do not have the same length.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> xp = [1, 2, 3]
>>> fp = [3, 2, 0]
>>> print(np.interp([0, 1, 1.5, 2.72, 3.14], xp, fp))
[3.         3.         2.5        0.55999994 0.        ]
>>> UNDEF = -99.0
>>> print(np.interp(3.14, xp, fp, right=UNDEF))
-99.0
tinyms.invert(x, dtype=None)[source]

Computes bit-wise inversion, or bit-wise NOT, element-wise. Computes the bit-wise NOT of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator ~. For signed integer inputs, the two’s complement is returned. In a two’s-complement system, negative numbers are represented by the two’s complement of the absolute value. This is the most common method of representing signed integers on computers [1]. An N-bit two’s-complement system can represent every integer in the range -2^{N-1} to +2^{N-1}-1.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Supported dtypes on Ascend: np.int16, np.uint16.

Parameters:
  • x (Tensor) – Only integer and boolean types are handled.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar.

Supported Platforms:

Ascend

Examples

>>> import mindspore.numpy as np
>>> print(np.invert(np.array(13, dtype=np.uint16)))
65522
tinyms.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)[source]

Returns a boolean tensor where two tensors are element-wise equal within a tolerance.

The tolerance values are positive, typically very small numbers. The relative difference (\(rtol * abs(b)\)) and the absolute difference atol are added together to compare against the absolute difference between a and b.

Note

For finite values, isclose uses the following equation to test whether two floating point values are equivalent. \(absolute(a - b) <= (atol + rtol * absolute(b))\) On Ascend, input arrays containing inf or NaN are not supported.

Parameters:
  • a (Union[Tensor, list, tuple]) – Input first tensor to compare.

  • b (Union[Tensor, list, tuple]) – Input second tensor to compare.

  • rtol (numbers.Number) – The relative tolerance parameter (see Note).

  • atol (numbers.Number) – The absolute tolerance parameter (see Note).

  • equal_nan (bool) – Whether to compare NaN as equal. If True, NaN in a will be considered equal to NaN in b in the output tensor. Default: False.

Returns:

A bool tensor of where a and b are equal within the given tolerance.

Raises:

TypeError – If inputs have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([0,1,2,float('inf'),float('inf'),float('nan')])
>>> b = np.array([0,1,-2,float('-inf'),float('inf'),float('nan')])
>>> print(np.isclose(a, b))
[ True  True False False  True False]
>>> print(np.isclose(a, b, equal_nan=True))
[ True  True False False  True  True]
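
Loosening atol (or rtol) widens the acceptance band absolute(a - b) <= atol + rtol * absolute(b); an added sketch assuming NumPy-compatible behavior:

>>> import mindspore.numpy as np
>>> a = np.array([1.0, 1.001])
>>> b = np.array([1.0, 1.0])
>>> print(np.isclose(a, b))
[ True False]
>>> print(np.isclose(a, b, atol=1e-2))
[ True  True]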
tinyms.isfinite(x, dtype=None)[source]

Tests element-wise for finiteness (neither infinity nor NaN).

The result is returned as a boolean array.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • x (Tensor) – Input values.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, true where x is not positive infinity, negative infinity, or NaN; false otherwise. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.isfinite(np.array([np.inf, 1., np.nan]).astype('float32'))
>>> print(output)
[False  True False]
tinyms.isin(element, test_elements, invert=False)[source]

Calculates element in test_elements, broadcasting over element only. Returns a boolean array of the same shape as element that is True where an element of element is in test_elements and False otherwise.

Note

Numpy argument assume_unique is not supported since the implementation does not rely on the uniqueness of the input arrays.

Parameters:
  • element (Union[int, float, bool, list, tuple, Tensor]) – Input array.

  • test_elements (Union[int, float, bool, list, tuple, Tensor]) – The values against which to test each value of element.

  • invert (boolean, optional) – If True, the values in the returned array are inverted, as if calculating element not in test_elements. Default is False.

Returns:

Tensor, has the same shape as element. The values element[isin] are in test_elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> element = 2*np.arange(4).reshape((2, 2))
>>> test_elements = [1, 2, 4, 8]
>>> mask = np.isin(element, test_elements)
>>> print(mask)
[[False  True]
[ True False]]
>>> mask = np.isin(element, test_elements, invert=True)
>>> print(mask)
[[ True False]
[False  True]]
tinyms.isinf(x, dtype=None)[source]

Tests element-wise for positive or negative infinity.

Returns a boolean array of the same shape as x, True where x == +/-inf, otherwise False.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Only np.float32 is currently supported.

Parameters:
  • x (Tensor) – Input values.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, true where x is positive or negative infinity, false otherwise. This is a scalar if x is a scalar.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.isinf(np.array(np.inf, np.float32))
>>> print(output)
True
>>> output = np.isinf(np.array([np.inf, -np.inf, 1.0, np.nan], np.float32))
>>> print(output)
[ True  True False False]
tinyms.isnan(x, dtype=None)[source]

Tests element-wise for NaN and return result as a boolean array.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Only np.float32 is currently supported.

Parameters:
  • x (Tensor) – Input values.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, true where x is NaN, false otherwise. This is a scalar if x is a scalar.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.isnan(np.array(np.nan, np.float32))
>>> print(output)
True
>>> output = np.isnan(np.array(np.inf, np.float32))
>>> print(output)
False
tinyms.isneginf(x)[source]

Tests element-wise for negative infinity, returns result as bool array.

Note

Numpy argument out is not supported. Only np.float32 is currently supported.

Parameters:

x (Tensor) – Input values.

Returns:

Tensor or scalar, true where x is negative infinity, false otherwise. This is a scalar if x is a scalar.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.isneginf(np.array([-np.inf, 0., np.inf, np.nan], np.float32))
>>> print(output)
[ True False False False]
tinyms.isposinf(x)[source]

Tests element-wise for positive infinity, returns result as bool array.

Note

Numpy argument out is not supported. Only np.float32 is currently supported.

Parameters:

x (Tensor) – Input values.

Returns:

Tensor or scalar, true where x is positive infinity, false otherwise. This is a scalar if x is a scalar.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.isposinf(np.array([-np.inf, 0., np.inf, np.nan], np.float32))
>>> print(output)
[False False  True False]
tinyms.isscalar(element)[source]

Returns True if the type of element is a scalar type.

Note

Only object types recognized by the mindspore parser are supported, which includes objects, types, methods and functions defined within the scope of mindspore. Other built-in types are not supported.

Parameters:

element (any) – Input argument, can be of any type and shape.

Returns:

Boolean, True if element is a scalar type, False if it is not.

Raises:

TypeError – If the type of element is not supported by mindspore parser.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.isscalar(3.1)
>>> print(output)
True
>>> output = np.isscalar(np.array(3.1))
>>> print(output)
False
>>> output = np.isscalar(False)
>>> print(output)
True
>>> output = np.isscalar('numpy')
>>> print(output)
True
tinyms.ix_(*args)[source]

Constructs an open mesh from multiple sequences.

This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions. Using ix_ one can quickly construct index arrays that will index the cross product. a[np.ix_([1,3],[2,5])] returns the array [[a[1,2] a[1,5]], [a[3,2] a[3,5]]].

Note

Boolean masks are not supported.

Parameters:

*args (Tensor) – 1-D sequences.

Returns:

Tuple of Tensor, N arrays with N dimensions each, with N the number of input sequences. Together these arrays form an open mesh.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> ixgrid = np.ix_(np.array([0, 1]), np.array([2, 4]))
>>> print(ixgrid)
(Tensor(shape=[2, 1], dtype=Int32, value=
[[0],
[1]]), Tensor(shape=[1, 2], dtype=Int32, value=
[[2, 4]]))
tinyms.kron(a, b)[source]

Kronecker product of two arrays.

Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first.

Note

Booleans are not supported.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – input data.

  • b (Union[int, float, bool, list, tuple, Tensor]) – input data.

Returns:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.kron([1,10,100], [5,6,7])
>>> print(output)
[  5   6   7  50  60  70 500 600 700]
>>> output = np.kron([5,6,7], [1,10,100])
>>> print(output)
[  5  50 500   6  60 600   7  70 700]
>>> output = np.kron(np.eye(2), np.ones((2,2)))
>>> print(output)
[[1. 1. 0. 0.]
[1. 1. 0. 0.]
[0. 0. 1. 1.]
[0. 0. 1. 1.]]
tinyms.lcm(x1, x2, dtype=None)[source]

Returns the lowest common multiple of |x1| and |x2|.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input data.

  • x2 (Tensor) – input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the lowest common multiple of the absolute value of the inputs. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.lcm(np.arange(6), np.array(20))
>>> print(output)
[ 0 20 20 60 20 20]
tinyms.less(x1, x2, dtype=None)[source]

Returns the truth value of (x1 < x2) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input array.

  • x2 (Tensor) – Input array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype is passed. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.less(np.array([1, 2]), np.array([2, 2]))
>>> print(output)
[ True False]
tinyms.less_equal(x1, x2, dtype=None)[source]

Returns the truth value of (x1 <= x2) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype is passed. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.less_equal(np.array([4, 2, 1]), np.array([2, 2, 2]))
>>> print(output)
[False  True  True]
tinyms.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0)[source]

Returns evenly spaced values within a given interval.

Parameters:
  • start (Union[int, list(int), tuple(int), tensor]) – The starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – The end value of the sequence, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • retstep (bool, optional) – If True, return (samples, step), where step is the spacing between samples. Default is False.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype. If dtype is None, infer the data type from other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop are array-like. By default, the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns:

Tensor, with num equally spaced samples in the closed interval \([start, stop]\) or the half-open interval \([start, stop)\) (depending on whether endpoint is True or False).

Step, the size of spacing between samples, only returned if retstep is True.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.linspace(0, 5, 6))
[0. 1. 2. 3. 4. 5.]
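A hedged sketch of retstep, assuming the (samples, step) tuple described above is returned:

>>> samples, step = np.linspace(0, 5, 6, retstep=True)
>>> print(samples)
[0. 1. 2. 3. 4. 5.]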
tinyms.log(x, dtype=None)[source]

Returns the natural logarithm, element-wise.

The natural logarithm log is the inverse of the exponential function, so that log(exp(x)) = x. The natural logarithm is logarithm in base e.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • x (Tensor) – Input array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the natural logarithm of x, element-wise. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([2, 3, 4]).astype('float32')
>>> output = np.log(x)
>>> print(output)
[0.69314575 1.09861    1.3862929 ]
tinyms.log10(x, dtype=None)[source]

Base-10 logarithm of x.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([10, 100, 1000]).astype('float16')
>>> output = np.log10(x)
>>> print(output)
[1. 2. 3.]
tinyms.log1p(x, dtype=None)[source]

Returns the natural logarithm of one plus the input array, element-wise.

Calculates log(1 + x).

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input array.

  • dtype (mindspore.dtype) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1, 2, 3]).astype('float16')
>>> output = np.log1p(x)
>>> print(output)
[0.6934 1.099 1.387 ]
tinyms.log2(x, dtype=None)[source]

Base-2 logarithm of x.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([2, 4, 8]).astype('float16')
>>> output = np.log2(x)
>>> print(output)
[1. 2. 3.]
tinyms.logaddexp(x1, x2, dtype=None)[source]

Logarithm of the sum of exponentiations of the inputs.

Calculates log(exp(x1) + exp(x2)). This function is useful in statistics where the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the logarithm of the calculated probability is stored. This function allows adding probabilities stored in such a fashion.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input array.

  • x2 (Tensor) – Input array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([1, 2, 3]).astype('float16')
>>> x2 = np.array(2).astype('float16')
>>> output = np.logaddexp(x1, x2)
>>> print(output)
[2.312 2.693 3.312]
tinyms.logaddexp2(x1, x2, dtype=None)[source]

Logarithm of the sum of exponentiations of the inputs in base of 2.

Calculates log2(2**x1 + 2**x2). This function is useful in machine learning when the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the base-2 logarithm of the calculated probability can be used instead. This function allows adding probabilities stored in such a fashion.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input tensor.

  • x2 (Tensor) – Input tensor. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([2, 4, 8]).astype('float16')
>>> x2 = np.array(2).astype('float16')
>>> output = np.logaddexp2(x1, x2)
>>> print(output)
[3. 4.32 8.02]
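As a sanity check on the identity log2(2**x1 + 2**x2), equal inputs should add exactly one to the exponent; a minimal sketch:

>>> output = np.logaddexp2(np.array([3.]).astype('float32'), np.array([3.]).astype('float32'))
>>> print(output)
[4.]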
tinyms.logical_and(x1, x2, dtype=None)[source]

Computes the truth value of x1 AND x2 element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input tensor.

  • x2 (Tensor) – Input tensor. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. Boolean result of the logical AND operation applied to the elements of x1 and x2; the shape is determined by broadcasting. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([True, False])
>>> x2 = np.array([False, False])
>>> output = np.logical_and(x1, x2)
>>> print(output)
[False False]
tinyms.logical_not(a, dtype=None)[source]

Computes the truth value of NOT a element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • a (Tensor) – The input tensor whose dtype is bool.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. Boolean result with the same shape as a of the NOT operation on elements of a. This is a scalar if a is a scalar.

Raises:

TypeError – If the input is not a tensor or its dtype is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([True, False])
>>> output = np.logical_not(a)
>>> print(output)
[False  True]
tinyms.logical_or(x1, x2, dtype=None)[source]

Computes the truth value of x1 OR x2 element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input tensor.

  • x2 (Tensor) – Input tensor. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype=object is passed. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([True, False])
>>> x2 = np.array([False, True])
>>> output = np.logical_or(x1, x2)
>>> print(output)
[ True  True]
tinyms.logical_xor(x1, x2, dtype=None)[source]

Computes the truth value of x1 XOR x2, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – Input tensor.

  • x2 (Tensor) – Input tensor. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. Boolean result of the logical XOR operation applied to the elements of x1 and x2; the shape is determined by broadcasting. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([True, False])
>>> x2 = np.array([False, False])
>>> output = np.logical_xor(x1, x2)
>>> print(output)
[ True False]
tinyms.logspace(start, stop, num=50, endpoint=True, base=10.0, dtype=None, axis=0)[source]

Returns numbers spaced evenly on a log scale.

In linear space, the sequence starts at base ** start (base to the power of start) and ends with base ** stop (see endpoint below).

Parameters:
  • start (Union[int, list(int), tuple(int), tensor]) – base ** start is the starting value of the sequence.

  • stop (Union[int, list(int), tuple(int), tensor]) – base ** stop is the final value of the sequence, unless endpoint is False. In that case, num + 1 values are spaced over the interval in log-space, of which all but the last (a sequence of length num) are returned.

  • num (int, optional) – Number of samples to generate. Default is 50.

  • endpoint (bool, optional) – If True, stop is the last sample. Otherwise, it is not included. Default is True.

  • base (Union[int, float], optional) – The base of the log space. The step size between the elements in \(ln(samples) / ln(base)\) (or \(log_{base}(samples)\)) is uniform. Default is 10.0.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype. If dtype is None, infer the data type from other input arguments. Default is None.

  • axis (int, optional) – The axis in the result to store the samples. Relevant only if start or stop is array-like. By default, the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. Default is 0.

Returns:

Tensor, equally spaced on a log scale.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.logspace(0, 5, 6, base=2.0))
[ 1.  2.  4.  8. 16. 32.]
tinyms.matmul(x1, x2, dtype=None)[source]

Returns the matrix product of two arrays.

Note

Numpy arguments out, casting, order, subok, signature, and extobj are not supported. On GPU and CPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • x1 (Tensor) – Input tensor, scalar not allowed.

  • x2 (Tensor) – Input tensor, scalar not allowed.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors.

Raises:

ValueError – If the last dimension of x1 is not the same size as the second-to-last dimension of x2, or if a scalar value is passed in.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.arange(2*3*4).reshape(2, 3, 4).astype('float32')
>>> x2 = np.arange(4*5).reshape(4, 5).astype('float32')
>>> output = np.matmul(x1, x2)
>>> print(output)
[[[  70.   76.   82.   88.   94.]
[ 190.  212.  234.  256.  278.]
[ 310.  348.  386.  424.  462.]]
[[ 430.  484.  538.  592.  646.]
[ 550.  620.  690.  760.  830.]
[ 670.  756.  842.  928. 1014.]]]
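When both arguments are 1-D vectors the result degenerates to their inner product and is returned as a scalar, as noted above; a hedged sketch:

>>> v1 = np.array([1., 2., 3.]).astype('float32')
>>> v2 = np.array([4., 5., 6.]).astype('float32')
>>> print(np.matmul(v1, v2))
32.0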
tinyms.max(a, axis=None, keepdims=False, initial=None, where=True)

Returns the maximum of an array or maximum along an axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Tensor) – Input data.

  • axis (None or int or tuple of integers, optional) – Defaults to None. Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of integers, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

  • initial (scalar, optional) – Defaults to None. The minimum value of an output element. Must be present to allow computation on empty slice.

  • where (boolean Tensor, optional) – Defaults to True. A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If a non-default value is passed, initial must also be provided.

Returns:

Tensor or scalar, maximum of a. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension a.ndim - 1.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape((2,2)).astype('float32')
>>> output = np.amax(a)
>>> print(output)
3.0
>>> output = np.amax(a, axis=0)
>>> print(output)
[2. 3.]
>>> output = np.amax(a, axis=1)
>>> print(output)
[1. 3.]
>>> output = np.amax(a, where=np.array([False, True]), initial=-1, axis=0)
>>> print(output)
[-1.  3.]
tinyms.maximum(x1, x2, dtype=None)[source]

Returns the element-wise maximum of array elements.

Compares two arrays and returns a new array containing the element-wise maxima.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On Ascend, input arrays containing inf or NaN are not supported.

Parameters:
  • x1 (Tensor) – Input array

  • x2 (Tensor) – The array holding the elements to be compared. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the maximum of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.maximum(np.array([2, 3, 4]), np.array([1, 5, 2]))
>>> print(output)
[2 5 4]
tinyms.mean(a, axis=None, keepdims=False, dtype=None)[source]

Computes the arithmetic mean along the specified axis.

Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Tensor) – input tensor containing numbers whose mean is desired. If a is not an array, a conversion is attempted.

  • axis (None or int or tuple of integers, optional) – Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. If this is a tuple of ints, a mean is performed over multiple axes.

  • keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, an array containing the mean values.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(6, dtype='float32')
>>> output = np.mean(a, 0)
>>> print(output)
2.5
tinyms.meshgrid(*xi, sparse=False, indexing='xy')[source]

Returns coordinate matrices from coordinate vectors.

Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,…, xn.

Note

Numpy argument copy is not supported, and a copy is always returned.

Parameters:
  • *xi (Tensor) – 1-D arrays representing the coordinates of a grid.

  • indexing ('xy', 'ij', optional) – Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for ‘xy’ indexing and (M, N, P) for ‘ij’ indexing.

  • sparse (bool, optional) – If True a sparse grid is returned in order to conserve memory. Default is False.

Returns:

Tuple of tensors, for vectors x1, x2,…, xn with lengths Ni=len(xi), return (N1, N2, N3,…Nn) shaped arrays if indexing='ij' or (N2, N1, N3,…Nn) shaped arrays if indexing='xy' with the elements of xi repeated to fill the matrix along the first dimension for x1, the second for x2 and so on.

Raises:

TypeError – If the input is not a tensor, or sparse is not boolean, or indexing is not ‘xy’ or ‘ij’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.linspace(0, 1, 3)
>>> y = np.linspace(0, 1, 2)
>>> xv, yv = np.meshgrid(x, y)
>>> print(xv)
[[0.  0.5 1. ]
[0.  0.5 1. ]]
>>> print(yv)
[[0.  0.  0.]
[1.  1.  1.]]
>>> xv, yv = np.meshgrid(x, y, sparse=True)
>>> print(xv)
[[0.  0.5  1. ]]
>>> print(yv)
[[0.]
[1.]]
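With matrix (‘ij’) indexing the output shapes are transposed relative to ‘xy’; a minimal sketch reusing the same x (length 3) and y (length 2):

>>> xv, yv = np.meshgrid(x, y, indexing='ij')
>>> print(xv.shape, yv.shape)
(3, 2) (3, 2)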
tinyms.min(a, axis=None, keepdims=False, initial=None, where=True)

Returns the minimum of an array or minimum along an axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Tensor) – Input data.

  • axis (None or int or tuple of integers, optional) – Defaults to None. Axis or axes along which to operate. By default, flattened input is used. If this is a tuple of integers, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.

  • keepdims (bool, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

  • initial (Number, optional) – Defaults to None. The maximum value of an output element. Must be present to allow computation on empty slice.

  • where (bool Tensor, optional) – Defaults to True. A boolean array which is broadcasted to match the dimensions of array, and selects elements to include in the reduction. If a non-default value is passed, initial must also be provided.

Returns:

Tensor or scalar, minimum of a. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension a.ndim - 1.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape((2,2)).astype('float32')
>>> output = np.amin(a)
>>> print(output)
0.0
>>> output = np.amin(a, axis=0)
>>> print(output)
[0. 1.]
>>> output = np.amin(a, axis=1)
>>> print(output)
[0. 2.]
>>> output = np.amin(a, where=np.array([False, True]), initial=10, axis=0)
>>> print(output)
[10.  1.]
tinyms.minimum(x1, x2, dtype=None)[source]

Element-wise minimum of tensor elements.

Compares two tensors and returns a new tensor containing the element-wise minima.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On Ascend, input arrays containing inf or NaN are not supported.

Parameters:
  • x1 (Tensor) – first input tensor to be compared.

  • x2 (Tensor) – second input tensor to be compared.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, element-wise minimum of x1 and x2.

Raises:
  • TypeError – If inputs have types not specified above.

  • ValueError – If the shapes of x1 and x2 cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.asarray([1, 2])
>>> b = np.asarray([[1, 3],[1, 4]])
>>> print(np.minimum(a, b))
[[1 2]
[1 2]]
tinyms.mod(x1, x2, dtype=None)

Returns element-wise remainder of division.

Computes the remainder complementary to the floor_divide function. It is equivalent to the Python modulus operator x1 % x2 and has the same sign as the divisor x2. The MATLAB function equivalent to np.remainder is mod.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input array.

  • x2 (Tensor) – input array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the element-wise remainder of the quotient floor_divide(x1, x2). This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.remainder(np.array([4, 7]), np.array([2, 3]))
>>> print(output)
[0 1]
>>> output = np.remainder(np.arange(7), np.array(5))
>>> print(output)
[0 1 2 3 4 0 1]
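Because the result has the same sign as the divisor x2, negative dividends wrap around as with Python’s % operator; a hedged sketch:

>>> print(np.remainder(np.array([-3, 3]), np.array([2, -2])))
[ 1 -1]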
tinyms.moveaxis(a, source, destination)[source]

Moves axes of an array to new positions.

Other axes remain in their original order.

Parameters:
  • a (Tensor) – The array whose axes should be reordered.

  • source (int or sequence of ints) – Original positions of the axes to move. These must be unique.

  • destination (int or sequence of ints) – Destination positions for each of the original axes. These must also be unique.

Returns:

Tensor, array with moved axes.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.zeros((3, 4, 5))
>>> output = np.moveaxis(x, 0, -1)
>>> print(output.shape)
(4, 5, 3)
>>> output = np.moveaxis(x, -1, 0)
>>> print(output.shape)
(5, 3, 4)
>>> output = np.moveaxis(x, [0, 1, 2], [-1, -2, -3])
>>> print(output.shape)
(5, 4, 3)
tinyms.multi_dot(arrays)[source]

Computes the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order. multi_dot chains numpy.dot and uses optimal parenthesization of the matrices. For more information, refer to the wiki page. Depending on the shapes of the matrices, this can speed up the multiplication a lot. If the first argument is 1-D, it is treated as a row vector. If the last argument is 1-D, it is treated as a column vector. The other arguments must be 2-D.

Note

Numpy argument out is not supported.

Parameters:

arrays (sequence of array_like) – If the first argument is 1-D, it is treated as a row vector. If the last argument is 1-D, it is treated as a column vector. The other arguments must be 2-D.

Returns:

Tensor, the dot product of the supplied arrays.

Raises:

ValueError – If arrays are not 2-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> A = np.ones((10000, 100))
>>> B = np.ones((100, 1000))
>>> C = np.ones((1000, 5))
>>> D = np.ones((5, 333))
>>> output = np.multi_dot([A, B, C, D])
>>> print(output)
[[500000. 500000. 500000. ... 500000. 500000. 500000.]
[500000. 500000. 500000. ... 500000. 500000. 500000.]
[500000. 500000. 500000. ... 500000. 500000. 500000.]
...
[500000. 500000. 500000. ... 500000. 500000. 500000.]
[500000. 500000. 500000. ... 500000. 500000. 500000.]
[500000. 500000. 500000. ... 500000. 500000. 500000.]]
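The result shape follows from the chain’s outer dimensions, (10000, 100) x (100, 1000) x (1000, 5) x (5, 333) giving (10000, 333); a quick check:

>>> print(output.shape)
(10000, 333)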
tinyms.multiply(x1, x2, dtype=None)[source]

Multiplies arguments element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input tensor to be multiplied.

  • x2 (Tensor) – input tensor to be multiplied.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the product of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.full((3, 2), [1, 2])
>>> x2 = np.full((3, 2), [3, 4])
>>> output = np.multiply(x1, x2)
>>> print(output)
[[3 8]
[3 8]
[3 8]]
tinyms.nancumsum(a, axis=None, dtype=None)[source]

Return the cumulative sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. The cumulative sum does not change when NaNs are encountered and leading NaNs are replaced by zeros.

Zeros are returned for slices that are all-NaN or empty.

Note

If a.dtype is int8, int16 or bool, the result dtype will be elevated to int32.

Parameters:
  • a (Tensor) – Input tensor.

  • axis (int, optional) – Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

  • dtype (mindspore.dtype, optional) – If not specified, stay the same as a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.

Returns:

Tensor.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If axis is out of range.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, 2], [3, np.nan]])
>>> output = np.nancumsum(a)
>>> print(output)
[1. 3. 6. 6.]
>>> output = np.nancumsum(a, axis=0)
>>> print(output)
[[1. 2.]
[4. 2.]]
>>> output = np.nancumsum(a, axis=1)
>>> print(output)
[[1. 3.]
[3. 3.]]
tinyms.nanmax(a, axis=None, dtype=None, keepdims=False)[source]

Return the maximum of an array or maximum along an axis, ignoring any NaNs.

Note

Numpy argument out is not supported. For all-NaN slices, a very small negative number is returned instead of NaN.

Parameters:
  • a (Union[int, float, list, tuple, Tensor]) – Array containing numbers whose maximum is desired. If a is not an array, a conversion is attempted.

  • axis (Union[int, tuple of int, None], optional) – Axis or axes along which the maximum is computed. The default is to compute the maximum of the flattened array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.

Returns:

Tensor.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, 2], [3, np.nan]])
>>> output = np.nanmax(a)
>>> print(output)
3.0
>>> output = np.nanmax(a, axis=0)
>>> print(output)
[3. 2.]
tinyms.nanmean(a, axis=None, dtype=None, keepdims=False)[source]

Computes the arithmetic mean along the specified axis, ignoring NaNs.

Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. float32 intermediate and return values are used for integer inputs.

Note

Numpy argument out is not supported.

Parameters:
  • a (Union[int, float, list, tuple, Tensor]) – Array containing numbers whose mean is desired. If a is not an array, a conversion is attempted.

  • axis (Union[int, tuple of int, None], optional) – Axis or axes along which the mean is computed. The default is to compute the mean of the flattened array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.

Returns:

Tensor.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, np.nan], [3, 4]])
>>> output = np.nanmean(a)
>>> print(output)
2.6666667
>>> output = np.nanmean(a, axis=0)
>>> print(output)
[2. 4.]
>>> output = np.nanmean(a, axis=1)
>>> print(output)
[1.  3.5]
tinyms.nanmin(a, axis=None, dtype=None, keepdims=False)[source]

Returns the minimum of array elements over a given axis, ignoring any NaNs.

Note

Numpy argument out is not supported. For all-NaN slices, a very large number is returned instead of NaN. On Ascend, since checking for NaN is currently not supported, it is not recommended to use np.nanmin. If the array does not contain NaN, np.min should be used instead.

Parameters:
  • a (Union[int, float, list, tuple, Tensor]) – Array containing numbers whose minimum is desired. If a is not an array, a conversion is attempted.

  • axis (Union[int, tuple of int, None], optional) – Axis or axes along which the minimum is computed. The default is to compute the minimum of the flattened array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.

Returns:

Tensor.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, 2], [3, np.nan]])
>>> output = np.nanmin(a)
>>> print(output)
1.0
>>> output = np.nanmin(a, axis=0)
>>> print(output)
[1. 2.]
tinyms.nanstd(a, axis=None, dtype=None, ddof=0, keepdims=False)[source]

Computes the standard deviation along the specified axis, while ignoring NaNs.

Returns the standard deviation, a measure of the spread of a distribution, of the non-NaN array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Union[int, float, list, tuple, Tensor]) – Calculates the standard deviation of the non-NaN values.

  • axis (Union[int, tuple of int, None], optional) – Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • ddof (int, optional) – “Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of non-NaN elements. By default ddof is zero.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.

Returns:

Tensor.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, np.nan], [3, 4]])
>>> output = np.nanstd(a)
>>> print(output)
1.2472192
>>> output = np.nanstd(a, axis=0)
>>> print(output)
[1. 0.]
>>> output = np.nanstd(a, axis=1)
>>> print(output)
[0.  0.5]
tinyms.nansum(a, axis=None, dtype=None, keepdims=False)[source]

Returns the sum of array elements over a given axis treating Not a Numbers (NaNs) as zero.

Note

Numpy argument out is not supported.

Parameters:
  • a (Union[int, float, list, tuple, Tensor]) – Array containing numbers whose sum is desired. If a is not an array, a conversion is attempted.

  • axis (Union[int, tuple of int, None], optional) – Axis or axes along which the sum is computed. The default is to compute the sum of the flattened array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.

Returns:

Tensor.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, 1], [1, np.nan]])
>>> output = np.nansum(a)
>>> print(output)
3.0
>>> output = np.nansum(a, axis=0)
>>> print(output)
[2. 1.]
tinyms.nanvar(a, axis=None, dtype=None, ddof=0, keepdims=False)[source]

Computes the variance along the specified axis, while ignoring NaNs.

Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • a (Union[int, float, list, tuple, Tensor]) – Array containing numbers whose variance is desired. If a is not an array, a conversion is attempted.

  • axis (Union[int, tuple of int, None], optional) – Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • ddof (int, optional) – “Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of non-NaN elements. By default ddof is zero.

  • keepdims (boolean, optional) – Defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.

Returns:

Tensor.

Raises:

ValueError – If axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([[1, np.nan], [3, 4]])
>>> output = np.nanvar(a)
>>> print(output)
1.5555557
>>> output = np.nanvar(a, axis=0)
>>> print(output)
[1. 0.]
>>> output = np.nanvar(a, axis=1)
>>> print(output)
[0.   0.25]
tinyms.negative(a, dtype=None)[source]

Numerical negative, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • a (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.asarray([1, -1]).astype('float32')
>>> output = np.negative(a)
>>> print(output)
[-1.  1.]
tinyms.norm(x, ord=None, axis=None, keepdims=False)[source]

Matrix or vector norm. This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the ord parameter.

Note

Nuclear norm and 2-norm are not supported for matrices.

Parameters:
  • x (Union[int, float, bool, list, tuple, Tensor]) – Input array. If axis is None, x must be 1-D or 2-D, unless ord is None. If both axis and ord are None, the 2-norm of x.ravel will be returned.

  • ord (Union[None, 'fro', 'nuc', inf, -inf, int, float], optional) – Order of the norm. inf means numpy’s inf object. The default is None.

  • axis (Union[None, int, 2-tuple of integers], optional) – If axis is an integer, it specifies the axis of x along which to compute the vector norms. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If axis is None then either a vector norm (when x is 1-D) or a matrix norm (when x is 2-D) is returned. The default is None.

  • keepdims (boolean, optional) – If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original x.

Returns:

Tensor, norm of the matrix or vector(s).

Raises:

ValueError – If the norm order is not defined.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.norm(np.arange(9).astype(np.float32)))
14.282857
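For a 2-D input with the default ord=None, a matrix (Frobenius) norm is computed, which equals the flat 2-norm of the same entries; a hedged sketch:

>>> print(np.norm(np.arange(9).astype(np.float32).reshape(3, 3)))
14.282857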
tinyms.not_equal(x1, x2, dtype=None)[source]

Returns (x1 != x2) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – First input tensor to be compared.

  • x2 (Tensor) – Second input tensor to be compared.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise comparison of x1 and x2. Typically of type bool, unless dtype is passed. This is a scalar if both x1 and x2 are scalars.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.asarray([1, 2])
>>> b = np.asarray([[1, 3],[1, 4]])
>>> print(np.not_equal(a, b))
[[False  True]
[False  True]]
tinyms.ones(shape, dtype=mindspore.float32)[source]

Returns a new tensor of given shape and type, filled with ones.

Parameters:
  • shape (Union[int, tuple, list]) – the shape of the new tensor.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype. Default is mstype.float32.

Returns:

Tensor, with the designated shape and dtype, filled with ones.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If shape entries have values \(< 0\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.ones((2,2)))
[[1. 1.]
[1. 1.]]
tinyms.ones_like(a, dtype=None, shape=None)[source]

Returns an array of ones with the same shape and type as a given array.

Note

Input array must have the same size across a dimension. If a is not a Tensor, dtype is float32 by default if not provided.

Parameters:
  • a (Union[Tensor, list, tuple]) – The shape and data-type of a define these same attributes of the returned array.

  • dtype (mindspore.dtype, optional) – Overrides the data type of the result.

  • shape (int or sequence of ints, optional) – Overrides the shape of the result.

Returns:

Tensor, array of ones with the same shape and type as a.

Raises:

ValueError – If a is not a Tensor, list or tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((4,1,2))
>>> output = np.ones_like(a)
>>> print(output)
[[[1. 1.]]
[[1. 1.]]
[[1. 1.]]
[[1. 1.]]]
tinyms.outer(a, b)[source]

Computes the outer product of two vectors.

Given two vectors, a = [a0, a1, ..., aM] and b = [b0, b1, ..., bN], the outer product is:

[[a0*b0  a0*b1 ... a0*bN ]

[a1*b0    .              ]

[ ...          .         ]

[aM*b0            aM*bN ]]

Note

Numpy argument out is not supported. On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • a (Tensor) – first input vector. Input is flattened if not already 1-dimensional.

  • b (Tensor) – second input vector. Input is flattened if not already 1-dimensional.

Returns:

Tensor or scalar, out[i, j] = a[i] * b[j].

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.full(7, 2).astype('float32')
>>> b = np.full(4, 3).astype('float32')
>>> output = np.outer(a, b)
>>> print(output)
[[6. 6. 6. 6.]
[6. 6. 6. 6.]
[6. 6. 6. 6.]
[6. 6. 6. 6.]
[6. 6. 6. 6.]
[6. 6. 6. 6.]
[6. 6. 6. 6.]]
tinyms.pad(arr, pad_width, mode='constant', stat_length=None, constant_values=0, end_values=0, reflect_type='even', **kwargs)[source]

Pads an array.

Note

Currently, median mode is not supported. reflect and symmetric mode only supports GPU backend.

Parameters:
  • arr (Union[list, tuple, Tensor]) – The array to pad.

  • pad_width (Union[int, tuple, list]) – Number of values padded to the edges of each axis. ((before_1, after_1), ... (before_N, after_N)) creates unique pad widths for each axis. ((before, after),) yields same before and after pad for each axis. (pad,) or int is a shortcut for before = after = pad width for all axes.

  • mode (string, optional) –

    One of the following string values:

    • constant (default): Pads with a constant value.

    • edge: Pads with the edge values of arr.

    • linear_ramp: Pads with the linear ramp between end_value and the arr edge value.

    • maximum: Pads with the maximum value of all or part of the vector along each axis.

    • mean: Pads with the mean value of all or part of the vector along each axis.

    • median: Pads with the median value of all or part of the vector along each axis.

    • minimum: Pads with the minimum value of all or part of the vector along each axis.

    • reflect: Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.

    • symmetric: Pads with the reflection of the vector mirrored along the edge of the arr.

    • wrap: Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.

    • empty: Pads with undefined values.

    • <function>: The padding function, if used, should modify and return a new 1-d tensor. It has the following signature: padding_func(tensor, iaxis_pad_width, iaxis, kwargs)

  • stat_length (Union[tuple, list, int], optional) – Used in ‘maximum’, ‘mean’, ‘median’, and ‘minimum’. Number of values at edge of each axis used to calculate the statistic value. ((before_1, after_1), ... (before_N, after_N)) creates unique statistic lengths for each axis. ((before, after),) yields same before and after statistic lengths for each axis. (stat_length,) or int is a shortcut for before = after = statistic length for all axes. Default is None, to use the entire axis.

  • constant_values (Union[tuple, list, int], optional) – Used in constant mode. The values to set the padded values for each axis. ((before_1, after_1), ... (before_N, after_N)) creates unique pad constants for each axis. ((before, after),) yields same before and after constants for each axis. (constant,) or constant is a shortcut for before = after = constant for all axes. Default is 0.

  • end_values (Union[tuple, list, int], optional) – Used in ‘linear_ramp’. The values used for the ending value of the linear_ramp and that will form the edge of the padded arr. ((before_1, after_1), ... (before_N, after_N)) unique end values for each axis. ((before, after),) yields same before and after end values for each axis. (constant,) or constant is a shortcut for before = after = constant for all axes. Default is 0.

  • reflect_type (string, optional) – Used in ‘reflect’ and ‘symmetric’. The ‘even’ style is the default with an unaltered reflection around the edge value. For the ‘odd’ style, the extended part of the arr is created by subtracting the reflected values from two times the edge value.

  • kwargs (anytype, optional) – Any keyword arguments that will be used only in <function> mode.

Returns:

Padded tensor of rank equal to arr with shape increased according to pad_width.

Raises:
  • TypeError – If arr, pad_width, stat_length, constant_values or end_values have types not specified above.

  • ValueError – If mode cannot be recognized, or if pad_width, stat_length, constant_values, end_values cannot broadcast to (arr.ndim, 2), or if keyword arguments got unexpected inputs.

  • NotImplementedError – If mode is function or ‘median’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> tensor = np.array([1., 2., 3., 4., 5.])
>>> print(np.pad(tensor, (3, 4)))
[0. 0. 0. 1. 2. 3. 4. 5. 0. 0. 0. 0.]
>>> print(np.pad(tensor, (3, 4), mode="wrap"))
[3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4.]
>>> print(np.pad(tensor, (3, 4), mode="linear_ramp", end_values=(10, 10)))
[10.    7.    4.    1.    2.    3.    4.    5.    6.25  7.5   8.75 10.  ]
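A hedged example of edge mode, which repeats the boundary values of arr:

>>> print(np.pad(tensor, (1, 2), mode="edge"))
[1. 1. 2. 3. 4. 5. 5. 5.]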
tinyms.piecewise(x, condlist, funclist, *args, **kw)[source]

Evaluates a piecewise-defined function. Given a set of conditions and corresponding functions, evaluate each function on the input data wherever its condition is true.

Parameters:
  • x (Union[int, float, bool, list, tuple, Tensor]) – The input domain.

  • condlist (Union[bool, list of bool Tensor]) – Each boolean array corresponds to a function in funclist. Wherever condlist[i] is True, funclist[i](x) is used as the output value. Each boolean array in condlist selects a piece of x, and should therefore be of the same shape as x. The length of condlist must correspond to that of funclist. If one extra function is given, i.e. if len(funclist) == len(condlist) + 1, then that extra function is the default value, used wherever all conditions are false.

  • funclist (Union[list of callables, list of scalars]) – Each function is evaluated over x wherever its corresponding condition is True. It should take a 1d array as input and give a 1d array or a scalar value as output. If, instead of a callable, a scalar is provided then a constant function (lambda x: scalar) is assumed.

  • args (any) – Any further arguments given to piecewise are passed to the functions upon execution, i.e., if called piecewise(..., ..., 1, 'a'), then each function is called as f(x, 1, 'a').

  • kw (any) – Keyword arguments used in calling piecewise are passed to the functions upon execution, i.e., if called piecewise(..., ..., alpha=1), then each function is called as f(x, alpha=1).

Returns:

Tensor, the output is the same shape and type as x and is found by calling the functions in funclist on the appropriate portions of x, as defined by the boolean arrays in condlist. Portions not covered by any condition have a default value of 0.

Supported Platforms:

Ascend GPU CPU

Raises:

ValueError – If length of funclist is not in (len(condlist), len(condlist) + 1)

Examples

>>> import mindspore.numpy as np
>>> x = np.linspace(-2.5, 2.5, 6)
>>> print(np.piecewise(x, [x < 0, x >= 0], [-1, 1]))
[-1 -1 -1  1  1  1]
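
When funclist has one extra entry, that entry is the default wherever every condition is False. A hedged sketch of this behavior (expected output shown for illustration):

>>> # illustrative: the trailing 0 is the default value
>>> print(np.piecewise(x, [x < -1, x > 1], [-1, 1, 0]))
[-1 -1  0  0  1  1]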
tinyms.polyadd(a1, a2)[source]

Finds the sum of two polynomials. Returns the polynomial resulting from the sum of two input polynomials.

Note

Numpy object poly1d is currently not supported.

Parameters:
  • a1 (Union[int, float, bool, list, tuple, Tensor]) – Input polynomial.

  • a2 (Union[int, float, bool, list, tuple, Tensor]) – Input polynomial.
Returns:

Tensor, the sum of the inputs.

Raises:

ValueError – If the input array has more than 1 dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.polyadd([1, 2], [9, 5, 4]))
[9 6 6]
tinyms.polyder(p, m=1)[source]

Returns the derivative of the specified order of a polynomial.

Note

Numpy object poly1d is currently not supported.

Parameters:
  • p (Union[int, float, bool, list, tuple, Tensor]) – Polynomial to differentiate. A sequence is interpreted as polynomial coefficients.

  • m (int, optional) – Order of differentiation. Defaults to 1.

Returns:

Tensor, a new polynomial representing the derivative.

Raises:

ValueError – If p has more than 1 dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.polyder([1, 1, 1, 1]))
[3 2 1]
tinyms.polyint(p, m=1, k=None)[source]

Returns an antiderivative (indefinite integral) of a polynomial.

Note

Numpy object poly1d is currently not supported.

Parameters:
  • p (Union[int, float, bool, list, tuple, Tensor]) – Polynomial to integrate. A sequence is interpreted as polynomial coefficients.

  • m (int, optional) – Order of the antiderivative. Defaults to 1.

  • k (Union[int, list of int], optional) – Integration constants. They are given in the order of integration: those corresponding to highest-order terms come first. If None (default), all constants are assumed to be zero. If m = 1, a single scalar can be given instead of a list.

Returns:

Tensor, a new polynomial representing the antiderivative.

Raises:

ValueError – If p has more than 1 dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.polyint([1, 1, 1]))
[0.33333334 0.5        1.         0.        ]
tinyms.polymul(a1, a2)[source]

Finds the product of two polynomials.

Note

Numpy object poly1d is currently not supported.

Parameters:
  • a1 (Union[int, float, bool, list, tuple, Tensor]) – Input polynomial.

  • a2 (Union[int, float, bool, list, tuple, Tensor]) – Input polynomial.
Returns:

Tensor, a new polynomial representing the product of the inputs.

Raises:

ValueError – If the input array has more than 1 dimension.

Supported Platforms:

GPU

Examples

>>> import mindspore.numpy as np
>>> print(np.polymul([3, 1, 2], [2, 5]))
[ 6 17  9 10]
tinyms.polysub(a1, a2)[source]

Difference (subtraction) of two polynomials. Given two polynomials a1 and a2, returns a1 - a2.

Note

Numpy object poly1d is currently not supported.

Parameters:
  • a1 (Union[int, float, bool, list, tuple, Tensor]) – Input polynomial.

  • a2 (Union[int, float, bool, list, tuple, Tensor]) – Input polynomial.
Returns:

Tensor, the difference of the inputs.

Raises:

ValueError – If the input array has more than 1 dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.polysub([2, 10, -2], [3, 10, -4]))
[-1  0  2]
tinyms.polyval(p, x)[source]

Evaluates a polynomial at specific values. If p is of length N, this function returns the value: p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1]. If x is a sequence, then p(x) is returned for each element of x. If x is another polynomial then the composite polynomial p(x(t)) is returned.

Note

Numpy object poly1d is currently not supported.

Parameters:
  • p (Union[int, float, bool, list, tuple, Tensor]) – 1D array of polynomial coefficients (including coefficients equal to zero) from highest degree to the constant term.

  • x (Union[int, float, bool, list, tuple, Tensor]) – A number, or an array of numbers, at which to evaluate p.

Returns:

Tensor.

Raises:

ValueError – If p has more than 1 dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.polyval([3.,0.,1.], 5.))
76.0
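
Evaluating at a sequence of points returns one value per element; an illustrative sketch (expected output assumes NumPy-compatible evaluation):

>>> # illustrative: p(x) = x**2 - 2 at x = 1, 2, 3
>>> print(np.polyval([1., 0., -2.], [1., 2., 3.]))
[-1.  2.  7.]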
tinyms.positive(a, dtype=None)[source]

Numerical positive, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • a (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.asarray([1, -1]).astype('float32')
>>> output = np.positive(a)
>>> print(output)
[1. -1.]
tinyms.power(x1, x2, dtype=None)[source]

First array elements raised to powers from second array, element-wise.

Raises each base in x1 to the positionally-corresponding power in x2.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16, and np.float32.

Parameters:
  • x1 (Tensor) – The bases.

  • x2 (Tensor) – The exponents.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the bases in x1 raised to the exponents in x2. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.full((3, 2), [1, 2]).astype('float32')
>>> x2 = np.full((3, 2), [3, 4]).astype('float32')
>>> output = np.power(x1, x2)
>>> print(output)
[[ 1. 16.]
[ 1. 16.]
[ 1. 16.]]
tinyms.promote_types(type1, type2)[source]

Returns the data type with the smallest size and smallest scalar kind to which both type1 and type2 can be safely cast.

Note

The promotion rule is slightly different from original Numpy, but more like jax, due to the preference on 32-bit over 64-bit data types.

Parameters:
  • type1 (Union[mindspore.dtype, str]) – First data type.

  • type2 (Union[mindspore.dtype, str]) – Second data type.

Returns:

The promoted data type.

Raises:

TypeError – If the inputs are not valid mindspore.dtype inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.promote_types(np.float32, np.float64)
>>> print(output)
Float64
tinyms.ptp(x, axis=None, keepdims=False)[source]

Range of values (maximum - minimum) along an axis. The name of the function comes from the acronym for “peak to peak”.

Note

Numpy arguments dtype and out are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • axis (Union[None, int, tuple(int)]) – Axis or axes along which the range is computed. The default is to compute the range of the flattened array. Default: None.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor. If the default value is passed, then keepdims will not be passed through to the ptp method of sub-classes of tensor, however any non-default value will be. Default is False.

Returns:

Tensor.

Raises:

TypeError – If inputs have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([[4.0, 9.0, 2.0, 10.0], [6.0, 9.0, 7.0, 12.0]])
>>> print(np.ptp(x, axis=1))
[8. 6.]
>>> print(np.ptp(x, axis=0))
[2. 0. 5. 2.]
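
With keepdims=True the reduced axis is kept as size one, so the result broadcasts against x; an illustrative sketch:

>>> # illustrative: same ranges as above, with the axis retained
>>> print(np.ptp(x, axis=1, keepdims=True))
[[8.]
 [6.]]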
tinyms.rad2deg(x, dtype=None)[source]

Converts angles from radians to degrees.

Parameters:
  • x (Tensor) – Angles in radians.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, the corresponding angle in degrees. This is a tensor scalar if x is a tensor scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, -5])
>>> output = np.rad2deg(x)
>>> print(output)
[  57.295776  114.59155   171.88733  -229.1831   -286.47888 ]
tinyms.radians(x, dtype=None)[source]

Converts angles from degrees to radians.

Parameters:
  • x (Tensor) – Angles in degrees.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, the corresponding radian values. This is a tensor scalar if x is a tensor scalar.

Raises:

TypeError – If x is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([1, 2, 3, -4, -5])
>>> output = np.radians(x)
>>> print(output)
[ 0.01745329  0.03490658  0.05235988 -0.06981317 -0.08726647]
tinyms.rand(*shape, dtype=mindspore.float32)[source]

Returns a new Tensor with given shape and dtype, filled with random numbers from the uniform distribution on the interval \([0, 1)\).

Parameters:
  • *shape (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g., \((2, 3)\) or \(2\).

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, it must be float type. Default is mindspore.float32.

Returns:

Tensor, with the designated shape and dtype, filled with random numbers from the uniform distribution on the interval \([0, 1)\).

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If dtype is not float type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import set_seed
>>> set_seed(1)
>>> print(np.rand((2,3)))
[[4.1702199e-01 9.9718481e-01 7.2032452e-01]
[9.3255734e-01 1.1438108e-04 1.2812445e-01]]
tinyms.randint(minval, maxval=None, shape=None, dtype=mindspore.int32)[source]

Returns random integers from minval (inclusive) to maxval (exclusive), i.e. from the discrete uniform distribution of the specified dtype in the “half-open” interval \([minval, maxval)\). If maxval is None (the default), the value range will be \([0, minval)\); in this case, minval must be greater than 0.

Parameters:
  • minval (Union[int]) – Start value of interval. The interval includes this value. When maxval is None, minval must be greater than 0. When maxval is not None, minval must be less than maxval.

  • maxval (Union[int], optional) – End value of interval. The interval does not include this value.

  • shape (Union[int, tuple(int)]) – Shape of the new tensor, e.g., \((2, 3)\) or \(2\).

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, it must be int type. Default is mindspore.int32.

Returns:

Tensor, with the designated shape and dtype, filled with random integers from minval (inclusive) to maxval (exclusive).

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If input arguments have values not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import set_seed
>>> set_seed(1)
>>> print(np.randint(1, 10, (2,3)))
[[4 9 7]
[9 1 2]]
tinyms.randn(*shape, dtype=mindspore.float32)[source]

Returns a new Tensor with given shape and dtype, filled with a sample (or samples) from the standard normal distribution.

Parameters:
  • *shape (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g., \((2, 3)\) or \(2\).

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype, it must be float type. Default is mindspore.float32.

Returns:

Tensor, with the designated shape and dtype, filled with a sample (or samples) from the “standard normal” distribution.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If dtype is not float type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import set_seed
>>> set_seed(1)
>>> print(np.randn((2,3)))
[[ 0.30639967 -0.42438635 -0.20454668]
[-0.4287376   1.3054721   0.64747655]]
tinyms.ravel(x)[source]

Returns a contiguous flattened tensor.

A 1-D tensor, containing the elements of the input, is returned.

Parameters:

x (Tensor) – A tensor to be flattened.

Returns:

Flattened tensor, has the same data type as the original tensor x.

Raises:

TypeError – If x is not tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.ravel(x)
>>> print(output.shape)
(24,)
tinyms.ravel_multi_index(multi_index, dims, mode='clip', order='C')[source]

Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index.

Note

The ‘raise’ mode is not supported; the default mode is ‘clip’.

Parameters:
  • multi_index (tuple of array_like) – A tuple of integer arrays, one array for each dimension.

  • dims (Union[int, tuple of integers]) – The shape of array into which the indices from multi_index apply.

  • mode ({wrap, clip}) –

    Specifies how out-of-bounds indices are handled. Default: clip.

    • wrap: wrap around

    • clip: clip to the range

    In clip mode, a negative index which would normally wrap will clip to 0 instead.

  • order ({C, F}) – Determines whether the multi-index should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order.

Returns:

Raveled_indices array. An array of indices into the flattened version of an array of dimensions dims.

Raises:
  • TypeError – If multi_index or dims cannot be converted to tensor, or if dims is not a sequence of integer values.

  • ValueError – If the length of multi_index and that of dims are not equal.

Supported Platforms:

GPU

Examples

>>> import mindspore.numpy as np
>>> arr = np.array([[3, 6, 6], [4, 5, 1]])
>>> output = np.ravel_multi_index(arr, (7, 6))
>>> print(output)
[22. 41. 37.]
>>> output = np.ravel_multi_index((3, 1, 4, 1), (6, 7, 8, 9))
>>> print(output)
1621.0
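
In ‘wrap’ mode, an out-of-bounds index wraps around its dimension instead of clipping; a hedged sketch (row index 8 wraps to 1 in a 7-row array):

>>> # illustrative: (3, 4) -> 22 and (8 % 7, 5) = (1, 5) -> 11
>>> output = np.ravel_multi_index(([3, 8], [4, 5]), (7, 6), mode='wrap')
>>> print(output)
[22. 11.]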
tinyms.reciprocal(x, dtype=None)[source]

Returns the reciprocal of the argument, element-wise.

Calculates 1/x.

Note

Numpy arguments casting, order, subok, signature, and extobj are not supported. When where is provided, out must have a tensor value. out is not supported for storing the result, however it can be used in combination with where to set the value at indices for which where is set to False.

Parameters:
  • x (Tensor) – Input array. For integer arguments with absolute value larger than 1 the result is always zero because of the way Python handles integer division. For integer zero the result is an overflow.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, this is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(1, 7).reshape(2, 3).astype('float32')
>>> output = np.reciprocal(x)
>>> print(output)
[[1.         0.5        0.33333334]
[0.25       0.2        0.16666667]]
tinyms.remainder(x1, x2, dtype=None)[source]

Returns element-wise remainder of division.

Computes the remainder complementary to the floor_divide function. It is equivalent to the Python modulus operator x1 % x2 and has the same sign as the divisor x2. The MATLAB function equivalent to np.remainder is mod.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – input array.

  • x2 (Tensor) – input array.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the element-wise remainder of the quotient floor_divide(x1, x2). This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.remainder(np.array([4, 7]), np.array([2, 3]))
>>> print(output)
[0 1]
>>> output = np.remainder(np.arange(7), np.array(5))
>>> print(output)
[0 1 2 3 4 0 1]
tinyms.repeat(a, repeats, axis=None)[source]

Repeats elements of an array.

Parameters:
  • a (Tensor) – Input array.

  • repeats (int or sequence of ints) – The number of repetitions for each element. repeats is broadcast to fit the shape of the given axis.

  • axis (int, optional) – The axis along which to repeat values. By default, use the flattened input array, and return a flat output array. Defaults to None.

Returns:

Tensor, output array which has the same shape as a, except along the given axis.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.repeat(np.array(3), 4)
>>> print(output)
[3 3 3 3]
>>> x = np.array([[1,2],[3,4]])
>>> output = np.repeat(x, 2)
>>> print(output)
[1 1 2 2 3 3 4 4]
>>> output = np.repeat(x, 3, axis=1)
>>> print(output)
[[1 1 1 2 2 2]
[3 3 3 4 4 4]]
>>> output = np.repeat(x, [1, 2], axis=0)
>>> print(output)
[[1 2]
[3 4]
[3 4]]
tinyms.reshape(x, new_shape)[source]

Reshapes a tensor without changing its data.

Parameters:
  • x (Tensor) – A tensor to be reshaped.

  • new_shape (Union[int, list(int), tuple(int)]) – The new shape should be compatible with the original shape. If the tuple has only one element, the result will be a 1-D tensor of that length. One shape dimension can be \(-1\). In this case, the value is inferred from the length of the tensor and remaining dimensions.

Returns:

Reshaped Tensor. Has the same data type as the original tensor x.

Raises:
  • TypeError – If new_shape is not integer, list or tuple, or x is not tensor.

  • ValueError – If new_shape is not compatible with the original shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.asarray([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> output = np.reshape(x, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
>>> output = np.reshape(x, (3, -1))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
>>> output = np.reshape(x, (6, ))
>>> print(output)
[-0.1  0.3  3.6  0.4  0.5 -3.2]
tinyms.result_type(*arrays_and_dtypes)[source]

Returns the type that results from applying the type promotion rules to the arguments.

Note

The promotion rule is slightly different from original Numpy, but more like jax, due to the preference on 32-bit over 64-bit data types. Complex dtypes are not supported.

Parameters:

*arrays_and_dtypes (Union[int, float, bool, list, tuple, Tensor, mindspore.dtype, str]) – The operands of some operation whose result type is needed.

Returns:

mindspore.dtype, the result type.

Raises:

TypeError – If the input is not a valid data type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.result_type('i2', np.float32, True))
Float32
tinyms.rint(x, dtype=None)[source]

Rounds elements of the array to the nearest integer.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Ascend does not support dtype float64 currently.

Parameters:
  • x (Union[float, list, tuple, Tensor]) – Input tensor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Output tensor has the same shape and type as x. This is a scalar if x is a scalar.

Raises:

TypeError – If x cannot be converted to tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([-1.7, -1.5, 0.2, 1.5, 1.7, 2.0])
>>> print(np.rint(x))
[-2. -2. 0. 2. 2. 2.]
tinyms.roll(a, shift, axis=None)[source]

Rolls a tensor along given axes.

Elements that roll beyond the last position are re-introduced at the first.

Parameters:
  • a (Tensor) – Input tensor.

  • shift (Union[int, tuple(int)]) – The number of places by which elements are shifted. If a tuple, then axis must be a tuple of the same size, and each of the given axes is shifted by the corresponding number. If shift is an int while axis is a tuple of integers, then the same value is used for all given axes.

  • axis (Union[int, tuple(int)], optional) – Axis or axes along which elements are shifted. By default, the array is flattened before shifting, after which the original shape is restored. Default: None.

Returns:

Tensor, with the same shape as a.

Supported Platforms:

Ascend GPU CPU

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If axis exceeds a.ndim, or shift and axis cannot broadcast.

Examples

>>> import mindspore.numpy as np
>>> a = np.reshape(np.arange(12), (3, 4))
>>> print(np.roll(a, [2,-3], [0,-1]))
[[ 7  4  5  6]
 [11  8  9 10]
 [ 3  0  1  2]]
tinyms.rollaxis(x, axis, start=0)[source]

Rolls the specified axis backwards, until it lies in the given position. The positions of the other axes do not change relative to one another.

Parameters:
  • x (Tensor) – A Tensor to be transposed.

  • axis (int) – The axis to be rolled.

  • start (int) –

    Default: 0. If \(start <= axis\), the axis is rolled back until it lies in this position (start). If \(start > axis\): the axis is rolled until it lies before this position (start). If \(start < 0\), the start will be normalized as a non-negative number (more details can be seen in the source code.)

Returns:

Transposed Tensor. Has the same data type as the original tensor x.

Supported Platforms:

Ascend GPU CPU

Raises:
  • TypeError – If axis or start is not integer, or x is not tensor.

  • ValueError – If axis is not in the range of \([-ndim, ndim-1]\) or start is not in the range of \([-ndim, ndim]\).

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.rollaxis(x, 0, 2)
>>> print(output.shape)
(3, 2, 4)
tinyms.rot90(a, k=1, axes=(0, 1))[source]

Rotates a tensor by 90 degrees in the plane specified by axes. Rotation direction is from the first towards the second axis.

Parameters:
  • a (Tensor) – Input tensor of two or more dimensions.

  • k (int) – Number of times the tensor is rotated by 90 degrees. Default: 1.

  • axes (Union[tuple(int), list(int)]) – The tensor is rotated in the plane defined by the axes. Default: (0, 1). Axes must be different and with the shape of (2,).

Returns:

Tensor.

Raises:
  • TypeError – If input a is not a Tensor, or the argument k is not an integer, or the argument axes is not a tuple or list of integers.

  • ValueError – If any axis is out of range or the length of axes is not 2.

Supported Platforms:

GPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(24).reshape((2, 3, 4))
>>> output = np.rot90(a)
>>> print(output)
[[[ 8  9 10 11]
  [20 21 22 23]]
 [[ 4  5  6  7]
  [16 17 18 19]]
 [[ 0  1  2  3]
  [12 13 14 15]]]
>>> output = np.rot90(a, 3, (1, 2))
>>> print(output)
[[[ 8  4  0]
  [ 9  5  1]
  [10  6  2]
  [11  7  3]]
 [[20 16 12]
  [21 17 13]
  [22 18 14]
  [23 19 15]]]
tinyms.searchsorted(a, v, side='left', sorter=None)[source]

Finds indices where elements should be inserted to maintain order. Finds the indices into a sorted array a such that, if the corresponding elements in v were inserted before the indices, the order of a would be preserved.

Parameters:
  • a (Union[list, tuple, Tensor]) – 1-D input array. If sorter is None, then it must be sorted in ascending order, otherwise sorter must be an array of indices that sort it.

  • v (Union[int, float, bool, list, tuple, Tensor]) – Values to insert into a.

  • side ('left', 'right', optional) – If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of a).

  • sorter (Union[int, float, bool, list, tuple, Tensor]) – 1-D optional array of integer indices that sort array a into ascending order. They are typically the result of argsort.

Returns:

Tensor, array of insertion points with the same shape as v.

Raises:

ValueError – If argument for side or sorter is invalid.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import numpy as np
>>> print(np.searchsorted([1,2,3,4,5], 3))
2
>>> print(np.searchsorted([1,2,3,4,5], 3, side='right'))
3
>>> print(np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]))
[0 5 1 2]
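
When a is not pre-sorted, sorter can supply the permutation (typically from argsort) that sorts it; an illustrative sketch:

>>> # illustrative: sorter reorders [3, 1, 2] to [1, 2, 3]
>>> print(np.searchsorted([3, 1, 2], 2, sorter=[1, 2, 0]))
1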
tinyms.select(condlist, choicelist, default=0)[source]

Returns an array drawn from elements in choicelist, depending on conditions.

Parameters:
  • condlist (Union[int, float, bool, list, tuple, Tensor]) – The list of conditions which determine from which array in choicelist the output elements are taken. When multiple conditions are satisfied, the first one encountered in condlist is used.

  • choicelist (Union[int, float, bool, list, tuple, Tensor]) – The list of arrays from which the output elements are taken. It has to be of the same length as condlist.

  • default (scalar, optional) – The element inserted in output when all conditions evaluate to False. Defaults to 0.

Returns:

Tensor, the output at position m is the m-th element of the array in choicelist where the m-th element of the corresponding array in condlist is True.

Raises:

ValueError – If len(condlist) != len(choicelist).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> condlist = [[True, True, True, False, False], [False, False, True, False, True]]
>>> choicelist = [[0, 1, 2, 3, 4], [0, 1, 4, 9, 16]]
>>> output = np.select(condlist, choicelist)
>>> print(output)
[ 0  1  2  0 16]
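
The default value fills positions where every condition is False; a sketch reusing the arrays above (expected output shown for illustration):

>>> # illustrative: position 3 satisfies no condition
>>> output = np.select(condlist, choicelist, default=-1)
>>> print(output)
[ 0  1  2 -1 16]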
tinyms.sign(x, dtype=None)[source]

Returns an element-wise indication of the sign of a number.

The sign function returns -1 if x < 0, 0 if x == 0, 1 if x > 0. nan is returned for nan inputs.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. Complex inputs are not supported now. On Ascend, integer inputs are not supported.

Parameters:
  • x (Union[int, float, list, tuple, Tensor]) – Input values.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

The sign of x. This is a tensor or a scalar when x is a scalar.

Raises:

TypeError – If dtype of the input is not in the given types, or the input cannot be converted to tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.sign(np.array([-1., 0., 1., 1.2]))
>>> print(output)
[-1.  0.  1.  1.]
tinyms.signbit(x, dtype=None)[source]

Returns element-wise True where signbit is set (less than zero).

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Union[int, float, bool, list, tuple, Tensor]) – The input value(s).

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor.

Raises:

TypeError – If input is not array_like or dtype is not None or bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1, -2.3, 2.1]).astype('float32')
>>> output = np.signbit(x)
>>> print(output)
[False  True False]
tinyms.sin(x, dtype=None)[source]

Trigonometric sine, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([-5, -1, 0, 2, 4, 100]).astype('float32')
>>> output = np.sin(x)
>>> print(output)
[ 0.9589243  -0.84147096  0.   0.9092974  -0.7568025  -0.50636566]
tinyms.sinh(x, dtype=None)[source]

Hyperbolic sine, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(5).astype('float32')
>>> print(np.sinh(x))
[ 0.         1.1752012  3.6268604 10.017875  27.289917 ]
tinyms.size(a, axis=None)[source]

Returns the number of elements along a given axis.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input data.

  • axis (int) – Axis along which the elements are counted. Default: None. If None, give the total number of elements.

Returns:

Number of elements along the specified axis.

Supported Platforms:

Ascend GPU CPU

Raises:
  • TypeError – If input is not array_like or axis is not int.

  • ValueError – If any axis is out of range or duplicate axes exist.

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(10).reshape(2, 5).astype('float32')
>>> print(np.size(x))
10
>>> print(np.size(x, axis=1))
5
tinyms.sometrue(a, axis=None, keepdims=False)[source]

Tests whether any array element along a given axis evaluates to True.

Returns a single boolean unless axis is not None.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Input tensor or object that can be converted to an array.

  • axis (Union[None, int, tuple(int)]) – Axis or axes along which a logical OR reduction is performed. Default: None. If None, perform a logical OR over all the dimensions of the input array. If negative, it counts from the last to the first axis. If tuple of integers, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.

  • keepdims (bool) – Default: False. If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the any method of sub-classes of ndarray, however any non-default value will be. If the sub-class method does not implement keepdims any exceptions will be raised.

Returns:

Returns a single boolean unless axis is not None.

Raises:
  • TypeError – If input is not array_like, or axis is not int or tuple of integers, or keepdims is not a bool.

  • ValueError – If any axis is out of range or duplicate axes exist.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([1, -2.3, 2.1]).astype('float32')
>>> output = np.sometrue(x)
>>> print(output)
True
tinyms.split(x, indices_or_sections, axis=0)[source]

Splits a tensor into multiple sub-tensors along the given axis.

Parameters:
  • x (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – If integer, \(N\), the tensor will be divided into \(N\) equal tensors along axis. If a tuple(int) or list(int) of sorted integers, the entries indicate where along axis the array is split. For example, \([2, 3]\) would, for \(axis=0\), result in three sub-tensors \(x[:2]\), \(x[2:3]\) and \(x[3:]\). If an index exceeds the dimension of the array along axis, an empty sub-array is returned correspondingly.

  • axis (int) – The axis along which to split. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument indices_or_sections is not integer, tuple(int) or list(int) or argument axis is not integer.

  • ValueError – If argument axis is out of range of \([-x.ndim, x.ndim)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.arange(9).astype("float32")
>>> output = np.split(input_x, 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32,
  value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
 Tensor(shape=[3], dtype=Float32,
  value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
 Tensor(shape=[3], dtype=Float32,
  value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
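
Passing sorted indices splits at those positions instead of into equal parts; an illustrative sketch printing only the resulting shapes:

>>> # illustrative: split into x[:2], x[2:7] and x[7:]
>>> output = np.split(input_x, [2, 7])
>>> print([t.shape for t in output])
[(2,), (5,), (2,)]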
tinyms.sqrt(x, dtype=None)[source]

Returns the non-negative square-root of an array, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • x (Tensor) – The values whose square-roots are required.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, an array of the same shape as x, containing the positive square-root of each element in x. For negative elements, nan is returned. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(6).reshape(2, 3).astype('float32')
>>> x_squared = np.square(x)
>>> output = np.sqrt(x_squared)
>>> print(output)
[[ 0. 1. 2.]
[ 3. 4. 5.]]
tinyms.square(x, dtype=None)[source]

Returns the element-wise square of the input.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported. On GPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • x (Tensor) – Input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, element-wise x*x, of the same shape and dtype as x. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.square(np.arange(6).reshape(2, 3).astype('float32'))
>>> print(x)
[[ 0.  1.  4.]
[ 9. 16. 25.]]
tinyms.squeeze(a, axis=None)[source]

Removes single-dimensional entries from the shape of a tensor.

Parameters:
  • a (Tensor) – Input tensor array.

  • axis (Union[None, int, list(int), tuple(int)], optional) – The axis or axes to squeeze. Default: None, removes all entries of length \(1\).
Returns:

Tensor, with all or a subset of the dimensions of length \(1\) removed.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If specified axis has shape entry \(> 1\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((1,2,2,1))
>>> x = np.squeeze(x)
>>> print(x.shape)
(2, 2)
tinyms.stack(arrays, axis=0)[source]

Joins a sequence of arrays along a new axis.

The axis parameter specifies the index of the new axis in the dimensions of the result. For example, if axis=0 it will be the first dimension and if axis=-1 it will be the last dimension.

Note

Numpy argument out is not supported.

Parameters:
  • arrays (sequence of Tensor) – Each array must have the same shape.

  • axis (int, optional) – The axis in the result array along which the input arrays are stacked. Default: 0.

Returns:

Tensor, the stacked array; it has one more dimension than the input arrays.

Raises:

ValueError – If input is not Tensor, tuple, or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> arrays = [np.ones((3, 4)) for _ in range(10)]
>>> output = np.stack(arrays, axis=0)
>>> print(output.shape)
(10, 3, 4)
>>> output = np.stack(arrays, axis=1)
>>> print(output.shape)
(3, 10, 4)
>>> output = np.stack(arrays, axis=2)
>>> print(output.shape)
(3, 4, 10)
tinyms.std(x, axis=None, ddof=0, keepdims=False)[source]

Computes the standard deviation along the specified axis. The standard deviation is the square root of the average of the squared deviations from the mean, i.e., \(std = sqrt(mean(abs(x - x.mean())**2))\).

Returns the standard deviation, which is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype, out and where are not supported.

Parameters:
  • x (Tensor) – A Tensor to be calculated.

  • axis (Union[None, int, tuple(int)]) –

    Axis or axes along which the standard deviation is computed. Default: None.

    If None, compute the standard deviation of the flattened array.

  • ddof (int) – Means Delta Degrees of Freedom. The divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. Default: 0.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor. If the default value is passed, then keepdims will not be passed through to the std method of sub-classes of tensor, however any non-default value will be. If the sub-class method does not implement keepdims any exceptions will be raised. Default: False.

Returns:

Standard deviation tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.array([1., 2., 3., 4.])
>>> output = np.std(input_x)
>>> print(output)
1.118034
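
Setting ddof=1 yields the sample standard deviation with divisor \(N - 1\); a hedged sketch (the printed value is approximate):

>>> # illustrative: sqrt(5 / 3) for the same input
>>> output = np.std(input_x, ddof=1)
>>> print(output)
1.2909944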
tinyms.subtract(x1, x2, dtype=None)[source]

Subtracts arguments, element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – The input to be subtracted from.

  • x2 (Tensor) – The input to be subtracted by.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the difference of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.full((3, 2), [1, 2])
>>> x2 = np.full((3, 2), [3, 4])
>>> output = np.subtract(x1, x2)
>>> print(output)
[[-2 -2]
[-2 -2]
[-2 -2]]
tinyms.sum(a, axis=None, dtype=None, keepdims=False, initial=None)

Returns sum of array elements over a given axis.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • a (Union[int, float, bool, list, tuple, Tensor]) – Elements to sum.

  • axis (Union[None, int, tuple(int)]) – Axis or axes along which a sum is performed. Default: None. If None, sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of integers, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the sum method of sub-classes of ndarray, however any non-default value will be. If the sub-class method does not implement keepdims any exceptions will be raised. Default: False.

  • initial (scalar) – Starting value for the sum. If None, the sum starts from the first element of the reduction. Default: None.

Returns:

Tensor. An array with the same shape as a, with the specified axis removed. If a is a 0-d array, or if axis is None, a scalar is returned.

Raises:
  • TypeError – If input is not array_like or axis is not int or tuple of integers or keepdims is not integer or initial is not scalar.

  • ValueError – If any axis is out of range or duplicate axes exist.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.sum([0.5, 1.5]))
2.0
>>> x = np.arange(10).reshape(2, 5).astype('float32')
>>> print(np.sum(x, axis=1))
[10. 35.]
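
With keepdims=True the summed axis is kept as size one so the result broadcasts against the input; an illustrative sketch:

>>> # illustrative: same row sums as above, axis retained
>>> print(np.sum(x, axis=1, keepdims=True))
[[10.]
 [35.]]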
tinyms.swapaxes(x, axis1, axis2)[source]

Interchanges two axes of a tensor.

Parameters:
  • x (Tensor) – A tensor to be transposed.

  • axis1 (int) – First axis.

  • axis2 (int) – Second axis.

Returns:

Transposed tensor, has the same data type as the original tensor x.

Raises:
  • TypeError – If axis1 or axis2 is not integer, or x is not tensor.

  • ValueError – If axis1 or axis2 is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((2,3,4))
>>> output = np.swapaxes(x, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.take(a, indices, axis=None, mode='clip')[source]

Takes elements from an array along an axis.

When axis is not None, this function does the same thing as “fancy” indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis. A call such as np.take(arr, indices, axis=3) is equivalent to arr[:,:,:,indices,...].

Note

Numpy argument out is not supported. mode = 'raise' is not supported, and the default mode is ‘clip’ instead.

Parameters:
  • a (Tensor) – Source array with shape (Ni…, M, Nk…).

  • indices (Tensor) – The indices with shape (Nj…) of the values to extract.

  • axis (int, optional) – The axis over which to select values. By default, the flattened input array is used. Defaults to None.

  • mode ('raise', 'wrap', 'clip', optional) –

    Specifies how out-of-bounds indices will behave. Defaults to ‘clip’.

    • ‘raise’ – raise an error;

    • ‘wrap’ – wrap around;

    • ‘clip’ – clip to the range. ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers.

Returns:

Tensor, the indexed result.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([4, 3, 5, 7, 6, 8])
>>> indices = np.array([0, 1, 4])
>>> output = np.take(a, indices)
>>> print(output)
[4 3 6]
>>> indices = np.array([[0, 1], [2, 3]])
>>> output = np.take(a, indices)
>>> print(output)
[[4 3]
[5 7]]
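
With axis given, indices select whole slices along that axis; a hedged sketch:

>>> # illustrative: reorder the columns of a small matrix
>>> b = np.array([[1, 2], [3, 4]])
>>> print(np.take(b, np.array([1, 0]), axis=1))
[[2 1]
 [4 3]]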
tinyms.take_along_axis(arr, indices, axis)[source]

Takes values from the input array by matching 1d index and data slices.

This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to look up values in the latter. These slices can be different lengths.

Parameters:
  • arr (Tensor) – Source array with shape (Ni…, M, Nk…).

  • indices (Tensor) – Indices with shape (Ni…, J, Nk…) to take along each 1d slice of arr. This must match the dimension of arr, but dimensions Ni and Nk only need to broadcast against arr.

  • axis (int) – The axis to take 1d slices along. If axis is None, the input array is treated as if it had first been flattened to 1d.

Returns:

Tensor, the indexed result, with shape (Ni…, J, Nk…).

Raises:
  • ValueError – If input array and indices have different number of dimensions.

  • TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(12).reshape(3, 4)
>>> indices = np.arange(3).reshape(1, 3)
>>> output = np.take_along_axis(x, indices, 1)
>>> print(output)
[[ 0  1  2]
[ 4  5  6]
[ 8  9 10]]
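
indices may have a different length \(J\) along the chosen axis, e.g. picking one element per row; an illustrative sketch:

>>> # illustrative: take columns 3, 0 and 1 from rows 0, 1 and 2
>>> indices = np.array([[3], [0], [1]])
>>> print(np.take_along_axis(x, indices, 1))
[[3]
 [4]
 [9]]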
tinyms.tan(x, dtype=None)[source]

Computes tangent element-wise.

Equivalent to \(np.sin(x)/np.cos(x)\) element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Raises:

TypeError – If the input is not a tensor, or if the tensor’s dtype is mindspore.float64.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.array([-5, -1, 0, 2, 4, 100]).astype('float32')
>>> print(np.tan(x))
[ 3.380515   -1.5574077   0.         -2.1850398   1.1578213  -0.58721393]
tinyms.tanh(x, dtype=None)[source]

Computes hyperbolic tangent element-wise.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – Input tensor.

  • dtype (mindspore.dtype, optional) – Default: None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.arange(5).astype('float32')
>>> print(np.tanh(x))
[0.        0.7615942 0.9640276 0.9950548 0.9993293]
tinyms.tensordot(a, b, axes=2)[source]

Computes tensor dot product along specified axes.

Given two tensors, a and b, and an array_like object containing two array_like objects, (a_axes, b_axes), sum the products of a’s and b’s elements (components) over the axes specified by a_axes and b_axes. The third argument can be a single non-negative integer_like scalar, N; if it is such, then the last N dimensions of a and the first N dimensions of b are summed over. Three common use cases are:

  • axes = 0 : tensor product

  • axes = 1 : tensor dot product

  • axes = 2 : (default) tensor double contraction

When axes is integer_like, the sequence for evaluation will be: first the -Nth axis in a and 0th axis in b, and the -1th axis in a and Nth axis in b last. When there is more than one axis to sum over - and they are not the last (first) axes of a (b) - the argument axes should consist of two sequences of the same length, with the first axis to sum over given first in both sequences, the second axis second, and so forth. The shape of the result consists of the non-contracted axes of the first tensor, followed by the non-contracted axes of the second.

Note

On CPU and GPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • a (Tensor) – Tensor to “dot”.

  • b (Tensor) – Tensor to “dot”.

  • axes (int or sequence of ints) –

    integer_like: If an int N, sum over the last N axes of a and the first N axes of b in order. The sizes of the corresponding axes must match.

    sequence of ints: Or, a list of axes to be summed over, first sequence applying to a, second to b. Both elements array_like must be of the same length.

Returns:

Tensor, or list of tensors, the tensor dot product of the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((3, 4, 5))
>>> b = np.ones((4, 3, 2))
>>> output = np.tensordot(a, b, axes=([1,0],[0,1]))
>>> print(output.shape)
(5, 2)
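
With an integer argument, axes=1 contracts the last axis of a with the first axis of b, i.e. an ordinary matrix product; an illustrative sketch:

>>> # illustrative: (2, 3) x (3, 4) -> (2, 4)
>>> a = np.ones((2, 3))
>>> b = np.ones((3, 4))
>>> print(np.tensordot(a, b, axes=1).shape)
(2, 4)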
tinyms.tile(a, reps)[source]

Constructs an array by repeating a the number of times given by reps.

If reps has length d, the result will have dimension of max(d, a.ndim). If a.ndim < d, a is promoted to be d-dimensional by prepending new axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication, or shape (1, 1, 3) for 3-D replication. If this is not the desired behavior, promote a to d-dimensions manually before calling this function. If a.ndim > d, reps is promoted to a.ndim by pre-pending 1’s to it. Thus for an a of shape (2, 3, 4, 5), a reps of (2, 2) is treated as (1, 1, 2, 2).

Parameters:
  • a (Tensor) – The input array.

  • reps (int or sequence of ints) – The number of repetitions of a along each axis.

Returns:

Tensor, the tiled output array.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.array([0, 1, 2])
>>> output = np.tile(a, 2)
>>> print(output)
[0 1 2 0 1 2]
>>> output = np.tile(a, (2, 2))
>>> print(output)
[[0 1 2 0 1 2]
[0 1 2 0 1 2]]
>>> output = np.tile(a, (2, 1, 2))
>>> print(output)
[[[0 1 2 0 1 2]]
[[0 1 2 0 1 2]]]
tinyms.trace(a, offset=0, axis1=0, axis2=1, dtype=None)[source]

Returns the sum along diagonals of the array.

If a is 2-D, the sum along its diagonal with the given offset is returned, i.e., the sum of elements a[i,i+offset] for all i. If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-arrays whose traces are returned. The shape of the resulting array is the same as that of a with axis1 and axis2 removed.

Note

On GPU, the supported dtypes are np.float16, and np.float32. On CPU, the supported dtypes are np.float16, np.float32, and np.float64.

Parameters:
  • a (Tensor) – Array from which the diagonals are taken.

  • offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal.

  • axis1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

  • axis2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor, sum_along_diagonals. If a is 2-D, the sum along the diagonal is returned. If a has larger dimensions, then an array of sums along diagonals is returned.

Raises:

ValueError – If the input tensor has less than two dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.trace(np.eye(3))
>>> print(output)
3.0
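
A positive offset sums a diagonal above the main one (negative sums below); an illustrative sketch:

>>> # illustrative: a[0, 1] + a[1, 2] = 1 + 5
>>> a = np.arange(9).reshape(3, 3).astype('float32')
>>> print(np.trace(a, offset=1))
6.0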
tinyms.transpose(a, axes=None)[source]

Reverses or permutes the axes of a tensor; returns the modified tensor.

Parameters:
  • a (Tensor) – a tensor to be transposed

  • axes (Union[None, tuple, list]) – the axes order, if axes is None, transpose the entire tensor. Default is None.

Returns:

Tensor, the transposed tensor array.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If the number of axes is not equal to a.ndim.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x = np.ones((1,2,3))
>>> x = np.transpose(x)
>>> print(x.shape)
(3, 2, 1)
tinyms.trapz(y, x=None, dx=1.0, axis=-1)[source]

Integrates along the given axis using the composite trapezoidal rule.

Integrates y (x) along given axis.

Parameters:
  • y (Tensor) – Input array to integrate.

  • x (Union[int, float, bool, list, tuple, Tensor], optional) – The sample points corresponding to the y values. If x is None, the sample points are assumed to be evenly spaced dx apart. The default is None.

  • dx (scalar, optional) – The spacing between sample points when x is None. The default is 1.0.

  • axis (int, optional) – The axis along which to integrate. Defaults to -1.

Returns:

Tensor of float, definite integral as approximated by trapezoidal rule.

Raises:

ValueError – If axis is out of range of [-y.ndim, y.ndim).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> output = np.trapz(a,  x=[-2, 1, 2], axis=1)
>>> print(output)
[ 3. 15.]
>>> output = np.trapz(a,  dx=3, axis=0)
>>> print(output)
[ 4.5  7.5 10.5]
tinyms.tri(N, M=None, k=0, dtype=mindspore.float32)[source]

Returns a tensor with ones at and below the given diagonal and zeros elsewhere.

Parameters:
  • N (int) – Number of rows in the array.

  • M (int, optional) – Number of columns in the array. By default, M is taken equal to N.

  • k (int, optional) – The sub-diagonal at and below which the array is filled. \(k = 0\) is the main diagonal, while \(k < 0\) is below it, and \(k > 0\) is above. The default is 0.

  • dtype (mindspore.dtype, optional) – Data type of the returned array. The default is mstype.float32.

Returns:

Tensor with shape (N, M), with its lower triangle filled with ones and zeros elsewhere; in other words \(T[i,j] = 1\) for \(j <= i + k\), 0 otherwise.

Raises:

TypeError – If input arguments have types not specified above.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.tri(3, 3, 1)
>>> print(output)
[[1. 1. 0.]
[1. 1. 1.]
[1. 1. 1.]]
tinyms.tril(m, k=0)[source]

Returns a lower triangle of a tensor.

Returns a copy of a tensor with elements above the k-th diagonal zeroed.

Parameters:
  • m (Union[Tensor, list, tuple]) – The shape and data-type of m define these same attributes of the returned tensor.

  • k (int, optional) – Diagonal above which to zero elements. \(k = 0\) (the default) is the main diagonal, \(k < 0\) is below it and \(k > 0\) is above.

Returns:

Lower triangle of m, of same shape and data-type as m.

Supported Platforms:

Ascend GPU CPU

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If input m’s rank \(< 1\).

Examples

>>> import mindspore.numpy as np
>>> output = np.tril(np.ones((3, 3)))
>>> print(output)
[[1. 0. 0.]
[1. 1. 0.]
[1. 1. 1.]]
tinyms.tril_indices(n, k=0, m=None)[source]

Returns the indices for the lower-triangle of an (n, m) array.

Parameters:
  • n (int) – The size of the arrays for which the returned indices will be valid.

  • k (int, optional) – Diagonal offset, default is 0.

  • m (int, optional) – The column dimension of the arrays for which the returned arrays will be valid. By default m is taken equal to n.

Returns:

The indices for the triangle. The returned tuple contains two tensors, each with the indices along one dimension of the tensor.

Raises:

TypeError – If n, k, m are not numbers.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.tril_indices(3))
(Tensor(shape=[6], dtype=Int32, value= [0, 1, 1, 2, 2, 2]),
Tensor(shape=[6], dtype=Int32, value= [0, 0, 1, 0, 1, 2]))
tinyms.tril_indices_from(arr, k=0)[source]

Returns the indices for the lower-triangle of arr.

Parameters:
  • arr (Union[Tensor, list, tuple]) – 2-dimensional array.

  • k (int, optional) – Diagonal offset, default is 0.

Returns:

tril_indices_from, tuple of 2 tensors, shape(N). Indices for the lower-triangle of arr.

Raises:
  • TypeError – If arr cannot be converted to tensor, or k is not a number.

  • ValueError – If arr cannot be converted to a 2-dimensional tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> tensor = np.ones((3,3))
>>> print(np.tril_indices_from(tensor))
(Tensor(shape=[6], dtype=Int32, value= [0, 1, 1, 2, 2, 2]),
 Tensor(shape=[6], dtype=Int32, value= [0, 0, 1, 0, 1, 2]))
tinyms.triu(m, k=0)[source]

Returns an upper triangle of a tensor.

Returns a copy of a tensor with elements below the k-th diagonal zeroed.

Parameters:
  • m (Union[Tensor, list, tuple]) – The shape and data-type of m define these same attributes of the returned tensor.

  • k (int, optional) – Diagonal below which to zero elements. \(k = 0\) (the default) is the main diagonal, \(k < 0\) is below it and \(k > 0\) is above.

Returns:

Upper triangle of m, of same shape and data-type as m.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If input m’s rank < 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.triu(np.ones((3, 3)))
>>> print(output)
[[1. 1. 1.]
[0. 1. 1.]
[0. 0. 1.]]
tinyms.triu_indices(n, k=0, m=None)[source]

Returns the indices for the upper-triangle of an (n, m) array.

Parameters:
  • n (int) – The size of the arrays for which the returned indices will be valid.

  • k (int, optional) – Diagonal offset, default is 0.

  • m (int, optional) – The column dimension of the arrays for which the returned arrays will be valid. By default m is taken equal to n.

Returns:

The indices for the triangle. The returned tuple contains two tensors, each with the indices along one dimension of the tensor.

Raises:

TypeError – If n, k, m are not numbers.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.triu_indices(3))
(Tensor(shape=[6], dtype=Int32, value= [0, 0, 0, 1, 1, 2]),
 Tensor(shape=[6], dtype=Int32, value= [0, 1, 2, 1, 2, 2]))
tinyms.triu_indices_from(arr, k=0)[source]

Returns the indices for the upper-triangle of arr.

Parameters:
  • arr (Union[Tensor, list, tuple]) – 2-dimensional array.

  • k (int, optional) – Diagonal offset, default is 0.

Returns:

triu_indices_from, tuple of 2 tensors, shape(N). Indices for the upper-triangle of arr.

Raises:
  • TypeError – If arr cannot be converted to tensor, or k is not a number.

  • ValueError – If arr cannot be converted to a 2-dimensional tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> tensor = np.ones((3,3))
>>> print(np.triu_indices_from(tensor))
(Tensor(shape=[6], dtype=Int32, value= [0, 0, 0, 1, 1, 2]),
Tensor(shape=[6], dtype=Int32, value= [0, 1, 2, 1, 2, 2]))
tinyms.true_divide(x1, x2, dtype=None)[source]

Returns a true division of the inputs, element-wise.

Instead of the Python traditional “floor division”, this returns a true division.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x1 (Tensor) – the dividend.

  • x2 (Tensor) – the divisor.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, this is a scalar if both x1 and x2 are scalars.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.full((3, 2), [1, 2])
>>> x2 = np.full((3, 2), [3, 4])
>>> output = np.true_divide(x1, x2)
>>> print(output)
[[0.33333334 0.5       ]
[0.33333334 0.5       ]
[0.33333334 0.5       ]]
tinyms.trunc(x, dtype=None)[source]

Returns the truncated value of the input, element-wise.

The truncated value of the scalar x is the nearest integer i which is closer to zero than x is. In short, the fractional part of the signed number x is discarded.

Note

Numpy arguments out, where, casting, order, subok, signature, and extobj are not supported.

Parameters:
  • x (Tensor) – input data.

  • dtype (mindspore.dtype, optional) – Defaults to None. Overrides the dtype of the output Tensor.

Returns:

Tensor or scalar, the truncated value of each element in x. This is a scalar if x is a scalar.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> output = np.trunc(np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]))
>>> print(output)
[-1. -1. -0.  0.  1.  1.  2.]
tinyms.unique(x, return_inverse=False)[source]

Finds the unique elements of a tensor. The input tensor will be flattened first when it has more than one dimension.

Note

Numpy arguments axis, return_index and return_counts are not supported. On CPU, this operator must be executed in graph mode.

Parameters:
  • x (Tensor) – The input tensor to be processed.

  • return_inverse (bool) – If True, also return the indices of the unique tensor. Default: False.

Returns:

Tensor or tuple of Tensors. If return_inverse is False, return the unique tensor, otherwise return tuple of tensors.

Supported Platforms:

Ascend GPU CPU

Raises:

TypeError – If x is not tensor.

Examples

>>> import mindspore.numpy as np
>>> import mindspore as ms
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> input_x = np.asarray([1, 2, 2, 2, 3, 4, 5]).astype('int32')
>>> output_x = np.unique(input_x)
>>> print(output_x)
[1 2 3 4 5]
>>> output_x = np.unique(input_x, return_inverse=True)
>>> print(output_x)
(Tensor(shape=[5], dtype=Int32, value= [ 1, 2, 3, 4, 5]), Tensor(shape=[7], dtype=Int32,
    value= [0, 1, 1, 1, 2, 3, 4]))
tinyms.unravel_index(indices, shape, order='C')[source]

Converts a flat index or array of flat indices into a tuple of coordinate arrays.

Note

Out-of-bound indices are clipped by the boundaries of shape instead of raising an error.

Parameters:
  • indices (Union[int, float, bool, list, tuple, Tensor]) – An integer array whose elements are indices into the flattened version of an array of dimensions shape.

  • shape (tuple of integers) – The shape of the array to use for unraveling indices.

  • order (Union['C', 'F'], optional) – Determines whether the indices should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order. Defaults to “C”.

Returns:

Tensor, each array in the tuple has the same shape as the indices array.

Supported Platforms:

Ascend GPU CPU

Raises:

ValueError – If order is not ‘C’ or ‘F’.

Examples

>>> import mindspore.numpy as np
>>> print(np.unravel_index([22, 41, 37], (7,6)))
(Tensor(shape=[3], dtype=Int32, value= [3, 6, 6]),
Tensor(shape=[3], dtype=Int32, value= [4, 5, 1]))
>>> print(np.unravel_index([31, 41, 13], (7,6), order='F'))
(Tensor(shape=[3], dtype=Int32, value= [3, 6, 6]),
Tensor(shape=[3], dtype=Int32, value= [4, 5, 1]))
tinyms.unwrap(p, discont=3.141592653589793, axis=-1)[source]

Unwraps by changing deltas between values to 2*pi complement. Unwraps radian phase p by changing absolute jumps greater than discont to their 2*pi complement along the given axis.

Note

For absolute jumps that are within a very close range to pi, unwrapping may be done differently than numpy due to differences in round-off.

Parameters:
  • p (Union[int, float, bool, list, tuple, Tensor]) – Input array.

  • discont (float, optional) – Maximum discontinuity between values, default is pi.

  • axis (int, optional) – Axis along which unwrap will operate, default is -1.

Returns:

Tensor.

Raises:

ValueError – If the axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> phase = np.add(np.linspace(0, np.pi, num=5), [0, 0, 0, np.pi, np.pi])
>>> print(phase)
[0.        0.7853982 1.5707964 5.4977875 6.2831855]
>>> print(np.unwrap(phase))
[ 0.0000000e+00  7.8539819e-01  1.5707964e+00 -7.8539848e-01 -4.7683716e-07]
tinyms.vander(x, N=None, increasing=False)[source]

Generates a Vandermonde matrix.

The columns of the output matrix are powers of the input vector. The order of the powers is determined by the increasing boolean argument. Specifically, when increasing is False, the i-th output column is the input vector raised element-wise to the power of \(N - i - 1\). Such a matrix with a geometric progression in each row is named for Alexandre-Theophile Vandermonde.

Parameters:
  • x (Union[list, tuple, Tensor]) – 1-D input array.

  • N (int, optional) – Number of columns in the output. If N is not specified, a square array is returned (N = len(x)).

  • increasing (bool, optional) – Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed.

Returns:

Vandermonde matrix. If increasing is False, the first column is \(x^{(N-1)}\), the second \(x^{(N-2)}\) and so forth. If increasing is True, the columns are \(x^0, x^1, ..., x^{(N-1)}\).

Raises:
  • TypeError – If inputs have types not specified above.

  • ValueError – If x is not 1-D, or N < 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.vander([1., 2., 3., 4., 5.]))
[[  1.   1.   1.   1.   1.]
 [ 16.   8.   4.   2.   1.]
 [ 81.  27.   9.   3.   1.]
 [256.  64.  16.   4.   1.]
 [625. 125.  25.   5.   1.]]
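
For comparison, a minimal sketch with increasing=True and an explicit N; the expected output follows from the column definition above, though exact float formatting may vary:

>>> print(np.vander([1., 2., 3.], N=4, increasing=True))
[[ 1.  1.  1.  1.]
 [ 1.  2.  4.  8.]
 [ 1.  3.  9. 27.]]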
tinyms.var(x, axis=None, ddof=0, keepdims=False)[source]

Computes the variance along the specified axis. The variance is the average of the squared deviations from the mean, i.e., \(var = mean(abs(x - x.mean())**2)\).

Returns the variance, which is computed for the flattened array by default, otherwise over the specified axis.

Note

Numpy arguments dtype, out and where are not supported.

Parameters:
  • x (Tensor) – A Tensor to be calculated.

  • axis (Union[None, int, tuple(int)]) – Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. Default: None.

  • ddof (int) – Means Delta Degrees of Freedom. Default: 0. The divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor. If the default value is passed, then keepdims will not be passed through to the var method of sub-classes of tensor, however any non-default value will be. If the sub-class method does not implement keepdims any exceptions will be raised. Default: False.

Supported Platforms:

Ascend GPU CPU

Returns:

Variance tensor.

Examples

>>> import mindspore.numpy as np
>>> input_x = np.array([1., 2., 3., 4.])
>>> output = np.var(input_x)
>>> print(output)
1.25
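
As a sketch of the ddof parameter described above, ddof=1 switches the divisor from \(N\) to \(N - ddof\) and yields the sample variance; the expected output below is computed by hand and float formatting may vary:

>>> output = np.var(input_x, ddof=1)
>>> print(output)
1.6666666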
tinyms.vsplit(x, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors vertically (row-wise). It is equivalent to split with \(axis=0\) (default), the array is always split along the first axis regardless of the array dimension.

Parameters:
  • x (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – If integer, \(N\), the tensor will be divided into \(N\) equal tensors along the axis. If a tuple(int) or list(int) of sorted integers, the entries indicate where along the axis the array is split. For example, \([2, 3]\) would, for \(axis=0\), result in three sub-tensors \(x[:2]\), \(x[2:3]\) and \(x[3:]\). If an index exceeds the dimension of the array along the axis, an empty sub-array is returned correspondingly.

Returns:

A list of sub-tensors.

Raises:

TypeError – If argument indices_or_sections is not integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> input_x = np.arange(9).reshape((3, 3)).astype('float32')
>>> output = np.vsplit(input_x, 3)
>>> print(output)
(Tensor(shape=[1, 3], dtype=Float32,
  value=[[ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]]),
 Tensor(shape=[1, 3], dtype=Float32,
  value=[[ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]]),
 Tensor(shape=[1, 3], dtype=Float32,
  value=[[ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]]))
tinyms.vstack(tup)[source]

Stacks tensors in sequence vertically. This is equivalent to concatenation along the first axis. 1-D tensors are first reshaped to (1, N) and then concatenated along the first axis.

Parameters:

tup (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. The tensors must have the same shape along all but the first axis. 1-D tensors must have the same shape.

Returns:

Stacked Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> x1 = np.array([1, 2, 3]).astype('int32')
>>> x2 = np.array([4, 5, 6]).astype('int32')
>>> output = np.vstack((x1, x2))
>>> print(output)
[[1 2 3]
 [4 5 6]]
tinyms.where(condition, x=None, y=None)[source]

Returns elements chosen from x or y depending on condition.

Note

As nonzero is not supported, both x and y must be provided as Tensor input.

Parameters:
  • condition (Tensor) – where True, yield x, otherwise yield y.

  • x (Tensor) – Values from which to choose. Defaults to None.

  • y (Tensor) – Values from which to choose. x, y and condition need to be broadcastable to some shape. Defaults to None.

Returns:

Tensor or scalar, with elements from x where condition is True, and elements from y elsewhere.

Raises:

ValueError – If operands cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> condition = np.full((1, 1, 2), [False, True])
>>> x = np.full((1, 3, 2), 5)
>>> y = np.full((2, 1, 1), 7)
>>> output = np.where(condition, x, y)
>>> print(output)
[[[7 5]
[7 5]
[7 5]]
[[7 5]
[7 5]
[7 5]]]
tinyms.zeros(shape, dtype=mindspore.float32)[source]

Returns a new tensor of given shape and type, filled with zeros.

Parameters:
  • shape (Union[int, tuple, list]) – the shape of the new tensor.

  • dtype (Union[mindspore.dtype, str], optional) – Designated tensor dtype. Default is mstype.float32.

Returns:

Tensor, with the designated shape and dtype, filled with zeros.

Raises:
  • TypeError – If input arguments have types not specified above.

  • ValueError – If shape entries have values \(< 0\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> print(np.zeros((2,2)))
[[0. 0.]
[0. 0.]]
tinyms.zeros_like(a, dtype=None, shape=None)[source]

Returns an array of zeros with the same shape and type as a given array.

Note

Input array must have the same size across a dimension. If a is not a Tensor, dtype is float32 by default if not provided.

Parameters:
  • a (Union[Tensor, list, tuple]) – The shape and data-type of a define these same attributes of the returned array.

  • dtype (mindspore.dtype, optional) – Overrides the data type of the result.

  • shape (int or sequence of ints, optional) – Overrides the shape of the result.

Returns:

Tensor, array of zeros with the same shape and type as a.

Raises:

ValueError – If a is not a Tensor, list or tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.ones((4,1,2))
>>> output = np.zeros_like(a)
>>> print(output)
[[[0. 0.]]
[[0. 0.]]
[[0. 0.]]
[[0. 0.]]]
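
As a sketch of the optional overrides described above, dtype and shape replace the attributes inherited from a; the expected output is shown below, though formatting may vary:

>>> output = np.zeros_like(a, shape=(2, 2))
>>> print(output)
[[0. 0.]
 [0. 0.]]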

tinyms.context

The context of TinyMS, used to configure the current execution environment, including the execution mode, execution backend and other feature switches.

tinyms.context.set_context(**kwargs)[source]

Set context for running environment.

Context should be configured before running your program. If there is no configuration, it will be automatically set according to the device target by default.

Note

The attribute name is required when setting attributes. Changing the mode after the network has been initialized is not recommended, because the implementations of some operations differ between graph mode and pynative mode. Default: PYNATIVE_MODE.

Some configurations are device specific, see the below table for details:

Function Classification    Configuration Parameters       Hardware Platform Support
System Configuration       device_id                      CPU/GPU/Ascend
                           device_target                  CPU/GPU/Ascend
                           max_device_memory              GPU/Ascend
                           variable_memory_max_size       Ascend
                           mempool_block_size             GPU/Ascend
                           op_timeout                     Ascend
Debug Configuration        save_graphs                    CPU/GPU/Ascend
                           save_graphs_path               CPU/GPU/Ascend
                           enable_dump                    Ascend
                           save_dump_path                 Ascend
                           deterministic                  Ascend
                           print_file_path                Ascend
                           env_config_path                CPU/GPU/Ascend
                           precompile_only                CPU/GPU/Ascend
                           reserve_class_name_in_scope    CPU/GPU/Ascend
                           pynative_synchronize           GPU/Ascend
Executive Control          mode                           CPU/GPU/Ascend
                           enable_graph_kernel            Ascend/GPU
                           graph_kernel_flags             Ascend/GPU
                           enable_reduce_precision        Ascend
                           auto_tune_mode                 Ascend
                           check_bprop                    CPU/GPU/Ascend
                           max_call_depth                 CPU/GPU/Ascend
                           grad_for_scalar                CPU/GPU/Ascend
                           enable_compile_cache           CPU/GPU/Ascend
                           inter_op_parallel_num          CPU/GPU/Ascend
                           runtime_num_threads            CPU/GPU/Ascend
                           compile_cache_path             CPU/GPU/Ascend
                           disable_format_transform       GPU
                           support_binary                 CPU/GPU/Ascend
                           memory_optimize_level          CPU/GPU/Ascend
                           memory_offload                 GPU/Ascend
                           ascend_config                  Ascend

Parameters:
  • device_id (int) – ID of the target device, the value must be in [0, device_num_per_host-1], while device_num_per_host should be no more than 4096. Default: 0.

  • device_target (str) – The target device to run, support “Ascend”, “GPU”, and “CPU”. If the device target is not set, it is determined by the version of the MindSpore package installed.

  • max_device_memory (str) – Set the maximum memory available for devices. The format is “xxGB”. Default: “1024GB”. The actual used memory size is the minimum of the available memory of the device and max_device_memory.

  • variable_memory_max_size (str) – This parameter is deprecated, and will be removed in a future version. Please use parameter ‘max_device_memory’ instead.

  • mempool_block_size (str) – Set the size of the memory pool block in PyNative mode for devices. The format is “xxGB”. Default: “1GB”. Minimum size is “1G”. The actual used memory block size is the minimum of the available memory of the device and mempool_block_size.

  • op_timeout (int) – Set the maximum duration of executing an operator in seconds. If the execution time exceeds this value, the system will terminate the task. 0 means endless wait. Default: 1900.

  • save_graphs (bool or int) –

    Whether to save intermediate compilation graphs. Default: 0. Available values are:

    • False or 0: disable saving of intermediate compilation graphs.

    • 1: some intermediate files will be generated during graph compilation.

    • True or 2: Generate more ir files related to backend process.

    • 3: Generate visualization computing graphs and detailed frontend ir graphs.

    When the save_graphs attribute is set to True, 1, 2 or 3, the save_graphs_path attribute is used to set the intermediate compilation graph storage path. By default, the graphs are saved in the current directory.

  • save_graphs_path (str) – Path to save graphs. Default: “.”. If the specified directory does not exist, the system will automatically create the directory. During distributed training, graphs will be saved to the directory of save_graphs_path/rank_${rank_id}/. rank_id is the ID of the current device in the cluster.

  • deterministic (str) –

    Whether to enable op run in deterministic mode. The value must be in the range of [‘ON’, ‘OFF’], and the default value is ‘OFF’.

    • ”ON”: Enable operator deterministic running mode.

    • ”OFF”: Disable operator deterministic running mode.

    When deterministic mode is on, model ops will be deterministic on Ascend. This means that if an op is run multiple times with the same inputs on the same hardware, it will have exactly the same outputs each time. This is useful for debugging models.

  • enable_dump (bool) – This parameter is deprecated and will be removed in a future version.

  • save_dump_path (str) – This parameter is deprecated and will be removed in a future version.

  • print_file_path (str) – The path for saving print data. If this parameter is set, print data is saved to the given file; if it is not set, the data is displayed on the screen. If the target file already exists, a timestamp suffix is appended to it. Saving data to a file solves the problem of data loss in screen printing when a large amount of data is generated. The path should be an absolute path; otherwise, an error prompting to set an absolute path is reported.

  • env_config_path (str) –

    Config path for DFX. RDR can be configured through mindspore.set_context(env_config_path=”./mindspore_config.json”).

    Configure RDR:

    • enable: controls whether the RDR is enabled to collect the key data during training and save key data in the fault scenario. When set to true, the RDR will be turned on. When set to false, the RDR will be turned off.

    • mode: sets the mode of RDR on exporting data. When set to 1, the RDR only exports data in the fault scenario. When set to 2, the RDR exports data in the fault scenario and the normal end scenario. Default: 1.

    • path: sets the path where RDR saves data. The current path must be absolute.

    Memory reuse:

    • mem_Reuse: controls whether the memory reuse function is turned on. When set to True, the memory reuse function is turned on. When set to False, the memory reuse function is turned off.

  • precompile_only (bool) – Whether to only precompile the network. Default: False. If set to True, the network will only be compiled, not executed.

  • reserve_class_name_in_scope (bool) –

    Whether to save the network class name in the scope. Default: True. Each node has a scope. A scope of a subnode is the name of its parent node. If reserve_class_name_in_scope is set to True, the class name will be saved after keyword ‘net-’ in the scope. For example:

    Default/net-Net1/net-Net2 (reserve_class_name_in_scope=True)

    Default/net/net (reserve_class_name_in_scope=False)

  • pynative_synchronize (bool) – Whether to enable synchronous execution of the device in PyNative mode. Default: False. When the value is set to False, the operator is executed asynchronously on the device; if an error occurs during execution, the specific location of the error script code cannot be located. When the value is set to True, the operator is executed synchronously on the device, which reduces the execution performance of the program; in return, the location of the error script code can be determined from the call stack when an error occurs.

  • mode (int) – Running in GRAPH_MODE(0) or PYNATIVE_MODE(1). Both modes support all backends. Default: PYNATIVE_MODE.

  • enable_graph_kernel (bool) – Whether to enable graph kernel fusion to optimize network execution performance. Default: False. If enable_graph_kernel is set to True, acceleration can be enabled. For details of graph kernel fusion, please check Enabling Graph Kernel Fusion.

  • graph_kernel_flags (str) –

    Optimization options of graph kernel fusion; they take precedence over enable_graph_kernel when the two conflict. Only for experienced users. For example, mindspore.set_context(graph_kernel_flags=”–opt_level=2 –dump_as_text”). Some general options:

    • opt_level: Set the optimization level. Default: 2. Graph kernel fusion can be enabled equivalently by setting opt_level greater than 0. Available values are:

      • 0: disables graph kernel fusion;

      • 1: enables the basic fusion of operators;

      • 2: includes all optimizations of level 1, and turns on more optimizations such as CSE, arithmetic simplification and so on;

      • 3: includes all optimizations of level 2, and turns on more optimizations such as StitchingFusion, ParallelFusion and so on. Optimizations of this level are radical and unstable in some scenarios. Be cautious when using this level.

    • dump_as_text: dumps detailed info as text files. Default: false.

    For more options, refer to the implementation code.

  • enable_reduce_precision (bool) – Whether to enable precision reduction. If the operator does not support the user-specified precision, the precision will be changed automatically. Default: True.

  • auto_tune_mode (str) –

    The mode of auto tuning during operator building, used to get the best tiling performance. Default: NO_TUNE. The value must be in [‘RL’, ‘GA’, ‘RL,GA’].

    • RL: Reinforcement Learning tune.

    • GA: Genetic Algorithm tune.

    • RL,GA: When both RL and GA optimization are enabled, the tool automatically selects RL or GA based on different types of operators in the network model. The sequence of RL and GA is not differentiated. (Automatic selection).

    For more information about the enable operator tuning tool settings, please check Enable the operator optimization tool.

  • check_bprop (bool) – Whether to check back propagation nodes. The checking ensures that the shape and dtype of back propagation node outputs are the same as the input parameters. Default: False.

  • max_call_depth (int) – Specify the maximum depth of function calls. Must be a positive integer. Default: 1000. The max_call_depth parameter needs to be set when the nested call is too deep or the number of subgraphs is too large. If max_call_depth is set larger than before, the system max stack depth should be set larger too, otherwise the process may crash with a core dump because of system stack overflow.

  • grad_for_scalar (bool) – Whether to compute gradients for scalars. Default: False. When grad_for_scalar is set to True, the function’s scalar input can be differentiated. Because the back-end does not support scalar operations currently, this interface only supports simple operations that can be deduced by the front-end.

  • enable_compile_cache (bool) – Whether to save or load the cache of the graph compiled by front-end. After enable_compile_cache is set to True, during the first execution, a hardware-independent compilation cache is generated and exported to a MINDIR file. When the network is executed again, if enable_compile_cache is still set to True and the network scripts are not changed, the compile cache is loaded. Note that only limited automatic detection for the changes of python scripts is supported by now, which means that there is a correctness risk. Default: False. This is an experimental prototype that is subject to change and/or deletion.

  • compile_cache_path (str) – Path to save the compile cache. Default: “.”. If the specified directory does not exist, the system will automatically create the directory. The cache will be saved to the directory of compile_cache_path/rank_${rank_id}/. The rank_id is the ID of the current device in the cluster.

  • inter_op_parallel_num (int) – The number of threads for parallel operator execution. The default value is 0, which means the default number is used.

  • runtime_num_threads (int) – The thread pool size for CPU kernels used in runtime, which must be greater than or equal to 0. Default value is 30. If you run many processes at the same time, set a smaller value to avoid thread contention.

  • disable_format_transform (bool) – Whether to disable the automatic format transform function from NCHW to NHWC. When the network training performance of fp16 is worse than fp32, disable_format_transform can be set to True to try to improve training performance. Default: False.

  • support_binary (bool) – Whether to support running .pyc or .so files in graph mode. To run .so or .pyc files in graph mode, set support_binary to True and run the .py file once; the source of the interfaces compiled by MindSpore is saved to the interface definition .py file, which must be writable. The .py file can then be compiled into a .pyc or .so file and run in graph mode.

  • memory_optimize_level (str) –

    The memory optimize level. Default: O0. The value must be in [‘O0’, ‘O1’].

    • O0: priority performance option, disable SOMAS (Safe Optimized Memory Allocation Solver).

    • O1: priority memory option, enable SOMAS.

  • memory_offload (str) –

    Whether to enable the memory offload function. When it is enabled, the idle data will be temporarily copied to the host side in the case of insufficient device memory. The value must be in the range of [‘ON’, ‘OFF’], and the default value is ‘OFF’.

    • ON: Enable the memory offload function. On the Ascend hardware platform, this parameter does not take effect when the environment variable “GRAPH_OP_RUN=1” is not set; it also does not take effect when memory_optimize_level is set to ‘O1’.

    • OFF: Turn off the memory Offload function.

  • ascend_config (dict) –

    Set the parameters specific to the Ascend hardware platform. It is not set by default. Currently, only setting precision_mode and jit_compile is supported on the Ascend910B hardware platform. The default values of precision_mode and jit_compile are experimental and may change in the future.

    • precision_mode (str): Mixed precision mode setting. On the Ascend910B hardware platform, the default value for training networks is based on the value of CANN, and the default value for inference networks is force_fp16. The value range is as follows:

      • force_fp16: When the operator supports both float16 and float32, select float16 directly.

      • allow_fp32_to_fp16: When the operator does not support the float32 data type, directly reduce the precision to float16.

      • allow_mix_precision: Automatic mixed precision. Across the whole network, the precision of some operators is automatically reduced to float16 or bfloat16 according to the built-in optimization strategy.

      • must_keep_origin_dtype: Keep the precision of the original graph.

      • force_fp32: When the input of the matrix calculation operator is float16 and the output supports float16 and float32, the output is forced to float32.

      • force_lowerprecision: When the operator supports both float16 or bfloat16 and float32, select float16 or bfloat16 directly.

      • allow_fp32_to_bf16: When the operator does not support the float32 data type, directly reduce the precision to bfloat16.

      • allow_fp32_to_lowprecision: When the operator does not support the float32 data type, directly reduce the precision to float16 or bfloat16.

      • allow_mix_precision_fp16: Automatic mixed precision. Across the whole network, the precision of some operators is automatically reduced to float16 according to the built-in optimization strategy.

      • allow_mix_precision_bf16: Automatic mixed precision. Across the whole network, the precision of some operators is automatically reduced to bfloat16 according to the built-in optimization strategy.

    • jit_compile (bool): Whether to select online compilation. The default value is based on CANN.

Raises:

ValueError – If input key is not an attribute in context.

Examples

>>> import mindspore as ms
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> ms.set_context(precompile_only=True)
>>> ms.set_context(device_target="Ascend")
>>> ms.set_context(device_id=0)
>>> ms.set_context(save_graphs=True, save_graphs_path="./model.ms")
>>> ms.set_context(enable_reduce_precision=True)
>>> ms.set_context(enable_graph_kernel=True)
>>> ms.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text")
>>> ms.set_context(reserve_class_name_in_scope=True)
>>> ms.set_context(variable_memory_max_size="6GB")
>>> ms.set_context(check_bprop=True)
>>> ms.set_context(max_device_memory="3.5GB")
>>> ms.set_context(mempool_block_size="1GB")
>>> ms.set_context(print_file_path="print.pb")
>>> ms.set_context(max_call_depth=80)
>>> ms.set_context(env_config_path="./env_config.json")
>>> ms.set_context(auto_tune_mode="GA,RL")
>>> ms.set_context(grad_for_scalar=True)
>>> ms.set_context(enable_compile_cache=True, compile_cache_path="./cache.ms")
>>> ms.set_context(pynative_synchronize=True)
>>> ms.set_context(runtime_num_threads=10)
>>> ms.set_context(inter_op_parallel_num=4)
>>> ms.set_context(disable_format_transform=True)
>>> ms.set_context(memory_optimize_level='O0')
>>> ms.set_context(memory_offload='ON')
>>> ms.set_context(deterministic='ON')
>>> ms.set_context(ascend_config={"precision_mode": "force_fp16", "jit_compile": True})
tinyms.context.get_context(attr_key)[source]

Get context attribute value according to the input key. If some attributes are not set, they will be automatically obtained.

Parameters:

attr_key (str) – The key of the attribute.

Returns:

Object, The value of given attribute key.

Raises:

ValueError – If input key is not an attribute in context.

Examples

>>> import mindspore as ms
>>> ms.get_context("device_target")
>>> ms.get_context("device_id")
tinyms.context.set_auto_parallel_context(**kwargs)[source]

Set auto parallel context, only data parallel supported on CPU.

Note

The attribute name is required when setting attributes. If a program has tasks on different parallel modes, call mindspore.reset_auto_parallel_context() to reset the configuration before setting a new parallel mode for the next task. Setting or changing parallel modes must be done before creating any Initializer; otherwise, a RuntimeError may be raised when compiling the network.

Some configurations are parallel mode specific, see the below table for details:

Common                       AUTO_PARALLEL
device_num                   gradient_fp32_sync
global_rank                  loss_repeated_mean
gradients_mean               search_mode
parallel_mode                strategy_ckpt_load_file
all_reduce_fusion_config     strategy_ckpt_save_file
enable_parallel_optimizer    dataset_strategy
parallel_optimizer_config    pipeline_stages
enable_alltoall              grad_accumulation_step
comm_fusion                  auto_parallel_search_mode
strategy_ckpt_config

Parameters:
  • device_num (int) – Available device number, the value must be in [1, 4096]. Default: 1.

  • global_rank (int) – Global rank id, the value must be in [0, 4095]. Default: 0.

  • gradients_mean (bool) – Whether to perform the mean operator after allreduce of gradients. “stand_alone” does not support gradients_mean. Default: False.

  • gradient_fp32_sync (bool) – Run allreduce of gradients in fp32. “stand_alone”, “data_parallel” and “hybrid_parallel” do not support gradient_fp32_sync. Default: True.

  • parallel_mode (str) –

    There are five kinds of parallel modes: “stand_alone”, “data_parallel”, “hybrid_parallel”, “semi_auto_parallel” and “auto_parallel”. Note that pynative mode only supports the “stand_alone” and “data_parallel” modes. Default: “stand_alone”.

    • stand_alone: Only one processor is working.

    • data_parallel: Distributes the data across different processors.

    • hybrid_parallel: Achieves data parallelism and model parallelism manually.

    • semi_auto_parallel: Achieves data and model parallelism by setting parallel strategies.

    • auto_parallel: Achieves parallelism automatically.

  • search_mode (str) –

    There are three kinds of shard strategy search modes: “recursive_programming”, “dynamic_programming” and “sharding_propagation”. Default: “dynamic_programming”.

    • recursive_programming: Recursive programming search mode.

    • dynamic_programming: Dynamic programming search mode.

    • sharding_propagation: Propagate shardings from configured ops to non-configured ops.

  • auto_parallel_search_mode (str) – This is the old version of ‘search_mode’. The attribute is retained for compatibility and will be deleted in a future MindSpore version.

  • parameter_broadcast (bool) – Whether to broadcast parameters before training. Before training, in order to have the same network initialization parameter values on all devices, the parameters on device 0 are broadcast to the other devices. Parameter broadcasting differs between parallel modes: in data_parallel mode, all parameters are broadcast except the ones whose attribute layerwise_parallel is True; in hybrid_parallel, semi_auto_parallel and auto_parallel modes, the segmented parameters do not participate in broadcasting. Default: False.

  • strategy_ckpt_load_file (str) – The path to load the parallel strategy checkpoint. This parameter is not recommended currently; use ‘strategy_ckpt_config’ instead. Default: ‘’

  • strategy_ckpt_save_file (str) – The path to save the parallel strategy checkpoint. This parameter is not recommended currently; use ‘strategy_ckpt_config’ instead. Default: ‘’

  • full_batch (bool) – If you load whole batch datasets in auto_parallel mode, this parameter should be set as True. Default: False. This parameter is not recommended currently; use ‘dataset_strategy’ instead.

  • dataset_strategy (Union[str, tuple]) – Dataset sharding strategy. Default: “data_parallel”. dataset_strategy=”data_parallel” is equal to full_batch=False, and dataset_strategy=”full_batch” is equal to full_batch=True. When the execution mode is GRAPH_MODE and the dataset is loaded into the network with a model parallel strategy such as ds_stra = ((1, 8), (1, 8)), set_auto_parallel_context(dataset_strategy=ds_stra) must be used.

  • enable_parallel_optimizer (bool) – This is a developing feature, which shards the weight update computation for data parallel training to save time and memory. Currently, auto and semi auto parallel modes support all optimizers on both Ascend and GPU. Data parallel mode only supports Lamb and AdamWeightDecay on Ascend. Default: False.

  • enable_alltoall (bool) – A switch that allows AllToAll operators to be generated during communication. If its value is False, there will be a combination of operators such as AllGather, Split and Concat instead of AllToAll. Default: False.

  • all_reduce_fusion_config (list) – Set allreduce fusion strategy by parameter indices. Only ReduceOp.SUM and HCCL_WORLD_GROUP/NCCL_WORLD_GROUP are supported. No default; if it is not set, fusion is disabled.

  • pipeline_stages (int) – Set the stage information for pipeline parallel. This indicates how the devices are distributed along the pipeline. The total devices will be divided into ‘pipeline_stages’ stages. Currently, this can only be used when the semi_auto_parallel parallel mode is enabled. Default: 1.

  • grad_accumulation_step (int) – Set the accumulation steps of gradients in auto and semi auto parallel mode. This should be a positive int. Default: 1.

  • parallel_optimizer_config (dict) –

    A dict contains the keys and values for setting the parallel optimizer configure. The configure provides more detailed behavior control about parallel training when parallel optimizer is enabled. Currently it supports the key gradient_accumulation_shard. The configure will be effective when we use mindspore.set_auto_parallel_context(enable_parallel_optimizer=True). It supports the following keys.

    • gradient_accumulation_shard(bool): If true, the accumulation gradient parameters will be sharded across the data parallel devices. This introduces additional communication (ReduceScatter) at each step when accumulating the gradients, but saves a lot of device memory, thus allowing the model to be trained with a larger batch size. This configuration is effective only when the model runs in pipeline training or gradient accumulation with data parallel. Default: True.

    • parallel_optimizer_threshold(int): Set the threshold of parallel optimizer. When parallel optimizer is enabled, parameters with size smaller than this threshold will not be sharded across the devices. Parameter size = shape[0] * … * shape[n] * size(dtype). Non-negative. Unit: KB. Default: 64.

  • comm_fusion (dict) –

    A dict that contains the types and configurations for setting the communication fusion. Each communication fusion config has two keys: “mode” and “config”. It supports the following communication fusion types and configurations:

    • openstate: Whether to turn on the communication fusion or not. If openstate is True, the communication fusion is turned on; otherwise, it is turned off. Default: True.

    • allreduce: If communication fusion type is allreduce. The mode contains: auto, size and index. In auto mode, AllReduce fusion is configured by gradients size and the default fusion threshold is 64 MB. In ‘size’ mode, AllReduce fusion is configured by gradients size manually, and the fusion threshold must be larger than 0 MB. In index mode, it is the same as all_reduce_fusion_config.

    • allgather: If communication fusion type is allgather. The mode contains: auto, size. In auto mode, AllGather fusion is configured by gradients size, and the default fusion threshold is 64 MB. In ‘size’ mode, AllGather fusion is configured by gradients size manually, and the fusion threshold must be larger than 0 MB.

    • reducescatter: If communication fusion type is reducescatter. The mode contains: auto and size. Config is same as allgather.

  • strategy_ckpt_config (dict) –

    A dict that contains the configurations for setting the parallel strategy file. This interface covers the functions of the strategy_ckpt_load_file and strategy_ckpt_save_file parameters; it is recommended to use it to replace those two parameters. It contains the following configurations:

    • load_file (str): The path to load parallel strategy checkpoint. If the file name extension is .json, the file is loaded in JSON format. Otherwise, the file is loaded in ProtoBuf format. Default: ‘’

    • save_file (str): The path to save parallel strategy checkpoint. If the file name extension is .json, the file is saved in JSON format. Otherwise, the file is saved in ProtoBuf format. Default: ‘’

    • only_trainable_params (bool): Only save/load the strategy information for trainable parameter. Default: True.

Raises:

ValueError – If input key is not attribute in auto parallel context.

Examples

>>> import mindspore as ms
>>> ms.set_auto_parallel_context(device_num=8)
>>> ms.set_auto_parallel_context(global_rank=0)
>>> ms.set_auto_parallel_context(gradients_mean=True)
>>> ms.set_auto_parallel_context(gradient_fp32_sync=False)
>>> ms.set_auto_parallel_context(parallel_mode="auto_parallel")
>>> ms.set_auto_parallel_context(search_mode="dynamic_programming")
>>> ms.set_auto_parallel_context(auto_parallel_search_mode="dynamic_programming")
>>> ms.set_auto_parallel_context(parameter_broadcast=False)
>>> ms.set_auto_parallel_context(strategy_ckpt_load_file="./strategy_stage1.ckpt")
>>> ms.set_auto_parallel_context(strategy_ckpt_save_file="./strategy_stage1.ckpt")
>>> ms.set_auto_parallel_context(dataset_strategy=((1, 8), (1, 8)))
>>> ms.set_auto_parallel_context(enable_parallel_optimizer=False)
>>> ms.set_auto_parallel_context(enable_alltoall=False)
>>> ms.set_auto_parallel_context(all_reduce_fusion_config=[8, 160])
>>> ms.set_auto_parallel_context(pipeline_stages=2)
>>> parallel_config = {"gradient_accumulation_shard": True, "parallel_optimizer_threshold": 24}
>>> ms.set_auto_parallel_context(parallel_optimizer_config=parallel_config, enable_parallel_optimizer=True)
>>> config = {"allreduce": {"mode": "size", "config": 32}, "allgather": {"mode": "size", "config": 32}}
>>> ms.set_auto_parallel_context(comm_fusion=config)
>>> stra_ckpt_dict = {"load_file": "./stra0.ckpt", "save_file": "./stra1.ckpt", "only_trainable_params": False}
>>> ms.set_auto_parallel_context(strategy_ckpt_config=stra_ckpt_dict)
tinyms.context.get_auto_parallel_context(attr_key)[source]

Get auto parallel context attribute value according to the key.

Parameters:

attr_key (str) – The key of the attribute.

Returns:

Returns attribute value according to the key.

Raises:

ValueError – If input key is not attribute in auto parallel context.

Examples

>>> import mindspore as ms
>>> parallel_mode = ms.get_auto_parallel_context("parallel_mode")
>>> dataset_strategy = ms.get_auto_parallel_context("dataset_strategy")
tinyms.context.reset_auto_parallel_context()[source]

Reset auto parallel context attributes to the default values.

  • device_num: 1.

  • global_rank: 0.

  • gradients_mean: False.

  • gradient_fp32_sync: True.

  • parallel_mode: ‘stand_alone’.

  • search_mode: ‘dynamic_programming’.

  • auto_parallel_search_mode: ‘dynamic_programming’.

  • parameter_broadcast: False.

  • strategy_ckpt_load_file: ‘’.

  • strategy_ckpt_save_file: ‘’.

  • full_batch: False.

  • enable_parallel_optimizer: False.

  • enable_alltoall: False.

  • pipeline_stages: 1.

  • fusion_threshold: 64.
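
A short usage sketch: after a reset, querying an attribute returns the default listed above.

>>> import mindspore as ms
>>> ms.set_auto_parallel_context(device_num=8, parallel_mode="data_parallel")
>>> ms.reset_auto_parallel_context()
>>> print(ms.get_auto_parallel_context("parallel_mode"))
stand_alone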

class tinyms.context.ParallelMode[source]

Parallel mode options.

There are five kinds of parallel modes, “STAND_ALONE”, “DATA_PARALLEL”, “HYBRID_PARALLEL”, “SEMI_AUTO_PARALLEL” and “AUTO_PARALLEL”. Default: “STAND_ALONE”.

  • STAND_ALONE: Only one processor is working.

  • DATA_PARALLEL: Distributes the data across different processors.

  • HYBRID_PARALLEL: Achieves data parallelism and model parallelism manually.

  • SEMI_AUTO_PARALLEL: Achieves data parallelism and model parallelism by setting parallel strategies.

  • AUTO_PARALLEL: Achieves parallelism automatically.

MODE_LIST: The list of all supported parallel modes.
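
A minimal usage sketch, assuming ParallelMode is importable from tinyms.context as documented here; the mode values are string constants, so this call is equivalent to passing "data_parallel":

>>> import mindspore as ms
>>> from tinyms.context import ParallelMode
>>> ms.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)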

tinyms.context.set_ps_context(**kwargs)[source]

Set parameter server training mode context.

Note

Parameter server mode is only supported in graph mode. Some other environment variables should also be set for parameter server training mode. These environment variables are listed below:

MS_SERVER_NUM: Server number

MS_WORKER_NUM: Worker number

MS_SCHED_HOST: Scheduler IP address

MS_SCHED_PORT: Scheduler port

MS_ROLE: The role of this process:

  • MS_SCHED: represents the scheduler.

  • MS_WORKER: represents the worker.

  • MS_PSERVER/MS_SERVER: represents the server.
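
As a sketch, these variables can also be set from Python before calling set_ps_context; the values below are placeholders for a minimal single-node setup, not recommended defaults:

>>> import os
>>> os.environ['MS_SERVER_NUM'] = '1'          # one server process (placeholder)
>>> os.environ['MS_WORKER_NUM'] = '1'          # one worker process (placeholder)
>>> os.environ['MS_SCHED_HOST'] = '127.0.0.1'  # scheduler IP address (placeholder)
>>> os.environ['MS_SCHED_PORT'] = '8081'       # scheduler port (placeholder)
>>> os.environ['MS_ROLE'] = 'MS_WORKER'        # role of this process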

Parameters:
  • enable_ps (bool) – Whether to enable parameter server training mode. The environment variables take effect only after enable_ps is set to True. Default: False.

  • config_file_path (string) – Configuration file path used by recovery, parameter server training mode only supports Server disaster recovery currently. Default: ‘’.

  • scheduler_manage_port (int) – Scheduler manage port used to scale out/in. Default: 11202.

  • enable_ssl (bool) – Set PS SSL mode enabled or disabled. Default: False.

  • client_password (str) – Password to decrypt the secret key stored in the client certificate. Default: ‘’.

  • server_password (str) – Password to decrypt the secret key stored in the server certificate. Default: ‘’.

Raises:

ValueError – If input key is not the attribute in parameter server training mode context.

Examples

>>> import mindspore as ms
>>> ms.set_ps_context(enable_ps=True, enable_ssl=True, client_password='123456', server_password='123456')
tinyms.context.get_ps_context(attr_key)[source]

Get parameter server training mode context attribute value according to the key.

Parameters:

attr_key (str) –

The key of the attribute:

  • enable_ps (bool): Whether to enable parameter server training mode.

  • config_file_path (string): Configuration file path used by recovery, parameter server training mode only supports Server disaster recovery currently. Default: ‘’.

  • scheduler_manage_port (int): Scheduler manage port used to scale out/in. Default: 11202.

  • enable_ssl (bool): Set PS SSL mode enabled or disabled. Default: False.

  • client_password (str): Password to decrypt the secret key stored in the client certificate. Default: ‘’.

  • server_password (str): Password to decrypt the secret key stored in the server certificate. Default: ‘’.

Returns:

Returns attribute value according to the key.

Raises:

ValueError – If input key is not an attribute in parameter server training mode context.

Examples

>>> import mindspore as ms
>>> ms.get_ps_context("enable_ps")
tinyms.context.reset_ps_context()[source]

Reset parameter server training mode context attributes to the default values:

  • enable_ps: False.

For the meaning of each field and its default value, refer to mindspore.set_ps_context().

tinyms.context.set_offload_context(offload_config)[source]

Set offload context. Some configurations are offload specific; see the parameter description below for details:

Parameters:

offload_config (dict) –

A dict that contains the keys and values for setting the offload context configuration. It supports the following keys.

  • enable_offload (bool): Whether to enable offload. Default: False.

  • offload_param (str): The offload destination for parameters, cpu or disk.

  • offload_path (str): The path used for offloading.

  • offload_checkpoint (str): The offload destination for checkpoints, cpu or disk.

  • offload_ddr_size (int): The DDR size available for offload.

  • offload_disk_size (int): The disk size available for offload.

  • enable_aio (bool): Whether to enable aio. Default: True.

  • aio_block_size (int): The size of an aio block.

  • aio_queue_depth (int): The depth of the aio queue.

  • enable_pinned_mem (bool): Whether to enable pinned memory.

Raises:

ValueError – If input key is not an attribute in offload context.

Examples

>>> from mindspore import context
>>> context.set_offload_context(offload_config={"offload_param": "cpu"})
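
A slightly fuller sketch using several of the documented keys; the path is a placeholder, not a recommended value:

>>> offload_cfg = {"enable_offload": True,
...                "offload_param": "cpu",
...                "offload_path": "./offload"}
>>> context.set_offload_context(offload_config=offload_cfg)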
tinyms.context.get_offload_context()[source]

Get offload context.

Examples

>>> from mindspore import context
>>> offload_config = context.get_offload_context()

tinyms.data

class tinyms.data.UnalignedDataset(dataset_path, phase, max_dataset_size=inf, shuffle=True)[source]

This dataset class can load unaligned/unpaired datasets.

Parameters:
  • dataset_path (str) – The path of images (should have subfolders trainA, trainB, testA, testB, etc).

  • phase (str) – Train or test. It requires two directories in dataset_path, like trainA and trainB, to host training images from domain A ‘{dataset_path}/trainA’ and from domain B ‘{dataset_path}/trainB’ respectively.

  • max_dataset_size (int) – Maximum number of return image paths.

Returns:

Two lists of image paths, one for each domain.
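
The original entry gives no usage example; a minimal instantiation sketch follows, where the dataset path is a placeholder that must contain the trainA/trainB subfolders described above:

>>> from tinyms.data import UnalignedDataset
>>>
>>> unaligned_ds = UnalignedDataset('/path/to/dataset', 'train', shuffle=True)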

class tinyms.data.GanImageFolderDataset(dataset_path, max_dataset_size=inf)[source]

This dataset class can load images from image folder.

Parameters:
  • dataset_path (str) – ‘{dataset_path}/testA’, ‘{dataset_path}/testB’, etc.

  • max_dataset_size (int) – Maximum number of return image paths.

Returns:

Image path list.
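
Similarly, a minimal instantiation sketch with a placeholder path laid out as described above:

>>> from tinyms.data import GanImageFolderDataset
>>>
>>> gan_ds = GanImageFolderDataset('/path/to/dataset')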

class tinyms.data.ImdbDataset(imdb_path, glove_path, embed_size=300)[source]

Parses aclImdb data to features and labels. Pipeline: sentence -> tokenized -> encoded -> padding -> features.

Parameters:
  • imdb_path (str) – The path where the aclImdb dataset stored.

  • glove_path (str) – The path where the GloVe stored.

  • embed_size (int) – Embed_size. Default: 300.

Examples

>>> from tinyms.data import ImdbDataset
>>>
>>> imdb_ds = ImdbDataset('./aclImdb', './glove')
convert_to_mindrecord(preprocess_path, shard_num=1)[source]

Convert the imdb dataset to a MindRecord dataset.

get_datas(seg)[source]

Get features, labels, and weight by gensim.

parse()[source]

Parse imdb data into memory.
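
A sketch tying these methods to the imdb_ds instance from the class example above; './preprocess' is a placeholder path, and the call order is one plausible workflow rather than one prescribed by these docs:

>>> imdb_ds.parse()
>>> imdb_ds.convert_to_mindrecord('./preprocess', shard_num=1)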

class tinyms.data.BertDataset(data_dir, schema_dir=None, shuffle=True, num_parallel_workers=None)[source]

This dataset class can load bert from data folder.

Parameters:
  • data_dir (str) – ‘{data_dir}/result1.tfrecord’, ‘{data_dir}/result2.tfrecord’, etc.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • shuffle (Union[bool, Shuffle level], optional) –

    Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed; if shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Otherwise, there are two levels of shuffling:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • schema (Union[str, Schema], optional) – Path to the JSON schema file or schema object (default=None). If the schema is not provided, the meta data from the TFData file is considered the schema.

Examples

>>> from tinyms.data import BertDataset
>>>
>>> bert_ds = BertDataset('data')
class tinyms.data.KaggleDisplayAdvertisingDataset(data_dir, num_parallel_workers=None, shuffle=True)[source]

Parses the Kaggle Display Advertising dataset to features and labels.

Parameters:
  • data_dir (str) – The path where the uncompressed dataset stored.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • shuffle (bool) – Whether the dataset needs to be shuffled. Default: True.

Examples

>>> from tinyms.data import KaggleDisplayAdvertisingDataset
>>>
>>> kaggle_display_advertising_ds = KaggleDisplayAdvertisingDataset('data')
>>> kaggle_display_advertising_ds.stats_data()
>>> kaggle_display_advertising_ds.convert_to_mindrecord()
>>> train_ds = kaggle_display_advertising_ds.load_mindreocrd_dataset(usage='train')
>>> test_ds = kaggle_display_advertising_ds.load_mindreocrd_dataset(usage='test')
load_mindreocrd_dataset(usage='train', batch_size=1000)[source]

Load the MindRecord dataset.

Parameters:
  • usage (str) – Dataset mode. Default: ‘train’.

  • batch_size (int) – Batch size. Default: 1000.

Returns:

MindDataset

stats_data()[source]

Compute statistics of the data.

class tinyms.data.DistributedSampler(dataset_size, num_replicas=None, rank=None, shuffle=True)[source]

Distributed sampler.

Parameters:
  • dataset_size (int) – Dataset list length

  • num_replicas (int) – Replicas num.

  • rank (int) – Device rank.

  • shuffle (bool) – Whether the dataset needs to be shuffled. Default: True.

Returns:

DistributedSampler instance.
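
A construction sketch with hypothetical values; rank selects this device's shard out of num_replicas shards:

>>> from tinyms.data import DistributedSampler
>>>
>>> sampler = DistributedSampler(dataset_size=1000, num_replicas=8, rank=0, shuffle=True)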

class tinyms.data.Caltech101Dataset(dataset_dir, target_type=None, num_samples=None, num_parallel_workers=1, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None)[source]

Caltech 101 dataset.

The columns of the generated dataset depend on the value of target_type .

  • When target_type is ‘category’, the columns are [image, category] .

  • When target_type is ‘annotation’, the columns are [image, annotation] .

  • When target_type is ‘all’, the columns are [image, category, annotation] .

The tensor of column image is of the uint8 type. The tensor of column category is of the uint32 type. The tensor of column annotation is a 2-dimensional ndarray that stores the contour of the image and consists of a series of points.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset. This root directory contains two subdirectories, one is called 101_ObjectCategories, which stores images, and the other is called Annotations, which stores annotations.

  • target_type (str, optional) – Target of the image. If target_type is ‘category’, return category represents the target class. If target_type is ‘annotation’, return annotation. If target_type is ‘all’, return category and annotation. Default: None, means ‘category’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker subprocesses to read the data. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Whether or not to decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If target_type is not set correctly.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler    Parameter shuffle    Expected Order Behavior
None                 None                 random order
None                 True                 random order
None                 False                sequential order
Sampler object       None                 order defined by sampler
Sampler object       True                 not allowed
Sampler object       False                not allowed

Examples

>>> caltech101_dataset_directory = "/path/to/caltech101_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in caltech101_dataset_directory with 8 threads
>>> dataset = ds.Caltech101Dataset(dataset_dir=caltech101_dataset_directory, num_parallel_workers=8)
>>>
>>> # 2) Read all samples (image files) with the target_type "annotation"
>>> dataset = ds.Caltech101Dataset(dataset_dir=caltech101_dataset_directory, target_type="annotation")

About Caltech101Dataset:

Pictures of objects belonging to 101 categories, about 40 to 800 images per category. Most categories have about 50 images. The size of each image is roughly 300 x 200 pixels. The official provides the contour data of each object in each picture, which is the annotation.

Here is the original Caltech101 dataset structure, and you can unzip the dataset files into the following directory structure, which are read by MindSpore API.

.
└── caltech101_dataset_directory
    ├── 101_ObjectCategories
    │    ├── Faces
    │    │    ├── image_0001.jpg
    │    │    ├── image_0002.jpg
    │    │    ...
    │    ├── Faces_easy
    │    │    ├── image_0001.jpg
    │    │    ├── image_0002.jpg
    │    │    ...
    │    ├── ...
    └── Annotations
         ├── Airplanes_Side_2
         │    ├── annotation_0001.mat
         │    ├── annotation_0002.mat
         │    ...
         ├── Faces_2
         │    ├── annotation_0001.mat
         │    ├── annotation_0002.mat
         │    ...
         ├── ...

Citation:

@article{FeiFei2004LearningGV,
author    = {Li Fei-Fei and Rob Fergus and Pietro Perona},
title     = {Learning Generative Visual Models from Few Training Examples:
            An Incremental Bayesian Approach Tested on 101 Object Categories},
journal   = {Computer Vision and Pattern Recognition Workshop},
year      = {2004},
url       = {https://data.caltech.edu/records/mzrjq-6wc02},
}
get_class_indexing()[source]

Get the class index.

Returns:

dict, a str-to-int mapping from label name to index.
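
A usage sketch, reusing the dataset instance from the Caltech101Dataset example above:

>>> class_indexing = dataset.get_class_indexing()
>>> # class_indexing maps each label name (str) to its integer index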

class tinyms.data.Caltech256Dataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Caltech 256 dataset.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Whether or not to decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> caltech256_dataset_dir = "/path/to/caltech256_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in caltech256_dataset_dir with 8 threads
>>> dataset = ds.Caltech256Dataset(dataset_dir=caltech256_dataset_dir, num_parallel_workers=8)
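
A hedged variation using an explicit sampler instead of shuffle (this assumes the SequentialSampler from mindspore.dataset is available in this namespace, as the dataset classes are):

>>> # 2) Read the first 50 samples in sequential order (sampler and shuffle are mutually exclusive)
>>> sampler = ds.SequentialSampler(num_samples=50)
>>> dataset = ds.Caltech256Dataset(dataset_dir=caltech256_dataset_dir, sampler=sampler)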

About Caltech256Dataset:

Caltech-256 is an object recognition dataset containing 30,607 real-world images, of different sizes, spanning 257 classes (256 object classes and an additional clutter class). Each class is represented by at least 80 images. The dataset is a superset of the Caltech-101 dataset.

.
└── caltech256_dataset_directory
     ├── 001.ak47
     │    ├── 001_0001.jpg
     │    ├── 001_0002.jpg
     │    ...
     ├── 002.american-flag
     │    ├── 002_0001.jpg
     │    ├── 002_0002.jpg
     │    ...
     ├── 003.backpack
     │    ├── 003_0001.jpg
     │    ├── 003_0002.jpg
     │    ...
     ├── ...

Citation:

@article{griffin2007caltech,
title     = {Caltech-256 object category dataset},
added-at  = {2021-01-21T02:54:42.000+0100},
author    = {Griffin, Gregory and Holub, Alex and Perona, Pietro},
biburl    = {https://www.bibsonomy.org/bibtex/21f746f23ff0307826cca3e3be45f8de7/s364315},
interhash = {bfe1e648c1778c04baa60f23d1223375},
intrahash = {1f746f23ff0307826cca3e3be45f8de7},
publisher = {California Institute of Technology},
timestamp = {2021-01-21T02:54:42.000+0100},
year      = {2007}
}
class tinyms.data.CelebADataset(dataset_dir, num_parallel_workers=None, shuffle=None, usage='all', sampler=None, decode=False, extensions=None, num_samples=None, num_shards=None, shard_id=None, cache=None, decrypt=None)[source]

CelebA(CelebFaces Attributes) dataset.

Currently only list_attr_celeba.txt, the attribute annotation file of the dataset, is supported. The generated dataset has two columns: [image, attr] . The tensor of column image is of the uint8 type. The tensor of column attr is of the uint32 type and one-hot encoded.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None.

  • usage (str, optional) – Specify the ‘train’, ‘valid’, ‘test’ part or ‘all’ parts of dataset. Default: ‘all’, will read all samples.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None.

  • decode (bool, optional) – Whether to decode the images after reading. Default: False.

  • extensions (list[str], optional) – List of file extensions to be included in the dataset. Default: None.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will include all images.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

  • decrypt (callable, optional) – Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: None, no decryption.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If usage is not ‘train’, ‘valid’, ‘test’ or ‘all’.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> celeba_dataset_dir = "/path/to/celeba_dataset_directory"
>>>
>>> # Read 5 samples from CelebA dataset
>>> dataset = ds.CelebADataset(dataset_dir=celeba_dataset_dir, usage='train', num_samples=5)
>>>
>>> # Note: In celeba dataset, each data dictionary owns keys "image" and "attr"
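
Since the attr column is one-hot encoded, one pass over the dataset is enough to confirm its shape (a sketch using the standard dictionary iterator):

>>> for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["image"].shape, item["attr"].shape)
...     break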

About CelebA dataset:

CelebFaces Attributes Dataset (CelebA) is a large-scale dataset with more than 200K celebrity images, each with 40 attribute annotations.

The images in this dataset cover large pose variations and background clutter. CelebA has large diversities, large quantities, and rich annotations, including

  • 10,177 identities,

  • 202,599 face images, and

  • 5 landmark locations and 40 binary attribute annotations per image.

The dataset can be employed as the training and test sets for the following computer vision tasks: attribute recognition, detection, and landmark (or facial part) localization.

Original CelebA dataset structure:

.
└── CelebA
     ├── README.md
     ├── Img
     │    ├── img_celeba.7z
     │    ├── img_align_celeba_png.7z
     │    └── img_align_celeba.zip
     ├── Eval
     │    └── list_eval_partition.txt
     └── Anno
          ├── list_landmarks_celeba.txt
          ├── list_landmarks_align_celeba.txt
          ├── list_bbox_celeba.txt
          ├── list_attr_celeba.txt
          └── identity_CelebA.txt

You can unzip the dataset files into the following structure and read them with MindSpore’s API.

.
└── celeba_dataset_directory
    ├── list_attr_celeba.txt
    ├── 000001.jpg
    ├── 000002.jpg
    ├── 000003.jpg
    ├── ...

Citation:

@article{DBLP:journals/corr/LiuLWT14,
author        = {Ziwei Liu and Ping Luo and Xiaogang Wang and Xiaoou Tang},
title         = {Deep Learning Face Attributes in the Wild},
journal       = {CoRR},
volume        = {abs/1411.7766},
year          = {2014},
url           = {http://arxiv.org/abs/1411.7766},
archivePrefix = {arXiv},
eprint        = {1411.7766},
timestamp     = {Tue, 10 Dec 2019 15:37:26 +0100},
biburl        = {https://dblp.org/rec/journals/corr/LiuLWT14.bib},
bibsource     = {dblp computer science bibliography, https://dblp.org},
howpublished  = {http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html}
}
class tinyms.data.Cifar10Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

CIFAR-10 dataset.

This API currently only supports parsing the binary version of the CIFAR-10 files. The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’ . ‘train’ will read from 50,000 train samples, ‘test’ will read from 10,000 test samples, ‘all’ will read from all 60,000 samples. Default: None, all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If usage is not ‘train’, ‘test’ or ‘all’.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> cifar10_dataset_dir = "/path/to/cifar10_dataset_directory"
>>>
>>> # 1) Get all samples from CIFAR10 dataset in sequence
>>> dataset = ds.Cifar10Dataset(dataset_dir=cifar10_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from CIFAR10 dataset
>>> dataset = ds.Cifar10Dataset(dataset_dir=cifar10_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # 3) Get samples from CIFAR10 dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.Cifar10Dataset(dataset_dir=cifar10_dataset_dir, num_shards=2, shard_id=0)
>>>
>>> # In CIFAR10 dataset, each dictionary has keys "image" and "label"
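
A typical next step is to batch the loaded samples for training; batch is a standard dataset operation (a sketch, with an arbitrary batch size):

>>> # 4) Batch the dataset into groups of 32 samples, dropping the last incomplete batch
>>> dataset = dataset.batch(batch_size=32, drop_remainder=True)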

About CIFAR-10 dataset:

The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

Here is the original CIFAR-10 dataset structure. You can unzip the dataset files into the following directory structure and read them with MindSpore’s API.

.
└── cifar-10-batches-bin
     ├── data_batch_1.bin
     ├── data_batch_2.bin
     ├── data_batch_3.bin
     ├── data_batch_4.bin
     ├── data_batch_5.bin
     ├── test_batch.bin
     ├── readme.html
     └── batches.meta.txt

Citation:

@techreport{Krizhevsky09,
author       = {Alex Krizhevsky},
title        = {Learning multiple layers of features from tiny images},
institution  = {},
year         = {2009},
howpublished = {http://www.cs.toronto.edu/~kriz/cifar.html}
}
class tinyms.data.Cifar100Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

CIFAR-100 dataset.

The generated dataset has three columns [image, coarse_label, fine_label] . The tensor of column image is of the uint8 type. The tensors of columns coarse_label and fine_label are each a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’ . ‘train’ will read from 50,000 train samples, ‘test’ will read from 10,000 test samples, ‘all’ will read from all 60,000 samples. Default: None, all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If usage is not ‘train’, ‘test’ or ‘all’.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> cifar100_dataset_dir = "/path/to/cifar100_dataset_directory"
>>>
>>> # 1) Get all samples from CIFAR100 dataset in sequence
>>> dataset = ds.Cifar100Dataset(dataset_dir=cifar100_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from CIFAR100 dataset
>>> dataset = ds.Cifar100Dataset(dataset_dir=cifar100_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # In CIFAR100 dataset, each dictionary has 3 keys: "image", "fine_label" and "coarse_label"
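
To see how each sample carries both label granularities, iterate once and print the fine and coarse labels together (a sketch using the standard dictionary iterator):

>>> for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["fine_label"], item["coarse_label"])
...     break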

About CIFAR-100 dataset:

This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a “fine” label (the class to which it belongs) and a “coarse” label (the superclass to which it belongs).

Here is the original CIFAR-100 dataset structure. You can unzip the dataset files into the following directory structure and read them with MindSpore’s API.

.
└── cifar-100-binary
    ├── train.bin
    ├── test.bin
    ├── fine_label_names.txt
    └── coarse_label_names.txt

Citation:

@techreport{Krizhevsky09,
author       = {Alex Krizhevsky},
title        = {Learning multiple layers of features from tiny images},
institution  = {},
year         = {2009},
howpublished = {http://www.cs.toronto.edu/~kriz/cifar.html}
}
class tinyms.data.CityscapesDataset(dataset_dir, usage='train', quality_mode='fine', task='instance', num_samples=None, num_parallel_workers=None, shuffle=None, decode=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Cityscapes dataset.

The generated dataset has two columns [image, task] . The tensor of column image is of the uint8 type. The tensor of column task is of the uint8 type if task is not ‘polygon’; otherwise it is a string tensor containing serialized JSON.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘test’, ‘val’ or ‘all’ if quality_mode is ‘fine’ otherwise ‘train’, ‘train_extra’, ‘val’ or ‘all’. Default: ‘train’.

  • quality_mode (str, optional) – Acceptable quality_modes include ‘fine’ or ‘coarse’. Default: ‘fine’.

  • task (str, optional) – Acceptable tasks include ‘instance’, ‘semantic’, ‘polygon’ or ‘color’. Default: ‘instance’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir is invalid or does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If task is invalid.

  • ValueError – If quality_mode is invalid.

  • ValueError – If usage is invalid.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> cityscapes_dataset_dir = "/path/to/cityscapes_dataset_directory"
>>>
>>> # 1) Get all samples from Cityscapes dataset in sequence
>>> dataset = ds.CityscapesDataset(dataset_dir=cityscapes_dataset_dir, task="instance", quality_mode="fine",
...                                usage="train", shuffle=False, num_parallel_workers=1)
>>>
>>> # 2) Randomly select 350 samples from Cityscapes dataset
>>> dataset = ds.CityscapesDataset(dataset_dir=cityscapes_dataset_dir, num_samples=350, shuffle=True,
...                                num_parallel_workers=1)
>>>
>>> # 3) Get samples from Cityscapes dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.CityscapesDataset(dataset_dir=cityscapes_dataset_dir, num_shards=2, shard_id=0,
...                                num_parallel_workers=1)
>>>
>>> # In Cityscapes dataset, each dictionary has keys "image" and "task"
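
As described above, task='polygon' switches the task column from an image tensor to a serialized JSON string; a sketch of such a pipeline:

>>> # 4) Read polygon annotations; the "task" column then holds serialized JSON strings
>>> dataset = ds.CityscapesDataset(dataset_dir=cityscapes_dataset_dir, task="polygon", quality_mode="fine",
...                                usage="train", num_parallel_workers=1)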

About Cityscapes dataset:

The Cityscapes dataset consists of 5000 color images with high-quality dense pixel annotations and 19998 color images with coarser polygonal annotations, collected in 50 cities. There are 30 classes in this dataset, and the polygonal annotations include dense semantic segmentation as well as instance segmentation for vehicles and people.

You can unzip the dataset files into the following directory structure and read them with MindSpore’s API.

Taking the quality_mode of fine as an example.

.
└── Cityscapes
     ├── leftImg8bit
     |    ├── train
     |    |    ├── aachen
     |    |    |    ├── aachen_000000_000019_leftImg8bit.png
     |    |    |    ├── aachen_000001_000019_leftImg8bit.png
     |    |    |    ├── ...
     |    |    ├── bochum
     |    |    |    ├── ...
     |    |    ├── ...
     |    ├── test
     |    |    ├── ...
     |    ├── val
     |    |    ├── ...
     └── gtFine
          ├── train
          |    ├── aachen
          |    |    ├── aachen_000000_000019_gtFine_color.png
          |    |    ├── aachen_000000_000019_gtFine_instanceIds.png
          |    |    ├── aachen_000000_000019_gtFine_labelIds.png
          |    |    ├── aachen_000000_000019_gtFine_polygons.json
          |    |    ├── aachen_000001_000019_gtFine_color.png
          |    |    ├── aachen_000001_000019_gtFine_instanceIds.png
          |    |    ├── aachen_000001_000019_gtFine_labelIds.png
          |    |    ├── aachen_000001_000019_gtFine_polygons.json
          |    |    ├── ...
          |    ├── bochum
          |    |    ├── ...
          |    ├── ...
          ├── test
          |    ├── ...
          └── val
               ├── ...

Citation:

@inproceedings{Cordts2016Cityscapes,
title       = {The Cityscapes Dataset for Semantic Urban Scene Understanding},
author      = {Cordts, Marius and Omran, Mohamed and Ramos, Sebastian and Rehfeld, Timo and Enzweiler,
                Markus and Benenson, Rodrigo and Franke, Uwe and Roth, Stefan and Schiele, Bernt},
booktitle   = {Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year        = {2016}
}
class tinyms.data.CocoDataset(dataset_dir, annotation_file, task='Detection', num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None, extra_metadata=False, decrypt=None)[source]

COCO(Common Objects in Context) dataset.

CocoDataset supports five kinds of tasks, which are Object Detection, Keypoint Detection, Stuff Segmentation, Panoptic Segmentation and Captioning of 2017 Train/Val/Test dataset.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • annotation_file (str) – Path to the annotation JSON file.

  • task (str, optional) – Set the task type for reading COCO data. Supported task types: ‘Detection’, ‘Stuff’, ‘Panoptic’, ‘Keypoint’ and ‘Captioning’. Default: ‘Detection’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

  • extra_metadata (bool, optional) – Flag to add extra meta-data to row. If True, an additional column will be output at the end [_meta-filename, dtype=string] . Default: False.

  • decrypt (callable, optional) – Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: None, no decryption.

The generated dataset has different output columns depending on the task setting:

task        Output column
Detection   [image, dtype=uint8], [bbox, dtype=float32], [category_id, dtype=uint32], [iscrowd, dtype=uint32]
Stuff       [image, dtype=uint8], [segmentation, dtype=float32], [iscrowd, dtype=uint32]
Keypoint    [image, dtype=uint8], [keypoints, dtype=float32], [num_keypoints, dtype=uint32]
Panoptic    [image, dtype=uint8], [bbox, dtype=float32], [category_id, dtype=uint32], [iscrowd, dtype=uint32], [area, dtype=uint32]

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • RuntimeError – If parsing the JSON file failed.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If task is not in [‘Detection’, ‘Stuff’, ‘Panoptic’, ‘Keypoint’, ‘Captioning’].

  • ValueError – If annotation_file does not exist.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • Column ‘[_meta-filename, dtype=string]’ won’t be output unless an explicit rename dataset op is added to remove the ‘_meta-’ prefix (see the sketch after the examples below).

  • mindspore.dataset.PKSampler is not yet supported for the sampler parameter.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> coco_dataset_dir = "/path/to/coco_dataset_directory/images"
>>> coco_annotation_file = "/path/to/coco_dataset_directory/annotation_file"
>>>
>>> # 1) Read COCO data for Detection task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Detection')
>>>
>>> # 2) Read COCO data for Stuff task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Stuff')
>>>
>>> # 3) Read COCO data for Panoptic task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Panoptic')
>>>
>>> # 4) Read COCO data for Keypoint task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Keypoint')
>>>
>>> # 5) Read COCO data for Captioning task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Captioning')
>>>
>>> # In COCO dataset, each dictionary has keys "image" and "annotation"
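
As noted above, the _meta-filename column produced by extra_metadata=True only becomes visible after a rename operation strips the ‘_meta-’ prefix; a sketch:

>>> # 6) Expose the filename column added by extra_metadata=True
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Detection', extra_metadata=True)
>>> dataset = dataset.rename(["_meta-filename"], ["filename"])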

About COCO dataset:

COCO (Microsoft Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset with several features: object segmentation, recognition in context, superpixel stuff segmentation, 330K images (>200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, and 250,000 people with keypoints. In contrast to the popular ImageNet dataset, COCO has fewer categories but more instances per category.

You can unzip the original COCO-2017 dataset files into this directory structure and read them with MindSpore’s API.

.
└── coco_dataset_directory
     ├── train2017
     │    ├── 000000000009.jpg
     │    ├── 000000000025.jpg
     │    ├── ...
     ├── test2017
     │    ├── 000000000001.jpg
     │    ├── 000000058136.jpg
     │    ├── ...
     ├── val2017
     │    ├── 000000000139.jpg
     │    ├── 000000057027.jpg
     │    ├── ...
     └── annotations
          ├── captions_train2017.json
          ├── captions_val2017.json
          ├── instances_train2017.json
          ├── instances_val2017.json
          ├── person_keypoints_train2017.json
          └── person_keypoints_val2017.json

Citation:

@article{DBLP:journals/corr/LinMBHPRDZ14,
author        = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and
                Lubomir D. Bourdev and  Ross B. Girshick and James Hays and
                Pietro Perona and Deva Ramanan and Piotr Doll{'{a}}r and C. Lawrence Zitnick},
title         = {Microsoft {COCO:} Common Objects in Context},
journal       = {CoRR},
volume        = {abs/1405.0312},
year          = {2014},
url           = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint        = {1405.0312},
timestamp     = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl        = {https://dblp.org/rec/journals/corr/LinMBHPRDZ14.bib},
bibsource     = {dblp computer science bibliography, https://dblp.org}
}
get_class_indexing()[source]

Get the class index.

Returns:

dict, a str-to-list<int> mapping from label name to index.

Examples

>>> coco_dataset_dir = "/path/to/coco_dataset_directory/images"
>>> coco_annotation_file = "/path/to/coco_dataset_directory/annotation_file"
>>>
>>> # Read COCO data for Detection task
>>> dataset = ds.CocoDataset(dataset_dir=coco_dataset_dir,
...                          annotation_file=coco_annotation_file,
...                          task='Detection')
>>>
>>> class_indexing = dataset.get_class_indexing()
class tinyms.data.DIV2KDataset(dataset_dir, usage='train', downgrade='bicubic', scale=2, num_samples=None, num_parallel_workers=None, shuffle=None, decode=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

DIV2K(DIVerse 2K resolution image) dataset.

The generated dataset has two columns [hr_image, lr_image] . The tensor of column hr_image and the tensor of column lr_image are of the uint8 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘valid’ or ‘all’. Default: ‘train’.

  • downgrade (str, optional) – Acceptable downgrades include ‘bicubic’, ‘unknown’, ‘mild’, ‘difficult’ or ‘wild’. Default: ‘bicubic’.

  • scale (int, optional) – Acceptable scales include 2, 3, 4 or 8. Default: 2. When downgrade is ‘bicubic’, scale can be 2, 3, 4 or 8. When downgrade is ‘unknown’, scale can only be 2, 3 or 4. When downgrade is ‘mild’, ‘difficult’ or ‘wild’, scale can only be 4.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir is invalid or does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If usage is invalid.

  • ValueError – If downgrade is invalid.

  • ValueError – If scale is invalid.

  • ValueError – If scale is 8 and downgrade is not ‘bicubic’.

  • ValueError – If downgrade is ‘mild’, ‘difficult’ or ‘wild’ and scale is not 4.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> div2k_dataset_dir = "/path/to/div2k_dataset_directory"
>>>
>>> # 1) Get all samples from DIV2K dataset in sequence
>>> dataset = ds.DIV2KDataset(dataset_dir=div2k_dataset_dir, usage="train", scale=2, downgrade="bicubic",
...                           shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from DIV2K dataset
>>> dataset = ds.DIV2KDataset(dataset_dir=div2k_dataset_dir, usage="train", scale=2, downgrade="bicubic",
...                           num_samples=350, shuffle=True)
>>>
>>> # 3) Get samples from DIV2K dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.DIV2KDataset(dataset_dir=div2k_dataset_dir, usage="train", scale=2, downgrade="bicubic",
...                           num_shards=2, shard_id=0)
>>>
>>> # In DIV2K dataset, each dictionary has keys "hr_image" and "lr_image"
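
Each row pairs a high-resolution image with its low-resolution counterpart; a sketch of inspecting both (decode=True is assumed so the images arrive as decoded arrays):

>>> dataset = ds.DIV2KDataset(dataset_dir=div2k_dataset_dir, usage="train", scale=2, downgrade="bicubic",
...                           decode=True)
>>> for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["hr_image"].shape, item["lr_image"].shape)
...     break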

About DIV2K dataset:

The DIV2K dataset consists of 1000 2K-resolution images, among which 800 are for training, 100 for validation and 100 for testing. NTIRE 2017 and NTIRE 2018 include only the training and validation datasets.

You can unzip the dataset files into the following directory structure and read them with MindSpore’s API.

Take the training set as an example.

.
└── DIV2K
     ├── DIV2K_train_HR
     |    ├── 0001.png
     |    ├── 0002.png
     |    ├── ...
     ├── DIV2K_train_LR_bicubic
     |    ├── X2
     |    |    ├── 0001x2.png
     |    |    ├── 0002x2.png
     |    |    ├── ...
     |    ├── X3
     |    |    ├── 0001x3.png
     |    |    ├── 0002x3.png
     |    |    ├── ...
     |    └── X4
     |         ├── 0001x4.png
     |         ├── 0002x4.png
     |         ├── ...
     ├── DIV2K_train_LR_unknown
     |    ├── X2
     |    |    ├── 0001x2.png
     |    |    ├── 0002x2.png
     |    |    ├── ...
     |    ├── X3
     |    |    ├── 0001x3.png
     |    |    ├── 0002x3.png
     |    |    ├── ...
     |    └── X4
     |         ├── 0001x4.png
     |         ├── 0002x4.png
     |         ├── ...
     ├── DIV2K_train_LR_mild
     |    ├── 0001x4m.png
     |    ├── 0002x4m.png
     |    ├── ...
     ├── DIV2K_train_LR_difficult
     |    ├── 0001x4d.png
     |    ├── 0002x4d.png
     |    ├── ...
     ├── DIV2K_train_LR_wild
     |    ├── 0001x4w.png
     |    ├── 0002x4w.png
     |    ├── ...
     └── DIV2K_train_LR_x8
          ├── 0001x8.png
          ├── 0002x8.png
          ├── ...

Citation:

@InProceedings{Agustsson_2017_CVPR_Workshops,
author    = {Agustsson, Eirikur and Timofte, Radu},
title     = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
url       = "http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf",
month     = {July},
year      = {2017}
}
class tinyms.data.EMnistDataset(dataset_dir, name, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

EMNIST(Extended MNIST) dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • name (str) – Name of splits for this dataset, can be ‘byclass’, ‘bymerge’, ‘balanced’, ‘letters’, ‘digits’ or ‘mnist’.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. For the ‘mnist’ split, ‘train’ will read from 60,000 train samples, ‘test’ will read from 10,000 test samples, and ‘all’ will read from all 70,000 samples; other splits have different sizes (see below). Default: None, will read all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> emnist_dataset_dir = "/path/to/emnist_dataset_directory"
>>>
>>> # Read 3 samples from EMNIST dataset
>>> dataset = ds.EMnistDataset(dataset_dir=emnist_dataset_dir, name="mnist", num_samples=3)
>>>
>>> # Note: In emnist_dataset dataset, each dictionary has keys "image" and "label"
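
The name argument selects which EMNIST split to read; for instance, the 26-class letters split is loaded the same way (a sketch):

>>> dataset = ds.EMnistDataset(dataset_dir=emnist_dataset_dir, name="letters", usage="train")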

About EMNIST dataset:

The EMNIST dataset is a set of handwritten character digits derived from the NIST Special Database 19 and converted to a 28x28 pixel image format and dataset structure that directly matches the MNIST dataset. Further information on the dataset contents and conversion process can be found in the paper available at https://arxiv.org/abs/1702.05373v1.

The numbers of characters and classes of each split of EMNIST are as follows:

  • By Class: 814,255 characters and 62 unbalanced classes.

  • By Merge: 814,255 characters and 47 unbalanced classes.

  • Balanced: 131,600 characters and 47 balanced classes.

  • Letters: 145,600 characters and 26 balanced classes.

  • Digits: 280,000 characters and 10 balanced classes.

  • MNIST: 70,000 characters and 10 balanced classes.

Here is the original EMNIST dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── mnist_dataset_dir
     ├── emnist-mnist-train-images-idx3-ubyte
     ├── emnist-mnist-train-labels-idx1-ubyte
     ├── emnist-mnist-test-images-idx3-ubyte
     ├── emnist-mnist-test-labels-idx1-ubyte
     ├── ...

Citation:

@article{cohen_afshar_tapson_schaik_2017,
title        = {EMNIST: Extending MNIST to handwritten letters},
DOI          = {10.1109/ijcnn.2017.7966217},
journal      = {2017 International Joint Conference on Neural Networks (IJCNN)},
author       = {Cohen, Gregory and Afshar, Saeed and Tapson, Jonathan and Schaik, Andre Van},
year         = {2017},
howpublished = {https://www.westernsydney.edu.au/icns/reproducible_research/
                publication_support_materials/emnist}
}
class tinyms.data.FakeImageDataset(num_images=1000, image_size=(224, 224, 3), num_classes=10, base_seed=0, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for generating fake images.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The column label is a scalar of the uint32 type.

Parameters:
  • num_images (int, optional) – Number of images to generate in the dataset. Default: 1000.

  • image_size (tuple, optional) – Size of the fake image. Default: (224, 224, 3).

  • num_classes (int, optional) – Number of classes in the dataset. Default: 10.

  • base_seed (int, optional) – Offsets the index-based random seed used to generate each image. Default: 0.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> # Read 3 samples from FakeImage dataset
>>> dataset = ds.FakeImageDataset(num_images=1000, image_size=(224,224,3),
...                               num_classes=10, base_seed=0, num_samples=3)
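
Because the images are generated deterministically from base_seed rather than read from disk, a single pass is enough to confirm the configured image size (a sketch):

>>> for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["image"].shape, item["label"])
...     break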
class tinyms.data.FashionMnistDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Fashion-MNIST dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. ‘train’ will read from 60,000 train samples, ‘test’ will read from 10,000 test samples, ‘all’ will read from all 70,000 samples. Default: None, will read all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> fashion_mnist_dataset_dir = "/path/to/fashion_mnist_dataset_directory"
>>>
>>> # Read 3 samples from FASHIONMNIST dataset
>>> dataset = ds.FashionMnistDataset(dataset_dir=fashion_mnist_dataset_dir, num_samples=3)
>>>
>>> # Note: In FASHIONMNIST dataset, each dictionary has keys "image" and "label"

About Fashion-MNIST dataset:

Fashion-MNIST is a dataset of Zalando’s article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.

You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── fashionmnist_dataset_dir
     ├── t10k-images-idx3-ubyte
     ├── t10k-labels-idx1-ubyte
     ├── train-images-idx3-ubyte
     └── train-labels-idx1-ubyte

Citation:

@online{xiao2017/online,
  author       = {Han Xiao and Kashif Rasul and Roland Vollgraf},
  title        = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms},
  date         = {2017-08-28},
  year         = {2017},
  eprintclass  = {cs.LG},
  eprinttype   = {arXiv},
  eprint       = {cs.LG/1708.07747},
}
class tinyms.data.FlickrDataset(dataset_dir, annotation_file, num_samples=None, num_parallel_workers=None, shuffle=None, decode=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Flickr8k and Flickr30k datasets.

The generated dataset has two columns [image, annotation] . The tensor of column image is of the uint8 type. The tensor of column annotation contains 5 annotation strings, such as [“a”, “b”, “c”, “d”, “e”].

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • annotation_file (str) – Path to the annotation file.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: None.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache. Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir is not valid or does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If annotation_file does not exist.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler     Parameter shuffle     Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

Examples

>>> flickr_dataset_dir = "/path/to/flickr_dataset_directory"
>>> annotation_file = "/path/to/flickr_annotation_file"
>>>
>>> # 1) Get all samples from FLICKR dataset in sequence
>>> dataset = ds.FlickrDataset(dataset_dir=flickr_dataset_dir,
...                            annotation_file=annotation_file,
...                            shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from FLICKR dataset
>>> dataset = ds.FlickrDataset(dataset_dir=flickr_dataset_dir,
...                            annotation_file=annotation_file,
...                            num_samples=350,
...                            shuffle=True)
>>>
>>> # 3) Get samples from FLICKR dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.FlickrDataset(dataset_dir=flickr_dataset_dir,
...                            annotation_file=annotation_file,
...                            num_shards=2,
...                            shard_id=0)
>>>
>>> # In FLICKR dataset, each dictionary has keys "image" and "annotation"

About Flickr8k dataset:

The Flickr8k dataset consists of 8092 color images. There are 40460 annotations in Flickr8k.token.txt; each image has 5 annotations.

You can unzip the dataset files into the following directory structure and read them with MindSpore’s API.

.
└── Flickr8k
     ├── Flickr8k_Dataset
     │    ├── 1000268201_693b08cb0e.jpg
     │    ├── 1001773457_577c3a7d70.jpg
     │    ├── ...
     └── Flickr8k.token.txt

Citation:

@article{DBLP:journals/jair/HodoshYH13,
author    = {Micah Hodosh and Peter Young and Julia Hockenmaier},
title     = {Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics},
journal   = {J. Artif. Intell. Res.},
volume    = {47},
pages     = {853--899},
year      = {2013},
url       = {https://doi.org/10.1613/jair.3994},
doi       = {10.1613/jair.3994},
timestamp = {Mon, 21 Jan 2019 15:01:17 +0100},
biburl    = {https://dblp.org/rec/journals/jair/HodoshYH13.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}

About Flickr30k dataset:

The Flickr30k dataset consists of 31783 color images. There are 158915 annotations in results_20130124.token; each image has 5 annotations.

You can unzip the dataset files into the following directory structure and read them with MindSpore’s API.

.
└── Flickr30k
     ├── flickr30k-images
     │    ├── 1000092795.jpg
     │    ├── 10002456.jpg
     │    ├── ...
     └── results_20130124.token

Citation:

@article{DBLP:journals/tacl/YoungLHH14,
author    = {Peter Young and Alice Lai and Micah Hodosh and Julia Hockenmaier},
title     = {From image descriptions to visual denotations: New similarity metrics
             for semantic inference over event descriptions},
journal   = {Trans. Assoc. Comput. Linguistics},
volume    = {2},
pages     = {67--78},
year      = {2014},
url       = {https://tacl2013.cs.columbia.edu/ojs/index.php/tacl/article/view/229},
timestamp = {Wed, 17 Feb 2021 21:55:25 +0100},
biburl    = {https://dblp.org/rec/journals/tacl/YoungLHH14.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
class tinyms.data.Flowers102Dataset(dataset_dir, task='Classification', usage='all', num_samples=None, num_parallel_workers=1, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None)[source]

Oxford 102 Flower dataset.

According to the given task configuration, the generated dataset has different output columns:

  • task = ‘Classification’, output columns: [image, dtype=uint8] , [label, dtype=uint32] .

  • task = ‘Segmentation’, output columns: [image, dtype=uint8] , [segmentation, dtype=uint8] , [label, dtype=uint32] .

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • task (str, optional) – Specify the ‘Classification’ or ‘Segmentation’ task. Default: ‘Classification’.

  • usage (str, optional) – Specify the ‘train’, ‘valid’, ‘test’ part or ‘all’ parts of dataset. Default: ‘all’, will read all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker subprocesses used to fetch the dataset in parallel. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Whether or not to decode the images and segmentations after reading. Default: False.

  • sampler (Union[Sampler, Iterable], optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument must be specified only when num_shards is also specified.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter ‘sampler’    Parameter ‘shuffle’    Expected Order Behavior
 None                   None                   random order
 None                   True                   random order
 None                   False                  sequential order
 Sampler object         None                   order defined by sampler
 Sampler object         True                   not allowed
 Sampler object         False                  not allowed

Examples

>>> flowers102_dataset_dir = "/path/to/flowers102_dataset_directory"
>>> dataset = ds.Flowers102Dataset(dataset_dir=flowers102_dataset_dir,
...                                task="Classification",
...                                usage="all",
...                                decode=True)
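
The ‘Segmentation’ task listed in the column description above follows the same pattern; a minimal sketch, adding the segmentation column to the output:

>>> dataset = ds.Flowers102Dataset(dataset_dir=flowers102_dataset_dir,
...                                task="Segmentation",
...                                usage="all",
...                                decode=True)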

About Flowers102 dataset:

The Flowers102 dataset consists of 102 categories of flowers commonly occurring in the United Kingdom. Each class contains between 40 and 258 images.

Here is the original Flowers102 dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── flowers102_dataset_dir
     ├── imagelabels.mat
     ├── setid.mat
     ├── jpg
     │    ├── image_00001.jpg
     │    ├── image_00002.jpg
     │    ├── ...
     └── segmim
          ├── segmim_00001.jpg
          ├── segmim_00002.jpg
          ├── ...

Citation:

@InProceedings{Nilsback08,
  author       = "Maria-Elena Nilsback and Andrew Zisserman",
  title        = "Automated Flower Classification over a Large Number of Classes",
  booktitle    = "Indian Conference on Computer Vision, Graphics and Image Processing",
  month        = "Dec",
  year         = "2008",
}
get_class_indexing()[source]

Get the class index.

Returns:

dict, a str-to-int mapping from label name to index.
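
A minimal usage sketch, mirroring the example above (the path is a placeholder):

>>> flowers102_dataset_dir = "/path/to/flowers102_dataset_directory"
>>> dataset = ds.Flowers102Dataset(dataset_dir=flowers102_dataset_dir)
>>> class_indexing = dataset.get_class_indexing()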

class tinyms.data.Food101Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Food101 dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, or ‘all’. ‘train’ will read from 75,750 samples, ‘test’ will read from 25,250 samples, and ‘all’ will read all ‘train’ and ‘test’ samples. Default: None, will be set to ‘all’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. When this argument is specified, num_samples reflects the maximum number of samples per shard. Default: None.

  • shard_id (int, optional) – The shard ID within num_shards . This argument can only be specified when num_shards is also specified. Default: None.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If the value of usage is not ‘train’, ‘test’, or ‘all’.

  • ValueError – If dataset_dir does not exist.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> food101_dataset_dir = "/path/to/food101_dataset_directory"
>>>
>>> # Read 3 samples from Food101 dataset
>>> dataset = ds.Food101Dataset(dataset_dir=food101_dataset_dir, num_samples=3)
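
Since the label column is of the string type (see the column description above), iterating yields string labels; a minimal sketch, assuming MindSpore's standard create_dict_iterator API:

>>> for item in dataset.create_dict_iterator(output_numpy=True):
...     print(item["label"])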

About Food101 dataset:

Food101 is a dataset of 101 food categories with 101,000 images in total. There are 250 test images and 750 training images in each class. All images were rescaled to have a maximum side length of 512 pixels.

The following is the original Food101 dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── food101_dir
     ├── images
     │    ├── apple_pie
     │    │    ├── 1005649.jpg
     │    │    ├── 1014775.jpg
     │    │    ├──...
     │    ├── baby_back_ribs
     │    │    ├── 1005293.jpg
     │    │    ├── 1007102.jpg
     │    │    ├──...
     │    └──...
     └── meta
          ├── train.txt
          ├── test.txt
          ├── classes.txt
          ├── train.json
          ├── test.json
          └── labels.txt

Citation:

@inproceedings{bossard14,
title     = {Food-101 -- Mining Discriminative Components with Random Forests},
author    = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
booktitle = {European Conference on Computer Vision},
year      = {2014}
}
class tinyms.data.ImageFolderDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, extensions=None, class_indexing=None, decode=False, num_shards=None, shard_id=None, cache=None, decrypt=None)[source]

A source dataset that reads images from a tree of directories. All images within one folder have the same label.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • extensions (list[str], optional) – List of file extensions to be included in the dataset. Default: None.

  • class_indexing (dict, optional) – A str-to-int mapping from folder name to index. Default: None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

  • decrypt (callable, optional) – Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: None, no decryption.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • RuntimeError – If class_indexing is not a dictionary.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • The shape of the image column is [image_size] if decode flag is False, or [H,W,C] otherwise.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> image_folder_dataset_dir = "/path/to/image_folder_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in image_folder_dataset_dir with 8 threads
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir,
...                                 num_parallel_workers=8)
>>>
>>> # 2) Read all samples (image files) from folder cat and folder dog with label 0 and 1
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir,
...                                 class_indexing={"cat":0, "dog":1})
>>>
>>> # 3) Read all samples (image files) in image_folder_dataset_dir with extensions .JPEG
>>> #    and .png (case sensitive)
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir,
...                                 extensions=[".JPEG", ".png"])
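
Per the note above, the image column holds raw encoded bytes of shape [image_size] unless decode=True; a minimal sketch checking the decoded shape, assuming MindSpore's standard create_dict_iterator API:

>>> # 4) With decode=True the image column has shape [H, W, C]
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir, decode=True)
>>> for item in dataset.create_dict_iterator(output_numpy=True):
...     print(item["image"].shape)
...     break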

About ImageFolderDataset:

You can construct the following directory structure from your dataset files and read them with MindSpore’s API.

.
└── image_folder_dataset_directory
     ├── class1
     │    ├── 000000000001.jpg
     │    ├── 000000000002.jpg
     │    ├── ...
     ├── class2
     │    ├── 000000000001.jpg
     │    ├── 000000000002.jpg
     │    ├── ...
     ├── class3
     │    ├── 000000000001.jpg
     │    ├── 000000000002.jpg
     │    ├── ...
     ├── classN
     ├── ...
get_class_indexing()[source]

Get the class index.

Returns:

dict, a str-to-int mapping from label name to index.

Examples

>>> image_folder_dataset_dir = "/path/to/image_folder_dataset_directory"
>>>
>>> dataset = ds.ImageFolderDataset(dataset_dir=image_folder_dataset_dir)
>>> class_indexing = dataset.get_class_indexing()
class tinyms.data.KITTIDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

KITTI dataset.

When usage is “train”, the generated dataset has multiple columns: [image, label, truncated, occluded, alpha, bbox, dimensions, location, rotation_y]; when usage is “test”, the generated dataset has only one column: [image]. The tensor of column image is of the uint8 type; the tensors of columns label and occluded are of the uint32 type; the tensors of columns truncated, alpha, bbox, dimensions, location and rotation_y are of the float32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’ or ‘test’. ‘train’ will read 7481 train samples, ‘test’ will read 7518 test samples without labels. Default: None, will use ‘train’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will include all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> kitti_dataset_dir = "/path/to/kitti_dataset_directory"
>>>
>>> # 1) Read all KITTI train dataset samples in kitti_dataset_dir in sequence
>>> dataset = ds.KITTIDataset(dataset_dir=kitti_dataset_dir, usage="train")
>>>
>>> # 2) Read then decode all KITTI test dataset samples in kitti_dataset_dir in sequence
>>> dataset = ds.KITTIDataset(dataset_dir=kitti_dataset_dir, usage="test",
...                           decode=True, shuffle=False)
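
Per the column description above, each ‘train’ sample also carries annotation columns such as bbox and dimensions; a minimal inspection sketch, assuming MindSpore's standard create_dict_iterator API:

>>> # 3) Inspect the annotation columns of the train split
>>> dataset = ds.KITTIDataset(dataset_dir=kitti_dataset_dir, usage="train")
>>> for item in dataset.create_dict_iterator(output_numpy=True):
...     print(item["label"], item["bbox"])
...     break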

About KITTI dataset:

KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their needs. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vehicles and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence.

You can unzip the original KITTI dataset files into this directory structure and read them with MindSpore’s API.

.
└── kitti_dataset_directory
    ├── data_object_image_2
    │    ├──training
    │    │    ├──image_2
    │    │    │    ├── 000000000001.jpg
    │    │    │    ├── 000000000002.jpg
    │    │    │    ├── ...
    │    ├──testing
    │    │    ├── image_2
    │    │    │    ├── 000000000001.jpg
    │    │    │    ├── 000000000002.jpg
    │    │    │    ├── ...
    ├── data_object_label_2
    │    ├──training
    │    │    ├──label_2
    │    │    │    ├── 000000000001.txt
    │    │    │    ├── 000000000002.txt
    │    │    │    ├── ...

Citation:

@INPROCEEDINGS{Geiger2012CVPR,
author={Andreas Geiger and Philip Lenz and Raquel Urtasun},
title={Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2012}
}
class tinyms.data.KMnistDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

KMNIST(Kuzushiji-MNIST) dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’ . ‘train’ will read from 60,000 train samples, ‘test’ will read from 10,000 test samples, ‘all’ will read from all 70,000 samples. Default: None, will read all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> kmnist_dataset_dir = "/path/to/kmnist_dataset_directory"
>>>
>>> # Read 3 samples from KMNIST dataset
>>> dataset = ds.KMnistDataset(dataset_dir=kmnist_dataset_dir, num_samples=3)

About KMNIST dataset:

KMNIST is a dataset adapted from the Kuzushiji Dataset, intended as a drop-in replacement for MNIST, the most famous dataset in the machine learning community.

Here is the original KMNIST dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── kmnist_dataset_dir
     ├── t10k-images-idx3-ubyte
     ├── t10k-labels-idx1-ubyte
     ├── train-images-idx3-ubyte
     └── train-labels-idx1-ubyte

Citation:

@online{clanuwat2018deep,
  author       = {Tarin Clanuwat and Mikel Bober-Irizar and Asanobu Kitamoto and
                   Alex Lamb and Kazuaki Yamamoto and David Ha},
  title        = {Deep Learning for Classical Japanese Literature},
  date         = {2018-12-03},
  year         = {2018},
  eprintclass  = {cs.CV},
  eprinttype   = {arXiv},
  eprint       = {cs.CV/1812.01718},
}
class tinyms.data.LFWDataset(dataset_dir, task=None, usage=None, image_set=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

LFW(Labeled Faces in the Wild) dataset.

When task is ‘people’, the generated dataset has two columns: [image, label]; when task is ‘pairs’, the generated dataset has three columns: [image1, image2, label]. The tensors of columns image, image1 and image2 are of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • task (str, optional) – Set the task type of reading LFW data, supports ‘people’ and ‘pairs’. Default: None, means ‘people’.

  • usage (str, optional) – The image split to use, supports ‘10fold’, ‘train’, ‘test’ and ‘all’. Default: None, will read samples including train and test.

  • image_set (str, optional) – Type of image funneling to use, supports ‘original’, ‘funneled’ or ‘deepfunneled’. Default: None, will use ‘funneled’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter ‘sampler’    Parameter ‘shuffle’    Expected Order Behavior
 None                   None                   random order
 None                   True                   random order
 None                   False                  sequential order
 Sampler object         None                   order defined by sampler
 Sampler object         True                   not allowed
 Sampler object         False                  not allowed

Examples

>>> # 1) Read LFW People dataset
>>> lfw_people_dataset_dir = "/path/to/lfw_people_dataset_directory"
>>> dataset = ds.LFWDataset(dataset_dir=lfw_people_dataset_dir, task="people", usage="10fold",
...                         image_set="original")
>>>
>>> # 2) Read LFW Pairs dataset
>>> lfw_pairs_dataset_dir = "/path/to/lfw_pairs_dataset_directory"
>>> dataset = ds.LFWDataset(dataset_dir=lfw_pairs_dataset_dir, task="pairs", usage="test", image_set="funneled")

About LFW dataset:

LFW (Labelled Faces in the Wild) is one of the most commonly used open datasets in the field of face recognition. It was released by Gary B. Huang and his team at the University of Massachusetts, Amherst in 2007. The dataset includes 13,233 images of 5,749 individuals, which are sourced from various internet platforms and contain diverse environmental factors such as different poses, lighting conditions, and angles. Most of the images in the dataset are frontal and cover a wide range of ages, genders, and ethnicities.

You can unzip the original LFW dataset files into this directory structure and read them with MindSpore’s API.

.
└── lfw_dataset_directory
    ├── lfw
    │    ├──Aaron_Eckhart
    │    │    ├──Aaron_Eckhart_0001.jpg
    │    │    ├──...
    │    ├──Abbas_Kiarostami
    │    │    ├── Abbas_Kiarostami_0001.jpg
    │    │    ├──...
    │    ├──...
    ├── lfw-deepfunneled
    │    ├──Aaron_Eckhart
    │    │    ├──Aaron_Eckhart_0001.jpg
    │    │    ├──...
    │    ├──Abbas_Kiarostami
    │    │    ├── Abbas_Kiarostami_0001.jpg
    │    │    ├──...
    │    ├──...
    ├── lfw_funneled
    │    ├──Aaron_Eckhart
    │    │    ├──Aaron_Eckhart_0001.jpg
    │    │    ├──...
    │    ├──Abbas_Kiarostami
    │    │    ├── Abbas_Kiarostami_0001.jpg
    │    │    ├──...
    │    ├──...
    ├── lfw-names.txt
    ├── pairs.txt
    ├── pairsDevTest.txt
    ├── pairsDevTrain.txt
    ├── people.txt
    ├── peopleDevTest.txt
    ├── peopleDevTrain.txt

Citation:

@TechReport{LFWTech,
    title={LFW: A Database for Studying Recognition in Unconstrained Environments},
    author={Gary B. Huang and Manu Ramesh and Tamara Berg and Erik Learned-Miller},
    institution ={University of Massachusetts, Amherst},
    year={2007},
    number={07-49},
    month={October},
    howpublished = {http://vis-www.cs.umass.edu/lfw}
}
class tinyms.data.LSUNDataset(dataset_dir, usage=None, classes=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

LSUN(Large-scale Scene UNderstanding) dataset.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, ‘valid’ or ‘all’. Default: None, will be set to ‘all’.

  • classes (Union[str, list[str]], optional) – Choose the specific classes to load. Default: None, means loading all classes in root directory.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is invalid (< 0 or >= num_shards ).

  • ValueError – If usage or classes is invalid (not in specific types).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter ‘sampler’    Parameter ‘shuffle’    Expected Order Behavior
 None                   None                   random order
 None                   True                   random order
 None                   False                  sequential order
 Sampler object         None                   order defined by sampler
 Sampler object         True                   not allowed
 Sampler object         False                  not allowed

Examples

>>> lsun_dataset_dir = "/path/to/lsun_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in lsun_dataset_dir with 8 threads
>>> dataset = ds.LSUNDataset(dataset_dir=lsun_dataset_dir,
...                          num_parallel_workers=8)
>>>
>>> # 2) Read all train samples (image files) from folder "bedroom" and "classroom"
>>> dataset = ds.LSUNDataset(dataset_dir=lsun_dataset_dir, usage="train",
...                          classes=["bedroom", "classroom"])

About LSUN dataset:

The LSUN (Large-Scale Scene Understanding) dataset is a large-scale dataset for indoor scene understanding. It was originally launched by Princeton University in 2015 with the aim of providing a challenging and diverse dataset for research in computer vision and machine learning. The main application of this dataset in research is indoor scene analysis.

This dataset contains ten different categories of scenes, including bedrooms, living rooms, restaurants, lounges, studies, kitchens, bathrooms, corridors, children’s rooms, and outdoors. Each category contains tens of thousands of images from different perspectives; these are high-quality, high-resolution real-world images.

You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── lsun_dataset_directory
    ├── test
    │    ├── ...
    ├── bedroom_train
    │    ├── 1_1.jpg
    │    ├── 1_2.jpg
    ├── bedroom_val
    │    ├── ...
    ├── classroom_train
    │    ├── ...
    ├── classroom_val
    │    ├── ...

Citation:

@article{yu15lsun,
    title={LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop},
    author={Yu, Fisher and Zhang, Yinda and Song, Shuran and Seff, Ari and Xiao, Jianxiong},
    journal={arXiv preprint arXiv:1506.03365},
    year={2015}
}
class tinyms.data.ManifestDataset(dataset_file, usage='train', num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, class_indexing=None, decode=False, num_shards=None, shard_id=None, cache=None)[source]

A source dataset for reading images from a Manifest file.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint64 type.

Parameters:
  • dataset_file (str) – File to be read.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘eval’ and ‘inference’. Default: ‘train’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will include all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • class_indexing (dict, optional) – A str-to-int mapping from label name to index. Default: None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_file is not valid or does not exist.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • RuntimeError – If class_indexing is not a dictionary.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • The shape of the image column is [image_size] if decode flag is False, or [H,W,C] otherwise.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> manifest_dataset_dir = "/path/to/manifest_dataset_file"
>>>
>>> # 1) Read all samples specified in manifest_dataset_dir dataset with 8 threads for training
>>> dataset = ds.ManifestDataset(dataset_file=manifest_dataset_dir, usage="train", num_parallel_workers=8)
>>>
>>> # 2) Read samples (specified in manifest_file.manifest) for shard 0 in a 2-way distributed training setup
>>> dataset = ds.ManifestDataset(dataset_file=manifest_dataset_dir, num_shards=2, shard_id=0)

About Manifest dataset:

A Manifest file contains a list of files included in a dataset, including basic file information such as the file name and file ID, along with extended file metadata. Manifest is a data format supported by Huawei ModelArts. For details, see Specifications for Importing the Manifest File .

.
└── manifest_dataset_directory
    ├── train
    │    ├── 1.JPEG
    │    ├── 2.JPEG
    │    ├── ...
    ├── eval
    │    ├── 1.JPEG
    │    ├── 2.JPEG
    │    ├── ...
get_class_indexing()[source]

Get the class index.

Returns:

dict, a str-to-int mapping from label name to index.

Examples

>>> manifest_dataset_dir = "/path/to/manifest_dataset_file"
>>>
>>> dataset = ds.ManifestDataset(dataset_file=manifest_dataset_dir)
>>> class_indexing = dataset.get_class_indexing()
class tinyms.data.MnistDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

MNIST dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’ . ‘train’ will read from 60,000 train samples, ‘test’ will read from 10,000 test samples, ‘all’ will read from all 70,000 samples. Default: None, will read all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If usage is not ‘train’, ‘test’ or ‘all’.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> mnist_dataset_dir = "/path/to/mnist_dataset_directory"
>>>
>>> # Read 3 samples from MNIST dataset
>>> dataset = ds.MnistDataset(dataset_dir=mnist_dataset_dir, num_samples=3)
>>>
>>> # Note: In mnist_dataset dataset, each dictionary has keys "image" and "label"
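
A minimal iteration sketch following the note above, assuming MindSpore's standard create_dict_iterator API:

>>> for item in dataset.create_dict_iterator(output_numpy=True):
...     print(item["image"].shape, item["label"])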

About MNIST dataset:

The MNIST database of handwritten digits has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

Here is the original MNIST dataset structure. You can unzip the dataset files into this directory structure and read them with MindSpore’s API.

.
└── mnist_dataset_dir
     ├── t10k-images-idx3-ubyte
     ├── t10k-labels-idx1-ubyte
     ├── train-images-idx3-ubyte
     └── train-labels-idx1-ubyte

Citation:

@article{lecun2010mnist,
title        = {MNIST handwritten digit database},
author       = {LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal      = {ATT Labs [Online]},
volume       = {2},
year         = {2010},
howpublished = {http://yann.lecun.com/exdb/mnist}
}
class tinyms.data.OmniglotDataset(dataset_dir, background=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Omniglot dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • background (bool, optional) – Whether to create the dataset from the “background” set; otherwise, create it from the “evaluation” set. Default: None, set to True.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and sharding are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> omniglot_dataset_dir = "/path/to/omniglot_dataset_directory"
>>> dataset = ds.OmniglotDataset(dataset_dir=omniglot_dataset_dir,
...                              num_parallel_workers=8)
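
The background parameter selects between the two splits shown in the directory structure below; a minimal sketch for the “evaluation” set:

>>> dataset = ds.OmniglotDataset(dataset_dir=omniglot_dataset_dir,
...                              background=False)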

About Omniglot dataset:

The Omniglot dataset is designed for developing more human-like learning algorithms. It contains 1623 different handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon’s Mechanical Turk by 20 different people. Each image is paired with stroke data, a sequence of [x, y, t] coordinates with time in milliseconds.

You can unzip the original Omniglot dataset files into this directory structure and read them with MindSpore’s API.

.
└── omniglot_dataset_directory
     ├── images_background/
     │    ├── character_class1/
     │    │    ├── 01.jpg
     │    │    ├── 02.jpg
     │    ├── character_class2/
     │    │    ├── 01.jpg
     │    │    ├── 02.jpg
     │    ├── ...
     └── images_evaluation/
          ├── character_class1/
          │    ├── 01.jpg
          │    ├── 02.jpg
          ├── character_class2/
          │    ├── 01.jpg
          │    ├── 02.jpg
          ├── ...

Citation:

@article{lake2015human,
    title={Human-level concept learning through probabilistic program induction},
    author={Lake, Brenden M and Salakhutdinov, Ruslan and Tenenbaum, Joshua B},
    journal={Science},
    volume={350},
    number={6266},
    pages={1332--1338},
    year={2015},
    publisher={American Association for the Advancement of Science}
}
class tinyms.data.PhotoTourDataset(dataset_dir, name, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

PhotoTour dataset.

According to the given usage configuration, the generated dataset has different output columns:

  • usage = ‘train’, output columns: [image, dtype=uint8].

  • usage ≠ ‘train’, output columns: [image1, dtype=uint8], [image2, dtype=uint8], [matches, dtype=uint32].

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • name (str) – Name of the dataset to load, should be one of ‘notredame’, ‘yosemite’, ‘liberty’, ‘notredame_harris’, ‘yosemite_harris’ or ‘liberty_harris’.

  • usage (str, optional) – Usage of the dataset, can be ‘train’ or ‘test’. Default: None, will be set to ‘train’. When usage is ‘train’, number of samples for each name is {‘notredame’: 468159, ‘yosemite’: 633587, ‘liberty’: 450092, ‘liberty_harris’: 379587, ‘yosemite_harris’: 450912, ‘notredame_harris’: 325295}. When usage is ‘test’, will read 100,000 samples for testing.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If usage is not in [“train”, “test”].

  • ValueError – If name is not in [“notredame”, “yosemite”, “liberty”, “notredame_harris”, “yosemite_harris”, “liberty_harris”].

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> # Read 3 samples from PhotoTour dataset.
>>> dataset = ds.PhotoTourDataset(dataset_dir="/path/to/photo_tour_dataset_directory",
...                               name='liberty', usage='train', num_samples=3)
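
When usage is ‘test’, the dataset yields the three columns [image1, image2, matches] described above; a minimal sketch:

>>> dataset = ds.PhotoTourDataset(dataset_dir="/path/to/photo_tour_dataset_directory",
...                               name='liberty', usage='test', num_samples=3)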

About PhotoTour dataset:

The data is taken from Photo Tourism reconstructions from Trevi Fountain (Rome), Notre Dame (Paris) and Half Dome (Yosemite). Each dataset consists of a series of corresponding patches, which are obtained by projecting 3D points from Photo Tourism reconstructions back into the original images.

The dataset consists of 1024 x 1024 bitmap (.bmp) images, each containing a 16 x 16 array of image patches. Each patch is sampled as 64 x 64 grayscale, with a canonical scale and orientation. For details of how the scale and orientation are established, please see the paper. An associated metadata file info.txt contains the match information. Each row of info.txt corresponds to a separate patch, with the patches ordered from left to right and top to bottom in each bitmap image. The first number on each row of info.txt is the 3D point ID from which that patch was sampled – patches with the same 3D point ID are projected from the same 3D point (into different images). The second number in info.txt corresponds to the image from which the patch was sampled, and is not used at present.

You can unzip the original PhotoTour dataset files into this directory structure and read them with MindSpore’s API.

.
└── photo_tour_dataset_directory
    ├── liberty/
    │    ├── info.txt                 // two columns: 3D_point_ID, unused
    │    ├── m50_100000_100000_0.txt  // seven columns: patch_ID1, 3D_point_ID1, unused1,
    │    │                            // patch_ID2, 3D_point_ID2, unused2, unused3
    │    ├── patches0000.bmp          // 1024*1024 pixels, with 16 * 16 patches.
    │    ├── patches0001.bmp
    │    ├── ...
    ├── yosemite/
    │    ├── ...
    ├── notredame/
    │    ├── ...
    ├── liberty_harris/
    │    ├── ...
    ├── yosemite_harris/
    │    ├── ...
    ├── notredame_harris/
    │    ├── ...

Citation:

@INPROCEEDINGS{4269996,
    author={Winder, Simon A. J. and Brown, Matthew},
    booktitle={2007 IEEE Conference on Computer Vision and Pattern Recognition},
    title={Learning Local Image Descriptors},
    year={2007},
    volume={},
    number={},
    pages={1-8},
    doi={10.1109/CVPR.2007.382971}
}
class tinyms.data.Places365Dataset(dataset_dir, usage=None, small=True, decode=False, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Places365 dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train-standard’, ‘train-challenge’ or ‘val’. Default: None, will be set to ‘train-standard’.

  • small (bool, optional) – Use 256 * 256 images (True) or high resolution images (False). Default: True, matching the signature above.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use the global default number of workers (8); it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If usage is not in [“train-standard”, “train-challenge”, “val”].

Note

  • This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

 Parameter sampler    Parameter shuffle    Expected Order Behavior
 None                 None                 random order
 None                 True                 random order
 None                 False                sequential order
 Sampler object       None                 order defined by sampler
 Sampler object       True                 not allowed
 Sampler object       False                not allowed

Examples

>>> place365_dataset_dir = "/path/to/place365_dataset_directory"
>>>
>>> # Read 3 samples from Places365 dataset
>>> dataset = ds.Places365Dataset(dataset_dir=place365_dataset_dir, usage='train-standard',
...                               small=True, decode=True, num_samples=3)

About Places365 dataset:

Convolutional neural networks (CNNs) trained on the Places2 Database can be used for scene recognition as well as for generic deep scene features for visual recognition.

The authors released the data of Places365-Standard and the data of Places365-Challenge to the public. Places365-Standard is the core set of the Places2 Database, which has been used to train the Places365-CNNs. The authors will add other kinds of annotation on the Places365-Standard in the future. Places365-Challenge is the competition set of the Places2 Database, which has 6.2 million extra images compared to Places365-Standard. The Places365-Challenge was used for the Places Challenge 2016.

You can unzip the original Places365 dataset files into this directory structure and read them with MindSpore’s API.

.
└── categories_places365
    ├── places365_train-standard.txt
    ├── places365_train-challenge.txt
    ├── val_large/
    │    ├── Places365_val_00000001.jpg
    │    ├── Places365_val_00000002.jpg
    │    ├── Places365_val_00000003.jpg
    │    ├── ...
    ├── val_256/
    │    ├── ...
    ├── data_large_standard/
    │    ├── ...
    ├── data_256_standard/
    │    ├── ...
    ├── data_large_challenge/
    │    ├── ...
    ├── data_256_challenge/
    │    ├── ...

Citation:

@article{zhou2017places,
    title={Places: A 10 million Image Database for Scene Recognition},
    author={Zhou, Bolei and Lapedriza, Agata and Khosla, Aditya and Oliva, Aude and Torralba, Antonio},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year={2017},
    publisher={IEEE}
}
class tinyms.data.QMnistDataset(dataset_dir, usage=None, compat=True, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

QMNIST dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, ‘test10k’, ‘test50k’, ‘nist’ or ‘all’. Default: None, will read all samples.

  • compat (bool, optional) – Whether the label for each example is class number (compat=True) or the full QMNIST information (compat=False). Default: True.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> qmnist_dataset_dir = "/path/to/qmnist_dataset_directory"
>>>
>>> # Read 3 samples from QMNIST train dataset
>>> dataset = ds.QMnistDataset(dataset_dir=qmnist_dataset_dir, num_samples=3)
>>>
>>> # Note: In QMNIST dataset, each dictionary has keys "image" and "label"
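
The compat flag decides what the label column carries. A hedged sketch contrasting the two modes, using only parameters documented above (the path is a placeholder):

>>> # compat=True (default): "label" holds just the class number
>>> dataset = ds.QMnistDataset(dataset_dir=qmnist_dataset_dir, usage='train', compat=True)
>>> # compat=False: "label" holds the full QMNIST information per sample
>>> dataset = ds.QMnistDataset(dataset_dir=qmnist_dataset_dir, usage='train', compat=False)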

About QMNIST dataset:

The QMNIST dataset was generated from the original data found in the NIST Special Database 19, with the goal of matching the MNIST preprocessing as closely as possible. Through an iterative process, the researchers reconstructed an additional 50k MNIST-like images: they started with the reconstruction process given in the paper and used the Hungarian algorithm to find the best matches between the original MNIST samples and their reconstructed counterparts.

Here is the original QMNIST dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── qmnist_dataset_dir
     ├── qmnist-train-images-idx3-ubyte
     ├── qmnist-train-labels-idx2-int
     ├── qmnist-test-images-idx3-ubyte
     ├── qmnist-test-labels-idx2-int
     ├── xnist-images-idx3-ubyte
     └── xnist-labels-idx2-int

Citation:

@incollection{qmnist-2019,
   title = "Cold Case: The Lost MNIST Digits",
   author = "Chhavi Yadav and L'{e}on Bottou",           booktitle = {Advances in Neural Information Processing Systems 32},
   year = {2019},
   publisher = {Curran Associates, Inc.},
}
class tinyms.data.RandomDataset(total_rows=None, schema=None, columns_list=None, num_samples=None, num_parallel_workers=None, cache=None, shuffle=None, num_shards=None, shard_id=None)[source]

A source dataset that generates random data.

Parameters:
  • total_rows (int, optional) – Number of samples for the dataset to generate. Default: None, number of samples is random.

  • schema (Union[str, Schema], optional) – Data format policy, which specifies the data types and shapes of the data column to be read. Both JSON file path and objects constructed by mindspore.dataset.Schema are acceptable. Default: None.

  • columns_list (list[str], optional) – List of column names of the dataset. Default: None, the columns will be named like this “c0”, “c1”, “c2” etc.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

Raises:
  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • TypeError – If total_rows is not of type int.

  • TypeError – If num_shards is not of type int.

  • TypeError – If num_parallel_workers is not of type int.

  • TypeError – If shuffle is not of type bool.

  • TypeError – If columns_list is not of type list.

Examples

>>> from mindspore import dtype as mstype
>>> import mindspore.dataset as ds
>>>
>>> schema = ds.Schema()
>>> schema.add_column('image', de_type=mstype.uint8, shape=[2])
>>> schema.add_column('label', de_type=mstype.uint8, shape=[1])
>>> # apply dataset operations
>>> ds1 = ds.RandomDataset(schema=schema, total_rows=50, num_parallel_workers=4)
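
To see what the schema above yields, a minimal sketch that batches the generated rows and inspects their shapes (both columns follow the schema defined above):

>>> ds2 = ds1.batch(10)
>>> for row in ds2.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(row["image"].shape, row["label"].shape)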
class tinyms.data.RenderedSST2Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

RenderedSST2(Rendered Stanford Sentiment Treebank v2) dataset.

The generated dataset has two columns: [image, label]. The tensor of column image is of the uint8 type. The tensor of column label is of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘val’, ‘test’ or ‘all’. Default: None, will read all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will include all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Whether or not to decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. When this argument is specified, num_samples reflects the maximum sample number of per shard. Default: None.

  • shard_id (int, optional) – The shard ID within num_shards . This argument can only be specified when num_shards is also specified. Default: None.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If usage is not ‘train’, ‘test’, ‘val’ or ‘all’.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> rendered_sst2_dataset_dir = "/path/to/rendered_sst2_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in rendered_sst2_dataset_dir with 8 threads
>>> dataset = ds.RenderedSST2Dataset(dataset_dir=rendered_sst2_dataset_dir,
...                                  usage="all", num_parallel_workers=8)
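
A second hedged sketch: restrict to the train split and decode images while reading, using only parameters documented above (the path is a placeholder):

>>> dataset = ds.RenderedSST2Dataset(dataset_dir=rendered_sst2_dataset_dir,
...                                  usage="train", decode=True, num_samples=5)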

About RenderedSST2Dataset:

Rendered SST2 is an image classification dataset generated by rendering sentences from the Stanford Sentiment Treebank v2 dataset. There are three splits, each containing two classes (positive and negative): a train split with 6920 images (3610 positive and 3310 negative), a validation split with 872 images (444 positive and 428 negative), and a test split with 1821 images (909 positive and 912 negative).

Here is the original RenderedSST2 dataset structure. You can unzip the dataset files into the following directory structure and read by MindSpore’s API.

.
└── rendered_sst2_dataset_directory
     ├── train
     │    ├── negative
     │    │    ├── 0001.jpg
     │    │    ├── 0002.jpg
     │    │    ...
     │    └── positive
     │         ├── 0001.jpg
     │         ├── 0002.jpg
     │         ...
     ├── test
     │    ├── negative
     │    │    ├── 0001.jpg
     │    │    ├── 0002.jpg
     │    │    ...
     │    └── positive
     │         ├── 0001.jpg
     │         ├── 0002.jpg
     │         ...
     └── valid
          ├── negative
          │    ├── 0001.jpg
          │    ├── 0002.jpg
          │    ...
          └── positive
               ├── 0001.jpg
               ├── 0002.jpg
               ...

Citation:

@inproceedings{socher-etal-2013-recursive,
    title     = {Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank},
    author    = {Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning,
                  Christopher D. and Ng, Andrew and Potts, Christopher},
    booktitle = {Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing},
    month     = oct,
    year      = {2013},
    address   = {Seattle, Washington, USA},
    publisher = {Association for Computational Linguistics},
    url       = {https://www.aclweb.org/anthology/D13-1170},
    pages     = {1631--1642},
}
class tinyms.data.SBDataset(dataset_dir, task='Boundaries', usage='all', num_samples=None, num_parallel_workers=1, shuffle=None, decode=None, sampler=None, num_shards=None, shard_id=None)[source]

SB(Semantic Boundaries) Dataset.

By configuring the task parameter, the generated dataset has different output columns:

  • ‘task’ = ‘Boundaries’ , there are two output columns: the ‘image’ column has the data type uint8 and the ‘label’ column contains one image of the data type uint8.

  • ‘task’ = ‘Segmentation’ , there are two output columns: the ‘image’ column has the data type uint8 and the ‘label’ column contains 20 images of the data type uint8.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • task (str, optional) – Acceptable tasks include ‘Boundaries’ or ‘Segmentation’. Default: ‘Boundaries’.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘val’, ‘train_noval’ and ‘all’. Default: ‘all’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker subprocesses to read the data. Default: 1.

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: None.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

Raises:
  • RuntimeError – If dataset_dir is not valid or does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If dataset_dir does not exist.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If task is not in [‘Boundaries’, ‘Segmentation’].

  • ValueError – If usage is not in [‘train’, ‘val’, ‘train_noval’, ‘all’].

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> sb_dataset_dir = "/path/to/sb_dataset_directory"
>>>
>>> # 1) Get all samples from Semantic Boundaries Dataset in sequence
>>> dataset = ds.SBDataset(dataset_dir=sb_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from Semantic Boundaries Dataset
>>> dataset = ds.SBDataset(dataset_dir=sb_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # 3) Get samples from Semantic Boundaries Dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.SBDataset(dataset_dir=sb_dataset_dir, num_shards=2, shard_id=0)
>>>
>>> # In Semantic Boundaries Dataset, each dictionary has keys "image" and "task"
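
The task parameter switches the label layout as described above. A hedged sketch selecting the 'Segmentation' task, which yields 20 label images per sample:

>>> dataset = ds.SBDataset(dataset_dir=sb_dataset_dir, task='Segmentation', usage='train')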

About Semantic Boundaries Dataset:

The Semantic Boundaries Dataset consists of 11355 color images. The train.txt file lists 8498 image names, val.txt lists 2857, and train_noval.txt lists 5623. The cls/ directory contains category-level Segmentation and Boundaries annotations, while inst/ contains instance-level annotations.

You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── benchmark_RELEASE
     ├── dataset
     ├── img
     │    ├── 2008_000002.jpg
     │    ├── 2008_000003.jpg
     │    ├── ...
     ├── cls
     │    ├── 2008_000002.mat
     │    ├── 2008_000003.mat
     │    ├── ...
     ├── inst
     │    ├── 2008_000002.mat
     │    ├── 2008_000003.mat
     │    ├── ...
     ├── train.txt
     └── val.txt

Citation:

@InProceedings{BharathICCV2011,
    author       = "Bharath Hariharan and Pablo Arbelaez and Lubomir Bourdev and
                    Subhransu Maji and Jitendra Malik",
    title        = "Semantic Contours from Inverse Detectors",
    booktitle    = "International Conference on Computer Vision (ICCV)",
    year         = "2011",
}
class tinyms.data.SBUDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

SBU(SBU Captioned Photo) dataset.

The generated dataset has two columns [image, caption] . The tensor of column image is of the uint8 type. The tensor of column caption is of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> sbu_dataset_dir = "/path/to/sbu_dataset_directory"
>>> # Read 3 samples from SBU dataset
>>> dataset = ds.SBUDataset(dataset_dir=sbu_dataset_dir, num_samples=3)
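
Captions come back as strings in the caption column. A minimal sketch (building on the dataset created above) that prints them:

>>> for sample in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(sample["caption"])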

About SBU dataset:

SBU dataset is a large captioned photo collection. It contains one million images with associated visually relevant captions.

You should manually download the images using the official download.m script (replacing ‘urls{i}(24, end)’ with ‘urls{i}(24:1:end)’) and keep the directory structure as below.

.
└─ dataset_dir
   ├── SBU_captioned_photo_dataset_captions.txt
   ├── SBU_captioned_photo_dataset_urls.txt
   └── sbu_images
       ├── m_3326_3596303505_3ce4c20529.jpg
       ├── ......
       └── m_2522_4182181099_c3c23ab1cc.jpg

Citation:

@inproceedings{Ordonez:2011:im2text,
  Author    = {Vicente Ordonez and Girish Kulkarni and Tamara L. Berg},
  Title     = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
  Booktitle = {Neural Information Processing Systems ({NIPS})},
  Year      = {2011},
}
class tinyms.data.SemeionDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Semeion dataset.

The generated dataset has two columns [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> semeion_dataset_dir = "/path/to/semeion_dataset_directory"
>>>
>>> # 1) Get all samples from SEMEION dataset in sequence
>>> dataset = ds.SemeionDataset(dataset_dir=semeion_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 10 samples from SEMEION dataset
>>> dataset = ds.SemeionDataset(dataset_dir=semeion_dataset_dir, num_samples=10, shuffle=True)
>>>
>>> # 3) Get samples from SEMEION dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.SemeionDataset(dataset_dir=semeion_dataset_dir, num_shards=2, shard_id=0)
>>>
>>> # In SEMEION dataset, each dictionary has keys: image, label.
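
A sampler replaces shuffle, per the table above. A hedged sketch using SequentialSampler to read the first five records in order:

>>> sampler = ds.SequentialSampler(start_index=0, num_samples=5)
>>> dataset = ds.SemeionDataset(dataset_dir=semeion_dataset_dir, sampler=sampler)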

About SEMEION dataset:

The dataset was created by Tactile Srl, Brescia, Italy (http://www.tattile.it) and donated in 1994 to Semeion Research Center of Sciences of Communication, Rome, Italy (http://www.semeion.it), for machine learning research.

This dataset consists of 1593 records (rows) and 256 attributes (columns). Each record represents a handwritten digit, originally scanned at 256 grey levels. Each pixel of each original scanned image was first stretched and then scaled between 0 and 1 (every pixel whose grey value was 127 or below was set to 0, and every pixel whose grey value was over 127 was set to 1). Finally, each binary image was scaled again into a 16x16 square box, yielding the final 256 binary attributes.

.
└── semeion_dataset_dir
    └──semeion.data
    └──semeion.names

Citation:

@article{buscema1998independent,
  title={The Theory of Independent Judges},
  author={M Buscema, MetaNet},
  journal={Substance Use & Misuse},
  volume={33},
  number={2},
  pages={439--461},
  year={1998},
}
class tinyms.data.STL10Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

STL-10 dataset.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the int32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, ‘unlabeled’, ‘train+unlabeled’ or ‘all’. ‘train’ will read 5,000 train samples, ‘test’ will read 8,000 test samples, ‘unlabeled’ will read 100,000 unlabeled samples, ‘train+unlabeled’ will read 105,000 samples, and ‘all’ will read all the samples. Default: None, all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir is not valid or does not exist or does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If usage is invalid.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> stl10_dataset_dir = "/path/to/stl10_dataset_directory"
>>>
>>> # 1) Get all samples from STL10 dataset in sequence
>>> dataset = ds.STL10Dataset(dataset_dir=stl10_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from STL10 dataset
>>> dataset = ds.STL10Dataset(dataset_dir=stl10_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # 3) Get samples from STL10 dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.STL10Dataset(dataset_dir=stl10_dataset_dir, num_shards=2, shard_id=0)
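
One more hedged sketch: usage='train+unlabeled' reads both splits at once; unlabeled samples carry the label -1, as noted below.

>>> dataset = ds.STL10Dataset(dataset_dir=stl10_dataset_dir, usage='train+unlabeled')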

About STL10 dataset:

STL10 dataset consists of 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck. Images are 96x96 pixel color images. There are 500 training images and 800 test images per class, plus 100,000 unlabeled images. Labels are 0-indexed, and unlabeled images have -1 as their labels.

Here is the original STL10 dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── stl10_dataset_dir
     ├── train_X.bin
     ├── train_y.bin
     ├── test_X.bin
     ├── test_y.bin
     └── unlabeled_X.bin

Citation:

@techreport{Coates10,
author       = {Adam Coates},
title        = {Learning multiple layers of features from tiny images},
year         = {2010},
howpublished = {https://cs.stanford.edu/~acoates/stl10/},
description  = {The STL-10 dataset consists of 96x96 RGB images in 10 classes,
                with 500 training images and 800 testing images per class.
                There are 5000 training images and 8000 test images.
                It also has 100000 unlabeled images for unsupervised learning.
                These examples are extracted from a similar but broader distribution of images.
                }
}
class tinyms.data.SUN397Dataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

SUN397(Scene UNderstanding) dataset.

The generated dataset has two columns: [image, label]. The tensor of column image is of the uint8 type. The tensor of column label is of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Whether or not to decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. When this argument is specified, num_samples reflects the maximum sample number of per shard. Default: None.

  • shard_id (int, optional) – The shard ID within num_shards . This argument can only be specified when num_shards is also specified. Default: None.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> sun397_dataset_dir = "/path/to/sun397_dataset_directory"
>>>
>>> # 1) Read all samples (image files) in sun397_dataset_dir with 8 threads
>>> dataset = ds.SUN397Dataset(dataset_dir=sun397_dataset_dir, num_parallel_workers=8)
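
A second hedged sketch: decode images while reading and cap the sample count, using only parameters documented above:

>>> dataset = ds.SUN397Dataset(dataset_dir=sun397_dataset_dir, decode=True, num_samples=5)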

About SUN397Dataset:

SUN397, or Scene UNderstanding (SUN), is a dataset for scene recognition consisting of 397 categories with 108,754 images. The number of images varies across categories, but there are at least 100 images per category. Images are in jpg, png, or gif format.

Here is the original SUN397 dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── sun397_dataset_directory
    ├── ClassName.txt
    ├── README.txt
    ├── a
    │   ├── abbey
    │   │   ├── sun_aaaulhwrhqgejnyt.jpg
    │   │   ├── sun_aacphuqehdodwawg.jpg
    │   │   ├── ...
    │   ├── apartment_building
    │   │   └── outdoor
    │   │       ├── sun_aamyhslnsnomjzue.jpg
    │   │       ├── sun_abbjzfrsalhqivis.jpg
    │   │       ├── ...
    │   ├── ...
    ├── b
    │   ├── badlands
    │   │   ├── sun_aabtemlmesogqbbp.jpg
    │   │   ├── sun_afbsfeexggdhzshd.jpg
    │   │   ├── ...
    │   ├── balcony
    │   │   ├── exterior
    │   │   │   ├── sun_aaxzaiuznwquburq.jpg
    │   │   │   ├── sun_baajuldidvlcyzhv.jpg
    │   │   │   ├── ...
    │   │   └── interior
    │   │       ├── sun_babkzjntjfarengi.jpg
    │   │       ├── sun_bagjvjynskmonnbv.jpg
    │   │       ├── ...
    │   └── ...
    ├── ...

Citation:

@inproceedings{xiao2010sun,
title        = {Sun database: Large-scale scene recognition from abbey to zoo},
author       = {Xiao, Jianxiong and Hays, James and Ehinger, Krista A and Oliva, Aude and Torralba, Antonio},
booktitle    = {2010 IEEE computer society conference on computer vision and pattern recognition},
pages        = {3485--3492},
year         = {2010},
organization = {IEEE}
}
class tinyms.data.SVHNDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=1, shuffle=None, sampler=None, num_shards=None, shard_id=None)[source]

SVHN(Street View House Numbers) dataset.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Specify the ‘train’, ‘test’, ‘extra’ or ‘all’ parts of dataset. Default: None, will read all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker subprocesses used to fetch the dataset in parallel. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Random accessible input is required. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument must be specified only when num_shards is also specified.

Raises:
  • RuntimeError – If dataset_dir is not valid or does not exist or does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If usage is invalid.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> svhn_dataset_dir = "/path/to/svhn_dataset_directory"
>>> dataset = ds.SVHNDataset(dataset_dir=svhn_dataset_dir, usage="train")
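
A hedged follow-up sketch: get_dataset_size reports how many records the chosen usage yields, which is useful to check before sharding:

>>> size = dataset.get_dataset_size()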

About SVHN dataset:

SVHN dataset consists of 10 digit classes and is obtained from house numbers in Google Street View images.

Here is the original SVHN dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── svhn_dataset_dir
     ├── train_32x32.mat
     ├── test_32x32.mat
     └── extra_32x32.mat

Citation:

@article{netzer2011svhn,
  title={Reading Digits in Natural Images with Unsupervised Feature Learning},
  author={Yuval Netzer and Tao Wang and Adam Coates and Alessandro Bissacco and Bo Wu and Andrew Y. Ng},
  conference={NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011},
  year={2011},
  publisher={NIPS},
  url={http://ufldl.stanford.edu/housenumbers}
}
class tinyms.data.USPSDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

USPS(U.S. Postal Service) dataset.

The generated dataset has two columns: [image, label] . The tensor of column image is of the uint8 type. The tensor of column label is of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. ‘train’ will read from 7,291 train samples, ‘test’ will read from 2,007 test samples, ‘all’ will read from all 9,298 samples. Default: None, will read all samples.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Bool type and Shuffle enum are both supported to pass in. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. Set the mode of data shuffling by passing in enumeration variables:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir is not valid or does not exist or does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If usage is invalid.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> usps_dataset_dir = "/path/to/usps_dataset_directory"
>>>
>>> # Read 3 samples from USPS dataset
>>> dataset = ds.USPSDataset(dataset_dir=usps_dataset_dir, num_samples=3)
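
Since shuffle also accepts the Shuffle enum, a hedged sketch setting the shuffle granularity explicitly (the path is a placeholder):

>>> dataset = ds.USPSDataset(dataset_dir=usps_dataset_dir, usage='train',
...                          shuffle=ds.Shuffle.GLOBAL)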

About USPS dataset:

USPS is a digit dataset automatically scanned from envelopes by the U.S. Postal Service containing a total of 9,298 16×16 pixel grayscale samples. The images are centered, normalized and show a broad range of font styles.

Here is the original USPS dataset structure. You can download and unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── usps_dataset_dir
     ├── usps
     ├── usps.t

Citation:

@article{hull1994database,
  title={A database for handwritten text recognition research},
  author={Hull, Jonathan J.},
  journal={IEEE Transactions on pattern analysis and machine intelligence},
  volume={16},
  number={5},
  pages={550--554},
  year={1994},
  publisher={IEEE}
}
class tinyms.data.VOCDataset(dataset_dir, task='Segmentation', usage='train', class_indexing=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None, extra_metadata=False, decrypt=None)[source]

VOC(Visual Object Classes) dataset.

The generated dataset has different output columns depending on the task setting:

  • task = Detection , output columns: [image, dtype=uint8] , [bbox, dtype=float32] , [label, dtype=uint32] , [difficult, dtype=uint32] , [truncate, dtype=uint32] .

  • task = Segmentation , output columns: [image, dtype=uint8] , [target,dtype=uint8] .

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • task (str, optional) – Set the task type of reading voc data, now only support ‘Segmentation’ or ‘Detection’. Default: ‘Segmentation’.

  • usage (str, optional) – Select which ImageSets split to load. Default: ‘train’. If task is ‘Segmentation’, the image and annotation list will be loaded from ./ImageSets/Segmentation/usage + “.txt”; if task is ‘Detection’, the image and annotation list will be loaded from ./ImageSets/Main/usage + “.txt”; if task and usage are not set, the image and annotation list will be loaded from ./ImageSets/Segmentation/train.txt by default.

  • class_indexing (dict, optional) – A str-to-int mapping from label name to index, only valid in ‘Detection’ task. Default: None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

  • extra_metadata (bool, optional) – Flag to add extra meta-data to row. If True, an additional column named [_meta-filename, dtype=string] will be output at the end. Default: False.

  • decrypt (callable, optional) – Image decryption function, which accepts the path of the encrypted image file and returns the decrypted bytes data. Default: None, no decryption.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If the XML file of Annotations has an invalid format.

  • RuntimeError – If the XML file of Annotations lacks the attribute object .

  • RuntimeError – If the XML file of Annotations lacks the attribute bndbox .

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If task is not ‘Segmentation’ or ‘Detection’.

  • ValueError – If task is ‘Segmentation’ but class_indexing is not None.

  • ValueError – If the txt file related to usage does not exist.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • Column ‘[_meta-filename, dtype=string]’ won’t be output unless an explicit rename dataset op is added to remove the prefix (‘_meta-’).

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> voc_dataset_dir = "/path/to/voc_dataset_directory"
>>>
>>> # 1) Read VOC data for segmentation training
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Segmentation", usage="train")
>>>
>>> # 2) Read VOC data for detection training
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection", usage="train")
>>>
>>> # 3) Read all VOC dataset samples in voc_dataset_dir with 8 threads in random order
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection", usage="train",
...                         num_parallel_workers=8)
>>>
>>> # 4) Read then decode all VOC dataset samples in voc_dataset_dir in sequence
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection", usage="train",
...                         decode=True, shuffle=False)
>>>
>>> # In VOC dataset, if task='Segmentation', each dictionary has keys "image" and "target"
>>> # In VOC dataset, if task='Detection', each dictionary has keys "image" and "annotation"
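
For the ‘Detection’ task, class_indexing can pin label names to fixed indices. A hedged sketch; the mapping below is a placeholder, not the full VOC label set:

>>> class_index = {"car": 0, "cat": 1, "train": 2}
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection",
...                         usage="train", class_indexing=class_index)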

About VOC dataset:

The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures.

You can unzip the original VOC-2012 dataset files into this directory structure and read by MindSpore’s API.

.
└── voc2012_dataset_dir
    ├── Annotations
    │    ├── 2007_000027.xml
    │    ├── 2007_000032.xml
    │    ├── ...
    ├── ImageSets
    │    ├── Action
    │    ├── Layout
    │    ├── Main
    │    └── Segmentation
    ├── JPEGImages
    │    ├── 2007_000027.jpg
    │    ├── 2007_000032.jpg
    │    ├── ...
    ├── SegmentationClass
    │    ├── 2007_000032.png
    │    ├── 2007_000033.png
    │    ├── ...
    └── SegmentationObject
         ├── 2007_000032.png
         ├── 2007_000033.png
         ├── ...

Citation:

@article{Everingham10,
author       = {Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.},
title        = {The Pascal Visual Object Classes (VOC) Challenge},
journal      = {International Journal of Computer Vision},
volume       = {88},
year         = {2010},
number       = {2},
month        = {jun},
pages        = {303--338},
biburl       = {http://host.robots.ox.ac.uk/pascal/VOC/pubs/everingham10.html#bibtex},
howpublished = {http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html}
}
get_class_indexing()[source]

Get the class index.

Returns:

dict, a str-to-int mapping from label name to index.

Examples

>>> voc_dataset_dir = "/path/to/voc_dataset_directory"
>>>
>>> dataset = ds.VOCDataset(dataset_dir=voc_dataset_dir, task="Detection")
>>> class_indexing = dataset.get_class_indexing()
class tinyms.data.WIDERFaceDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

WIDERFace dataset.

When usage is “train”, “valid” or “all”, the generated dataset has eight columns [“image”, “bbox”, “blur”, “expression”, “illumination”, “occlusion”, “pose”, “invalid”]. The data type of the image column is uint8, and all other columns are uint32. When usage is “test”, it only has one column [“image”], with uint8 data type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, ‘valid’ or ‘all’. ‘train’ will read from 12,880 samples, ‘test’ will read from 16,097 samples, ‘valid’ will read from 3,226 validation samples and ‘all’ will read all ‘train’ and ‘valid’ samples. Default: None, will be set to ‘all’.

  • num_samples (int, optional) – The number of images to be included in the dataset. Default: None, will read all images.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • decode (bool, optional) – Decode the images after reading. Default: False.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If usage is not in [‘train’, ‘test’, ‘valid’, ‘all’].

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If annotation_file does not exist.

  • ValueError – If dataset_dir does not exist.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler | Parameter shuffle | Expected Order Behavior
------------------+-------------------+--------------------------
None              | None              | random order
None              | True              | random order
None              | False             | sequential order
Sampler object    | None              | order defined by sampler
Sampler object    | True              | not allowed
Sampler object    | False             | not allowed

Examples

>>> wider_face_dir = "/path/to/wider_face_dataset"
>>>
>>> # Read 3 samples from WIDERFace dataset
>>> dataset = ds.WIDERFaceDataset(dataset_dir=wider_face_dir, num_samples=3)
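
With a non-test usage, each sample exposes the eight annotation columns listed above. A minimal sketch (building on the dataset created above) inspecting two of them:

>>> for sample in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(sample["image"].shape, sample["bbox"].shape)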

About WIDERFace dataset:

The WIDERFace database has a training set of 12,880 samples, a testing set of 16,097 examples and a validation set of 3,226 examples. It is a subset of a larger set available from WIDER.

The following is the original WIDERFace dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── wider_face_dir
     ├── WIDER_test
     │    └── images
     │         ├── 0--Parade
     │         │     ├── 0_Parade_marchingband_1_9.jpg
     │         │     ├── ...
     │         ├──1--Handshaking
     │         ├──...
     ├── WIDER_train
     │    └── images
     │         ├── 0--Parade
     │         │     ├── 0_Parade_marchingband_1_11.jpg
     │         │     ├── ...
     │         ├──1--Handshaking
     │         ├──...
     ├── WIDER_val
     │    └── images
     │         ├── 0--Parade
     │         │     ├── 0_Parade_marchingband_1_102.jpg
     │         │     ├── ...
     │         ├──1--Handshaking
     │         ├──...
     └── wider_face_split
          ├── wider_face_test_filelist.txt
          ├── wider_face_train_bbx_gt.txt
          └── wider_face_val_bbx_gt.txt

Citation:

@inproceedings{2016WIDER,
  title={WIDER FACE: A Face Detection Benchmark},
  author={Yang, S. and Luo, P. and Loy, C. C. and Tang, X.},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={5525-5533},
  year={2016},
}
class tinyms.data.AGNewsDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

AG News dataset.

The generated dataset has three columns: [index, title, description] , and the data type of all three columns is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘test’ and ‘all’. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads the full dataset.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Bool type and Shuffle enum are both supported to pass in. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. Set the mode of data shuffling by passing in enumeration variables:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the max sample number of per shard.

  • shard_id (int, optional) – The shard ID within num_shards . This argument can only be specified when num_shards is also specified. Default: None.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Examples

>>> ag_news_dataset_dir = "/path/to/ag_news_dataset_file"
>>> dataset = ds.AGNewsDataset(dataset_dir=ag_news_dataset_dir, usage='all')
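
All three output columns are strings. A minimal sketch (building on the dataset created above) printing the first fields of each row:

>>> for row in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(row["index"], row["title"])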

About AGNews dataset:

AG is a collection of over 1 million news articles. The news articles were collected by ComeToMyHead from over 2,000 news sources in over 1 year of activity. ComeToMyHead is an academic news search engine that has been in operation since July 2004. The dataset is provided by academics for research purposes such as data mining (clustering, classification, etc.), information retrieval (ranking, searching, etc.), xml, data compression, data streaming, and any other non-commercial activities. AG’s news topic classification dataset was constructed by selecting the four largest classes from the original corpus. Each class contains 30,000 training samples and 1,900 test samples. The total number of training samples in train.csv is 120,000 and the number of test samples in test.csv is 7,600.

You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── ag_news_dataset_dir
    ├── classes.txt
    ├── train.csv
    ├── test.csv
    └── readme.txt

Citation:

@misc{zhang2015characterlevel,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Zhao and Yann LeCun},
year={2015},
eprint={1509.01626},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
class tinyms.data.AmazonReviewDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

Amazon Review Polarity and Amazon Review Full datasets.

The generated dataset has three columns: [label, title, content] , and the data type of all three columns is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the Amazon Review Polarity dataset or the Amazon Review Full dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. For Polarity dataset, ‘train’ will read from 3,600,000 train samples, ‘test’ will read from 400,000 test samples, ‘all’ will read from all 4,000,000 samples. For Full dataset, ‘train’ will read from 3,000,000 train samples, ‘test’ will read from 650,000 test samples, ‘all’ will read from all 3,650,000 samples. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to be read. Default: None, reads the full dataset.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

Examples

>>> amazon_review_dataset_dir = "/path/to/amazon_review_dataset_dir"
>>> dataset = ds.AmazonReviewDataset(dataset_dir=amazon_review_dataset_dir, usage='all')

About AmazonReview Dataset:

The Amazon Reviews Full dataset consists of reviews from Amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. The dataset is mainly used for text classification: given the content and title, predict the correct star rating.

The Amazon Reviews Polarity dataset is constructed by taking review scores 1 and 2 as negative, and 4 and 5 as positive. Samples with a score of 3 are ignored.

The Amazon Reviews Polarity and Amazon Reviews Full datasets have the same directory structures. You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── amazon_review_dir
     ├── train.csv
     ├── test.csv
     └── readme.txt

Citation:

@article{zhang2015character,
  title={Character-level convolutional networks for text classification},
  author={Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
  journal={Advances in neural information processing systems},
  volume={28},
  pages={649--657},
  year={2015}
}
class tinyms.data.CLUEDataset(dataset_files, task='AFQMC', usage='train', num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

CLUE(Chinese Language Understanding Evaluation) dataset. Supported CLUE classification tasks: ‘AFQMC’, ‘TNEWS’, ‘IFLYTEK’, ‘CMNLI’, ‘WSC’ and ‘CSL’.

Parameters:
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in a lexicographical order.

  • task (str, optional) – The kind of task, one of ‘AFQMC’, ‘TNEWS’, ‘IFLYTEK’, ‘CMNLI’, ‘WSC’ and ‘CSL’. Default: ‘AFQMC’.

  • usage (str, optional) – Specify the ‘train’, ‘test’ or ‘eval’ part of dataset. Default: ‘train’.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Default: Shuffle.GLOBAL. Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling is performed. If shuffle is True, a global shuffle is performed. The shuffling level can be selected with the mindspore.dataset.Shuffle enum:

    • Shuffle.GLOBAL: Shuffle both the files and samples, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

The generated dataset has different output columns depending on the task and usage:

  • AFQMC
    • train: [sentence1, dtype=string], [sentence2, dtype=string], [label, dtype=string]
    • test: [id, dtype=uint32], [sentence1, dtype=string], [sentence2, dtype=string]
    • eval: [sentence1, dtype=string], [sentence2, dtype=string], [label, dtype=string]

  • TNEWS
    • train: [label, dtype=string], [label_des, dtype=string], [sentence, dtype=string], [keywords, dtype=string]
    • test: [label, dtype=uint32], [keywords, dtype=string], [sentence, dtype=string]
    • eval: [label, dtype=string], [label_des, dtype=string], [sentence, dtype=string], [keywords, dtype=string]

  • IFLYTEK
    • train: [label, dtype=string], [label_des, dtype=string], [sentence, dtype=string]
    • test: [id, dtype=uint32], [sentence, dtype=string]
    • eval: [label, dtype=string], [label_des, dtype=string], [sentence, dtype=string]

  • CMNLI
    • train: [sentence1, dtype=string], [sentence2, dtype=string], [label, dtype=string]
    • test: [id, dtype=uint32], [sentence1, dtype=string], [sentence2, dtype=string]
    • eval: [sentence1, dtype=string], [sentence2, dtype=string], [label, dtype=string]

  • WSC
    • train: [span1_index, dtype=uint32], [span2_index, dtype=uint32], [span1_text, dtype=string], [span2_text, dtype=string], [idx, dtype=uint32], [text, dtype=string], [label, dtype=string]
    • test: [span1_index, dtype=uint32], [span2_index, dtype=uint32], [span1_text, dtype=string], [span2_text, dtype=string], [idx, dtype=uint32], [text, dtype=string]
    • eval: [span1_index, dtype=uint32], [span2_index, dtype=uint32], [span1_text, dtype=string], [span2_text, dtype=string], [idx, dtype=uint32], [text, dtype=string], [label, dtype=string]

  • CSL
    • train: [id, dtype=uint32], [abst, dtype=string], [keyword, dtype=string], [label, dtype=string]
    • test: [id, dtype=uint32], [abst, dtype=string], [keyword, dtype=string]
    • eval: [id, dtype=uint32], [abst, dtype=string], [keyword, dtype=string], [label, dtype=string]

Raises:
  • ValueError – If dataset_files are not valid or do not exist.

  • ValueError – If task is not ‘AFQMC’, ‘TNEWS’, ‘IFLYTEK’, ‘CMNLI’, ‘WSC’ or ‘CSL’.

  • ValueError – If usage is not ‘train’, ‘test’ or ‘eval’.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

Examples

>>> clue_dataset_dir = ["/path/to/clue_dataset_file"] # contains 1 or multiple clue files
>>> dataset = ds.CLUEDataset(dataset_files=clue_dataset_dir, task='AFQMC', usage='train')
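
As a hedged sketch, the per-task output columns from the table above can also be checked at runtime with get_col_names() , a standard MindSpore dataset method:

>>> dataset = ds.CLUEDataset(dataset_files=clue_dataset_dir, task='AFQMC', usage='train')
>>> print(dataset.get_col_names())  # expected: ['sentence1', 'sentence2', 'label']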

About CLUE dataset:

CLUE is a Chinese Language Understanding Evaluation benchmark. It contains multiple tasks, including single-sentence classification, sentence pair classification, and machine reading comprehension.

You can unzip the dataset files into the following structure and read by MindSpore’s API, such as afqmc dataset:

.
└── afqmc_public
     ├── train.json
     ├── test.json
     └── dev.json

Citation:

@article{CLUEbenchmark,
title   = {CLUE: A Chinese Language Understanding Evaluation Benchmark},
author  = {Liang Xu, Xuanwei Zhang, Lu Li, Hai Hu, Chenjie Cao, Weitang Liu, Junyi Li, Yudong Li,
        Kai Sun, Yechen Xu, Yiming Cui, Cong Yu, Qianqian Dong, Yin Tian, Dian Yu, Bo Shi, Jun Zeng,
        Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou,
        Shaoweihua Liu, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Zhenzhong Lan},
journal = {arXiv preprint arXiv:2004.05986},
year    = {2020},
howpublished = {https://github.com/CLUEbenchmark/CLUE}
}
class tinyms.data.CoNLL2000Dataset(dataset_dir, usage=None, num_samples=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, num_parallel_workers=None, cache=None)[source]

CoNLL-2000(Conference on Computational Natural Language Learning) chunking dataset.

The generated dataset has three columns: [word, pos_tag, chunk_tag] . The tensors of column word , column pos_tag , and column chunk_tag are of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the CoNLL2000 chunking dataset.

  • usage (str, optional) – Usage of dataset, can be ‘train’, ‘test’, or ‘all’. ‘train’ will read from 8,936 train samples, ‘test’ will read from 2,012 test samples, ‘all’ will read from all 10,948 samples. Default: None, read all samples.

  • num_samples (int, optional) – Number of samples (rows) to be read. Default: None, read the full dataset.

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Default: mindspore.dataset.Shuffle.GLOBAL . Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling is performed. If shuffle is True, a global shuffle is performed. The shuffling level can be selected with the mindspore.dataset.Shuffle enum:

    • Shuffle.GLOBAL: Shuffle both the files and samples, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. When this argument is specified, num_samples reflects the maximum number of samples per shard. Default: None.

  • shard_id (int, optional) – The shard ID within num_shards . This argument can only be specified when num_shards is also specified. Default: None.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

Examples

>>> conll2000_dataset_dir = "/path/to/conll2000_dataset_dir"
>>> dataset = ds.CoNLL2000Dataset(dataset_dir=conll2000_dataset_dir, usage='all')

About CoNLL2000 Dataset:

The CoNLL2000 chunking dataset consists of the text from sections 15-20 of the Wall Street Journal corpus. Texts are chunked using IOB notation, and the chunk types are NP, VP, PP, ADJP and ADVP. The dataset consists of three columns separated by spaces. The first column contains the current word, the second is the part-of-speech tag as derived by the Brill tagger, and the third is the chunk tag as derived from the WSJ corpus. Text chunking consists of dividing a text into syntactically correlated parts of words.
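
For illustration, a few representative (not necessarily verbatim) lines in this space-separated word / POS tag / chunk tag format look like:

Confidence NN B-NP
in IN B-PP
the DT B-NP
pound NN I-NP
is VBZ B-VP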

You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── conll2000_dataset_dir
     ├── train.txt
     ├── test.txt
     └── readme.txt

Citation:

@inproceedings{tksbuchholz2000conll,
author     = {Tjong Kim Sang, Erik F. and Sabine Buchholz},
title      = {Introduction to the CoNLL-2000 Shared Task: Chunking},
editor     = {Claire Cardie and Walter Daelemans and Claire Nedellec and Tjong Kim Sang, Erik},
booktitle  = {Proceedings of CoNLL-2000 and LLL-2000},
publisher  = {Lisbon, Portugal},
pages      = {127--132},
year       = {2000}
}
class tinyms.data.DBpediaDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

DBpedia dataset.

The generated dataset has three columns [class, title, content] , and the data type of all three columns is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. ‘train’ will read from 560,000 train samples, ‘test’ will read from 70,000 test samples, ‘all’ will read from all 630,000 samples. Default: None, all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all text.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> dbpedia_dataset_dir = "/path/to/dbpedia_dataset_directory"
>>>
>>> # 1) Read 3 samples from DBpedia dataset
>>> dataset = ds.DBpediaDataset(dataset_dir=dbpedia_dataset_dir, num_samples=3)
>>>
>>> # 2) Read train samples from DBpedia dataset
>>> dataset = ds.DBpediaDataset(dataset_dir=dbpedia_dataset_dir, usage="train")

About DBpedia dataset:

The DBpedia dataset consists of 630,000 text samples in 14 classes; there are 560,000 samples in train.csv and 70,000 samples in test.csv. The 14 classes are Company, EducationalInstitution, Artist, Athlete, OfficeHolder, MeanOfTransportation, Building, NaturalPlace, Village, Animal, Plant, Album, Film and WrittenWork.

Here is the original DBpedia dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── dbpedia_dataset_dir
    ├── train.csv
    ├── test.csv
    ├── classes.txt
    └── readme.txt

Citation:

@article{DBpedia,
title   = {DBPedia Ontology Classification Dataset},
author  = {Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas,
        Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef,
            Sören Auer, Christian Bizer},
year    = {2015},
howpublished = {http://dbpedia.org}
}
class tinyms.data.EnWik9Dataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=True, num_shards=None, shard_id=None, cache=None)[source]

EnWik9 dataset.

The generated dataset has one column [text] with type string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: True. If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

Examples

>>> en_wik9_dataset_dir = "/path/to/en_wik9_dataset"
>>> dataset2 = ds.EnWik9Dataset(dataset_dir=en_wik9_dataset_dir, num_samples=2,
...                             shuffle=True)

About EnWik9 dataset:

The data of EnWik9 is UTF-8 encoded XML consisting primarily of English text. It contains 243,426 article titles, of which 85,560 are #REDIRECT to fix broken links, and the rest are regular articles.

The data is UTF-8 clean. All characters are in the range U+0000 to U+10FFFF with valid encodings of 1 to 4 bytes. The byte values 0xC0, 0xC1, and 0xF5-0xFF never occur. Also, in the Wikipedia dumps, there are no control characters in the range 0x00-0x1F except for 0x09 (tab) and 0x0A (linefeed). Line breaks occur only on paragraph boundaries, so they always have a semantic purpose.
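
A minimal sketch (the file path is a placeholder, and enwik9 is roughly 1 GB, so this is a one-off check) that verifies these byte-level properties directly:

>>> FORBIDDEN = {0xC0, 0xC1} | set(range(0xF5, 0x100))
>>> with open("/path/to/EnWik9/enwik9", "rb") as f:
...     data = f.read()
>>> assert not FORBIDDEN & set(data)  # bytes that valid UTF-8 forbids never occur
>>> assert all(b in (0x09, 0x0A) for b in data if b < 0x20)  # only tab and linefeed controls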

You can unzip the dataset files into the following directory structure and read by MindSpore’s API.

.
└── EnWik9
     ├── enwik9

Citation:

@NetworkResource{Hutter_prize,
author    = {English Wikipedia},
url       = "https://cs.fit.edu/~mmahoney/compression/textdata.html",
month     = {March},
year      = {2006}
}
class tinyms.data.IMDBDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

IMDb(Internet Movie Database) dataset.

The generated dataset has two columns: [text, label] . The tensor of column text is of the string type. The column label is a scalar of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. Default: None, will read all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • The shape of the test column.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected order behavior of using sampler and shuffle :

  Parameter sampler   Parameter shuffle   Expected Order Behavior
  None                None                random order
  None                True                random order
  None                False               sequential order
  Sampler object      None                order defined by sampler
  Sampler object      True                not allowed
  Sampler object      False               not allowed

Examples

>>> imdb_dataset_dir = "/path/to/imdb_dataset_directory"
>>>
>>> # 1) Read all samples (text files) in imdb_dataset_dir with 8 threads
>>> dataset = ds.IMDBDataset(dataset_dir=imdb_dataset_dir, num_parallel_workers=8)
>>>
>>> # 2) Read train samples (text files).
>>> dataset = ds.IMDBDataset(dataset_dir=imdb_dataset_dir, usage="train")
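
A hedged sketch of the sampler / shuffle interplay from the table above: a Sampler object is passed instead of shuffle (specifying both would raise an error). RandomSampler is the standard mindspore.dataset sampler, and the path is a placeholder.

>>> from mindspore.dataset import RandomSampler
>>> # order defined by sampler; leave shuffle unset
>>> dataset = ds.IMDBDataset(dataset_dir=imdb_dataset_dir,
...                          sampler=RandomSampler(num_samples=1000))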

About IMDBDataset:

The IMDB dataset contains 50,000 highly polarized reviews from the Internet Movie Database (IMDB). The dataset is divided into 25,000 reviews for training and 25,000 reviews for testing, with both the training set and test set containing 50% positive and 50% negative reviews. Train labels and test labels are lists of 0s and 1s, where 0 stands for negative and 1 for positive.

You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── imdb_dataset_directory
     ├── train
     │    ├── pos
     │    │    ├── 0_9.txt
     │    │    ├── 1_7.txt
     │    │    ├── ...
     │    ├── neg
     │    │    ├── 0_3.txt
     │    │    ├── 1_1.txt
     │    │    ├── ...
     ├── test
     │    ├── pos
     │    │    ├── 0_10.txt
     │    │    ├── 1_10.txt
     │    │    ├── ...
     │    ├── neg
     │    │    ├── 0_2.txt
     │    │    ├── 1_3.txt
     │    │    ├── ...

Citation:

@InProceedings{maas-EtAl:2011:ACL-HLT2011,
  author    = {Maas, Andrew L.  and  Daly, Raymond E.  and  Pham, Peter T.  and  Huang, Dan
                and  Ng, Andrew Y.  and  Potts, Christopher},
  title     = {Learning Word Vectors for Sentiment Analysis},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics:
                Human Language Technologies},
  month     = {June},
  year      = {2011},
  address   = {Portland, Oregon, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {142--150},
  url       = {http://www.aclweb.org/anthology/P11-1015}
}
class tinyms.data.IWSLT2016Dataset(dataset_dir, usage=None, language_pair=None, valid_set=None, test_set=None, num_samples=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, num_parallel_workers=None, cache=None)[source]

IWSLT2016(International Workshop on Spoken Language Translation) dataset.

The generated dataset has two columns: [text, translation] . The tensor of column text is of the string type. The tensor of column translation is of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘valid’, ‘test’ and ‘all’. Default: None, all samples.

  • language_pair (sequence, optional) – Sequence containing source and target language, supported values are (‘en’, ‘fr’), (‘en’, ‘de’), (‘en’, ‘cs’), (‘en’, ‘ar’), (‘fr’, ‘en’), (‘de’, ‘en’), (‘cs’, ‘en’), (‘ar’, ‘en’). Default: (‘de’, ‘en’).

  • valid_set (str, optional) – A string to identify validation set, when usage is valid or all, the validation set of valid_set type will be read, supported values are ‘dev2010’, ‘tst2010’, ‘tst2011’, ‘tst2012’, ‘tst2013’ and ‘tst2014’. Default: ‘tst2013’.

  • test_set (str, optional) – A string to identify test set, when usage is test or all, the test set of test_set type will be read, supported values are ‘dev2010’, ‘tst2010’, ‘tst2011’, ‘tst2012’, ‘tst2013’ and ‘tst2014’. Default: ‘tst2014’.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads the full dataset.

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

Examples

>>> iwslt2016_dataset_dir = "/path/to/iwslt2016_dataset_dir"
>>> dataset = ds.IWSLT2016Dataset(dataset_dir=iwslt2016_dataset_dir, usage='all',
...                               language_pair=('de', 'en'), valid_set='tst2013', test_set='tst2014')

About IWSLT2016 dataset:

IWSLT is an international workshop on spoken language translation, a major annual scientific conference dedicated to all aspects of spoken language translation. The MT task of the IWSLT evaluation campaign constitutes a dataset, which can be publicly obtained through the WIT3 website. The IWSLT2016 dataset includes translations from English to Arabic, Czech, French, and German, and translations from Arabic, Czech, French, and German to English.

You can unzip the original IWSLT2016 dataset files into this directory structure and read by MindSpore’s API. After decompressing the archive, you also need to decompress the specific language-pair archive you want to read. For example, to read the de-en dataset, unzip the tgz file in the de/en directory; the dataset will be in the unzipped folder.

.
└── iwslt2016_dataset_directory
     ├── subeval_files
     └── texts
          ├── ar
          │    └── en
          │        └── ar-en
          ├── cs
          │    └── en
          │        └── cs-en
          ├── de
          │    └── en
          │        └── de-en
          │            ├── IWSLT16.TED.dev2010.de-en.de.xml
          │            ├── train.tags.de-en.de
          │            ├── ...
          ├── en
          │    ├── ar
          │    │   └── en-ar
          │    ├── cs
          │    │   └── en-cs
          │    ├── de
          │    │   └── en-de
          │    └── fr
          │        └── en-fr
          └── fr
               └── en
                   └── fr-en

Citation:

@inproceedings{cettoloEtAl:EAMT2012,
Address = {Trento, Italy},
Author = {Mauro Cettolo and Christian Girardi and Marcello Federico},
Booktitle = {Proceedings of the 16$^{th}$ Conference of the European Association for Machine Translation
             (EAMT)},
Date = {28-30},
Month = {May},
Pages = {261--268},
Title = {WIT$^3$: Web Inventory of Transcribed and Translated Talks},
Year = {2012}}
class tinyms.data.IWSLT2017Dataset(dataset_dir, usage=None, language_pair=None, num_samples=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, num_parallel_workers=None, cache=None)[source]

IWSLT2017(International Workshop on Spoken Language Translation) dataset.

The generated dataset has two columns: [text, translation] . The tensors of columns text and translation are of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘valid’, ‘test’ and ‘all’. Default: None, all samples.

  • language_pair (sequence, optional) – List containing src and tgt language, supported values are (‘en’, ‘nl’), (‘en’, ‘de’), (‘en’, ‘it’), (‘en’, ‘ro’), (‘nl’, ‘en’), (‘nl’, ‘de’), (‘nl’, ‘it’), (‘nl’, ‘ro’), (‘de’, ‘en’), (‘de’, ‘nl’), (‘de’, ‘it’), (‘de’, ‘ro’), (‘it’, ‘en’), (‘it’, ‘nl’), (‘it’, ‘de’), (‘it’, ‘ro’), (‘ro’, ‘en’), (‘ro’, ‘nl’), (‘ro’, ‘de’), (‘ro’, ‘it’). Default: (‘de’, ‘en’).

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads the full dataset.

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

Examples

>>> iwslt2017_dataset_dir = "/path/to/iwslt2017_dataset_dir"
>>> dataset = ds.IWSLT2017Dataset(dataset_dir=iwslt2017_dataset_dir, usage='all', language_pair=('de', 'en'))

About IWSLT2017 dataset:

IWSLT is an international workshop on spoken language translation, a major annual scientific conference dedicated to all aspects of spoken language translation. The MT task of the IWSLT evaluation campaign constitutes a dataset, which can be publicly obtained through the WIT3 website. The IWSLT2017 dataset involves German, English, Italian, Dutch, and Romanian, and includes translations between any two of these languages.

You can unzip the original IWSLT2017 dataset files into this directory structure and read by MindSpore’s API. You need to decompress the dataset package in texts/DeEnItNlRo/DeEnItNlRo directory to get the DeEnItNlRo-DeEnItNlRo subdirectory.

.
└── iwslt2017_dataset_directory
    └── DeEnItNlRo
        └── DeEnItNlRo
            └── DeEnItNlRo-DeEnItNlRo
                ├── IWSLT17.TED.dev2010.de-en.de.xml
                ├── train.tags.de-en.de
                ├── ...

Citation:

@inproceedings{cettoloEtAl:EAMT2012,
Address = {Trento, Italy},
Author = {Mauro Cettolo and Christian Girardi and Marcello Federico},
Booktitle = {Proceedings of the 16$^{th}$ Conference of the European Association for Machine Translation
             (EAMT)},
Date = {28-30},
Month = {May},
Pages = {261--268},
Title = {WIT$^3$: Web Inventory of Transcribed and Translated Talks},
Year = {2012}}
class tinyms.data.Multi30kDataset(dataset_dir, usage=None, language_pair=None, num_samples=None, num_parallel_workers=None, shuffle=None, num_shards=None, shard_id=None, cache=None)[source]

Multi30k dataset.

The generated dataset has two columns [text, translation] . The tensor of column text is of the string type. The tensor of column translation is of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘test’, ‘valid’ or ‘all’. Default: None, will read all samples.

  • language_pair (Sequence[str, str], optional) – Acceptable language_pair include [‘en’, ‘de’], [‘de’, ‘en’]. Default: None, means [‘en’, ‘de’].

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will read all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to shuffle the dataset. Default: None, which means Shuffle.GLOBAL. If False is provided, no shuffling will be performed. If True is provided, it is the same as setting shuffle to mindspore.dataset.Shuffle.GLOBAL. If a Shuffle enum value is provided, the effect is as follows:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If usage is not ‘train’, ‘test’, ‘valid’ or ‘all’.

  • TypeError – If language_pair is not of type Sequence[str, str].

  • RuntimeError – If num_samples is less than 0.

  • RuntimeError – If num_parallel_workers exceeds the maximum number of threads.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> multi30k_dataset_dir = "/path/to/multi30k_dataset_directory"
>>> data = ds.Multi30kDataset(dataset_dir=multi30k_dataset_dir, usage='all', language_pair=['de', 'en'])

About Multi30k dataset:

Multi30K is a multilingual dataset that features approximately 31,000 standardized images described in multiple languages. The images are sourced from Flickr, and each image comes with sentence descriptions in both English and German, as well as descriptions in other languages. Multi30k is used primarily for training and testing in tasks such as image captioning, machine translation, and visual question answering.

You can unzip the dataset files into the following directory structure and read by MindSpore’s API.

└── multi30k_dataset_directory
      ├── training
      │    ├── train.de
      │    └── train.en
      ├── validation
      │    ├── val.de
      │    └── val.en
      └── mmt16_task1_test
           ├── val.de
           └── val.en

Citation:

@article{elliott-EtAl:2016:VL16,
author    = {{Elliott}, D. and {Frank}, S. and {Sima'an}, K. and {Specia}, L.},
title     = {Multi30K: Multilingual English-German Image Descriptions},
booktitle = {Proceedings of the 5th Workshop on Vision and Language},
year      = {2016},
pages     = {70--74}
}
class tinyms.data.PennTreebankDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

PennTreebank dataset.

The generated dataset has one column [text] . The tensor of column text is of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘test’, ‘valid’ and ‘all’. ‘train’ will read from 42,068 train samples of string type, ‘test’ will read from 3,370 test samples of string type, ‘valid’ will read from 3,761 valid samples of string type, ‘all’ will read from all 49,199 samples of string type. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads the full dataset.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

Examples

>>> penn_treebank_dataset_dir = "/path/to/penn_treebank_dataset_directory"
>>> dataset = ds.PennTreebankDataset(dataset_dir=penn_treebank_dataset_dir, usage='all')

About PennTreebank dataset:

The Penn Treebank (PTB) dataset is widely used in machine learning for NLP (Natural Language Processing) research. The word-level PTB does not contain capital letters, numbers, or punctuation, and its vocabulary is capped at 10k unique words, which is relatively small compared to most modern datasets and can result in a larger number of out-of-vocabulary tokens.

Here is the original PennTreebank dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── PennTreebank_dataset_dir
     ├── ptb.test.txt
     ├── ptb.train.txt
     └── ptb.valid.txt

Citation:

@techreport{Santorini1990,
  added-at = {2014-03-26T23:25:56.000+0100},
  author = {Santorini, Beatrice},
  biburl = {https://www.bibsonomy.org/bibtex/234cdf6ddadd89376090e7dada2fc18ec/butonic},
  file = {:Santorini - Penn Treebank tag definitions.pdf:PDF},
  institution = {Department of Computer and Information Science, University of Pennsylvania},
  interhash = {818e72efd9e4b5fae3e51e88848100a0},
  intrahash = {34cdf6ddadd89376090e7dada2fc18ec},
  keywords = {dis pos tagging treebank},
  number = {MS-CIS-90-47},
  timestamp = {2014-03-26T23:25:56.000+0100},
  title = {Part-of-speech tagging guidelines for the {P}enn {T}reebank {P}roject},
  url = {ftp://ftp.cis.upenn.edu/pub/treebank/doc/tagguide.ps.gz},
  year = 1990
}
class tinyms.data.SogouNewsDataset(dataset_dir, usage=None, num_samples=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, num_parallel_workers=None, cache=None)[source]

Sogou News dataset.

The generated dataset has three columns: [index, title, content] , and the data type of all three columns is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’ . ‘train’ will read from 450,000 train samples, ‘test’ will read from 60,000 test samples, ‘all’ will read from all 510,000 samples. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, read all samples.

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffling mode can also be set by passing in one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

Examples

>>> sogou_news_dataset_dir = "/path/to/sogou_news_dataset_dir"
>>> dataset = ds.SogouNewsDataset(dataset_dir=sogou_news_dataset_dir, usage='all')

About SogouNews Dataset:

The SogouNews dataset includes 3 columns, corresponding to class index (1 to 5), title and content. The title and content are escaped using double quotes ("), and any internal double quote is escaped by two double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n".
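
As a hedged sketch of how these escaping rules play out when reading train.csv directly (the path is a placeholder): Python's csv.reader already undoes the doubled double quotes, while the literal "\n" sequences still have to be turned back into real line breaks by hand.

>>> import csv
>>> with open("/path/to/sogou_news_dir/train.csv", newline="", encoding="utf-8") as f:
...     index, title, content = next(csv.reader(f))
>>> content = content.replace("\\n", "\n")  # restore the escaped line breaks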

You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── sogou_news_dir
     ├── classes.txt
     ├── readme.txt
     ├── test.csv
     └── train.csv

Citation:

@misc{zhang2015characterlevel,
    title={Character-level Convolutional Networks for Text Classification},
    author={Xiang Zhang and Junbo Zhao and Yann LeCun},
    year={2015},
    eprint={1509.01626},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
class tinyms.data.SQuADDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

SQuAD 1.1 and SQuAD 2.0 datasets.

The generated dataset with different versions and usages has the same output columns: [context, question, text, answer_start] . The tensors of columns context and question are of the string type. The tensor of column text is the answer within context, of the string type. The tensor of column answer_start is the start index of the answer in context, of the uint32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Specify the ‘train’, ‘dev’ or ‘all’ part of dataset. Default: None, all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to shuffle the dataset. Default: Shuffle.GLOBAL. If False is provided, no shuffling will be performed. If True is provided, it is the same as setting shuffle to mindspore.dataset.Shuffle.GLOBAL. If a Shuffle enum value is provided, the effect is as follows:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> squad_dataset_dir = "/path/to/squad_dataset_file"
>>> dataset = ds.SQuADDataset(dataset_dir=squad_dataset_dir, usage='all')
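
Given the column layout above, here is a minimal sketch of recovering an answer span from one answerable row, using create_dict_iterator , a standard MindSpore dataset method (depending on the version, the string scalars may need .item() to become Python str):

>>> row = next(dataset.create_dict_iterator(output_numpy=True, num_epochs=1))
>>> context, answer = row["context"].item(), row["text"].item()
>>> start = int(row["answer_start"])
>>> print(context[start:start + len(answer)] == answer)  # expected: True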

About SQuAD dataset:

SQuAD (Stanford Question Answering Dataset) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.

SQuAD 1.1, the previous version of the SQuAD dataset, contains 100,000+ question-answer pairs on 500+ articles. SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.

You can get the dataset files into the following structure and read by MindSpore’s API,

For SQuAD 1.1:

.
└── SQuAD1
     ├── train-v1.1.json
     └── dev-v1.1.json

For SQuAD 2.0:

.
└── SQuAD2
     ├── train-v2.0.json
     └── dev-v2.0.json

Citation:

@misc{rajpurkar2016squad,
    title         = {SQuAD: 100,000+ Questions for Machine Comprehension of Text},
    author        = {Pranav Rajpurkar and Jian Zhang and Konstantin Lopyrev and Percy Liang},
    year          = {2016},
    eprint        = {1606.05250},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL}
}

@misc{rajpurkar2018know,
    title         = {Know What You Don't Know: Unanswerable Questions for SQuAD},
    author        = {Pranav Rajpurkar and Robin Jia and Percy Liang},
    year          = {2018},
    eprint        = {1806.03822},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL}
}
class tinyms.data.SST2Dataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

SST2(Stanford Sentiment Treebank v2) dataset.

The generated dataset’s train.tsv and dev.tsv have two columns [sentence, label] . The generated dataset’s test.tsv has one column [sentence] . The tensors of columns sentence and label are of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘dev’. ‘train’ will read from 67,349 train samples, ‘test’ will read from 1,821 test samples, ‘dev’ will read from 872 dev samples. Default: None, will read train samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all text.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling is performed; if shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. The shuffling mode can also be set by passing in the enumeration value:

    • Shuffle.GLOBAL: Shuffle the samples.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards. This argument can only be specified when num_shards is also specified. Default: None.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> sst2_dataset_dir = "/path/to/sst2_dataset_directory"
>>>
>>> # 1) Read 3 samples from SST2 dataset
>>> dataset = ds.SST2Dataset(dataset_dir=sst2_dataset_dir, num_samples=3)
>>>
>>> # 2) Read train samples from SST2 dataset
>>> dataset = ds.SST2Dataset(dataset_dir=sst2_dataset_dir, usage="train")

About SST2 dataset:

The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.

Here is the original SST2 dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── sst2_dataset_dir
    ├── train.tsv
    ├── test.tsv
    ├── dev.tsv
    └── original

Citation:

@inproceedings{socher-etal-2013-recursive,
    title     = {Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank},
    author    = {Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning,
                  Christopher D. and Ng, Andrew and Potts, Christopher},
    booktitle = {Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing},
    month     = oct,
    year      = {2013},
    address   = {Seattle, Washington, USA},
    publisher = {Association for Computational Linguistics},
    url       = {https://www.aclweb.org/anthology/D13-1170},
    pages     = {1631--1642},
}
class tinyms.data.TextFileDataset(dataset_files, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

A source dataset that reads and parses datasets stored on disk in text format. The generated dataset has one column [text] with type string.

Parameters:
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in a lexicographical order.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); it can be changed via mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Whether to reshuffle the data every epoch. Default: Shuffle.GLOBAL . Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling is performed. If shuffle is True, a global shuffle is performed. The shuffling level can be selected with the mindspore.dataset.Shuffle enum:

    • Shuffle.GLOBAL: Shuffle both the files and samples, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • ValueError – If dataset_files are not valid or do not exist.

  • ValueError – If num_parallel_workers exceeds the maximum number of threads.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> text_file_dataset_dir = ["/path/to/text_file_dataset_file"] # contains 1 or multiple text files
>>> dataset = ds.TextFileDataset(dataset_files=text_file_dataset_dir)
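
A hedged follow-up sketch that consumes the single text column with create_tuple_iterator , a standard MindSpore dataset method; with shuffle=False the lines come back in file order:

>>> dataset = ds.TextFileDataset(dataset_files=text_file_dataset_dir, shuffle=False)
>>> for (line,) in dataset.create_tuple_iterator(output_numpy=True, num_epochs=1):
...     print(line)  # one string tensor per line of the input file(s)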
class tinyms.data.UDPOSDataset(dataset_dir, usage=None, num_samples=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, num_parallel_workers=None, cache=None)[source]

UDPOS(Universal Dependencies dataset for Part of Speech) dataset.

The generated dataset has three columns: [word, universal, stanford] , and the data type of all three columns is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, ‘valid’ or ‘all’. ‘train’ will read from 12,543 train samples, ‘test’ will read from 2,077 test samples, ‘valid’ will read from 2,002 valid samples, ‘all’ will read from all 16,622 samples. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads the full dataset.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Bool type and Shuffle enum are both supported to pass in. Default: Shuffle.GLOBAL . If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. Set the mode of data shuffling by passing in enumeration variables:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Examples

>>> udpos_dataset_dir = "/path/to/udpos_dataset_dir"
>>> dataset = ds.UDPOSDataset(dataset_dir=udpos_dataset_dir, usage='all')
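
For distributed training, the num_shards / shard_id pair splits the dataset into non-overlapping shards. A minimal sketch (path illustrative) reading shard 0 of 2 from the train split:

>>> # Each of the 2 shards sees a disjoint half of the train samples
>>> dataset = ds.UDPOSDataset(dataset_dir=udpos_dataset_dir, usage='train',
...                           num_shards=2, shard_id=0)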

About UDPOS dataset:

A text corpus dataset annotated for syntactic and semantic sentence structure. The corpus comprises 254,830 words and 16,622 sentences, taken from various web media including weblogs, newsgroups, emails and reviews.

Citation:

@inproceedings{silveira14gold,
  year = {2014},
  author = {Natalia Silveira and Timothy Dozat and Marie-Catherine de Marneffe and Samuel Bowman
    and Miriam Connor and John Bauer and Christopher D. Manning},
  title = {A Gold Standard Dependency Corpus for {E}nglish},
  booktitle = {Proceedings of the Ninth International Conference on Language
    Resources and Evaluation (LREC-2014)}
}
class tinyms.data.WikiTextDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

WikiText2 and WikiText103 datasets.

The generated dataset has one column [text] , and the tensor of column text is of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Acceptable usages include ‘train’, ‘test’, ‘valid’ and ‘all’. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads the full dataset.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL. If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffle mode can also be selected by passing one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files or is invalid.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_samples is invalid (< 0).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

About WikiTextDataset dataset:

The WikiText Long Term Dependency Language Modeling Dataset is an English corpus of over 100 million tokens extracted from verified Good and Featured articles on Wikipedia, released in two versions: WikiText2 and WikiText103. WikiText2 has 36,718 lines in wiki.train.tokens, 4,358 lines in wiki.test.tokens and 3,760 lines in wiki.valid.tokens. WikiText103 has 1,801,350 lines in wiki.train.tokens, 4,358 lines in wiki.test.tokens and 3,760 lines in wiki.valid.tokens.

Here is the original WikiText dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── WikiText2/WikiText103
     ├── wiki.train.tokens
     ├── wiki.test.tokens
     ├── wiki.valid.tokens

Citation:

@article{merity2016pointer,
  title={Pointer sentinel mixture models},
  author={Merity, Stephen and Xiong, Caiming and Bradbury, James and Socher, Richard},
  journal={arXiv preprint arXiv:1609.07843},
  year={2016}
}

Examples

>>> wiki_text_dataset_dir = "/path/to/wiki_text_dataset_directory"
>>> dataset = ds.WikiTextDataset(dataset_dir=wiki_text_dataset_dir, usage='all')
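
As a sketch of combining usage with num_samples (path illustrative), the following reads only the first 500 rows of the train split without shuffling:

>>> dataset = ds.WikiTextDataset(dataset_dir=wiki_text_dataset_dir, usage='train',
...                              num_samples=500, shuffle=False)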
class tinyms.data.YahooAnswersDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

YahooAnswers dataset.

The generated dataset has four columns [class, title, content, answer] , whose data type is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. ‘train’ will read from 1,400,000 train samples, ‘test’ will read from 60,000 test samples, ‘all’ will read from all 1,460,000 samples. Default: None, all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL. If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffle mode can also be selected by passing one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Examples

>>> yahoo_answers_dataset_dir = "/path/to/yahoo_answers_dataset_directory"
>>>
>>> # 1) Read 3 samples from YahooAnswers dataset
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, num_samples=3)
>>>
>>> # 2) Read train samples from YahooAnswers dataset
>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, usage="train")
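
Rows can be inspected with the generic MindSpore dataset iterator. A minimal sketch that reads three samples and prints two of the four string columns:

>>> dataset = ds.YahooAnswersDataset(dataset_dir=yahoo_answers_dataset_dir, num_samples=3)
>>> for item in dataset.create_dict_iterator(output_numpy=True):
...     print(item["class"], item["title"])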

About YahooAnswers dataset:

The YahooAnswers dataset consists of 1,460,000 text samples in 10 classes: 1,400,000 samples in train.csv and 60,000 samples in test.csv. The 10 classes are Society & Culture, Science & Mathematics, Health, Education & Reference, Computers & Internet, Sports, Business & Finance, Entertainment & Music, Family & Relationships, and Politics & Government.

Here is the original YahooAnswers dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── yahoo_answers_dataset_dir
    ├── train.csv
    ├── test.csv
    ├── classes.txt
    └── readme.txt

Citation:

@article{YahooAnswers,
title   = {Yahoo! Answers Topic Classification Dataset},
author  = {Xiang Zhang},
year    = {2015}
}
class tinyms.data.YelpReviewDataset(dataset_dir, usage=None, num_samples=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, num_parallel_workers=None, cache=None)[source]

Yelp Review Polarity and Yelp Review Full datasets.

The generated dataset has two columns: [label, text] , and the data type of both columns is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’ or ‘all’. For Polarity, ‘train’ will read from 560,000 train samples, ‘test’ will read from 38,000 test samples, ‘all’ will read from all 598,000 samples. For Full, ‘train’ will read from 650,000 train samples, ‘test’ will read from 50,000 test samples, ‘all’ will read from all 700,000 samples. Default: None, all samples.

  • num_samples (int, optional) – Number of samples (rows) to read. Default: None, reads all samples.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Both bool and Shuffle enum values are accepted. Default: Shuffle.GLOBAL. If shuffle is False, no shuffling will be performed. If shuffle is True, it is equivalent to setting shuffle to mindspore.dataset.Shuffle.GLOBAL. The shuffle mode can also be selected by passing one of the enumeration values:

    • Shuffle.GLOBAL: Shuffle both the files and samples.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Examples

>>> yelp_review_dataset_dir = "/path/to/yelp_review_dataset_dir"
>>> dataset = ds.YelpReviewDataset(dataset_dir=yelp_review_dataset_dir, usage='all')
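
The usage and shuffle parameters compose as expected. A sketch (path illustrative) that reads the test split in its original on-disk order:

>>> dataset = ds.YelpReviewDataset(dataset_dir=yelp_review_dataset_dir, usage='test',
...                                shuffle=False)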

About YelpReview Dataset:

The Yelp Review Full dataset consists of reviews from Yelp. It is extracted from the Yelp Dataset Challenge 2015 data, and it is mainly used for text classification.

The Yelp Review Polarity dataset is constructed from the above dataset, by considering stars 1 and 2 negative, and 3 and 4 positive.

The directory structures of these two datasets are the same. You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── yelp_review_dir
     ├── train.csv
     ├── test.csv
     └── readme.txt

Citation:

For both Yelp Review Polarity and Yelp Review Full:

@article{zhangCharacterlevelConvolutionalNetworks2015,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1509.01626},
  primaryClass = {cs},
  title = {Character-Level {{Convolutional Networks}} for {{Text Classification}}},
  abstract = {This article offers an empirical exploration on the use of character-level convolutional networks
              (ConvNets) for text classification. We constructed several large-scale datasets to show that
              character-level convolutional networks could achieve state-of-the-art or competitive results.
              Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF
              variants, and deep learning models such as word-based ConvNets and recurrent neural networks.},
  journal = {arXiv:1509.01626 [cs]},
  author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
  month = sep,
  year = {2015},
}

class tinyms.data.CMUArcticDataset(dataset_dir, name=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

CMU Arctic dataset.

The generated dataset has four columns: [waveform, sample_rate, transcript, utterance_id] . The column waveform is a tensor of the float32 type, the column sample_rate is a scalar of the uint32 type, and the columns transcript and utterance_id are scalars of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • name (str, optional) – Part of this dataset, can be ‘aew’, ‘ahw’, ‘aup’, ‘awb’, ‘axb’, ‘bdl’, ‘clb’, ‘eey’, ‘fem’, ‘gka’, ‘jmk’, ‘ksp’, ‘ljm’, ‘lnh’, ‘rms’, ‘rxr’, ‘slp’ or ‘slt’. Default: None, means ‘aew’.

  • num_samples (int, optional) – The number of audio samples to be included in the dataset. Default: None, will read all audio samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None, no dividing. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None, will use 0. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • mindspore.dataset.PKSampler is not yet supported for the sampler parameter.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> cmu_arctic_dataset_directory = "/path/to/cmu_arctic_dataset_directory"
>>>
>>> # 1) Read 500 samples (audio files) in cmu_arctic_dataset_directory
>>> dataset = ds.CMUArcticDataset(cmu_arctic_dataset_directory, name="ahw", num_samples=500)
>>>
>>> # 2) Read all samples (audio files) in cmu_arctic_dataset_directory
>>> dataset = ds.CMUArcticDataset(cmu_arctic_dataset_directory)
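
Because sampler and shuffle are mutually exclusive, a deterministic subset is expressed through a sampler object instead. A minimal sketch (path illustrative) using mindspore.dataset.SequentialSampler:

>>> # Read the first 100 samples in sequential order (do not also pass shuffle)
>>> sampler = ds.SequentialSampler(start_index=0, num_samples=100)
>>> dataset = ds.CMUArcticDataset(cmu_arctic_dataset_directory, sampler=sampler)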

About CMUArctic dataset:

The CMU Arctic databases are designed for the purpose of speech synthesis research. These single speaker speech databases have been carefully recorded under studio conditions and consist of approximately 1200 phonetically balanced English utterances. In addition to wavefiles, the databases provide complete support for the Festival Speech Synthesis System, including pre-built voices that may be used as is. The entire package is distributed as free software, without restriction on commercial or non-commercial use.

You can construct the following directory structure from CMUArctic dataset and read by MindSpore’s API.

.
└── cmu_arctic_dataset_directory
    ├── cmu_us_aew_arctic
    │    ├── wav
    │    │    ├──arctic_a0001.wav
    │    │    ├──arctic_a0002.wav
    │    │    ├──...
    │    ├── etc
    │    │    └── txt.done.data
    ├── cmu_us_ahw_arctic
    │    ├── wav
    │    │    ├──arctic_a0001.wav
    │    │    ├──arctic_a0002.wav
    │    │    ├──...
    │    └── etc
    │         └── txt.done.data
    └──...

Citation:

@article{LTI2003CMUArctic,
title        = {CMU ARCTIC databases for speech synthesis},
author       = {John Kominek and Alan W Black},
journal      = {Language Technologies Institute [Online]},
year         = {2003},
howpublished = {http://www.festvox.org/cmu_arctic/}
}
class tinyms.data.GTZANDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

GTZAN dataset.

The generated dataset has three columns: [waveform, sample_rate, label] . The column waveform is a tensor of the float32 type, the column sample_rate is a scalar of the uint32 type, and the column label is a scalar of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘valid’, ‘test’ or ‘all’. Default: None, will read all samples.

  • num_samples (int, optional) – The number of audio samples to be included in the dataset. Default: None, will read all audio samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • mindspore.dataset.PKSampler is not yet supported for the sampler parameter.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> gtzan_dataset_directory = "/path/to/gtzan_dataset_directory"
>>>
>>> # 1) Read 500 samples (audio files) in gtzan_dataset_directory
>>> dataset = ds.GTZANDataset(gtzan_dataset_directory, usage="all", num_samples=500)
>>>
>>> # 2) Read all samples (audio files) in gtzan_dataset_directory
>>> dataset = ds.GTZANDataset(gtzan_dataset_directory)

About GTZAN dataset:

The GTZAN dataset appears in at least 100 published works and is the most commonly used public dataset for evaluation in machine listening research for music genre recognition. It consists of 1000 audio tracks, each of which is 30 seconds long. It contains 10 genres (blues, classical, country, disco, hiphop, jazz, metal, pop, reggae and rock), each of which is represented by 100 tracks. The tracks are all 22050Hz Mono 16-bit audio files in .wav format.

You can construct the following directory structure from GTZAN dataset and read by MindSpore’s API.

.
└── gtzan_dataset_directory
    ├── blues
    │    ├──blues.00000.wav
    │    ├──blues.00001.wav
    │    ├──blues.00002.wav
    │    ├──...
    ├── disco
    │    ├──disco.00000.wav
    │    ├──disco.00001.wav
    │    ├──disco.00002.wav
    │    └──...
    └──...

Citation:

@misc{tzanetakis_essl_cook_2001,
author    = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title     = "Automatic Musical Genre Classification Of Audio Signals",
url       = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year      = "2001"
}
class tinyms.data.LibriTTSDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

LibriTTS dataset.

The generated dataset has seven columns [waveform, sample_rate, original_text, normalized_text, speaker_id, chapter_id, utterance_id] . The column waveform is a tensor of the float32 type; the columns sample_rate, speaker_id and chapter_id are scalars of the uint32 type; the columns original_text, normalized_text and utterance_id are scalars of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Part of this dataset, can be ‘dev-clean’, ‘dev-other’, ‘test-clean’, ‘test-other’, ‘train-clean-100’, ‘train-clean-360’, ‘train-other-500’, or ‘all’. Default: None, means ‘all’.

  • num_samples (int, optional) – The number of audio samples to be included in the dataset. Default: None, will read all audio samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • mindspore.dataset.PKSampler is not yet supported for the sampler parameter.

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> libri_tts_dataset_dir = "/path/to/libri_tts_dataset_directory"
>>>
>>> # 1) Read 500 samples (audio files) in libri_tts_dataset_directory
>>> dataset = ds.LibriTTSDataset(libri_tts_dataset_dir, usage="train-clean-100", num_samples=500)
>>>
>>> # 2) Read all samples (audio files) in libri_tts_dataset_directory
>>> dataset = ds.LibriTTSDataset(libri_tts_dataset_dir)
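
Sharding can equivalently be expressed with mindspore.dataset.DistributedSampler; like any sampler, it must not be combined with shuffle or num_shards / shard_id. A minimal sketch (path illustrative):

>>> sampler = ds.DistributedSampler(num_shards=2, shard_id=0)
>>> dataset = ds.LibriTTSDataset(libri_tts_dataset_dir, sampler=sampler)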

About LibriTTS dataset:

LibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at 24kHz sampling rate, prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is designed for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files from Project Gutenberg) of the LibriSpeech corpus.

You can construct the following directory structure from LibriTTS dataset and read by MindSpore’s API.

.
└── libri_tts_dataset_directory
    ├── dev-clean
    │    ├── 116
    │    │    ├── 288045
    |    |    |    ├── 116_288045.trans.tsv
    │    │    │    ├── 116_288045_000003_000000.wav
    │    │    │    └──...
    │    │    ├── 288046
    |    |    |    ├── 116_288046.trans.tsv
    |    |    |    ├── 116_288046_000003_000000.wav
    │    |    |    └── ...
    |    |    └── ...
    │    ├── 1255
    │    │    ├── 138279
    |    |    |    ├── 1255_138279.trans.tsv
    │    │    │    ├── 1255_138279_000001_000000.wav
    │    │    │    └── ...
    │    │    ├── 74899
    |    |    |    ├── 1255_74899.trans.tsv
    |    |    |    ├── 1255_74899_000001_000000.wav
    │    |    |    └── ...
    |    |    └── ...
    |    └── ...
    └── ...

Citation:

@article{zen2019libritts,
title        = {LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech},
author       = {Heiga Zen and Viet Dang and Rob Clark and Yu Zhang and Ron J. Weiss and Ye Jia and Zhifeng Chen and Yonghui Wu},
journal      = {arXiv preprint arXiv:1904.02882},
year         = {2019},
howpublished = {http://www.openslr.org/resources/60/},
description  = {The LibriSpeech ASR corpus (http://www.openslr.org/12/) has been used in
                various research projects. However, as it was originally designed for ASR research,
                there are some undesired properties when using it for TTS research}
}
class tinyms.data.LJSpeechDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

LJSpeech dataset.

The generated dataset has four columns [waveform, sample_rate, transcription, normalized_transcript] . The column waveform is a tensor of the float32 type. The column sample_rate is a scalar of the int32 type. The column transcription is a scalar of the string type. The column normalized_transcript is a scalar of the string type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of audio samples to be included in the dataset. Default: None, all audio samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> lj_speech_dataset_dir = "/path/to/lj_speech_dataset_directory"
>>>
>>> # 1) Get all samples from LJSPEECH dataset in sequence
>>> dataset = ds.LJSpeechDataset(dataset_dir=lj_speech_dataset_dir, shuffle=False)
>>>
>>> # 2) Randomly select 350 samples from LJSPEECH dataset
>>> dataset = ds.LJSpeechDataset(dataset_dir=lj_speech_dataset_dir, num_samples=350, shuffle=True)
>>>
>>> # 3) Get samples from LJSPEECH dataset for shard 0 in a 2-way distributed training
>>> dataset = ds.LJSpeechDataset(dataset_dir=lj_speech_dataset_dir, num_shards=2, shard_id=0)
>>>
>>> # In LJSPEECH dataset, each dictionary has keys "waveform", "sample_rate", "transcription"
>>> # and "normalized_transcript"

About LJSPEECH dataset:

This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.

The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.

Here is the original LJSPEECH dataset structure. You can unzip the dataset files into the following directory structure and read by MindSpore’s API.

.
└── LJSpeech-1.1
    ├── README
    ├── metadata.csv
    └── wavs
        ├── LJ001-0001.wav
        ├── LJ001-0002.wav
        ├── LJ001-0003.wav
        ├── LJ001-0004.wav
        ├── LJ001-0005.wav
        ├── LJ001-0006.wav
        ├── LJ001-0007.wav
        ├── LJ001-0008.wav
        ...
        ├── LJ050-0277.wav
        └── LJ050-0278.wav

Citation:

@misc{lj_speech17,
author       = {Keith Ito and Linda Johnson},
title        = {The LJ Speech Dataset},
howpublished = {\url{https://keithito.com/LJ-Speech-Dataset}},
year         = 2017
}
class tinyms.data.SpeechCommandsDataset(dataset_dir, usage=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Speech Commands dataset.

The generated dataset has five columns [waveform, sample_rate, label, speaker_id, utterance_number] . The tensor of column waveform is a vector of the float32 type. The tensor of column sample_rate is a scalar of the int32 type. The tensor of column label is a scalar of the string type. The tensor of column speaker_id is a scalar of the string type. The tensor of column utterance_number is a scalar of the int32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • usage (str, optional) – Usage of this dataset, can be ‘train’, ‘test’, ‘valid’ or ‘all’. ‘train’ will read from 84,843 samples, ‘test’ will read from 11,005 samples, ‘valid’ will read from 9,981 validation samples and ‘all’ will read from all 105,829 samples. Default: None, will read all samples.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will read all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> speech_commands_dataset_dir = "/path/to/speech_commands_dataset_directory"
>>>
>>> # Read 3 samples from SpeechCommands dataset
>>> dataset = ds.SpeechCommandsDataset(dataset_dir=speech_commands_dataset_dir, num_samples=3)
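
The cache parameter plugs in the tensor caching service. A sketch assuming a cache server is already running and a valid session id has been obtained from the cache admin tool (the session_id below is a placeholder):

>>> # session_id=1 is illustrative; use the id reported by your cache server
>>> some_cache = ds.DatasetCache(session_id=1, size=0)
>>> dataset = ds.SpeechCommandsDataset(dataset_dir=speech_commands_dataset_dir,
...                                    num_samples=3, cache=some_cache)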

About SpeechCommands dataset:

SpeechCommands is a database for limited-vocabulary speech recognition, containing 105,829 audio samples in ‘.wav’ format.

Here is the original SpeechCommands dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── speech_commands_dataset_dir
     ├── cat
          ├── b433eff_nohash_0.wav
          ├── 5a33edf_nohash_1.wav
          └──....
     ├── dog
          ├── b433w2w_nohash_0.wav
          └──....
     ├── four
     └── ....

Citation:

@article{2018Speech,
title={Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition},
author={Warden, P.},
year={2018}
}
class tinyms.data.TedliumDataset(dataset_dir, release, usage=None, extensions=None, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

Tedlium dataset. The columns of the generated dataset depend on the source SPH files and the corresponding STM files.

The generated dataset has six columns [waveform, sample_rate, transcript, talk_id, speaker_id, identifier] .

The data type of column waveform is float32, the data type of column sample_rate is int32, and the data type of columns transcript , talk_id , speaker_id and identifier is string.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • release (str) – Release of the dataset, can be ‘release1’, ‘release2’, ‘release3’.

  • usage (str, optional) – Usage of this dataset. For release1 or release2, it can be ‘train’, ‘test’, ‘dev’ or ‘all’. ‘train’ will read from train samples, ‘test’ from test samples, ‘dev’ from dev samples, and ‘all’ from all samples. For release3, it can only be ‘all’, which reads all data samples. Default: None, all samples.

  • extensions (str, optional) – Extensions of the SPH files, only ‘.sph’ is valid. Default: None, “.sph”.

  • num_samples (int, optional) – The number of audio samples to be included in the dataset. Default: None, all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain stm files.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> # 1) Get all train samples from TEDLIUM_release1 dataset in sequence.
>>> dataset = ds.TedliumDataset(dataset_dir="/path/to/tedlium1_dataset_directory",
...                             release="release1", shuffle=False)
>>>
>>> # 2) Randomly select 10 samples from TEDLIUM_release2 dataset.
>>> dataset = ds.TedliumDataset(dataset_dir="/path/to/tedlium2_dataset_directory",
...                             release="release2", num_samples=10, shuffle=True)
>>>
>>> # 3) Get samples from TEDLIUM_release-3 dataset for shard 0 in a 2-way distributed training.
>>> dataset = ds.TedliumDataset(dataset_dir="/path/to/tedlium3_dataset_directory",
...                             release="release3", num_shards=2, shard_id=0)
>>>
>>> # In TEDLIUM dataset, each dictionary has keys : waveform, sample_rate, transcript, talk_id,
>>> # speaker_id and identifier.

About TEDLIUM_release1 dataset:

The TED-LIUM corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.

About TEDLIUM_release2 dataset:

This is the TED-LIUM corpus release 2, licensed under Creative Commons BY-NC-ND 3.0. All talks and text are property of TED Conferences LLC. The TED-LIUM corpus was made from audio talks and their transcriptions available on the TED website. We have prepared and filtered these data in order to train acoustic models to participate to the International Workshop on Spoken Language Translation 2011 (the LIUM English/French SLT system reached the first rank in the SLT task).

About TEDLIUM_release-3 dataset:

This is the TED-LIUM corpus release 3, licensed under Creative Commons BY-NC-ND 3.0. All talks and text are property of TED Conferences LLC. This new TED-LIUM release was made through a collaboration between the Ubiqus company and the LIUM (University of Le Mans, France).

You can unzip the dataset files into the following directory structure and read by MindSpore’s API.

The structure of TEDLIUM release2 is the same as TEDLIUM release1, only the data is different.

.
└──TEDLIUM_release1
    └── dev
        ├── sph
            ├── AlGore_2009.sph
            ├── BarrySchwartz_2005G.sph
        ├── stm
            ├── AlGore_2009.stm
            ├── BarrySchwartz_2005G.stm
    └── test
        ├── sph
            ├── AimeeMullins_2009P.sph
            ├── BillGates_2010.sph
        ├── stm
            ├── AimeeMullins_2009P.stm
            ├── BillGates_2010.stm
    └── train
        ├── sph
            ├── AaronHuey_2010X.sph
            ├── AdamGrosser_2007.sph
        ├── stm
            ├── AaronHuey_2010X.stm
            ├── AdamGrosser_2007.stm
    └── readme
    └── TEDLIUM.150k.dic

The directory structure of TEDLIUM release3 is slightly different.

.
└──TEDLIUM_release-3
    └── data
        ├── ctl
        ├── sph
            ├── 911Mothers_2010W.sph
            ├── AalaElKhani.sph
        ├── stm
            ├── 911Mothers_2010W.stm
            ├── AalaElKhani.stm
    └── doc
    └── legacy
    └── LM
    └── speaker-adaptation
    └── readme
    └── TEDLIUM.150k.dic

Citation:

@article{rousseau2012tedlium,
  title={TED-LIUM: an automatic speech recognition dedicated corpus},
  author={A. Rousseau, P. Deléglise and Y. Estève},
  journal={Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year={2012},
  biburl={https://www.openslr.org/7/}
}

@article{rousseau2014tedlium,
  title={Enhancing the TED-LIUM Corpus with Selected Data for Language Modeling and More TED Talks},
  author={A. Rousseau, P. Deléglise and Y. Estève},
  journal={Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)},
  year={2014},
  biburl={https://www.openslr.org/19/}
}

@article{hernandez2018tedlium,
  title={TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation},
  author={François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko and Yannick Estève},
  journal={the 20th International Conference on Speech and Computer (SPECOM 2018)},
  year={2018},
  biburl={https://www.openslr.org/51/}
}
class tinyms.data.YesNoDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None, cache=None)[source]

YesNo dataset.

The generated dataset has three columns [waveform, sample_rate, labels] . The tensor of column waveform is a vector of the float32 type. The tensor of column sample_rate is a scalar of the int32 type. The tensor of column labels is a scalar of the int32 type.

Parameters:
  • dataset_dir (str) – Path to the root directory that contains the dataset.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will read all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_dir does not contain data files.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If sampler and shuffle are specified at the same time.

  • RuntimeError – If sampler and num_shards/shard_id are specified at the same time.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> yes_no_dataset_dir = "/path/to/yes_no_dataset_directory"
>>>
>>> # Read 3 samples from YesNo dataset
>>> dataset = ds.YesNoDataset(dataset_dir=yes_no_dataset_dir, num_samples=3)
>>>
>>> # Note: In YesNo dataset, each dictionary has keys "waveform", "sample_rate", "labels"
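
A minimal sketch that reads the recordings sequentially and prints the labels column:

>>> dataset = ds.YesNoDataset(dataset_dir=yes_no_dataset_dir, shuffle=False)
>>> for item in dataset.create_dict_iterator(output_numpy=True):
...     print(item["labels"])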

About YesNo dataset:

Yesno is an audio dataset consisting of 60 recordings of one individual saying yes or no in Hebrew; each recording is eight words long.

Here is the original YesNo dataset structure. You can unzip the dataset files into this directory structure and read by MindSpore’s API.

.
└── yes_no_dataset_dir
     ├── 1_1_0_0_1_1_0_0.wav
     ├── 1_0_0_0_1_1_0_0.wav
     └──....

Citation:

@NetworkResource{Kaldi_audio_project,
author    = {anonymous},
url       = "http://wwww.openslr.org/1/"
}
class tinyms.data.CSVDataset(dataset_files, field_delim=',', column_defaults=None, column_names=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, cache=None)[source]

A source dataset that reads and parses comma-separated values (CSV) files as dataset.

The columns of generated dataset depend on the source CSV files.

Parameters:
  • dataset_files (Union[str, list[str]]) – String or list of files to be read, or glob strings to search for a pattern of files. The list will be sorted in lexicographical order.

  • field_delim (str, optional) – A string that indicates the char delimiter to separate fields. Default: ‘,’.

  • column_defaults (list, optional) – List of default values for the CSV fields. Default: None. Each item in the list must be of a valid type (float, int, or string). If this parameter is not provided, all columns are treated as string type.

  • column_names (list[str], optional) – List of column names of the dataset. Default: None. If this parameter is not provided, the column names are inferred from the first row of the CSV file.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, will include all samples.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Default: Shuffle.GLOBAL. Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling will be performed. If shuffle is True, a global shuffle is performed. The shuffle level can be selected with the mindspore.dataset.Shuffle enum:

    • Shuffle.GLOBAL: Shuffle both the files and samples, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • RuntimeError – If dataset_files are not valid or do not exist.

  • ValueError – If field_delim is invalid.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> csv_dataset_dir = ["/path/to/csv_dataset_file"] # contains 1 or multiple csv files
>>> dataset = ds.CSVDataset(dataset_files=csv_dataset_dir, column_names=['col1', 'col2', 'col3', 'col4'])
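
When the CSV files have no header row, column_names supplies the schema and column_defaults fixes each field's type. A sketch with illustrative names that parses the last column as float and the rest as strings:

>>> dataset = ds.CSVDataset(dataset_files=csv_dataset_dir,
...                         column_names=['col1', 'col2', 'col3', 'col4'],
...                         column_defaults=["", "", "", 0.0])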
class tinyms.data.MindDataset(dataset_files, columns_list=None, num_parallel_workers=None, shuffle=None, num_shards=None, shard_id=None, sampler=None, padded_sample=None, num_padded=None, num_samples=None, cache=None)[source]

A source dataset that reads and parses MindRecord dataset.

The columns of generated dataset depend on the source MindRecord files.

Parameters:
  • dataset_files (Union[str, list[str]]) – If dataset_files is a str, it represents the file name of one component of a MindRecord source; other files with the same source in the same path will be found and loaded automatically. If dataset_files is a list, it represents a list of dataset files to be read directly.

  • columns_list (list[str], optional) – List of columns to be read. Default: None, read all columns.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, which uses the global default number of workers (8); this can be set by mindspore.dataset.config.set_num_parallel_workers.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Default: None, which performs a global shuffle. Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling will be performed. If shuffle is True, a global shuffle is performed. The shuffle level can be selected with the mindspore.dataset.Shuffle enum:

    • Shuffle.GLOBAL: Global shuffle of all rows of data in dataset, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle the file sequence but keep the order of data within each file.

    • Shuffle.INFILE: Keep the file sequence the same but shuffle the data within each file.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • sampler (Sampler, optional) – Object used to choose samples from the dataset. Default: None. sampler is mutually exclusive with shuffle and block_reader. Supported samplers: SubsetRandomSampler, PKSampler, RandomSampler, SequentialSampler, DistributedSampler.

  • padded_sample (dict, optional) – A sample used for padding that will be appended to the dataset; its keys must be the same as those in columns_list.

  • num_padded (int, optional) – Number of padding samples. Dataset size plus num_padded should be divisible by num_shards.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all samples.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

Raises:
  • ValueError – If dataset_files are not valid or do not exist.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

Parameter sampler      Parameter shuffle      Expected Order Behavior
None                   None                   random order
None                   True                   random order
None                   False                  sequential order
Sampler object         None                   order defined by sampler
Sampler object         True                   not allowed
Sampler object         False                  not allowed

Examples

>>> mind_dataset_dir = ["/path/to/mind_dataset_file"] # contains 1 or multiple MindRecord files
>>> dataset = ds.MindDataset(dataset_files=mind_dataset_dir)
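
A sketch that reads only selected columns (column names are illustrative) and shuffles rows within each file while preserving the file order:

>>> dataset = ds.MindDataset(dataset_files=mind_dataset_dir,
...                          columns_list=["data", "label"],
...                          shuffle=ds.Shuffle.INFILE)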
class tinyms.data.OBSMindDataset(dataset_files, server, ak, sk, sync_obs_path, columns_list=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, shard_equal_rows=True)[source]

A source dataset that reads and parses a MindRecord dataset stored in cloud storage such as OBS, Minio or AWS S3.

The columns of generated dataset depend on the source MindRecord files.

Parameters:
  • dataset_files (list[str]) – List of files in cloud storage to be read and file path is in the format of s3://bucketName/objectKey.

  • server (str) – Endpoint for accessing cloud storage. For the OBS service of Huawei Cloud, the endpoint is like <obs.cn-north-4.myhuaweicloud.com> (Region cn-north-4). For a locally started Minio, the endpoint is like <https://127.0.0.1:9000>.

  • ak (str) – Access key ID of cloud storage.

  • sk (str) – Secret access key of cloud storage.

  • sync_obs_path (str) – Remote dir path used for synchronization, users need to create it on cloud storage in advance. Path is in the format of s3://bucketName/objectKey.

  • columns_list (list[str], optional) – List of columns to be read. Default: None, read all columns.

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Default: Shuffle.GLOBAL. Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling will be performed. If shuffle is True, a global shuffle is performed. The shuffle level can be selected with the mindspore.dataset.Shuffle enum:

    • Shuffle.GLOBAL: Global shuffle of all rows of data in dataset, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle the file sequence but keep the order of data within each file.

    • Shuffle.INFILE: Keep the file sequence the same but shuffle the data within each file.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None.

  • shard_id (int, optional) – The shard ID within num_shards. Default: None. This argument can only be specified when num_shards is also specified.

  • shard_equal_rows (bool, optional) – Get equal rows for all shards. Default: True. If shard_equal_rows is False, the number of rows in each shard may not be equal, which may lead to a failure in distributed training. When the number of samples per MindRecord file is not equal, it is suggested to set it to True. This argument should only be specified when num_shards is also specified.

Raises:
  • RuntimeError – If sync_obs_path does not exist.

  • ValueError – If columns_list is invalid.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • It’s necessary to create a synchronization directory on cloud storage in advance, which is defined by the parameter sync_obs_path .

  • If training is offline (no cloud), it’s recommended to set the environment variable BATCH_JOB_ID .

  • In distributed training, if there are multiple nodes (servers), all 8 devices must be used in each node (server). If there is only one node (server), there is no such restriction.

Examples

>>> # OBS
>>> bucket = "iris"  # your obs bucket name
>>> # the bucket directory structure is similar to the following:
>>> #  - imagenet21k
>>> #        | - mr_imagenet21k_01
>>> #        | - mr_imagenet21k_02
>>> #  - sync_node
>>> dataset_obs_dir = ["s3://" + bucket + "/imagenet21k/mr_imagenet21k_01",
...                    "s3://" + bucket + "/imagenet21k/mr_imagenet21k_02"]
>>> sync_obs_dir = "s3://" + bucket + "/sync_node"
>>> num_shards = 8
>>> shard_id = 0
>>> dataset = ds.OBSMindDataset(dataset_obs_dir, "obs.cn-north-4.myhuaweicloud.com",
...                             "AK of OBS", "SK of OBS",
...                             sync_obs_dir, shuffle=True, num_shards=num_shards, shard_id=shard_id)
class tinyms.data.TFRecordDataset(dataset_files, schema=None, columns_list=None, num_samples=None, num_parallel_workers=None, shuffle=<Shuffle.GLOBAL: 'global'>, num_shards=None, shard_id=None, shard_equal_rows=False, cache=None, compression_type=None)[source]

A source dataset that reads and parses datasets stored on disk in TFData format.

The columns of generated dataset depend on the source TFRecord files.

Parameters:
  • dataset_files (Union[str, list[str]]) – String or list of files to be read or glob strings to search for a pattern of files. The list will be sorted in lexicographical order.

  • schema (Union[str, Schema], optional) – Data format policy, which specifies the data types and shapes of the data column to be read. Both JSON file path and objects constructed by mindspore.dataset.Schema are acceptable. Default: None.

  • columns_list (list[str], optional) – List of columns to be read. Default: None, read all columns.

  • num_samples (int, optional) – The number of samples (rows) to be included in the dataset. Default: None. The processing priority for num_samples is as follows:

    • If num_samples is greater than 0, read num_samples rows.

    • Otherwise, if numRows (parsed from schema ) is greater than 0, read numRows rows.

    • Otherwise, read the full dataset.

    num_samples or numRows (parsed from schema ) will be interpreted as the number of rows per shard. It is highly recommended to provide num_samples or numRows when compression_type is ‘GZIP’ or ‘ZLIB’, to avoid the performance degradation of decompressing the same file multiple times just to obtain its size.

  • num_parallel_workers (int, optional) – Number of worker threads to read the data. Default: None, will use global default workers(8), it can be set by mindspore.dataset.config.set_num_parallel_workers .

  • shuffle (Union[bool, Shuffle], optional) –

    Perform reshuffling of the data every epoch. Default: Shuffle.GLOBAL. Both bool and Shuffle enum values are accepted. If shuffle is False, no shuffling will be performed. If shuffle is True, a global shuffle is performed. The supported shuffle levels are defined by mindspore.dataset.Shuffle:

    • Shuffle.GLOBAL: Shuffle both the files and samples, same as setting shuffle to True.

    • Shuffle.FILES: Shuffle files only.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum sample number per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • shard_equal_rows (bool, optional) – Get equal rows for all shards. Default: False. If shard_equal_rows is False, the number of rows in each shard may not be equal, which may lead to a failure in distributed training. When the number of samples per TFRecord file is not equal, it is suggested to set it to True. This argument should only be specified when num_shards is also specified. When compression_type is not None and num_samples or numRows (parsed from schema ) is provided, shard_equal_rows will be implied as True.

  • cache (DatasetCache, optional) –

    Use tensor caching service to speed up dataset processing. More details: Single-Node Data Cache . Default: None, which means no cache is used.

  • compression_type (str, optional) – The type of compression used for all files; must be one of ‘’, ‘GZIP’ or ‘ZLIB’. Default: None, which is treated as the empty string ‘’ (no compression).

Raises:
  • ValueError – If dataset_files are not valid or do not exist.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

  • ValueError – If compression_type is invalid (other than ‘’, ‘GZIP’, or ‘ZLIB’).

  • ValueError – If compression_type is provided, but the number of dataset files < num_shards .

  • ValueError – If num_samples < 0.

Examples

>>> from mindspore import dtype as mstype
>>>
>>> tfrecord_dataset_dir = ["/path/to/tfrecord_dataset_file"] # contains 1 or multiple TFRecord files
>>> tfrecord_schema_file = "/path/to/tfrecord_schema_file"
>>>
>>> # 1) Get all rows from tfrecord_dataset_dir with no explicit schema.
>>> # The meta-data in the first row will be used as a schema.
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir)
>>>
>>> # 2) Get all rows from tfrecord_dataset_dir with user-defined schema.
>>> schema = ds.Schema()
>>> schema.add_column(name='col_1d', de_type=mstype.int64, shape=[2])
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir, schema=schema)
>>>
>>> # 3) Get all rows from tfrecord_dataset_dir with the schema file.
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir, schema=tfrecord_schema_file)
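For compressed files, the num_samples note above recommends supplying the row count explicitly. A hedged sketch, assuming GZIP-compressed TFRecord files and a known row count (both are placeholders):

>>> # 4) Read GZIP-compressed TFRecord files; providing num_samples avoids
>>> #    decompressing each file again just to determine its size.
>>> dataset = ds.TFRecordDataset(dataset_files=tfrecord_dataset_dir, num_samples=10000,
...                              compression_type="GZIP")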
class tinyms.data.GeneratorDataset(source, column_names=None, column_types=None, schema=None, num_samples=None, num_parallel_workers=1, shuffle=None, sampler=None, num_shards=None, shard_id=None, python_multiprocessing=True, max_rowsize=6)[source]

A source dataset that generates data from Python by invoking the Python data source each epoch.

The column names and column types of generated dataset depend on Python data defined by users.

Parameters:
  • source (Union[Callable, Iterable, Random Accessible]) – A generator callable object, an iterable Python object or a random accessible Python object. A callable source is required to return a tuple of NumPy arrays as a row of the dataset on each source().next() invocation; an iterable source on each iter(source).next() invocation; a random accessible source on each source[idx] access.

  • column_names (Union[str, list[str]], optional) – List of column names of the dataset. Default: None. Users are required to provide either column_names or schema.

  • column_types (list[mindspore.dtype], optional) – List of column data types of the dataset. Default: None. If provided, sanity check will be performed on generator output.

  • schema (Union[str, Schema], optional) – Data format policy, which specifies the data types and shapes of the data column to be read. Both JSON file path and objects constructed by mindspore.dataset.Schema are acceptable. Default: None.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all samples.

  • num_parallel_workers (int, optional) – Number of worker threads/subprocesses used to fetch the dataset in parallel. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Random accessible input is required. Default: None, expected order behavior shown in the table below.

  • sampler (Union[Sampler, Iterable], optional) – Object used to choose samples from the dataset. Random accessible input is required. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. Random accessible input is required. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified. Random accessible input is required.

  • python_multiprocessing (bool, optional) – Parallelize Python operations with multiple worker processes. This option could be beneficial if the Python operation is computationally heavy. Default: True.

  • max_rowsize (int, optional) – Maximum size of row in MB that is used for shared memory allocation to copy data between processes. This is only used if python_multiprocessing is set to True. Default: 6 MB.

Raises:
  • RuntimeError – If source raises an exception during execution.

  • RuntimeError – If len of column_names does not match output len of source.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If sampler and shuffle are specified at the same time.

  • ValueError – If sampler and sharding are specified at the same time.

  • ValueError – If num_shards is specified but shard_id is None.

  • ValueError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Note

  • If you configure python_multiprocessing=True (default: True) and num_parallel_workers>1 (default: 1), multi-process mode is started for data load acceleration. As the dataset iterates, the memory consumption of the subprocesses gradually increases, mainly because the subprocesses of the user-defined dataset obtain member variables from the main process in a copy-on-write way. Example: if you define a dataset whose __init__ function contains a large amount of member variable data (for example, a very large file name list loaded during dataset construction) and use multi-process mode, an out-of-memory problem may occur (the estimated total memory usage is (num_parallel_workers + 1) * size of the parent process). The simplest solutions are to replace Python objects (such as list/dict/int/float/string) with non reference-counted data types (such as Pandas, NumPy or PyArrow objects) for member variables, to load less metadata in member variables, or to configure python_multiprocessing=False to use multi-threading mode. A sketch of the first workaround is shown after the examples below.

  • Input source accepts user-defined Python functions (PyFuncs). Do not add network computing operators from mindspore.nn, mindspore.ops or other modules into this source .

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

  Parameter sampler   Parameter shuffle   Expected Order Behavior
  ------------------  ------------------  ------------------------
  None                None                random order
  None                True                random order
  None                False               sequential order
  Sampler object      None                order defined by sampler
  Sampler object      True                not allowed
  Sampler object      False               not allowed

Examples

>>> import numpy as np
>>>
>>> # 1) Multidimensional generator function as callable input.
>>> def generator_multidimensional():
...     for i in range(64):
...         yield (np.array([[i, i + 1], [i + 2, i + 3]]),)
>>>
>>> dataset = ds.GeneratorDataset(source=generator_multidimensional, column_names=["multi_dimensional_data"])
>>>
>>> # 2) Multi-column generator function as callable input.
>>> def generator_multi_column():
...     for i in range(64):
...         yield np.array([i]), np.array([[i, i + 1], [i + 2, i + 3]])
>>>
>>> dataset = ds.GeneratorDataset(source=generator_multi_column, column_names=["col1", "col2"])
>>>
>>> # 3) Iterable dataset as iterable input.
>>> class MyIterable:
...     def __init__(self):
...         self._index = 0
...         self._data = np.random.sample((5, 2))
...         self._label = np.random.sample((5, 1))
...
...     def __next__(self):
...         if self._index >= len(self._data):
...             raise StopIteration
...         else:
...             item = (self._data[self._index], self._label[self._index])
...             self._index += 1
...             return item
...
...     def __iter__(self):
...         self._index = 0
...         return self
...
...     def __len__(self):
...         return len(self._data)
>>>
>>> dataset = ds.GeneratorDataset(source=MyIterable(), column_names=["data", "label"])
>>>
>>> # 4) Random accessible dataset as random accessible input.
>>> class MyAccessible:
...     def __init__(self):
...         self._data = np.random.sample((5, 2))
...         self._label = np.random.sample((5, 1))
...
...     def __getitem__(self, index):
...         return self._data[index], self._label[index]
...
...     def __len__(self):
...         return len(self._data)
>>>
>>> dataset = ds.GeneratorDataset(source=MyAccessible(), column_names=["data", "label"])
>>>
>>> # list, dict, tuple of Python is also random accessible
>>> dataset = ds.GeneratorDataset(source=[(np.array(0),), (np.array(1),), (np.array(2),)], column_names=["col"])
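Following the note above on python_multiprocessing and copy-on-write memory growth, here is a minimal sketch of a random-accessible dataset that keeps its member data in a NumPy array instead of a reference-counted Python list (class name and sizes are illustrative):

>>> # 5) Store large member variables as NumPy arrays to limit copy-on-write
>>> #    memory growth when multi-process loading is enabled.
>>> class MyMemoryFriendly:
...     def __init__(self):
...         # np.arange instead of list(range(...)): elements are not
...         # individually reference-counted, so subprocesses avoid copying pages
...         self._data = np.arange(65536, dtype=np.int64)
...
...     def __getitem__(self, index):
...         return (np.array(self._data[index]),)
...
...     def __len__(self):
...         return len(self._data)
>>>
>>> dataset = ds.GeneratorDataset(source=MyMemoryFriendly(), column_names=["col"],
...                               num_parallel_workers=2, python_multiprocessing=True)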
class tinyms.data.NumpySlicesDataset(data, column_names=None, num_samples=None, num_parallel_workers=1, shuffle=None, sampler=None, num_shards=None, shard_id=None)[source]

Creates a dataset with given data slices, mainly for loading Python data into a dataset.

The column names and column types of generated dataset depend on Python data defined by users.

Parameters:
  • data (Union[list, tuple, dict]) – Input data in list, tuple, dict or other NumPy formats. The input data will be sliced along its first dimension to generate rows. If the input is a list, each row will have one column; otherwise rows tend to have multiple columns. Loading large data this way is not recommended, since the data is loaded into memory.

  • column_names (list[str], optional) – List of column names of the dataset. Default: None. If column_names is not provided, the output column names will be named as the keys of dict when the input data is a dict, otherwise they will be named like column_0, column_1 …

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all samples.

  • num_parallel_workers (int, optional) – Number of worker subprocesses used to fetch the dataset in parallel. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Default: None, expected order behavior shown in the table below.

  • sampler (Union[Sampler, Iterable], optional) – Object used to choose samples from the dataset. Default: None, expected order behavior shown in the table below.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

Note

  • This dataset can take in a sampler . sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using sampler and shuffle

  Parameter sampler   Parameter shuffle   Expected Order Behavior
  ------------------  ------------------  ------------------------
  None                None                random order
  None                True                random order
  None                False               sequential order
  Sampler object      None                order defined by sampler
  Sampler object      True                not allowed
  Sampler object      False               not allowed

Raises:
  • RuntimeError – If len of column_names does not match output len of data.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If sampler and shuffle are specified at the same time.

  • ValueError – If sampler and sharding are specified at the same time.

  • ValueError – If num_shards is specified but shard_id is None.

  • ValueError – If shard_id is specified but num_shards is None.

  • ValueError – If shard_id is not in range of [0, num_shards ).

Examples

>>> # 1) Input data can be a list
>>> data = [1, 2, 3]
>>> dataset = ds.NumpySlicesDataset(data=data, column_names=["column_1"])
>>>
>>> # 2) Input data can be a dictionary, and column_names will be its keys
>>> data = {"a": [1, 2], "b": [3, 4]}
>>> dataset = ds.NumpySlicesDataset(data=data)
>>>
>>> # 3) Input data can be a tuple of lists (or NumPy arrays), each tuple element refers to data in each column
>>> data = ([1, 2], [3, 4], [5, 6])
>>> dataset = ds.NumpySlicesDataset(data=data, column_names=["column_1", "column_2", "column_3"])
>>>
>>> # 4) Load data from CSV file
>>> import pandas as pd
>>> csv_dataset_dir = ["/path/to/csv_dataset_file"]  # contains 1 or multiple CSV files
>>> df = pd.read_csv(filepath_or_buffer=csv_dataset_dir[0])
>>> dataset = ds.NumpySlicesDataset(data=dict(df), shuffle=False)
class tinyms.data.PaddedDataset(padded_samples)[source]

Creates a dataset with filler data provided by the user.

Mainly used to append filler samples to the original dataset and assign them to the corresponding shard.

Parameters:

padded_samples (list(dict)) – Samples provided by the user.

Raises:
  • TypeError – If padded_samples is not an instance of list.

  • TypeError – If the element of padded_samples is not an instance of dict.

  • ValueError – If the padded_samples is empty.

Examples

>>> import numpy as np
>>> data = [{'image': np.zeros(1, np.uint8)}, {'image': np.zeros(2, np.uint8)}]
>>> dataset = ds.PaddedDataset(padded_samples=data)
class tinyms.data.GraphData(dataset_file, num_parallel_workers=None, working_mode='local', hostname='127.0.0.1', port=50051, num_client=1, auto_shutdown=True)[source]

Reads the graph dataset used for GNN training from the shared file and database. Supports reading graph datasets like Cora, Citeseer and PubMed.

For how to load a raw graph dataset into MindSpore, please refer to Loading Graph Dataset .

Parameters:
  • dataset_file (str) – One of file names in the dataset.

  • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel. Default: None.

  • working_mode (str, optional) –

    Set working mode, now supports ‘local’/’client’/’server’. Default: ‘local’.

    • ’local’, used in non-distributed training scenarios.

    • ’client’, used in distributed training scenarios. The client does not load data, but obtains data from the server.

    • ’server’, used in distributed training scenarios. The server loads the data and is available to the client.

  • hostname (str, optional) – Hostname of the graph data server. This parameter is only valid when working_mode is set to ‘client’ or ‘server’. Default: ‘127.0.0.1’.

  • port (int, optional) – Port of the graph data server. The range is 1024-65535. This parameter is only valid when working_mode is set to ‘client’ or ‘server’. Default: 50051.

  • num_client (int, optional) – Maximum number of clients expected to connect to the server. The server will allocate resources according to this parameter. This parameter is only valid when working_mode is set to ‘server’. Default: 1.

  • auto_shutdown (bool, optional) – Valid when working_mode is set to ‘server’. When the number of connected clients reaches num_client and no client is connected anymore, the server automatically exits. Default: True.

Raises:
  • ValueError – If dataset_file does not exist or permission denied.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If working_mode is not ‘local’, ‘client’ or ‘server’.

  • TypeError – If hostname is illegal.

  • ValueError – If port is not in range [1024, 65535].

  • ValueError – If num_client is not in range [1, 255].

Supported Platforms:

CPU

Examples

>>> graph_dataset_dir = "/path/to/graph_dataset_file"
>>> graph_data = ds.GraphData(dataset_file=graph_dataset_dir, num_parallel_workers=2)
>>> nodes = graph_data.get_all_nodes(node_type=1)
>>> features = graph_data.get_node_feature(node_list=nodes, feature_types=[1])
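A hedged sketch of the client/server working modes described above; hostname, port and the dataset path are placeholders, and the ‘server’ instance must be running before the ‘client’ connects:

>>> # on the serving process: load the data and make it available to clients
>>> graph_server = ds.GraphData(dataset_file=graph_dataset_dir, working_mode='server',
...                             hostname='127.0.0.1', port=50051, num_client=1)
>>> # on the training process: obtain data from the server instead of loading it
>>> graph_client = ds.GraphData(dataset_file=graph_dataset_dir, working_mode='client',
...                             hostname='127.0.0.1', port=50051)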
get_all_edges(edge_type)[source]

Get all edges in the graph.

Parameters:

edge_type (int) – Specify the type of edge.

Returns:

numpy.ndarray, array of edges.

Examples

>>> edges = graph_data.get_all_edges(edge_type=0)
Raises:

TypeError – If edge_type is not integer.

get_all_neighbors(node_list, neighbor_type, output_format=<OutputFormat.NORMAL: 0>)[source]

Get neighbor_type neighbors of the nodes in node_list . The following tables illustrate the definition of these formats: 1 represents a connection between two nodes, and 0 represents no connection.

Adjacency Matrix

        0    1    2    3
  0     0    1    0    0
  1     0    0    1    0
  2     1    0    0    1
  3     0    1    0    0

Normal Format

  src     0    1    2    3
  dst_0   1    2    0    1
  dst_1  -1   -1    3   -1

COO Format

  src   0    1    2    2    3
  dst   1    2    0    3    1

CSR Format

  offsetTable   0    1    2    4
  dstTable      1    2    0    3    1

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neighbor_type (int) – Specify the type of neighbor node.

  • output_format (OutputFormat, optional) – Output storage format. Default: OutputFormat.NORMAL. It can be any of [OutputFormat.NORMAL, OutputFormat.COO, OutputFormat.CSR].

Returns:

For NORMAL or COO format, a numpy.ndarray representing the array of neighbors is returned. If CSR format is specified, two numpy.ndarrays are returned: the first is the offset table, the second is the neighbors.

Examples

>>> from mindspore.dataset.engine import OutputFormat
>>> nodes = graph_data.get_all_nodes(node_type=1)
>>> neighbors = graph_data.get_all_neighbors(node_list=nodes, neighbor_type=2)
>>> neighbors_coo = graph_data.get_all_neighbors(node_list=nodes, neighbor_type=2,
...                                              output_format=OutputFormat.COO)
>>> offset_table, neighbors_csr = graph_data.get_all_neighbors(node_list=nodes, neighbor_type=2,
...                                                            output_format=OutputFormat.CSR)
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If neighbor_type is not integer.

get_all_nodes(node_type)[source]

Get all nodes in the graph.

Parameters:

node_type (int) – Specify the type of node.

Returns:

numpy.ndarray, array of nodes.

Examples

>>> nodes = graph_data.get_all_nodes(node_type=1)
Raises:

TypeError – If node_type is not integer.

get_edge_feature(edge_list, feature_types)[source]

Get feature_types feature of the edges in edge_list .

Parameters:
  • edge_list (Union[list, numpy.ndarray]) – The given list of edges.

  • feature_types (Union[list, numpy.ndarray]) – The given list of feature types, each element should be int.

Returns:

numpy.ndarray, array of features.

Examples

>>> edges = graph_data.get_all_edges(edge_type=0)
>>> features = graph_data.get_edge_feature(edge_list=edges, feature_types=[1])
Raises:
  • TypeError – If edge_list is not list or ndarray.

  • TypeError – If feature_types is not list or ndarray.

get_edges_from_nodes(node_list)[source]

Get edges from the nodes.

Parameters:

node_list (Union[list[tuple], numpy.ndarray]) – The given list of node ID pairs.

Returns:

numpy.ndarray, array of edge IDs.

Examples

>>> edges = graph_data.get_edges_from_nodes(node_list=[(101, 201), (103, 207)])
Raises:

TypeError – If node_list is not list or ndarray.

get_neg_sampled_neighbors(node_list, neg_neighbor_num, neg_neighbor_type)[source]

Get neg_neighbor_type negative sampled neighbors of the nodes in node_list .

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neg_neighbor_num (int) – Number of neighbors sampled.

  • neg_neighbor_type (int) – Specify the type of negative neighbor.

Returns:

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph_data.get_all_nodes(node_type=1)
>>> neg_neighbors = graph_data.get_neg_sampled_neighbors(node_list=nodes, neg_neighbor_num=5,
...                                                      neg_neighbor_type=2)
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If neg_neighbor_num is not integer.

  • TypeError – If neg_neighbor_type is not integer.

get_node_feature(node_list, feature_types)[source]

Get feature_types feature of the nodes in node_list .

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • feature_types (Union[list, numpy.ndarray]) – The given list of feature types, each element should be int.

Returns:

numpy.ndarray, array of features.

Examples

>>> nodes = graph_data.get_all_nodes(node_type=1)
>>> features = graph_data.get_node_feature(node_list=nodes, feature_types=[2, 3])
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If feature_types is not list or ndarray.

get_nodes_from_edges(edge_list)[source]

Get nodes from the edges.

Parameters:

edge_list (Union[list, numpy.ndarray]) – The given list of edges.

Returns:

numpy.ndarray, array of nodes.

Examples

>>> from mindspore.dataset import GraphData
>>>
>>> g = ds.GraphData("/path/to/testdata", 1)
>>> edges = g.get_all_edges(0)
>>> nodes = g.get_nodes_from_edges(edges)
Raises:

TypeError – If edge_list is not list or ndarray.

get_sampled_neighbors(node_list, neighbor_nums, neighbor_types, strategy=<SamplingStrategy.RANDOM: 0>)[source]

Get sampled neighbor information.

The API supports multi-hop neighbor sampling: the previous sampling result is used as the input of the next-hop sampling. A maximum of 6 hops is allowed.

The sampling result is tiled into a list in the format of [input node, 1-hop sampling result, 2-hop sampling result …].

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neighbor_nums (Union[list, numpy.ndarray]) – Number of neighbors sampled per hop.

  • neighbor_types (Union[list, numpy.ndarray]) – Neighbor type sampled per hop, type of each element in neighbor_types should be int.

  • strategy (SamplingStrategy, optional) –

    Sampling strategy. Default: SamplingStrategy.RANDOM. It can be any of [SamplingStrategy.RANDOM, SamplingStrategy.EDGE_WEIGHT].

    • SamplingStrategy.RANDOM, random sampling with replacement.

    • SamplingStrategy.EDGE_WEIGHT, sampling with edge weight as probability.

Returns:

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph_data.get_all_nodes(node_type=1)
>>> neighbors = graph_data.get_sampled_neighbors(node_list=nodes, neighbor_nums=[2, 2],
...                                              neighbor_types=[2, 1])
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If neighbor_nums is not list or ndarray.

  • TypeError – If neighbor_types is not list or ndarray.

graph_info()[source]

Get the meta information of the graph, including the number of nodes, the type of nodes, the feature information of nodes, the number of edges, the type of edges, and the feature information of edges.

Returns:

dict, meta information of the graph. The key is node_type, edge_type, node_num, edge_num, node_feature_type and edge_feature_type.

Examples

>>> from mindspore.dataset import GraphData
>>>
>>> g = ds.GraphData("/path/to/testdata", 2)
>>> graph_info = g.graph_info()

random_walk(target_nodes, meta_path, step_home_param=1.0, step_away_param=1.0, default_node=-1)[source]

Random walk in nodes.

Parameters:
  • target_nodes (list[int]) – Start node list for the random walk.

  • meta_path (list[int]) – Node type for each walk step.

  • step_home_param (float, optional) – Return hyperparameter (p) in the node2vec algorithm. Default: 1.0.

  • step_away_param (float, optional) – In-out hyperparameter (q) in the node2vec algorithm. Default: 1.0.

  • default_node (int, optional) – Default node if no more neighbors are found. Default: -1. A default value of -1 indicates that no node is given.

Returns:

numpy.ndarray, array of nodes.

Examples

>>> nodes = graph_data.get_all_nodes(node_type=1)
>>> walks = graph_data.random_walk(target_nodes=nodes, meta_path=[2, 1, 2])
Raises:
  • TypeError – If target_nodes is not list or ndarray.

  • TypeError – If meta_path is not list or ndarray.

class tinyms.data.Graph(edges, node_feat=None, edge_feat=None, graph_feat=None, node_type=None, edge_type=None, num_parallel_workers=None, working_mode='local', hostname='127.0.0.1', port=50051, num_client=1, auto_shutdown=True)[source]

A graph object for storing graph structure and feature data, and providing capabilities such as graph sampling.

This class supports initializing a graph with NumPy array data, which represents nodes, edges and their features. If the working mode is local , there is no need to specify input arguments like working_mode , hostname , port , num_client and auto_shutdown .

Parameters:
  • edges (Union[list, numpy.ndarray]) – Edges of the graph, in COO format with shape [2, num_edges].

  • node_feat (dict, optional) – Features of nodes. The input format should be a dict, where the key is the feature type, represented as a string like ‘weight’, and the value is a numpy.ndarray with shape [num_nodes, num_node_features].

  • edge_feat (dict, optional) – Features of edges. The input format should be a dict, where the key is the feature type, represented as a string like ‘weight’, and the value is a numpy.ndarray with shape [num_edges, num_edge_features].

  • graph_feat (dict, optional) – Additional features which cannot be assigned to node_feat or edge_feat. The input format should be a dict, where the key is the feature type, represented as a string, and the value is a numpy.ndarray whose shape is not restricted.

  • node_type (Union[list, numpy.ndarray], optional) – Type of nodes; each element should be a string representing the type of the corresponding node. If not provided, the default type for each node is “0”.

  • edge_type (Union[list, numpy.ndarray], optional) – Type of edges; each element should be a string representing the type of the corresponding edge. If not provided, the default type for each edge is “0”.

  • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel. Default: None.

  • working_mode (str, optional) –

    Set working mode, now supports ‘local’/’client’/’server’. Default: ‘local’.

    • ’local’, used in non-distributed training scenarios.

    • ’client’, used in distributed training scenarios. The client does not load data, but obtains data from the server.

    • ’server’, used in distributed training scenarios. The server loads the data and is available to the client.

  • hostname (str, optional) – Hostname of the graph data server. This parameter is only valid when working_mode is set to ‘client’ or ‘server’. Default: ‘127.0.0.1’.

  • port (int, optional) – Port of the graph data server. The range is 1024-65535. This parameter is only valid when working_mode is set to ‘client’ or ‘server’. Default: 50051.

  • num_client (int, optional) – Maximum number of clients expected to connect to the server. The server will allocate resources according to this parameter. This parameter is only valid when working_mode is set to ‘server’. Default: 1.

  • auto_shutdown (bool, optional) – Valid when working_mode is set to ‘server’, when the number of connected clients reaches num_client and no client is being connected, the server automatically exits. Default: True.

Raises:
  • TypeError – If edges is not a list or NumPy array.

  • TypeError – If node_feat is provided but is not a dict, or a key in the dict is not of string type, or a value in the dict is not a NumPy array.

  • TypeError – If edge_feat is provided but is not a dict, or a key in the dict is not of string type, or a value in the dict is not a NumPy array.

  • TypeError – If graph_feat is provided but is not a dict, or a key in the dict is not of string type, or a value in the dict is not a NumPy array.

  • TypeError – If node_type is provided but its type is not list or NumPy array.

  • TypeError – If edge_type is provided but its type is not list or NumPy array.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

  • ValueError – If working_mode is not ‘local’, ‘client’ or ‘server’.

  • TypeError – If hostname is illegal.

  • ValueError – If port is not in range [1024, 65535].

  • ValueError – If num_client is not in range [1, 255].

Examples

>>> import numpy as np
>>> from mindspore.dataset import Graph
>>>
>>> # 1) Only provide edges for creating graph, as this is the only required input parameter
>>> edges = np.array([[1, 2], [0, 1]], dtype=np.int32)
>>> graph = Graph(edges)
>>> graph_info = graph.graph_info()
>>>
>>> # 2) Setting node_feat and edge_feat for corresponding node and edge
>>> #    first dimension of feature shape should be corresponding node num or edge num.
>>> edges = np.array([[1, 2], [0, 1]], dtype=np.int32)
>>> node_feat = {"node_feature_1": np.array([[0], [1], [2]], dtype=np.int32)}
>>> edge_feat = {"edge_feature_1": np.array([[1, 2], [3, 4]], dtype=np.int32)}
>>> graph = Graph(edges, node_feat, edge_feat)
>>>
>>> # 3) Setting graph feature for graph, there is no shape limit for graph feature
>>> edges = np.array([[1, 2], [0, 1]], dtype=np.int32)
>>> graph_feature = {"graph_feature_1": np.array([1, 2, 3, 4, 5, 6], dtype=np.int32)}
>>> graph = Graph(edges, graph_feat=graph_feature)
get_all_edges(edge_type)[source]

Get all edges in the graph.

Parameters:

edge_type (str) – Specify the type of edge; the default edge_type is “0” when the graph was initialized without specifying edge_type.

Returns:

numpy.ndarray, array of edges.

Examples

>>> edges = graph.get_all_edges(edge_type="0")
Raises:

TypeError – If edge_type is not string.

get_all_neighbors(node_list, neighbor_type, output_format=<OutputFormat.NORMAL: 0>)[source]

Get neighbor_type neighbors of the nodes in node_list . The following tables illustrate the definition of these formats: 1 represents a connection between two nodes, and 0 represents no connection.

Adjacency Matrix

        0    1    2    3
  0     0    1    0    0
  1     0    0    1    0
  2     1    0    0    1
  3     0    1    0    0

Normal Format

  src     0    1    2    3
  dst_0   1    2    0    1
  dst_1  -1   -1    3   -1

COO Format

  src   0    1    2    2    3
  dst   1    2    0    3    1

CSR Format

  offsetTable   0    1    2    4
  dstTable      1    2    0    3    1

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neighbor_type (str) – Specify the type of neighbor node.

  • output_format (OutputFormat, optional) – Output storage format. Default: OutputFormat.NORMAL. It can be any of [OutputFormat.NORMAL, OutputFormat.COO, OutputFormat.CSR].

Returns:

For NORMAL or COO format, a numpy.ndarray representing the array of neighbors is returned. If CSR format is specified, two numpy.ndarrays are returned: the first is the offset table, the second is the neighbors.

Examples

>>> from mindspore.dataset.engine import OutputFormat
>>> nodes = graph.get_all_nodes(node_type="0")
>>> neighbors = graph.get_all_neighbors(node_list=nodes, neighbor_type="0")
>>> neighbors_coo = graph.get_all_neighbors(node_list=nodes, neighbor_type="0",
...                                         output_format=OutputFormat.COO)
>>> offset_table, neighbors_csr = graph.get_all_neighbors(node_list=nodes, neighbor_type="0",
...                                                       output_format=OutputFormat.CSR)
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If neighbor_type is not string.

get_all_nodes(node_type)[source]

Get all nodes in the graph.

Parameters:

node_type (str) – Specify the type of node.

Returns:

numpy.ndarray, array of nodes.

Examples

>>> nodes = graph.get_all_nodes(node_type="0")
Raises:

TypeError – If node_type is not string.

get_edge_feature(edge_list, feature_types)[source]

Get feature_types feature of the edges in edge_list .

Parameters:
  • edge_list (Union[list, numpy.ndarray]) – The given list of edges.

  • feature_types (Union[list, numpy.ndarray]) – The given list of feature types, each element should be string.

Returns:

numpy.ndarray, array of features.

Examples

>>> edges = graph.get_all_edges(edge_type="0")
>>> features = graph.get_edge_feature(edge_list=edges, feature_types=["edge_feature_1"])
Raises:
  • TypeError – If edge_list is not list or ndarray.

  • TypeError – If feature_types is not list or ndarray.

get_graph_feature(feature_types)[source]

Get the feature_types features stored at the graph feature level.

Parameters:

feature_types (Union[list, numpy.ndarray]) – The given list of feature types, each element should be string.

Returns:

numpy.ndarray, array of features.

Examples

>>> features = graph.get_graph_feature(feature_types=['graph_feature_1'])
Raises:

TypeError – If feature_types is not list or ndarray.

get_neg_sampled_neighbors(node_list, neg_neighbor_num, neg_neighbor_type)[source]

Get neg_neighbor_type negative sampled neighbors of the nodes in node_list .

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neg_neighbor_num (int) – Number of neighbors sampled.

  • neg_neighbor_type (str) – Specify the type of negative neighbor.

Returns:

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph.get_all_nodes(node_type="0")
>>> neg_neighbors = graph.get_neg_sampled_neighbors(node_list=nodes, neg_neighbor_num=3,
...                                                 neg_neighbor_type="0")
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If neg_neighbor_num is not integer.

  • TypeError – If neg_neighbor_type is not string.

get_node_feature(node_list, feature_types)[source]

Get feature_types feature of the nodes in node_list .

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • feature_types (Union[list, numpy.ndarray]) – The given list of feature types, each element should be string.

Returns:

numpy.ndarray, array of features.

Examples

>>> nodes = graph.get_all_nodes(node_type="0")
>>> features = graph.get_node_feature(node_list=nodes, feature_types=["node_feature_1"])
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If feature_types is not list or ndarray.

get_sampled_neighbors(node_list, neighbor_nums, neighbor_types, strategy=<SamplingStrategy.RANDOM: 0>)[source]

Get sampled neighbor information.

The API supports multi-hop neighbor sampling: the previous sampling result is used as the input of the next-hop sampling. A maximum of 6 hops is allowed.

The sampling result is tiled into a list in the format of [input node, 1-hop sampling result, 2-hop sampling result …].

Parameters:
  • node_list (Union[list, numpy.ndarray]) – The given list of nodes.

  • neighbor_nums (Union[list, numpy.ndarray]) – Number of neighbors sampled per hop.

  • neighbor_types (Union[list, numpy.ndarray]) – Neighbor type sampled per hop, type of each element in neighbor_types should be str.

  • strategy (SamplingStrategy, optional) –

    Sampling strategy. Default: SamplingStrategy.RANDOM. It can be any of [SamplingStrategy.RANDOM, SamplingStrategy.EDGE_WEIGHT].

    • SamplingStrategy.RANDOM, random sampling with replacement.

    • SamplingStrategy.EDGE_WEIGHT, sampling with edge weight as probability.

Returns:

numpy.ndarray, array of neighbors.

Examples

>>> nodes = graph.get_all_nodes(node_type="0")
>>> neighbors = graph.get_sampled_neighbors(node_list=nodes, neighbor_nums=[2, 2],
...                                         neighbor_types=["0", "0"])
Raises:
  • TypeError – If node_list is not list or ndarray.

  • TypeError – If neighbor_nums is not list or ndarray.

  • TypeError – If neighbor_types is not list or ndarray.

graph_info()[source]

Get the meta information of the graph, including the number of nodes, the type of nodes, the feature information of nodes, the number of edges, the type of edges, and the feature information of edges.

Returns:

dict, meta information of the graph. The key is node_type, edge_type, node_num, edge_num, node_feature_type, edge_feature_type and graph_feature_type.

class tinyms.data.InMemoryGraphDataset(data_dir, save_dir='./processed', column_names='graph', num_samples=None, num_parallel_workers=1, shuffle=None, num_shards=None, shard_id=None, python_multiprocessing=True, max_rowsize=6)[source]

Basic dataset for loading graphs into memory.

It is recommended to implement your own dataset by inheriting this class and implementing your own methods like process , save and load ; refer to the source code of ArgoverseDataset for how to implement your own dataset. When a dataset like ArgoverseDataset is initialized, the execution flow is as follows: check whether there is already processed data under the given data_dir ; if so, call the load method to load it directly, otherwise call the process method to create graphs and call the save method to save the graphs into save_dir .

You can access the graphs in the created dataset using graphs = my_dataset.graphs , and you can also iterate the dataset and get data using my_dataset.create_tuple_iterator() (in this case you need to implement methods like __getitem__ and __len__ ); refer to the following example for details. Note: the __new__ method has been overwritten to reinitialize __init__ internally, which means a user-defined __new__ method won’t work.

Parameters:
  • data_dir (str) – Directory for loading the dataset; it contains data in the original format, which will be loaded in the process method.

  • save_dir (str) – Relative directory for saving the processed dataset; this directory is under data_dir . Default: ‘./processed’.

  • column_names (Union[str, list[str]], optional) – Single column name or list of column names of the dataset; the number of column names should be equal to the number of items in the returned data when implementing methods like __getitem__ . Default: ‘graph’.

  • num_samples (int, optional) – The number of samples to be included in the dataset. Default: None, all samples.

  • num_parallel_workers (int, optional) – Number of subprocesses used to fetch the dataset in parallel. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. This parameter can only be specified when the implemented dataset has a random access attribute ( __getitem__ ). Default: None.

  • num_shards (int, optional) – Number of shards that the dataset will be divided into. Default: None. When this argument is specified, num_samples reflects the maximum number of samples per shard.

  • shard_id (int, optional) – The shard ID within num_shards . Default: None. This argument can only be specified when num_shards is also specified.

  • python_multiprocessing (bool, optional) – Parallelize Python operations with multiple worker process. This option could be beneficial if the Python operation is computational heavy. Default: True.

  • max_rowsize (int, optional) – Maximum size of row in MB that is used for shared memory allocation to copy data between processes. This is only used if python_multiprocessing is set to True. Default: 6 MB.

Raises:
  • TypeError – If data_dir is not of type str.

  • TypeError – If save_dir is not of type str.

  • TypeError – If num_parallel_workers is not of type int.

  • TypeError – If shuffle is not of type bool.

  • TypeError – If python_multiprocessing is not of type bool.

  • RuntimeError – If data_dir is not valid or does not exist.

  • RuntimeError – If num_shards is specified but shard_id is None.

  • RuntimeError – If shard_id is specified but num_shards is None.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Examples

>>> from mindspore.dataset import InMemoryGraphDataset, Graph
>>>
>>> class MyDataset(InMemoryGraphDataset):
...     def __init__(self, data_dir):
...         super().__init__(data_dir)
...
...     def process(self):
...         # create graph with loading data in given data_dir
...         # here create graph with numpy array directly instead
...         edges = np.array([[0, 1], [1, 2]])
...         graph = Graph(edges=edges)
...         self.graphs.append(graph)
...
...     def __getitem__(self, index):
...         # this method and '__len__' method are required when iterating created dataset
...         graph = self.graphs[index]
...         return graph.get_all_edges('0')
...
...     def __len__(self):
...         return len(self.graphs)
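A hypothetical usage of the MyDataset subclass above; because __getitem__ and __len__ are implemented, the created dataset can be iterated directly (the path is a placeholder):

>>> dataset = MyDataset("/path/to/origin_data_dir")
>>> for item in dataset.create_tuple_iterator(output_numpy=True, num_epochs=1):
...     pass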
load()[source]

Load data from the given (processed) path. You can also override this method in your dataset class.

process()[source]

Process method based on the original dataset. Override this method in your own dataset class.

save()[source]

Save processed data to disk in numpy.npz format. You can also override this method in your dataset class.

class tinyms.data.ArgoverseDataset(data_dir, column_names='graph', num_parallel_workers=1, shuffle=None, python_multiprocessing=True, perf_mode=True)[source]

Load the Argoverse dataset and create graphs.

The Argoverse dataset is a public dataset for autonomous driving. The current implementation of ArgoverseDataset is mainly for loading the Motion Forecasting Dataset within Argoverse; visit the official website for more detail: https://www.argoverse.org/av1.html#download-link.

Parameters:
  • data_dir (str) – Directory for loading the dataset; it contains data in the original format, which will be loaded in the process method.

  • column_names (Union[str, list[str]], optional) – Single column name or list of column names of the dataset. Default: “graph”. The number of column names should be equal to the number of items in the returned data when implementing methods like __getitem__ ; it is recommended to specify it as column_names=[“edge_index”, “x”, “y”, “cluster”, “valid_len”, “time_step_len”], as in the following example.

  • num_parallel_workers (int, optional) – Number of subprocesses used to fetch the dataset in parallel. Default: 1.

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. This parameter can only be specified when the implemented dataset has a random access attribute ( __getitem__ ). Default: None.

  • python_multiprocessing (bool, optional) – Parallelize Python operations with multiple worker process. This option could be beneficial if the Python operation is computational heavy. Default: True.

  • perf_mode (bool, optional) – Mode for obtaining higher performance when iterating the created dataset (the __getitem__ method is called in this process). Default: True, which saves all the data in the graph (like edge index, node features and graph features) into the graph feature.

Raises:
  • TypeError – If data_dir is not of type str.

  • TypeError – If num_parallel_workers is not of type int.

  • TypeError – If shuffle is not of type bool.

  • TypeError – If python_multiprocessing is not of type bool.

  • TypeError – If perf_mode is not of type bool.

  • RuntimeError – If data_dir is not valid or does not exist.

  • ValueError – If num_parallel_workers exceeds the max thread numbers.

Examples

>>> from mindspore.dataset import ArgoverseDataset
>>>
>>> argoverse_dataset_dir = "/path/to/argoverse_dataset_directory"
>>> graph_dataset = ArgoverseDataset(data_dir=argoverse_dataset_dir,
...                                  column_names=["edge_index", "x", "y", "cluster", "valid_len",
...                                                "time_step_len"])
>>> for item in graph_dataset.create_dict_iterator(output_numpy=True, num_epochs=1):
...     pass

About Argoverse Dataset:

Argoverse is the first dataset containing high-precision maps; it contains 290 km of high-precision map data with geometric shapes and semantic information.

You can unzip the dataset files into the following structure and read by MindSpore’s API:

.
└── argoverse_dataset_dir
    ├── train
    │    ├──...
    ├── val
    │    └──...
    ├── test
    │    └──...

Citation:

@inproceedings{Argoverse,
author     = {Ming-Fang Chang and John W Lambert and Patsorn Sangkloy and Jagjeet Singh
           and Slawomir Bak and Andrew Hartnett and De Wang and Peter Carr
           and Simon Lucey and Deva Ramanan and James Hays},
title      = {Argoverse: 3D Tracking and Forecasting with Rich Maps},
booktitle  = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year       = {2019}
}
process()[source]

Process method for the Argoverse dataset: the original dataset is loaded and a number of graphs are created from it. The pre-processing mainly refers to: https://github.com/xk-huang/yet-another-vectornet/blob/master/dataset.py.

class tinyms.data.DistributedSampler(dataset_size, num_replicas=None, rank=None, shuffle=True)[source]

Distributed sampler.

Parameters:
  • dataset_size (int) – Length of the dataset.

  • num_replicas (int) – Number of replicas. Default: None.

  • rank (int) – Rank of the current device. Default: None.

  • shuffle (bool) – Whether the dataset needs to be shuffled. Default: True.

Returns:

DistributedSampler instance.
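A minimal sketch of constructing the sampler per the parameters above (sizes and rank are illustrative):

>>> # shard 0 of 8 replicas over a 100-sample dataset, with shuffling
>>> sampler = ds.DistributedSampler(dataset_size=100, num_replicas=8, rank=0, shuffle=True)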

class tinyms.data.RandomSampler(replacement=False, num_samples=None)[source]

Samples the elements randomly.

Parameters:
  • replacement (bool, optional) – If True, put the sample ID back for the next draw. Default: False.

  • num_samples (int, optional) – Number of elements to sample. Default: None, which means sample all elements.

Raises:
  • TypeError – If replacement is not of type bool.

  • TypeError – If num_samples is not of type int.

  • ValueError – If num_samples is a negative value.

Examples

>>> # creates a RandomSampler
>>> sampler = ds.RandomSampler()
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
parse()[source]

Parse the sampler.

parse_for_minddataset()[source]

Parse the sampler for MindRecord.

class tinyms.data.SequentialSampler(start_index=None, num_samples=None)[source]

Samples the dataset elements sequentially, which is equivalent to not using a sampler.

Parameters:
  • start_index (int, optional) – Index to start sampling at. Default: None, start at first ID.

  • num_samples (int, optional) – Number of elements to sample. Default: None, which means sample all elements.

Raises:
  • TypeError – If start_index is not of type int.

  • TypeError – If num_samples is not of type int.

  • RuntimeError – If start_index is a negative value.

  • ValueError – If num_samples is a negative value.

Examples

>>> # creates a SequentialSampler
>>> sampler = ds.SequentialSampler()
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
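A variant using the parameters above to take a contiguous slice of the dataset:

>>> # starts at index 2 and sequentially samples the next 3 elements
>>> sampler = ds.SequentialSampler(start_index=2, num_samples=3)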
parse()[source]

Parse the sampler.

parse_for_minddataset()[source]

Parse the sampler for MindRecord.

class tinyms.data.SubsetRandomSampler(indices, num_samples=None)[source]

Samples the elements randomly from a sequence of indices.

Parameters:
  • indices (Iterable) – A sequence of indices (Any iterable Python object but string).

  • num_samples (int, optional) – Number of elements to sample. Default: None, which means sample all elements.

Raises:
  • TypeError – If elements of indices are not of type number.

  • TypeError – If num_samples is not of type int.

  • ValueError – If num_samples is a negative value.

Examples

>>> indices = [0, 1, 2, 3, 7, 88, 119]
>>>
>>> # create a SubsetRandomSampler, will sample from the provided indices
>>> sampler = ds.SubsetRandomSampler(indices)
>>> data = ds.ImageFolderDataset(image_folder_dataset_dir, num_parallel_workers=8, sampler=sampler)
parse()[source]

Parse the sampler.

parse_for_minddataset()[source]

Parse the sampler for MindRecord.

class tinyms.data.SubsetSampler(indices, num_samples=None)[source]

Samples the elements from a sequence of indices.

Parameters:
  • indices (Iterable) – A sequence of indices (Any iterable Python object but string).

  • num_samples (int, optional) – Number of elements to sample. Default: None, which means sample all elements.

Raises:
  • TypeError – If elements of indices are not of type number.

  • TypeError – If num_samples is not of type int.

  • ValueError – If num_samples is a negative value.

Examples

>>> indices = [0, 1, 2, 3, 4, 5]
>>>
>>> # creates a SubsetSampler, will sample from the provided indices
>>> sampler = ds.SubsetSampler(indices)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
parse()[source]

Parse the sampler.

parse_for_minddataset()[source]

Parse the sampler for MindRecord.

class tinyms.data.PKSampler(num_val, num_class=None, shuffle=False, class_column='label', num_samples=None)[source]

Samples K elements for each of the P classes in the dataset.

Parameters:
  • num_val (int) – Number of elements to sample for each class.

  • num_class (int, optional) – Number of classes to sample. Default: None, sample all classes. Specifying this parameter is currently not supported.

  • shuffle (bool, optional) – If True, the class IDs are shuffled, otherwise it will not be shuffled. Default: False.

  • class_column (str, optional) – Name of column with class labels for MindDataset. Default: ‘label’.

  • num_samples (int, optional) – The number of samples to draw. Default: None, which means sample all elements.

Raises:

Examples

>>> # creates a PKSampler that will get 3 samples from every class.
>>> sampler = ds.PKSampler(3)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
parse()[source]

Parse the sampler.

parse_for_minddataset()[source]

Parse the sampler for MindRecord.

class tinyms.data.WeightedRandomSampler(weights, num_samples=None, replacement=True)[source]

Samples the elements from [0, len(weights) - 1] randomly with the given weights (probabilities).

Parameters:
  • weights (list[float, int]) – A sequence of weights, not necessarily summing up to 1.

  • num_samples (int, optional) – Number of elements to sample. Default: None, which means sample all elements.

  • replacement (bool) – If True, put the sample ID back for the next draw. Default: True.

Raises:
  • TypeError – If elements of weights are not of type number.

  • TypeError – If num_samples is not of type int.

  • TypeError – If replacement is not of type bool.

  • RuntimeError – If weights is empty or all zero.

  • ValueError – If num_samples is a negative value.

Examples

>>> weights = [0.9, 0.01, 0.4, 0.8, 0.1, 0.1, 0.3]
>>>
>>> # creates a WeightedRandomSampler that will sample 4 elements with replacement
>>> sampler = ds.WeightedRandomSampler(weights, 4)
>>> dataset = ds.ImageFolderDataset(image_folder_dataset_dir,
...                                 num_parallel_workers=8,
...                                 sampler=sampler)
parse()[source]

Parse the sampler.

class tinyms.data.DatasetCache(session_id, size=0, spilling=False, hostname=None, port=None, num_connections=None, prefetch_size=None)[source]

A client to interface with the tensor caching service.

For details, please check Tutorial .

Parameters:
  • session_id (int) – A user assigned session id for the current pipeline.

  • size (int, optional) – Size of the memory set aside for the row caching. Default: 0, which means unlimited, note that it might bring in the risk of running out of memory on the machine.

  • spilling (bool, optional) – Whether or not spilling to disk if out of memory. Default: False.

  • hostname (str, optional) – Host name. Default: None, use default hostname ‘127.0.0.1’.

  • port (int, optional) – Port to connect to server. Default: None, use default port 50052.

  • num_connections (int, optional) – Number of tcp/ip connections. Default: None, use default value 12.

  • prefetch_size (int, optional) – The size of the cache queue between operations. Default: None, use default value 20.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # Create a cache instance, in which session_id is generated from command line `cache_admin -g`
>>> # In the following code, suppose the session_id is 780643335
>>> some_cache = ds.DatasetCache(session_id=780643335, size=0)
>>>
>>> dataset_dir = "/path/to/image_folder_dataset_directory"
>>> ds1 = ds.ImageFolderDataset(dataset_dir, cache=some_cache)
get_stat()[source]

Get the statistics from a cache. After the data pipeline has run, three types of statistics can be obtained: the average size of cached rows (avg_cache_sz), the number of rows cached in memory (num_mem_cached) and the number of rows cached on disk (num_disk_cached).
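
A minimal usage sketch (not part of the original reference), assuming the cache server has been started with cache_admin and dataset_dir points to an image folder; the statistics fields follow the description above:

>>> some_cache = ds.DatasetCache(session_id=780643335, size=0)
>>> ds1 = ds.ImageFolderDataset(dataset_dir, cache=some_cache)
>>> # iterate the pipeline once so the cache is populated
>>> num_rows = sum(1 for _ in ds1.create_dict_iterator(num_epochs=1, output_numpy=True))
>>> stat = some_cache.get_stat()
>>> print(stat.avg_cache_sz, stat.num_mem_cached, stat.num_disk_cached)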

class tinyms.data.DSCallback(step_size=1)[source]

Abstract base class used to build dataset callback classes.

Users can obtain the dataset pipeline context through ds_run_context , including cur_epoch_num , cur_step_num_in_epoch and cur_step_num .

Parameters:

step_size (int, optional) – The number of steps between adjacent ds_step_begin/ds_step_end calls. Default: 1, which means the callbacks will be invoked at every step.

Examples

>>> from mindspore.dataset import DSCallback
>>> from mindspore.dataset.transforms import transforms
>>>
>>> class PrintInfo(DSCallback):
...     def ds_epoch_end(self, ds_run_context):
...         print(ds_run_context.cur_epoch_num)
...         print(ds_run_context.cur_step_num)
>>>
>>> dataset = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
>>> op = transforms.OneHot(10)
>>> dataset = dataset.map(operations=op, callbacks=PrintInfo())
create_runtime_obj()[source]

Internal method, creates a runtime (C++) object from the callback methods defined by the user.

Returns:

_c_dataengine.PyDSCallback.

ds_begin(ds_run_context)[source]

Called before the data pipeline is started.

Parameters:

ds_run_context (RunContext) – Contains some information about the data pipeline.

ds_epoch_begin(ds_run_context)[source]

Called before a new epoch is started.

Parameters:

ds_run_context (RunContext) – Contains some information about the data pipeline.

ds_epoch_end(ds_run_context)[source]

Called after an epoch is finished.

Parameters:

ds_run_context (RunContext) – Contains some information about the data pipeline.

ds_step_begin(ds_run_context)[source]

Called before a step starts.

Parameters:

ds_run_context (RunContext) – Contains some information about the data pipeline.

ds_step_end(ds_run_context)[source]

Called after a step has finished.

Parameters:

ds_run_context (RunContext) – Contains some information about the data pipeline.

class tinyms.data.WaitedDSCallback(step_size=1)[source]

Abstract base class used to build dataset callback classes that are synchronized with the training callback class mindspore.train.Callback .

It can be used to execute a custom callback method before a step or an epoch, such as updating the parameters of operations according to the loss of the previous training epoch in auto augmentation.

Users can obtain the network training context through train_run_context , such as network , train_network , epoch_num , batch_num , loss_fn , optimizer , parallel_mode , device_number , list_callback , cur_epoch_num , cur_step_num , dataset_sink_mode , net_outputs , etc., see mindspore.train.Callback .

Users can obtain the dataset pipeline context through ds_run_context , including cur_epoch_num , cur_step_num_in_epoch and cur_step_num .

Note

Note that the call is triggered only at the beginning of the second step or epoch.

Parameters:

step_size (int, optional) – The number of rows in each step, usually set equal to the batch size. Default: 1.

Examples

>>> import mindspore.nn as nn
>>> import mindspore as ms
>>> from mindspore.dataset import WaitedDSCallback
>>> import mindspore.dataset as ds
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
>>>
>>> # custom callback class for data synchronization in data pipeline
>>> class MyWaitedCallback(WaitedDSCallback):
...     def __init__(self, events, step_size=1):
...         super().__init__(step_size)
...         self.events = events
...
...     # callback method to be executed by data pipeline before the epoch starts
...     def sync_epoch_begin(self, train_run_context, ds_run_context):
...         event = f"ds_epoch_begin_{ds_run_context.cur_epoch_num}_{ds_run_context.cur_step_num}"
...         self.events.append(event)
...
...     # callback method to be executed by data pipeline before the step starts
...     def sync_step_begin(self, train_run_context, ds_run_context):
...         event = f"ds_step_begin_{ds_run_context.cur_epoch_num}_{ds_run_context.cur_step_num}"
...         self.events.append(event)
>>>
>>> # custom callback class for data synchronization in network training
>>> class MyMSCallback(ms.Callback):
...     def __init__(self, events):
...         self.events = events
...
...     # callback method to be executed by network training after the epoch ends
...     def epoch_end(self, run_context):
...         cb_params = run_context.original_args()
...         event = f"ms_epoch_end_{cb_params.cur_epoch_num}_{cb_params.cur_step_num}"
...         self.events.append(event)
...
...     # callback method to be executed by network training after the step ends
...     def step_end(self, run_context):
...         cb_params = run_context.original_args()
...         event = f"ms_step_end_{cb_params.cur_epoch_num}_{cb_params.cur_step_num}"
...         self.events.append(event)
>>>
>>> # custom network
>>> class Net(nn.Cell):
...     def construct(self, x, y):
...         return x
>>>
>>> # define a parameter that needs to be synchronized between data pipeline and network training
>>> events = []
>>>
>>> # define callback classes of data pipeline and network training
>>> my_cb1 = MyWaitedCallback(events, 1)
>>> my_cb2 = MyMSCallback(events)
>>> arr = [1, 2, 3, 4]
>>>
>>> # construct data pipeline
>>> data = ds.NumpySlicesDataset((arr, arr), column_names=["c1", "c2"], shuffle=False)
>>> # map the data callback object into the pipeline
>>> data = data.map(operations=(lambda x: x), callbacks=my_cb1)
>>>
>>> net = Net()
>>> model = ms.Model(net)
>>>
>>> # add the data and network callback objects to the model training callback list
>>> model.train(2, data, dataset_sink_mode=False, callbacks=[my_cb2, my_cb1])
create_runtime_obj()[source]

Internal method, creates a runtime (C++) object from the callback methods defined by the user.

Returns:

_c_dataengine.PyDSCallback.

ds_epoch_begin(ds_run_context)[source]

Internal method, do not call/override. Defines mindspore.dataset.DSCallback.ds_epoch_begin to wait for mindspore.train.callback.Callback.epoch_end.

Parameters:

ds_run_context – Contains some information about the data pipeline.

ds_step_begin(ds_run_context)[source]

Internal method, do not call/override. Defines mindspore.dataset.DSCallback.ds_step_begin to wait for mindspore.train.callback.Callback.step_end.

Parameters:

ds_run_context – Contains some information about the data pipeline.

end(run_context)[source]

Internal method, release wait when the network training ends.

Parameters:

run_context – Contains some information about the model.

epoch_end(run_context)[source]

Internal method, do not call/override. Defines epoch_end of Callback to release the wait in ds_epoch_begin.

Parameters:

run_context – Contains some information about the model.

step_end(run_context)[source]

Internal method, do not call/override. Defines step_end of Callback to release the wait in ds_step_begin.

Parameters:

run_context – Contains some information about the model.

sync_epoch_begin(train_run_context, ds_run_context)[source]

Called before a new dataset epoch is started and after the previous training epoch is ended.

Parameters:
  • train_run_context – Contains some information about the model, with feedback from the previous epoch.

  • ds_run_context – Contains some information about the data pipeline.

sync_step_begin(train_run_context, ds_run_context)[source]

Called before a new dataset step is started and after the previous training step is ended.

Parameters:
  • train_run_context – Contains some information about the model, with feedback from the previous step.

  • ds_run_context – Contains some information about the data pipeline.

class tinyms.data.Schema(schema_file=None)[source]

Class to represent a schema of a dataset.

Parameters:

schema_file (str) – Path of the schema file. Default: None.

Returns:

Schema object, containing schema info about the dataset.

Raises:

RuntimeError – If the schema file fails to load.

Examples

>>> from mindspore import dtype as mstype
>>>
>>> # Create schema; specify column name, mindspore.dtype and shape of the column
>>> schema = ds.Schema()
>>> schema.add_column(name='col1', de_type=mstype.int64, shape=[2])
add_column(name, de_type, shape=None)[source]

Add new column to the schema.

Parameters:
  • name (str) – The name of the new column.

  • de_type (str) – Data type of the column.

  • shape (list[int], optional) – Shape of the column. Default: None, which is treated as [-1], an unknown shape of rank 1.

Raises:

ValueError – If column type is unknown.

Examples

>>> from mindspore import dtype as mstype
>>>
>>> schema = ds.Schema()
>>> schema.add_column('col_1d', de_type=mstype.int64, shape=[2])

from_json(json_obj)[source]

Load the schema from a JSON object.

Parameters:

json_obj (dict) – Parsed JSON object.

Examples

>>> import json
>>>
>>> from mindspore.dataset import Schema
>>>
>>> with open("/path/to/schema_file") as file:
...     json_obj = json.load(file)
...     schema = ds.Schema()
...     schema.from_json(json_obj)
parse_columns(columns)[source]

Parse the columns and add them to the schema.

Parameters:

columns (Union[dict, list[dict], tuple[dict]]) –

Dataset attribute information, decoded from the schema file.

  • list[dict]: name and type must be present as keys; shape is optional.

  • dict: columns.keys() are the column names and columns.values() are dicts containing type; shape is optional.

Examples

>>> from mindspore.dataset import Schema
>>> schema = Schema()
>>> columns1 = [{'name': 'image', 'type': 'int8', 'shape': [3, 3]},
...             {'name': 'label', 'type': 'int8', 'shape': [1]}]
>>> schema.parse_columns(columns1)
>>> columns2 = {'image': {'shape': [3, 3], 'type': 'int8'}, 'label': {'shape': [1], 'type': 'int8'}}
>>> schema.parse_columns(columns2)
to_json()[source]

Get a JSON string of the schema.

Returns:

str, JSON string of the schema.

Examples

>>> from mindspore.dataset import Schema
>>>
>>> schema1 = Schema()
>>> schema2 = schema1.to_json()
tinyms.data.compare(pipeline1, pipeline2)[source]

Compare if two dataset pipelines are the same.

Parameters:
  • pipeline1 (Dataset) – a dataset pipeline.

  • pipeline2 (Dataset) – a dataset pipeline.

Returns:

bool, whether pipeline1 is equal to pipeline2.

Examples

>>> pipeline1 = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
>>> pipeline2 = ds.Cifar10Dataset(cifar10_dataset_dir, num_samples=100)
>>> res = ds.compare(pipeline1, pipeline2)
tinyms.data.deserialize(input_dict=None, json_filepath=None)[source]

Construct dataset pipeline from a JSON file produced by dataset serialize function.

Parameters:
  • input_dict (dict) – A Python dictionary containing a serialized dataset graph. Default: None.

  • json_filepath (str) – A path to the JSON file containing dataset graph. User can obtain this file by calling API mindspore.dataset.serialize() . Default: None.

Returns:

de.Dataset or None if an error occurs.

Raises:

OSError – Can not open the JSON file.

Examples

>>> dataset = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
>>> one_hot_encode = transforms.OneHot(10)  # num_classes is input argument
>>> dataset = dataset.map(operations=one_hot_encode, input_columns="label")
>>> dataset = dataset.batch(batch_size=10, drop_remainder=True)
>>> # Case 1: to/from JSON file
>>> serialized_data = ds.serialize(dataset, json_filepath="/path/to/mnist_dataset_pipeline.json")
>>> deserialized_dataset = ds.deserialize(json_filepath="/path/to/mnist_dataset_pipeline.json")
>>> # Case 2: to/from Python dictionary
>>> serialized_data = ds.serialize(dataset)
>>> deserialized_dataset = ds.deserialize(input_dict=serialized_data)
tinyms.data.serialize(dataset, json_filepath='')[source]

Serialize dataset pipeline into a JSON file.

Note

Complete serialization of Python objects is not currently supported. Scenarios that are not supported include data pipelines that use GeneratorDataset or map or batch operations that contain custom Python functions. For Python objects, serialization operations do not yield the full object content, which means that deserialization of the JSON file obtained by serialization may result in errors. For example, when serializing the data pipeline of Python user-defined functions, a related warning message is reported and the obtained JSON file cannot be deserialized into a usable data pipeline.

Parameters:
  • dataset (Dataset) – The starting node.

  • json_filepath (str) – The filepath where a serialized JSON file will be generated. Default: ‘’.

Returns:

Dict, the dictionary contains the serialized dataset graph.

Raises:

OSError – Cannot open a file.

Examples

>>> dataset = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
>>> one_hot_encode = transforms.OneHot(10)  # num_classes is input argument
>>> dataset = dataset.map(operations=one_hot_encode, input_columns="label")
>>> dataset = dataset.batch(batch_size=10, drop_remainder=True)
>>> # serialize it to JSON file
>>> serialized_data = ds.serialize(dataset, json_filepath="/path/to/mnist_dataset_pipeline.json")
tinyms.data.show(dataset, indentation=2)[source]

Write the dataset pipeline graph to logger.info file.

Parameters:
  • dataset (Dataset) – The starting node.

  • indentation (int, optional) – The indentation used by the JSON print. Do not indent if indentation is None. Default: 2.

Examples

>>> dataset = ds.MnistDataset(mnist_dataset_dir, num_samples=100)
>>> one_hot_encode = transforms.OneHot(10)
>>> dataset = dataset.map(operations=one_hot_encode, input_columns="label")
>>> dataset = dataset.batch(batch_size=10, drop_remainder=True)
>>> ds.show(dataset)
tinyms.data.sync_wait_for_dataset(rank_id, rank_size, current_epoch)[source]

Wait until the dataset files required by all devices are downloaded.

Note

It should be used together with mindspore.dataset.OBSMindDataset and be called before each epoch.

Parameters:
  • rank_id (int) – Rank ID of the device.

  • rank_size (int) – Rank size.

  • current_epoch (int) – The number of the current epoch.

Examples

>>> # Create a synchronization callback
>>> import mindspore as ms
>>> from mindspore.dataset import sync_wait_for_dataset
>>>
>>> class SyncForDataset(ms.Callback):
...     def __init__(self):
...         super(SyncForDataset, self).__init__()
...     def epoch_begin(self, run_context):
...         cb_params = run_context.original_args()
...         epoch_num = cb_params.cur_epoch_num
...         sync_wait_for_dataset(rank_id, rank_size, epoch_num)
tinyms.data.zip(datasets)[source]

Zip the datasets in the input tuple of datasets.

Parameters:

datasets (tuple[Dataset]) – A tuple of datasets to be zipped together. The number of datasets must be more than 1.

Returns:

Dataset, the zipped dataset.

Examples

>>> # Create a dataset which is the combination of dataset_1 and dataset_2
>>> dataset = ds.zip((dataset_1, dataset_2))
class tinyms.data.FileWriter(file_name, shard_num=1, overwrite=False)[source]

Class to write user defined raw data into MindRecord files.

Note

After the MindRecord file is generated, if the file name is changed, the file may fail to be read.

Parameters:
  • file_name (str) – File name of MindRecord file.

  • shard_num (int, optional) – The number of MindRecord files. It should be in the range [1, 1000]. Default: 1.

  • overwrite (bool, optional) – Whether to overwrite if the file already exists. Default: False.

Raises:

ParamValueError – If file_name or shard_num or overwrite is invalid.

Examples

>>> from mindspore.mindrecord import FileWriter
>>> schema_json = {"file_name": {"type": "string"}, "label": {"type": "int32"}, "data": {"type": "bytes"}}
>>> indexes = ["file_name", "label"]
>>> data = [{"file_name": "1.jpg", "label": 0,
...          "data": b"\x10c\xb3w\xa8\xee$o&<q\x8c\x8e(\xa2\x90\x90\x96\xbc\xb1\x1e\xd4QER\x13?\xff"},
...         {"file_name": "2.jpg", "label": 56,
...          "data": b"\xe6\xda\xd1\xae\x07\xb8>\xd4\x00\xf8\x129\x15\xd9\xf2q\xc0\xa2\x91YFUO\x1dsE1"},
...         {"file_name": "3.jpg", "label": 99,
...          "data": b"\xaf\xafU<\xb8|6\xbd}\xc1\x99[\xeaj+\x8f\x84\xd3\xcc\xa0,i\xbb\xb9-\xcdz\xecp{T\xb1"}]
>>> writer = FileWriter(file_name="test.mindrecord", shard_num=1, overwrite=True)
>>> schema_id = writer.add_schema(schema_json, "test_schema")
>>> status = writer.add_index(indexes)
>>> status = writer.write_raw_data(data)
>>> status = writer.commit()
add_index(index_fields)[source]

Select index fields from the schema to accelerate reading. The schema is added through add_schema .

Note

The index fields should be of primitive type, e.g. int/float/str. If this function is not called, the fields of primitive type in the schema are set as indexes by default.

Please refer to the Examples of class: mindspore.mindrecord.FileWriter .

Parameters:

index_fields (list[str]) – fields from schema.

Returns:

MSRStatus, SUCCESS or FAILED.

Raises:
  • ParamTypeError – If index field is invalid.

  • MRMDefineIndexError – If index field is not primitive type.

  • MRMAddIndexError – If failed to add index field.

  • MRMGetMetaError – If the schema is not set or failed to get meta.

add_schema(content, desc=None)[source]

Add a schema that describes the raw data to be written.

Note

Please refer to the Examples of class: mindspore.mindrecord.FileWriter .

Parameters:
  • content (dict) – Dictionary of schema content.

  • desc (str, optional) – String of schema description. Default: None.

Returns:

int, schema id.

Raises:
  • MRMInvalidSchemaError – If schema is invalid.

  • MRMBuildSchemaError – If failed to build schema.

  • MRMAddSchemaError – If failed to add schema.

commit()[source]

Flush data in memory to disk and generate the corresponding database files.

Note

Please refer to the Examples of class: mindspore.mindrecord.FileWriter .

Returns:

MSRStatus, SUCCESS or FAILED.

Raises:
  • MRMOpenError – If failed to open MindRecord file.

  • MRMSetHeaderError – If failed to set header.

  • MRMIndexGeneratorError – If failed to create index generator.

  • MRMGenerateIndexError – If failed to write to database.

  • MRMCommitError – If failed to flush data to disk.

  • RuntimeError – Parallel write failed.

open_and_set_header()[source]

Open the writer and set the header which stores meta information. This function is only used for parallel writing and is called before write_raw_data .

Returns:

MSRStatus, SUCCESS or FAILED.

Raises:
  • MRMOpenError – If failed to open MindRecord file.

  • MRMSetHeaderError – If failed to set header.
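
A minimal sketch of the parallel-writing workflow this method belongs to (not part of the original reference), assuming schema_json and data are defined as in the FileWriter class example above:

>>> writer = FileWriter(file_name="test.mindrecord", shard_num=4, overwrite=True)
>>> schema_id = writer.add_schema(schema_json, "test_schema")
>>> writer.open_and_set_header()
>>> status = writer.write_raw_data(data, parallel_writer=True)
>>> status = writer.commit()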

classmethod open_for_append(file_name)[source]

Open MindRecord file and get ready to append data.

Parameters:

file_name (str) – String of MindRecord file name.

Returns:

FileWriter, file writer object for the opened MindRecord file.

Raises:
  • ParamValueError – If file_name is invalid.

  • FileNameError – If path contains invalid characters.

  • MRMOpenError – If failed to open MindRecord file.

  • MRMOpenForAppendError – If failed to open file for appending data.

Examples

>>> from mindspore.mindrecord import FileWriter
>>> schema_json = {"file_name": {"type": "string"}, "label": {"type": "int32"}, "data": {"type": "bytes"}}
>>> data = [{"file_name": "1.jpg", "label": 0,
...          "data": b"\x10c\xb3w\xa8\xee$o&<q\x8c\x8e(\xa2\x90\x90\x96\xbc\xb1\x1e\xd4QER\x13?\xff"}]
>>> writer = FileWriter(file_name="test.mindrecord", shard_num=1, overwrite=True)
>>> schema_id = writer.add_schema(schema_json, "test_schema")
>>> status = writer.write_raw_data(data)
>>> status = writer.commit()
>>> write_append = FileWriter.open_for_append("test.mindrecord")
>>> status = write_append.write_raw_data(data)
>>> status = write_append.commit()
set_header_size(header_size)[source]

Set the size of the header, which contains shard information, schema information, page meta information, etc. The larger the header, the more data the MindRecord file can store. If the size of the header is larger than the default size (16MB), users need to call this API to set a proper size.

Parameters:

header_size (int) – Size of the header, between 16*1024 (16KB) and 128*1024*1024 (128MB).

Returns:

MSRStatus, SUCCESS or FAILED.

Raises:

MRMInvalidHeaderSizeError – If failed to set header size.

Examples

>>> from mindspore.mindrecord import FileWriter
>>> writer = FileWriter(file_name="test.mindrecord", shard_num=1)
>>> status = writer.set_header_size(1 << 25) # 32MB
set_page_size(page_size)[source]

Set the size of a page, which represents the area where data is stored. Pages are divided into two types: raw page and blob page. The larger the page, the more data it can store. If the size of a sample is larger than the default size (32MB), users need to call this API to set a proper size.

Parameters:

page_size (int) – Size of a page, between 32*1024 (32KB) and 256*1024*1024 (256MB).

Returns:

MSRStatus, SUCCESS or FAILED.

Raises:

MRMInvalidPageSizeError – If failed to set page size.

Examples

>>> from mindspore.mindrecord import FileWriter
>>> writer = FileWriter(file_name="test.mindrecord", shard_num=1)
>>> status = writer.set_page_size(1 << 26)  # 64MB
write_raw_data(raw_data, parallel_writer=False)[source]

Convert raw data into a series of consecutive MindRecord files after the raw data is verified against the schema.

Note

Please refer to the Examples of class: mindspore.mindrecord.FileWriter .

Parameters:
  • raw_data (list[dict]) – List of raw data.

  • parallel_writer (bool, optional) – Write raw data in parallel if True. Default: False.

Returns:

MSRStatus, SUCCESS or FAILED.

Raises:
  • ParamTypeError – If index field is invalid.

  • MRMOpenError – If failed to open MindRecord file.

  • MRMValidateDataError – If data does not match blob fields.

  • MRMSetHeaderError – If failed to set header.

  • MRMWriteDatasetError – If failed to write dataset.

  • TypeError – If parallel_writer is not bool.

class tinyms.data.FileReader(file_name, num_consumer=4, columns=None, operator=None)[source]

Class to read MindRecord files.

Note

If file_name is a file path, it tries to load all MindRecord files generated in the same conversion and throws an exception if any MindRecord file is missing. If file_name is a list of file paths, only the MindRecord files in the list are loaded.

Parameters:
  • file_name (str, list[str]) – One of MindRecord file path or file path list.

  • num_consumer (int, optional) – Number of reader workers which load data. Default: 4. It should not be smaller than 1 or larger than the number of processor cores.

  • columns (list[str], optional) – A list of fields where corresponding data would be read. Default: None.

  • operator (int, optional) – Reserved parameter for operators. Default: None.

Raises:

ParamValueError – If file_name , num_consumer or columns is invalid.

Examples

>>> from mindspore.mindrecord import FileReader
>>>
>>> mindrecord_file = "/path/to/mindrecord/file"
>>> reader = FileReader(file_name=mindrecord_file)
>>>
>>> # create iterator for mindrecord and get saved data
>>> for _, item in enumerate(reader.get_next()):
...     ori_data = item
>>> reader.close()
close()[source]

Stop reader worker and close file.

get_next()[source]

Yield one batch of data at a time, according to columns.

Returns:

dict, a batch whose keys are the same as columns.

Raises:

MRMUnsupportedSchemaError – If schema is invalid.

len()[source]

Get the number of samples in the MindRecord.

Returns:

int, the number of samples in the MindRecord.

schema()[source]

Get the schema of the MindRecord.

Returns:

dict, the schema info.

class tinyms.data.MindPage(file_name, num_consumer=4)[source]

Class to read MindRecord files in pagination.

Parameters:
  • file_name (Union[str, list[str]]) – One of MindRecord files or a file list.

  • num_consumer (int, optional) – The number of reader workers which load data. Default: 4. It should not be smaller than 1 or larger than the number of processor cores.

Raises:
  • ParamValueError – If file_name , num_consumer or columns is invalid.

  • MRMInitSegmentError – If failed to initialize ShardSegment.

property candidate_fields

Return candidate category fields.

Returns:

list[str], by which data could be grouped.

property category_field

Getter function for category fields.

Returns:

list[str], by which data could be grouped.

get_category_fields()[source]

Return candidate category fields.

Returns:

list[str], by which data could be grouped.

read_at_page_by_id(category_id, page, num_row)[source]

Query by category id in pagination.

Parameters:
  • category_id (int) – Category id; refer to the return value of read_category_info .

  • page (int) – Index of page.

  • num_row (int) – Number of rows in a page.

Returns:

list[dict], data queried by category id.

Raises:
  • ParamValueError – If any parameter is invalid.

  • MRMFetchDataError – If failed to fetch data by category.

  • MRMUnsupportedSchemaError – If schema is invalid.

read_at_page_by_name(category_name, page, num_row)[source]

Query by category name in pagination.

Parameters:
  • category_name (str) – String value of the category field; refer to the return value of read_category_info .

  • page (int) – Index of page.

  • num_row (int) – Number of rows in a page.

Returns:

list[dict], data queried by category name.

read_category_info()[source]

Return category information when data is grouped by indicated category field.

Returns:

str, description of group information.

Raises:

MRMReadCategoryInfoError – If failed to read category information.

set_category_field(category_field)[source]

Set category field for reading.

Note

Should be a candidate category field.

Parameters:

category_field (str) – String of category field name.

Returns:

MSRStatus, SUCCESS or FAILED.
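
A minimal pagination sketch (not part of the original reference), assuming a MindRecord file whose schema has a 'label' index field:

>>> from mindspore.mindrecord import MindPage
>>>
>>> mind_page = MindPage("/path/to/mindrecord/file")
>>> fields = mind_page.candidate_fields        # e.g. ['file_name', 'label']
>>> mind_page.set_category_field("label")
>>> info = mind_page.read_category_info()      # group information as a string
>>> rows = mind_page.read_at_page_by_id(category_id=0, page=0, num_row=16)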

class tinyms.data.Cifar10ToMR(source, destination)[source]

A class to transform from cifar10 to MindRecord.

Note

For details about Examples, please refer to Converting the CIFAR-10 Dataset .

Parameters:
  • source (str) – The cifar10 directory to be transformed.

  • destination (str) – MindRecord file path to transform into; ensure that the directory is created in advance and that no file with the same name exists in the directory.

Raises:

ValueError – If source or destination is invalid.

run(fields=None)[source]

Execute transformation from cifar10 to MindRecord.

Parameters:

fields (list[str], optional) – A list of index fields. Default: None. For index field settings, please refer to mindspore.mindrecord.FileWriter.add_index() .

Returns:

MSRStatus, SUCCESS or FAILED.

transform(fields=None)[source]

Encapsulates the mindspore.mindrecord.Cifar10ToMR.run() function so that it exits normally.

Parameters:

fields (list[str], optional) – A list of index fields. Default: None. For index field settings, please refer to mindspore.mindrecord.FileWriter.add_index() .

Returns:

MSRStatus, SUCCESS or FAILED.
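
A minimal usage sketch (not part of the original reference), assuming a CIFAR-10 directory and an existing output directory; 'label' as an index field is an illustrative choice:

>>> from mindspore.mindrecord import Cifar10ToMR
>>>
>>> cifar10_transformer = Cifar10ToMR("/path/to/cifar10", "/path/to/output/cifar10.mindrecord")
>>> status = cifar10_transformer.transform(['label'])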

class tinyms.data.Cifar100ToMR(source, destination)[source]

A class to transform from cifar100 to MindRecord.

Note

For details about Examples, please refer to Converting the CIFAR-10 Dataset .

Parameters:
  • source (str) – The cifar100 directory to be transformed.

  • destination (str) – MindRecord file path to transform into; ensure that the directory is created in advance and that no file with the same name exists in the directory.

Raises:

ValueError – If source or destination is invalid.

run(fields=None)[source]

Execute transformation from cifar100 to MindRecord.

Parameters:

fields (list[str], optional) – A list of index fields, e.g. ["fine_label", "coarse_label"]. Default: None. For index field settings, please refer to mindspore.mindrecord.FileWriter.add_index() .

Returns:

MSRStatus, SUCCESS or FAILED.

transform(fields=None)[source]

Encapsulates the mindspore.mindrecord.Cifar100ToMR.run() function so that it exits normally.

Parameters:

fields (list[str], optional) – A list of index fields, e.g. ["fine_label", "coarse_label"]. Default: None. For index field settings, please refer to mindspore.mindrecord.FileWriter.add_index() .

Returns:

MSRStatus, SUCCESS or FAILED.

class tinyms.data.CsvToMR(source, destination, columns_list=None, partition_number=1)[source]

A class to transform from csv to MindRecord.

Note

For details about Examples, please refer to Converting CSV Dataset .

Parameters:
  • source (str) – The file path of csv.

  • destination (str) – The MindRecord file path to transform into; ensure that the directory is created in advance and that no file with the same name exists in the directory.

  • columns_list (list[str], optional) – A list of columns to be read. Default: None.

  • partition_number (int, optional) – The partition size. Default: 1.

Raises:
  • ValueError – If source , destination or partition_number is invalid.

  • RuntimeError – If columns_list is invalid.

run()[source]

Execute transformation from csv to MindRecord.

Returns:

MSRStatus, SUCCESS or FAILED.

transform()[source]

Encapsulates the mindspore.mindrecord.CsvToMR.run() function so that it exits normally.

Returns:

MSRStatus, SUCCESS or FAILED.

class tinyms.data.ImageNetToMR(map_file, image_dir, destination, partition_number=1)[source]

A class to transform from imagenet to MindRecord.

Parameters:
  • map_file (str) –

The map file that indicates the labels. This file can be generated by the command ls -l [image_dir] | grep -vE "total|\." | awk -F " " '{print $9, NR-1;}' > [file_path] , where image_dir is the image directory containing the n01440764, n01443537, n01484850 and n15075141 directories, and file_path is the generated map_file . An example of map_file is as below:

    n01440764 0
    n01443537 1
    n01484850 2
    n01491361 3
    ...
    n15075141 999
    

  • image_dir (str) – Image directory containing the n01440764, n01443537, n01484850 and n15075141 directories.

  • destination (str) – MindRecord file path to transform into; ensure that the directory is created in advance and that no file with the same name exists in the directory.

  • partition_number (int, optional) – The partition size. Default: 1.

Raises:

ValueError – If map_file , image_dir or destination is invalid.

run()[source]

Execute transformation from imagenet to MindRecord.

Returns:

MSRStatus, SUCCESS or FAILED.

transform()[source]

Encapsulates the mindspore.mindrecord.ImageNetToMR.run() function so that it exits normally.

Returns:

MSRStatus, SUCCESS or FAILED.

class tinyms.data.MnistToMR(source, destination, partition_number=1)[source]

A class to transform from Mnist to MindRecord.

Parameters:
  • source (str) – Directory that contains t10k-images-idx3-ubyte.gz, train-images-idx3-ubyte.gz, t10k-labels-idx1-ubyte.gz and train-labels-idx1-ubyte.gz.

  • destination (str) – MindRecord file path to transform into; ensure that the directory is created in advance and that no file with the same name exists in the directory.

  • partition_number (int, optional) – The partition size. Default: 1.

Raises:

ValueError – If source , destination or partition_number is invalid.

run()[source]

Execute transformation from Mnist to MindRecord.

Returns:

MSRStatus, SUCCESS or FAILED.

transform()[source]

Encapsulates the mindspore.mindrecord.MnistToMR.run() function so that it exits normally.

Returns:

MSRStatus, SUCCESS or FAILED.

class tinyms.data.TFRecordToMR(source, destination, feature_dict, bytes_fields=None)[source]

A class to transform from TFRecord to MindRecord.

Note

For details about Examples, please refer to Converting TFRecord Dataset .

Parameters:
  • source (str) – TFRecord file to be transformed.

  • destination (str) – MindRecord file path to transform into; ensure that the directory is created in advance and that no file with the same name exists in the directory.

  • feature_dict (dict[str, FixedLenFeature]) – Dictionary that states the feature type, and FixedLenFeature is supported.

  • bytes_fields (list[str], optional) – The bytes fields in feature_dict , which can be image bytes. Default: None, which means there is no bytes field such as an image.

Raises:
  • ValueError – If parameter is invalid.

  • Exception – If the tensorflow module is not found or its version is incorrect.
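
A minimal sketch of constructing feature_dict (not part of the original reference), assuming TFRecord records with a string file name, an int64 label and raw image bytes; the field names here are illustrative:

>>> import tensorflow as tf
>>> from mindspore.mindrecord import TFRecordToMR
>>>
>>> feature_dict = {"file_name": tf.io.FixedLenFeature([], tf.string),
...                 "label": tf.io.FixedLenFeature([], tf.int64),
...                 "image_bytes": tf.io.FixedLenFeature([], tf.string)}
>>> tfrecord_transformer = TFRecordToMR("/path/to/source.tfrecord",
...                                     "/path/to/output.mindrecord",
...                                     feature_dict, bytes_fields=["image_bytes"])
>>> status = tfrecord_transformer.transform()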

run()[source]

Execute transformation from TFRecord to MindRecord.

Returns:

MSRStatus, SUCCESS or FAILED.

tfrecord_iterator()[source]

Yield a dictionary whose keys are the fields in the schema.

Returns:

dict, data dictionary whose keys are the same as columns.

tfrecord_iterator_oldversion()[source]

Yield a dict whose keys are the fields in the schema and whose values are the data. This function is for old versions of TensorFlow whose version number is < 2.1.0.

Returns:

dict, data dictionary whose keys are the same as columns.

transform()[source]

Encapsulates the mindspore.mindrecord.TFRecordToMR.run() function so that it exits normally.

Returns:

MSRStatus, SUCCESS or FAILED.

tinyms.data.download_dataset(dataset_name, local_path='.')[source]

This function is defined to easily download any supported public dataset without specifying many details.

Parameters:
  • dataset_name (str) – The official name of dataset, currently supports mnist, cifar10 and cifar100.

  • local_path (str) – Specifies the local location where the dataset will be downloaded. Default: '.'.

Returns:

str, the location of the downloaded dataset.

Examples

>>> from tinyms.data import download_dataset
>>>
>>> ds_path = download_dataset('mnist')
tinyms.data.generate_image_list(dir_path, max_dataset_size=inf)[source]

Traverse the directory to generate a list of image paths.

Parameters:
  • dir_path (str) – Image directory.

  • max_dataset_size (int) – Maximum number of image paths to return.

Returns:

Image path list.

tinyms.data.load_resized_img(path, width=256, height=256)[source]

Load an image in RGB mode and resize it to the given size, (256, 256) by default.

Parameters:
  • path (str) – Image path.

  • width (int) – Image width. Default: 256.

  • height (int) – Image height. Default: 256.

Returns:

PIL image class.

tinyms.data.load_img(path)[source]

Load an image in RGB mode.

Parameters:

path (str) – Image path.

Returns:

PIL image class.
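
A minimal sketch combining the three helpers above (not part of the original reference); the directory path is a placeholder:

>>> from tinyms.data import generate_image_list, load_img, load_resized_img
>>>
>>> image_list = generate_image_list('/path/to/image_folder')
>>> img = load_img(image_list[0])              # PIL image in RGB mode
>>> resized = load_resized_img(image_list[0])  # resized to (256, 256) by default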

tinyms.vision

This module supports vision augmentations. transforms is a high-performance image augmentation module developed with C++ OpenCV.

class tinyms.vision.ImageViewer(image, label=None)[source]

ImageViewer is a class defined for visualizing the input image.

Parameters:
  • image (Union[PIL.Image, numpy.ndarray]) – image input.

  • label (str, optional) – specifies the label of this image. Default: None.

Raises:

TypeError – When image input is not numpy.ndarray or PIL.Image.

draw(pred_res, labels)[source]

Draw the predicted boxes on the picture and show the visualized picture.

Parameters:
  • pred_res (dict) – The prediction result from the tinyms.serving.predict method.

  • labels (list) – The labels, which should be provided manually as a list of strings. This argument is required to distinguish the colors of different classes.

Note

This function is only valid when called in an interactive environment, such as a Jupyter notebook.

Examples

>>> from PIL import Image
>>>
>>> img = Image.open('example.jpg')
>>> img_viewer = ImageViewer(img)
>>> labels = ['1', '2', '3']
>>> img_viewer.draw(pred_res, labels)
show()[source]

Directly show the visualized picture.

Note

This function is only valid when called in an interactive environment, such as a Jupyter notebook.

Examples

>>> from PIL import Image
>>>
>>> img = Image.open('example.jpg')
>>> img_viewer = ImageViewer(img, label='cat')
>>> img_viewer.show()
class tinyms.vision.Inter[source]

Interpolation Modes.

Possible enumeration values are: Inter.NEAREST, Inter.ANTIALIAS, Inter.LINEAR, Inter.BILINEAR, Inter.CUBIC, Inter.BICUBIC, Inter.AREA, Inter.PILCUBIC.

  • Inter.NEAREST: means interpolation method is nearest-neighbor interpolation.

  • Inter.ANTIALIAS: means the interpolation method is antialias interpolation.

  • Inter.LINEAR: means interpolation method is bilinear interpolation, here is the same as Inter.BILINEAR.

  • Inter.BILINEAR: means interpolation method is bilinear interpolation.

  • Inter.CUBIC: means the interpolation method is bicubic interpolation, here is the same as Inter.BICUBIC.

  • Inter.BICUBIC: means the interpolation method is bicubic interpolation.

  • Inter.AREA: means interpolation method is pixel area interpolation.

  • Inter.PILCUBIC: means the interpolation method is bicubic interpolation as implemented in Pillow; the input should be in 3-channel format.
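
A minimal sketch of selecting an interpolation mode (not part of the original reference), assuming image_folder_dataset is defined as in the other examples and that the c_vision Resize operation is available, as elsewhere in this reference:

>>> from tinyms.vision import Inter
>>>
>>> resize_op = c_vision.Resize((256, 256), interpolation=Inter.BILINEAR)
>>> image_folder_dataset = image_folder_dataset.map(operations=[c_vision.Decode(), resize_op],
...                                                 input_columns=["image"])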

class tinyms.vision.Border[source]

Padding Mode, Border Type.

Possible enumeration values are: Border.CONSTANT, Border.EDGE, Border.REFLECT, Border.SYMMETRIC.

  • Border.CONSTANT: means it fills the border with constant values.

  • Border.EDGE: means it pads with the last value on the edge.

  • Border.REFLECT: means it reflects the values on the edge omitting the last value of edge. For example, padding [1,2,3,4] with 2 elements on both sides will result in [3,2,1,2,3,4,3,2].

  • Border.SYMMETRIC: means it reflects the values on the edge repeating the last value of edge. For example, padding [1,2,3,4] with 2 elements on both sides will result in [2,1,1,2,3,4,4,3].

Note

This class derives from str to support JSON serialization.
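
A minimal sketch of selecting a padding mode (not part of the original reference), assuming image_folder_dataset is defined as in the other examples:

>>> from tinyms.vision import Border
>>>
>>> pad_op = c_vision.Pad(padding=4, padding_mode=Border.EDGE)
>>> image_folder_dataset = image_folder_dataset.map(operations=[c_vision.Decode(), pad_op],
...                                                 input_columns=["image"])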

class tinyms.vision.ImageBatchFormat[source]

Data Format of images after batch operation.

Possible enumeration values are: ImageBatchFormat.NHWC, ImageBatchFormat.NCHW.

  • ImageBatchFormat.NHWC: stores the data in the order batch N, height H, width W, channels C.

  • ImageBatchFormat.NCHW: stores the data in the order batch N, channels C, height H, width W.

tinyms.vision.ssd_bboxes_encode(boxes)[source]

Labels anchors with ground truth inputs.

Parameters:

boxes (numpy.ndarray) – Ground truth with shape [N, 5], for each row, it stores [ymin, xmin, ymax, xmax, cls].

Returns:

numpy.ndarray, location ground truth with shape [num_anchors, 4].
numpy.ndarray, class ground truth with shape [num_anchors, 1].
numpy.ndarray, number of positives in an image.
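
A minimal sketch (not part of the original reference), assuming the ground truth boxes are given in the normalized [ymin, xmin, ymax, xmax, cls] layout described above:

>>> import numpy as np
>>> from tinyms.vision import ssd_bboxes_encode
>>>
>>> # one ground truth box of class 1
>>> boxes = np.array([[0.2, 0.3, 0.6, 0.8, 1]], dtype=np.float32)
>>> loc_gt, cls_gt, num_matched = ssd_bboxes_encode(boxes)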

tinyms.vision.ssd_bboxes_filter(boxes, box_scores, image_shape)[source]

Filter predict boxes with minimum score and nms threshold.

Parameters:
  • boxes (numpy.ndarray) – Ground truth with shape [N, 4], for each row, it stores [ymin, xmin, ymax, xmax].

  • box_scores (numpy.ndarray) – Class scores with shape [N, 21].

  • image_shape (tuple) – Shape of original image with the format [h, w].

Returns:

list[list[float]], filtered boxes with shape [N, 4]; each row stores [ymin, xmin, ymax, xmax].
list[list[float]], class scores with shape [N, 21].
list[list[int]], class labels with shape [N, 21].
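
A minimal sketch with random predictions (not part of the original reference), only to illustrate the calling convention; the unpacking follows the three return values described above:

>>> import numpy as np
>>> from tinyms.vision import ssd_bboxes_filter
>>>
>>> boxes = np.random.rand(100, 4).astype(np.float32)
>>> box_scores = np.random.rand(100, 21).astype(np.float32)
>>> final_boxes, final_scores, final_labels = ssd_bboxes_filter(boxes, box_scores, (300, 300))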

tinyms.vision.coco_eval(pred_data, anno_file)[source]

Calculate mAP of predicted bboxes.

class tinyms.vision.MnistTransform(configs=None)[source]

Mnist dataset transform class.

Inputs:

img (Union[numpy.ndarray, PIL.Image]): Image to be transformed in Mnist-style.

Outputs:

numpy.ndarray, transformed image.

Examples

>>> from PIL import Image
>>> from tinyms.vision import MnistTransform
>>>
>>> mnist_transform = MnistTransform()
>>> img = Image.open('object_detection.jpg')
>>> img = mnist_transform(img)
apply_ds(mnist_ds, repeat_size=1, batch_size=32, num_parallel_workers=None)[source]

Apply preprocess operation on MnistDataset instance.

Parameters:
  • mnist_ds (data.MnistDataset) – MnistDataset instance.

  • repeat_size (int) – The repeat size of dataset. Default: 1.

  • batch_size (int) – Batch size. Default: 32.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

Returns:

data.MnistDataset, the preprocessed MnistDataset instance.

Examples

>>> from tinyms.vision import MnistTransform
>>>
>>> mnist_transform = MnistTransform()
>>> mnist_ds = mnist_transform.apply_ds(mnist_ds)
postprocess(input, strategy='TOP1_CLASS')

Apply postprocess operation for prediction result.

Parameters:
  • input (numpy.ndarray) – Prediction result.

  • strategy (str) – Specifies the postprocess strategy. Default: TOP1_CLASS.

Returns:

str, the postprocess result.
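
A minimal sketch (not part of the original reference) of applying postprocess to a hypothetical network output over the 10 digit classes:

>>> import numpy as np
>>> from tinyms.vision import MnistTransform
>>>
>>> mnist_transform = MnistTransform()
>>> output = np.random.rand(1, 10).astype(np.float32)  # hypothetical prediction scores
>>> label = mnist_transform.postprocess(output, strategy='TOP1_CLASS')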

class tinyms.vision.Cifar10Transform(configs=None)[source]

Cifar10 dataset transform class.

Inputs:

img (Union[numpy.ndarray, PIL.Image]): Image to be transformed in Cifar10-style.

Outputs:

numpy.ndarray, transformed image.

Examples

>>> from PIL import Image
>>> from tinyms.vision import Cifar10Transform
>>>
>>> cifar10_transform = Cifar10Transform()
>>> img = Image.open('object_detection.jpg')
>>> img = cifar10_transform(img)

apply_ds(cifar10_ds, repeat_size=1, batch_size=32, num_parallel_workers=None, is_training=True)[source]

Apply preprocess operation on Cifar10Dataset instance.

Parameters:
  • cifar10_ds (data.Cifar10Dataset) – Cifar10Dataset instance.

  • repeat_size (int) – The repeat size of dataset. Default: 1.

  • batch_size (int) – Batch size. Default: 32.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • is_training (bool) – Specifies if is in training step. Default: True.

Returns:

data.Cifar10Dataset, the preprocessed Cifar10Dataset instance.

Examples

>>> from tinyms.vision import Cifar10Transform
>>>
>>> cifar10_transform = Cifar10Transform()
>>> cifar10_ds = cifar10_transform.apply_ds(cifar10_ds)
postprocess(input, strategy='TOP1_CLASS')

Apply postprocess operation for prediction result.

Parameters:
  • input (numpy.ndarray) – Prediction result.

  • strategy (str) – Specifies the postprocess strategy. Default: TOP1_CLASS.

Returns:

str, the postprocess result.

class tinyms.vision.ImageFolderTransform(configs=None)[source]

ImageFolder dataset transform class.

Inputs:

img (Union[numpy.ndarray, PIL.Image]): Image to be transformed in ImageFolder-style.

Outputs:

numpy.ndarray, transformed image.

Examples

>>> from PIL import Image
>>> from tinyms.vision import ImageFolderTransform
>>>
>>> imagefolder_transform = ImageFolderTransform()
>>> img = Image.open('object_detection.jpg')
>>> img = imagefolder_transform(img)
apply_ds(imagefolder_ds, repeat_size=1, batch_size=32, num_parallel_workers=None, is_training=True)[source]

Apply preprocess operation on ImageFolderDataset instance.

Parameters:
  • imagefolder_ds (data.ImageFolderDataset) – ImageFolderDataset instance.

  • repeat_size (int) – The repeat size of dataset. Default: 1.

  • batch_size (int) – Batch size. Default: 32.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • is_training (bool) – Specifies if is in training step. Default: True.

Returns:

data.ImageFolderDataset, the preprocessed ImageFolderDataset instance.

Examples

>>> from tinyms.vision import ImageFolderTransform
>>>
>>> imagefolder_transform = ImageFolderTransform()
>>> imagefolder_ds = imagefolder_transform.apply_ds(imagefolder_ds)
postprocess(input, strategy='TOP1_CLASS')

Apply postprocess operation for prediction result.

Parameters:
  • input (numpy.ndarray) – Prediction result.

  • strategy (str) – Specifies the postprocess strategy. Default: TOP1_CLASS.

Returns:

str, the postprocess result.

class tinyms.vision.VOCTransform(configs=None)[source]

VOC dataset transform class.

Inputs:

img (Union[numpy.ndarray, PIL.Image]): Image to be transformed in VOC-style.

Outputs:

numpy.ndarray, transformed image.

Examples

>>> from PIL import Image
>>> from tinyms.vision import VOCTransform
>>>
>>> voc_transform = VOCTransform()
>>> img = Image.open('object_detection.jpg')
>>> img = voc_transform(img)
apply_ds(voc_ds, repeat_size=1, batch_size=32, num_parallel_workers=None, is_training=True)[source]

Apply preprocess operation on VOCDataset instance.

Parameters:
  • voc_ds (data.VOCDataset) – VOCDataset instance.

  • repeat_size (int) – The repeat size of dataset. Default: 1.

  • batch_size (int) – Batch size. Default: 32.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • is_training (bool) – Specifies if is in training step. Default: True.

Returns:

data.VOCDataset, the preprocessed VOCDataset instance.

Examples

>>> from tinyms.vision import VOCTransform
>>>
>>> voc_transform = VOCTransform()
>>> voc_ds = voc_transform.apply_ds(voc_ds)
postprocess(input, image_shape, strategy='TOP1_CLASS')[source]

Apply postprocess operation for prediction result.

Parameters:
  • input (numpy.ndarray) – Prediction result.

  • image_shape (tuple) – Image shape.

  • strategy (str) – Specifies the postprocess strategy. Default: TOP1_CLASS.

Returns:

dict, the postprocess result.

class tinyms.vision.ShanshuiTransform(configs=None)[source]

Shanshui dataset transform class.

Inputs:

img (Union[numpy.ndarray, PIL.Image]): Image to be transformed in VOC-style.

Outputs:

numpy.ndarray, transformed image.

Examples

>>> from PIL import Image
>>> from tinyms.vision import ShanshuiTransform
>>>
>>> shanshui_transform = ShanshuiTransform()
>>> img = Image.open('object_detection.jpg')
>>> img = shanshui_transform(img)
apply_ds(voc_ds, repeat_size=1, batch_size=32, num_parallel_workers=None, is_training=True)

Apply preprocess operation on VOCDataset instance.

Parameters:
  • voc_ds (data.VOCDataset) – VOCDataset instance.

  • repeat_size (int) – The repeat size of dataset. Default: 1.

  • batch_size (int) – Batch size. Default: 32.

  • num_parallel_workers (int) – The number of concurrent workers. Default: None.

  • is_training (bool) – Specifies if is in training step. Default: True.

Returns:

data.VOCDataset, the preprocessed VOCDataset instance.

Examples

>>> from tinyms.vision import ShanshuiTransform
>>>
>>> shanshui_transform = ShanshuiTransform()
>>> voc_ds = shanshui_transform.apply_ds(voc_ds)
postprocess(input, image_shape, strategy='TOP1_CLASS')

Apply postprocess operation for prediction result.

Parameters:
  • input (numpy.ndarray) – Prediction result.

  • image_shape (tuple) – Image shape.

  • strategy (str) – Specifies the postprocess strategy. Default: TOP1_CLASS.

Returns:

dict, the postprocess result.

class tinyms.vision.CycleGanDatasetTransform(configs=None)[source]

CycleGan dataset transform class.

Inputs:

img (Union[numpy.ndarray, PIL.Image]): Image to be transformed in city_scape style.

Outputs:

numpy.ndarray, transformed image.

Examples

>>> from PIL import Image
>>> from tinyms.vision import CycleGanDatasetTransform
>>>
>>> cyclegan_transform = CycleGanDatasetTransform()
>>> img = Image.open('object_detection.jpg')
>>> img = cyclegan_transform(img)
apply_ds(gan_generator_ds, repeat_size=1, batch_size=1, num_parallel_workers=1, shuffle=True, phase='train')[source]

Apply preprocess operation on GeneratorDataset instance.

Parameters:
  • gan_generator_ds (data.GeneratorDataset) – GeneratorDataset instance.

  • repeat_size (int) – The repeat size of dataset. Default: 1.

  • batch_size (int) – Batch size. Default: 1.

  • num_parallel_workers (int) – The number of concurrent workers. Default: 1.

  • shuffle (bool) – Specifies if applying shuffle operation. Default: True.

  • phase (str) – Specifies the current phase. Default: train.

Returns:

data.GeneratorDataset, the preprocessed GeneratorDataset instance.

Examples

>>> from tinyms.vision import CycleGanDatasetTransform
>>>
>>> cyclegan_transform = CycleGanDatasetTransform()
>>> gan_generator_ds = cyclegan_transform.apply_ds(gan_generator_ds)
Raises:

TypeError – If gan_generator_ds is not instance of GeneratorDataset.

class tinyms.vision.AutoContrast(**kwargs)[source]

Apply automatic contrast on the input image. This operation calculates the histogram of the image, reassigns the cutoff percent of the lightest pixels from the histogram to 255, and reassigns the cutoff percent of the darkest pixels from the histogram to 0.

Parameters:
  • cutoff (float, optional) – Percent of lightest and darkest pixels to cut off from the histogram of input image. The value must be in the range [0.0, 50.0). Default: 0.0.

  • ignore (Union[int, sequence], optional) – The background pixel values to ignore. The ignore values must be in the range [0, 255]. Default: None.

Raises:
  • TypeError – If cutoff is not of type float.

  • TypeError – If ignore is not of type int or sequence.

  • ValueError – If cutoff is not in range [0, 50.0).

  • ValueError – If ignore is not in range [0, 255].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.AutoContrast(cutoff=10.0, ignore=[10, 20])]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.BoundingBoxAugment(**kwargs)[source]

Apply a given image processing operation on a random selection of bounding box regions of a given image.

Parameters:
  • transform (TensorOperation) – C++ transformation operation to be applied on random selection of bounding box regions of a given image.

  • ratio (float, optional) – Ratio of bounding boxes to apply augmentation on. Range: [0.0, 1.0]. Default: 0.3.

Raises:
  • TypeError – If transform is not an image processing operation in mindspore.dataset.vision.c_transforms .

  • TypeError – If ratio is not of type float.

  • ValueError – If ratio is not in range [0.0, 1.0].

  • RuntimeError – If given bounding box is invalid.

Supported Platforms:

CPU

Examples

>>> # set bounding box operation with ratio of 1 to apply rotation on all bounding boxes
>>> bbox_aug_op = c_vision.BoundingBoxAugment(c_vision.RandomRotation(90), 1)
>>> # map to apply ops
>>> image_folder_dataset = image_folder_dataset.map(operations=[bbox_aug_op],
...                                                 input_columns=["image", "bbox"],
...                                                 output_columns=["image", "bbox"])
class tinyms.vision.CenterCrop(**kwargs)[source]

Crop the input image at the center to the given size. If input image size is smaller than output size, input image will be padded with 0 before cropping.

Parameters:

size (Union[int, sequence]) – The output size of the cropped image. If size is an integer, a square crop of size (size, size) is returned. If size is a sequence of length 2, an image of size (height, width) will be cropped. The size value(s) must be larger than 0.

Raises:
  • TypeError – If size is not of type int or sequence.

  • ValueError – If size is less than or equal to 0.

  • RuntimeError – If given tensor shape is not <H, W> or <…, H, W, C>.

Supported Platforms:

CPU

Examples

>>> # crop image to a square
>>> transforms_list1 = [c_vision.Decode(), c_vision.CenterCrop(50)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list1,
...                                                 input_columns=["image"])
>>> # crop image to portrait style
>>> transforms_list2 = [c_vision.Decode(), c_vision.CenterCrop((60, 40))]
>>> image_folder_dataset_1 = image_folder_dataset_1.map(operations=transforms_list2,
...                                                     input_columns=["image"])
class tinyms.vision.CutMixBatch(**kwargs)[source]

Apply CutMix transformation on an input batch of images and labels. Note that you need to convert the labels to one-hot format and batch them before calling this operation.

Parameters:
  • image_batch_format (ImageBatchFormat) – The format of the input batch of images. Can be any of [ImageBatchFormat.NHWC, ImageBatchFormat.NCHW].

  • alpha (float, optional) – Hyperparameter of beta distribution, must be larger than 0. Default: 1.0.

  • prob (float, optional) – The probability by which CutMix is applied to each image, range: [0, 1]. Default: 1.0.

Raises:
  • TypeError – If image_batch_format is not of type mindspore.dataset.vision.ImageBatchFormat .

  • TypeError – If alpha is not of type float.

  • TypeError – If prob is not of type float.

  • ValueError – If alpha is less than or equal to 0.

  • ValueError – If prob is not in range [0, 1].

  • RuntimeError – If given tensor shape is not <H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import ImageBatchFormat
>>> onehot_op = c_transforms.OneHot(num_classes=10)
>>> image_folder_dataset= image_folder_dataset.map(operations=onehot_op,
...                                                input_columns=["label"])
>>> cutmix_batch_op = c_vision.CutMixBatch(ImageBatchFormat.NHWC, 1.0, 0.5)
>>> image_folder_dataset = image_folder_dataset.batch(5)
>>> image_folder_dataset = image_folder_dataset.map(operations=cutmix_batch_op,
...                                                 input_columns=["image", "label"])
class tinyms.vision.CutOut(**kwargs)[source]

Randomly cut (mask) out a given number of square patches from the input image array.

Parameters:
  • length (int) – The side length of each square patch, must be larger than 0.

  • num_patches (int, optional) – Number of patches to be cut out of an image, must be larger than 0. Default: 1.

Raises:
  • TypeError – If length is not of type int.

  • TypeError – If num_patches is not of type int.

  • ValueError – If length is less than or equal to 0.

  • ValueError – If num_patches is less than or equal to 0.

  • RuntimeError – If given tensor shape is not <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.CutOut(80, num_patches=10)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.Decode(**kwargs)[source]

Decode the input image.

Parameters:

rgb (bool, optional) – Mode of decoding the input image. Default: True. If True, the decoded image is in RGB format; otherwise, BGR format (deprecated).

Raises:
  • RuntimeError – If rgb is False, since this option is deprecated.

  • RuntimeError – If given tensor is not a 1D sequence.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomHorizontalFlip()]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.Equalize(**kwargs)[source]

Apply histogram equalization on input image.

Raises:

RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.Equalize()]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.Grayscale(**kwargs)[source]

Convert the input PIL Image to grayscale.

Parameters:

num_output_channels (int) – The number of channels desired for the output image, must be 1 or 3. If 3 is provided, the returned image will have 3 identical RGB channels. Default: 1.

Raises:
  • TypeError – If num_output_channels is not of type int.

  • ValueError – If num_output_channels is not 1 or 3.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.transforms.py_transforms import Compose
>>>
>>> transforms_list = Compose([py_vision.Decode(),
...                            py_vision.Grayscale(3),
...                            py_vision.ToTensor()])
>>> # apply the transform to dataset through map function
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns="image")
classmethod from_json(json_string)

Base from_json for Python tensor operations class

to_json()

Base to_json for Python tensor operations class

class tinyms.vision.HWC2CHW(**kwargs)[source]

Transpose the input image from shape (H, W, C) to (C, H, W). If the input image is of shape <H, W>, it will remain unchanged.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Raises:

RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

Ascend GPU CPU

Examples

>>> transforms_list = [c_vision.Decode(),
...                    c_vision.RandomHorizontalFlip(0.75),
...                    c_vision.RandomCrop(512),
...                    c_vision.HWC2CHW()]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.Invert(**kwargs)[source]

Apply invert on input image in RGB mode. This operation will reassign every pixel to (255 - pixel).

Raises:

RuntimeError – If given tensor shape is not <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.Invert()]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.MixUpBatch(**kwargs)[source]

Apply MixUp transformation on input batch of images and labels. Each image is multiplied by a random weight (lambda) and then added to a randomly selected image from the batch multiplied by (1 - lambda). The same formula is also applied to the one-hot labels.

The lambda is generated based on the specified alpha value. Two coefficients x1, x2 are randomly generated in the range [alpha, 1], and lambda = (x1 / (x1 + x2)).

Note that you need to convert the labels into one-hot format and batch the dataset before calling this operation.

Parameters:

alpha (float, optional) – Hyperparameter of beta distribution. The value must be positive. Default: 1.0.

Raises:
  • TypeError – If alpha is not of type float.

  • ValueError – If alpha is not positive.

  • RuntimeError – If given tensor shape is not <N, H, W, C> or <N, C, H, W>.

Supported Platforms:

CPU

Examples

>>> onehot_op = c_transforms.OneHot(num_classes=10)
>>> image_folder_dataset= image_folder_dataset.map(operations=onehot_op,
...                                                input_columns=["label"])
>>> mixup_batch_op = c_vision.MixUpBatch(alpha=0.9)
>>> image_folder_dataset = image_folder_dataset.batch(5)
>>> image_folder_dataset = image_folder_dataset.map(operations=mixup_batch_op,
...                                                 input_columns=["image", "label"])
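
As a worked example, with alpha=0.9 both coefficients x1 and x2 are drawn from [0.9, 1], so lambda = x1 / (x1 + x2) always falls in roughly [0.47, 0.53] and every mixed image is close to an even blend of its two source images; smaller alpha values allow more lopsided blends.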
class tinyms.vision.Normalize(**kwargs)[source]

Normalize the input image with respect to mean and standard deviation. This operation will normalize the input image with: output[channel] = (input[channel] - mean[channel]) / std[channel], where channel >= 1.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:
  • mean (sequence) – List or tuple of mean values for each channel, with respect to channel order. The mean values must be in range [0.0, 255.0].

  • std (sequence) – List or tuple of standard deviations for each channel, with respect to channel order. The standard deviation values must be in range (0.0, 255.0].

Raises:
  • TypeError – If mean is not of type sequence.

  • TypeError – If std is not of type sequence.

  • ValueError – If mean is not in range [0.0, 255.0].

  • ValueError – If std is not in range (0.0, 255.0].

  • RuntimeError – If given tensor shape is not <H, W> or <…, H, W, C>.

Supported Platforms:

Ascend GPU CPU

Examples

>>> decode_op = c_vision.Decode()
>>> normalize_op = c_vision.Normalize(mean=[121.0, 115.0, 100.0], std=[70.0, 68.0, 71.0])
>>> transforms_list = [decode_op, normalize_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
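
Since mean and std are expected in the [0.0, 255.0] pixel domain, statistics published for the [0, 1] domain (for example the widely used ImageNet values below) should be scaled by 255 first; a minimal sketch:

>>> # ImageNet channel statistics, rescaled from [0, 1] to [0, 255]
>>> mean = [m * 255 for m in [0.485, 0.456, 0.406]]
>>> std = [s * 255 for s in [0.229, 0.224, 0.225]]
>>> normalize_op = c_vision.Normalize(mean=mean, std=std)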
class tinyms.vision.Pad(**kwargs)[source]

Pad the image.

Parameters:
  • padding (Union[int, Sequence[int]]) – The number of pixels to pad each border of the image. If a single number is provided, it pads all borders with this value. If a tuple or list of 2 values is provided, it pads the (left and top) with the first value and (right and bottom) with the second value. If 4 values are provided as a list or tuple, it pads the left, top, right and bottom respectively. The pad values must be non-negative.

  • fill_value (Union[int, tuple[int]], optional) – The pixel intensity of the borders, only valid for padding_mode Border.CONSTANT. If it is a 3-tuple, it is used to fill R, G, B channels respectively. If it is an integer, it is used for all RGB channels. The fill_value values must be in range [0, 255]. Default: 0.

  • padding_mode (Border, optional) –

    The method of padding. Default: Border.CONSTANT. Can be any of [Border.CONSTANT, Border.EDGE, Border.REFLECT, Border.SYMMETRIC].

    • Border.CONSTANT, means it fills the border with constant values.

    • Border.EDGE, means it pads with the last value on the edge.

    • Border.REFLECT, means it reflects the values on the edge omitting the last value of edge.

    • Border.SYMMETRIC, means it reflects the values on the edge repeating the last value of edge.

Note

The behavior when padding is a sequence of length 2 will change from padding left/top with the first value and right/bottom with the second, to padding left/right with the first one and top/bottom with the second in the future. Or you can pass in a 4-element sequence to specify left, top, right and bottom respectively.

Raises:
  • TypeError – If padding is not of type int or Sequence[int].

  • TypeError – If fill_value is not of type int or tuple[int].

  • TypeError – If padding_mode is not of type mindspore.dataset.vision.Border .

  • ValueError – If padding is negative.

  • ValueError – If fill_value is not in range [0, 255].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.Pad([100, 100, 100, 100])]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
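
A minimal sketch of the accepted padding argument forms (the pixel counts here are only illustrative; see the note above on the 2-value form):

>>> pad_all_op = c_vision.Pad(10)                 # 10 pixels on every border
>>> pad_pairs_op = c_vision.Pad((10, 20))         # (left and top) = 10, (right and bottom) = 20
>>> pad_each_op = c_vision.Pad((10, 20, 30, 40))  # left, top, right, bottom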
tinyms.vision.PILRandomHorizontalFlip

alias of mindspore.dataset.vision.py_transforms.RandomHorizontalFlip

class tinyms.vision.RandomAffine(**kwargs)[source]

Apply Random affine transformation to the input image.

Parameters:
  • degrees (Union[int, float, sequence]) – Range of the rotation degrees. If degrees is a number, the range will be (-degrees, degrees). If degrees is a sequence, it should be (min, max).

  • translate (sequence, optional) – Sequence (tx_min, tx_max, ty_min, ty_max) of minimum/maximum translation in x(horizontal) and y(vertical) directions, range [-1.0, 1.0]. Default: None. The horizontal and vertical shift is selected randomly from the range: (tx_min*width, tx_max*width) and (ty_min*height, ty_max*height), respectively. If a tuple or list of size 2, then a translate parallel to the X axis in the range of (translate[0], translate[1]) is applied. If a tuple or list of size 4, then a translate parallel to the X axis in the range of (translate[0], translate[1]) and a translate parallel to the Y axis in the range of (translate[2], translate[3]) are applied. If None, no translation is applied.

  • scale (sequence, optional) – Scaling factor interval, which must be non-negative. Default: None, original scale is used.

  • shear (Union[int, float, sequence], optional) – Range of shear factor, which must be positive. Default: None. If a number, then a shear parallel to the X axis in the range of (-shear, +shear) is applied. If a tuple or list of size 2, then a shear parallel to the X axis in the range of (shear[0], shear[1]) is applied. If a tuple or list of size 4, then a shear parallel to the X axis in the range of (shear[0], shear[1]) and a shear parallel to the Y axis in the range of (shear[2], shear[3]) are applied. If None, no shear is applied.

  • resample (Inter, optional) –

    An optional resampling filter. Default: Inter.NEAREST. It can be any of [Inter.BILINEAR, Inter.NEAREST, Inter.BICUBIC, Inter.AREA].

    • Inter.BILINEAR, means resample method is bilinear interpolation.

    • Inter.NEAREST, means resample method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means resample method is bicubic interpolation.

    • Inter.AREA, means resample method is pixel area interpolation.

  • fill_value (Union[int, tuple[int]], optional) – Optional fill value for the area outside the transform in the output image. If it is a tuple, it must contain three elements, each in range [0, 255]. Default: 0.

Raises:
  • TypeError – If degrees is not of type int, float or sequence.

  • TypeError – If translate is not of type sequence.

  • TypeError – If scale is not of type sequence.

  • TypeError – If shear is not of type int, float or sequence.

  • TypeError – If resample is not of type mindspore.dataset.vision.Inter .

  • TypeError – If fill_value is not of type int or tuple[int].

  • ValueError – If degrees is negative.

  • ValueError – If translate is not in range [-1.0, 1.0].

  • ValueError – If scale is negative.

  • ValueError – If shear is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> decode_op = c_vision.Decode()
>>> random_affine_op = c_vision.RandomAffine(degrees=15,
...                                          translate=(-0.1, 0.1, 0, 0),
...                                          scale=(0.9, 1.1),
...                                          resample=Inter.NEAREST)
>>> transforms_list = [decode_op, random_affine_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomColor(**kwargs)[source]

Adjust the color of the input image by a fixed or random degree. This operation works only with 3-channel RGB images.

Parameters:

degrees (Sequence[float], optional) – Range of random color adjustment degrees, which must be non-negative. It should be in (min, max) format. If min=max, then it is a single fixed magnitude operation. Default: (0.1, 1.9).

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomColor((0.5, 2.0))]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomColorAdjust(**kwargs)[source]

Randomly adjust the brightness, contrast, saturation, and hue of the input image.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:
  • brightness (Union[float, Sequence[float]], optional) – Brightness adjustment factor. Default: (1, 1). Cannot be negative. If it is a float, the factor is uniformly chosen from the range [max(0, 1-brightness), 1+brightness]. If it is a sequence, it should be [min, max] for the range.

  • contrast (Union[float, Sequence[float]], optional) – Contrast adjustment factor. Default: (1, 1). Cannot be negative. If it is a float, the factor is uniformly chosen from the range [max(0, 1-contrast), 1+contrast]. If it is a sequence, it should be [min, max] for the range.

  • saturation (Union[float, Sequence[float]], optional) – Saturation adjustment factor. Default: (1, 1). Cannot be negative. If it is a float, the factor is uniformly chosen from the range [max(0, 1-saturation), 1+saturation]. If it is a sequence, it should be [min, max] for the range.

  • hue (Union[float, Sequence[float]], optional) – Hue adjustment factor. Default: (0, 0). If it is a float, the range will be [-hue, hue]. Value should be 0 <= hue <= 0.5. If it is a sequence, it should be [min, max] where -0.5 <= min <= max <= 0.5.

Raises:
  • TypeError – If brightness is not of type float or Sequence[float].

  • TypeError – If contrast is not of type float or Sequence[float].

  • TypeError – If saturation is not of type float or Sequence[float].

  • TypeError – If hue is not of type float or Sequence[float].

  • ValueError – If brightness is negative.

  • ValueError – If contrast is negative.

  • ValueError – If saturation is negative.

  • ValueError – If hue is not in range [-0.5, 0.5].

  • RuntimeError – If given tensor shape is not <H, W, C>.

Supported Platforms:

Ascend GPU CPU

Examples

>>> decode_op = c_vision.Decode()
>>> transform_op = c_vision.RandomColorAdjust(brightness=(0.5, 1),
...                                           contrast=(0.4, 1),
...                                           saturation=(0.3, 1))
>>> transforms_list = [decode_op, transform_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomCrop(**kwargs)[source]

Crop the input image at a random location. If the input image size is smaller than the output size, the input image will be padded before cropping.

Note

If more than one image is input, make sure that all the images have the same size.

Parameters:
  • size (Union[int, Sequence[int]]) – The output size of the cropped image. The size value(s) must be positive. If size is an integer, a square crop of size (size, size) is returned. If size is a sequence of length 2, an image of size (height, width) will be cropped.

  • padding (Union[int, Sequence[int]], optional) – The number of pixels to pad each border of the image. The padding value(s) must be non-negative. Default: None. If padding is not None, pad the image first with the padding values. If a single number is provided, pad all borders with this value. If a tuple or list of 2 values is provided, pad the (left and top) with the first value and (right and bottom) with the second value. If 4 values are provided as a list or tuple, pad the left, top, right and bottom respectively.

  • pad_if_needed (bool, optional) – Pad the image if either side is smaller than the given output size. Default: False.

  • fill_value (Union[int, tuple[int]], optional) – The pixel intensity of the borders, only valid for padding_mode Border.CONSTANT. If it is a 3-tuple, it is used to fill R, G, B channels respectively. If it is an integer, it is used for all RGB channels. The fill_value values must be in range [0, 255]. Default: 0.

  • padding_mode (Border, optional) –

    The method of padding. Default: Border.CONSTANT. It can be any of [Border.CONSTANT, Border.EDGE, Border.REFLECT, Border.SYMMETRIC].

    • Border.CONSTANT, means it fills the border with constant values.

    • Border.EDGE, means it pads with the last value on the edge.

    • Border.REFLECT, means it reflects the values on the edge omitting the last value of edge.

    • Border.SYMMETRIC, means it reflects the values on the edge repeating the last value of edge.

Note

The behavior when padding is a sequence of length 2 will change from padding left/top with the first value and right/bottom with the second, to padding left/right with the first one and top/bottom with the second in the future. Or you can pass in a 4-element sequence to specify left, top, right and bottom respectively.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If padding is not of type int or Sequence[int].

  • TypeError – If pad_if_needed is not of type boolean.

  • TypeError – If fill_value is not of type int or tuple[int].

  • TypeError – If padding_mode is not of type mindspore.dataset.vision.Border .

  • ValueError – If size is not positive.

  • ValueError – If padding is negative.

  • ValueError – If fill_value is not in range [0, 255].

  • RuntimeError – If given tensor shape is not <H, W> or <…, H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Border
>>> decode_op = c_vision.Decode()
>>> random_crop_op = c_vision.RandomCrop(512, [200, 200, 200, 200], padding_mode=Border.EDGE)
>>> transforms_list = [decode_op, random_crop_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomCropDecodeResize(**kwargs)[source]

A combination of Crop, Decode and Resize. It achieves better performance for JPEG images. This operation will crop the input image at a random location, decode the cropped image in RGB mode, and resize the decoded image.

Parameters:
  • size (Union[int, Sequence[int]]) – The output size of the resized image. The size value(s) must be positive. If size is an integer, a square crop of size (size, size) is returned. If size is a sequence of length 2, an image of size (height, width) will be cropped.

  • scale (Union[list, tuple], optional) – Range [min, max) of respective size of the original size to be cropped, which must be non-negative. Default: (0.08, 1.0).

  • ratio (Union[list, tuple], optional) – Range [min, max) of aspect ratio to be cropped, which must be non-negative. Default: (3. / 4., 4. / 3.).

  • interpolation (Inter, optional) –

    Image interpolation mode for resize operation. Default: Inter.BILINEAR. It can be any of [Inter.BILINEAR, Inter.NEAREST, Inter.BICUBIC, Inter.AREA, Inter.PILCUBIC].

    • Inter.BILINEAR, means interpolation method is bilinear interpolation.

    • Inter.NEAREST, means interpolation method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means interpolation method is bicubic interpolation.

    • Inter.AREA, means interpolation method is pixel area interpolation.

    • Inter.PILCUBIC, means interpolation method is bicubic interpolation like implemented in pillow, input should be in 3 channels format.

  • max_attempts (int, optional) – The maximum number of attempts to propose a valid crop_area. Default: 10. If exceeded, fall back to use center_crop instead. The max_attempts value must be positive.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If scale is not of type tuple or list.

  • TypeError – If ratio is not of type tuple or list.

  • TypeError – If interpolation is not of type mindspore.dataset.vision.Inter .

  • TypeError – If max_attempts is not of type int.

  • ValueError – If size is not positive.

  • ValueError – If scale is negative.

  • ValueError – If ratio is negative.

  • ValueError – If max_attempts is not positive.

  • RuntimeError – If given tensor is not a 1D sequence.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> resize_crop_decode_op = c_vision.RandomCropDecodeResize(size=(50, 75),
...                                                         scale=(0.25, 0.5),
...                                                         interpolation=Inter.NEAREST,
...                                                         max_attempts=5)
>>> transforms_list = [resize_crop_decode_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
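
For comparison, a functionally similar pipeline can be sketched by decoding first and then applying RandomResizedCrop, which is typically slower for JPEG inputs since the full image is decoded before cropping:

>>> transforms_list = [c_vision.Decode(),
...                    c_vision.RandomResizedCrop(size=(50, 75), scale=(0.25, 0.5))]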
class tinyms.vision.RandomCropWithBBox(**kwargs)[source]

Crop the input image at a random location and adjust bounding boxes accordingly.

Parameters:
  • size (Union[int, Sequence[int]]) – The output size of the cropped image. The size value(s) must be positive. If size is an integer, a square crop of size (size, size) is returned. If size is a sequence of length 2, an image of size (height, width) will be cropped.

  • padding (Union[int, Sequence[int]], optional) – The number of pixels to pad the image. The padding value(s) must be non-negative. Default: None. If padding is not None, first pad the image with the padding values. If a single number is provided, pad all borders with this value. If a tuple or list of 2 values is provided, pad the (left and top) with the first value and (right and bottom) with the second value. If 4 values are provided as a list or tuple, pad the left, top, right and bottom respectively.

  • pad_if_needed (bool, optional) – Pad the image if either side is smaller than the given output size. Default: False.

  • fill_value (Union[int, tuple[int]], optional) – The pixel intensity of the borders, only valid for padding_mode Border.CONSTANT. If it is a 3-tuple, it is used to fill R, G, B channels respectively. If it is an integer, it is used for all RGB channels. The fill_value values must be in range [0, 255]. Default: 0.

  • padding_mode (Border, optional) –

    The method of padding. Default: Border.CONSTANT. It can be any of [Border.CONSTANT, Border.EDGE, Border.REFLECT, Border.SYMMETRIC].

    • Border.CONSTANT, means it fills the border with constant values.

    • Border.EDGE, means it pads with the last value on the edge.

    • Border.REFLECT, means it reflects the values on the edge omitting the last value of edge.

    • Border.SYMMETRIC, means it reflects the values on the edge repeating the last value of edge.

Note

The behavior when padding is a sequence of length 2 will change from padding left/top with the first value and right/bottom with the second, to padding left/right with the first one and top/bottom with the second in the future. Or you can pass in a 4-element sequence to specify left, top, right and bottom respectively.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If padding is not of type int or Sequence[int].

  • TypeError – If pad_if_needed is not of type boolean.

  • TypeError – If fill_value is not of type int or tuple[int].

  • TypeError – If padding_mode is not of type mindspore.dataset.vision.Border .

  • ValueError – If size is not positive.

  • ValueError – If padding is negative.

  • ValueError – If fill_value is not in range [0, 255].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> decode_op = c_vision.Decode()
>>> random_crop_with_bbox_op = c_vision.RandomCropWithBBox([512, 512], [200, 200, 200, 200])
>>> transforms_list = [decode_op, random_crop_with_bbox_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomHorizontalFlip(**kwargs)[source]

Randomly flip the input image horizontally with a given probability.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:

prob (float, optional) – Probability of the image being flipped, which must be in range of [0, 1]. Default: 0.5.

Raises:
  • TypeError – If prob is not of type float.

  • ValueError – If prob is not in range [0, 1].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

Ascend GPU CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomHorizontalFlip(0.75)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomHorizontalFlipWithBBox(**kwargs)[source]

Randomly flip the input image horizontally with a given probability and adjust bounding boxes accordingly.

Parameters:

prob (float, optional) – Probability of the image being flipped, which must be in range of [0, 1]. Default: 0.5.

Raises:
  • TypeError – If prob is not of type float.

  • ValueError – If prob is not in range [0, 1].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomHorizontalFlipWithBBox(0.70)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomPosterize(**kwargs)[source]

Reduce the number of bits for each color channel to posterize the input image randomly with a given probability.

Parameters:

bits (Union[int, Sequence[int]], optional) – Range of bits for the random posterize operation used to compress the image. The bits values must be in range [1, 8], and the range must include at least one integer value. It must be in (min, max) or integer format. If min=max, then it is a single fixed magnitude operation. Default: (8, 8).

Raises:
  • TypeError – If bits is not of type int or sequence of int.

  • ValueError – If bits is not in range [1, 8].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomPosterize((6, 8))]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomResize(**kwargs)[source]

Resize the input image using a randomly selected interpolation mode.

Parameters:

size (Union[int, Sequence[int]]) – The output size of the resized image. The size value(s) must be positive. If size is an integer, the smaller edge of the image will be resized to this value with the same image aspect ratio. If size is a sequence of length 2, the image will be resized to (height, width).

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • ValueError – If size is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> # randomly resize image, keeping aspect ratio
>>> transforms_list1 = [c_vision.Decode(), c_vision.RandomResize(50)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list1,
...                                                 input_columns=["image"])
>>> # randomly resize image to landscape style
>>> transforms_list2 = [c_vision.Decode(), c_vision.RandomResize((40, 60))]
>>> image_folder_dataset_1 = image_folder_dataset_1.map(operations=transforms_list2,
...                                                     input_columns=["image"])
class tinyms.vision.RandomResizedCrop(**kwargs)[source]

This operation will crop the input image randomly, and resize the cropped image using a selected interpolation mode.

Note

If more than one image is input, make sure that all the images have the same size.

Parameters:
  • size (Union[int, Sequence[int]]) – The output size of the resized image. The size value(s) must be positive. If size is an integer, the output will be a square image of size (size, size). If size is a sequence of length 2, the output will be an image of size (height, width).

  • scale (Union[list, tuple], optional) – Range [min, max) of respective size of the original size to be cropped, which must be non-negative. Default: (0.08, 1.0).

  • ratio (Union[list, tuple], optional) – Range [min, max) of aspect ratio to be cropped, which must be non-negative. Default: (3. / 4., 4. / 3.).

  • interpolation (Inter, optional) –

    Method of interpolation. Default: Inter.BILINEAR. It can be any of [Inter.BILINEAR, Inter.NEAREST, Inter.BICUBIC, Inter.AREA, Inter.PILCUBIC].

    • Inter.BILINEAR, means interpolation method is bilinear interpolation.

    • Inter.NEAREST, means interpolation method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means interpolation method is bicubic interpolation.

    • Inter.AREA, means interpolation method is pixel area interpolation.

    • Inter.PILCUBIC, means interpolation method is bicubic interpolation like implemented in pillow, input should be in 3 channels format.

  • max_attempts (int, optional) – The maximum number of attempts to propose a valid crop_area. Default: 10. If exceeded, fall back to use center_crop instead.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If scale is not of type tuple or list.

  • TypeError – If ratio is not of type tuple or list.

  • TypeError – If interpolation is not of type mindspore.dataset.vision.Inter .

  • TypeError – If max_attempts is not of type int.

  • ValueError – If size is not positive.

  • ValueError – If scale is negative.

  • ValueError – If ratio is negative.

  • ValueError – If max_attempts is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <…, H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> decode_op = c_vision.Decode()
>>> resize_crop_op = c_vision.RandomResizedCrop(size=(50, 75), scale=(0.25, 0.5),
...                                             interpolation=Inter.BILINEAR)
>>> transforms_list = [decode_op, resize_crop_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomResizedCropWithBBox(**kwargs)[source]

Crop the input image to a random size and aspect ratio and adjust bounding boxes accordingly.

Parameters:
  • size (Union[int, Sequence[int]]) – The size of the output image. The size value(s) must be positive. If size is an integer, the output will be a square image of size (size, size). If size is a sequence of length 2, the output will be an image of size (height, width).

  • scale (Union[list, tuple], optional) – Range (min, max) of respective size of the original size to be cropped, which must be non-negative. Default: (0.08, 1.0).

  • ratio (Union[list, tuple], optional) – Range (min, max) of aspect ratio to be cropped, which must be non-negative. Default: (3. / 4., 4. / 3.).

  • interpolation (Inter mode, optional) –

    Method of interpolation. Default: Inter.BILINEAR. It can be any of [Inter.BILINEAR, Inter.NEAREST, Inter.BICUBIC] .

    • Inter.BILINEAR, means interpolation method is bilinear interpolation.

    • Inter.NEAREST, means interpolation method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means interpolation method is bicubic interpolation.

  • max_attempts (int, optional) – The maximum number of attempts to propose a valid crop area. Default: 10. If exceeded, fall back to use center crop instead.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If scale is not of type tuple or list.

  • TypeError – If ratio is not of type tuple or list.

  • TypeError – If interpolation is not of type mindspore.dataset.vision.Inter .

  • TypeError – If max_attempts is not of type int.

  • ValueError – If size is not positive.

  • ValueError – If scale is negative.

  • ValueError – If ratio is negative.

  • ValueError – If max_attempts is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> decode_op = c_vision.Decode()
>>> bbox_op = c_vision.RandomResizedCropWithBBox(size=50, interpolation=Inter.NEAREST)
>>> transforms_list = [decode_op, bbox_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomResizeWithBBox(**kwargs)[source]

Tensor operation to resize the input image using a randomly selected interpolation mode and adjust bounding boxes accordingly.

Parameters:

size (Union[int, Sequence[int]]) – The output size of the resized image. The size value(s) must be positive. If size is an integer, the smaller edge of the image will be resized to this value with the same image aspect ratio. If size is a sequence of length 2, the image will be resized to (height, width).

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • ValueError – If size is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> # randomly resize image with bounding boxes, keeping aspect ratio
>>> transforms_list1 = [c_vision.Decode(), c_vision.RandomResizeWithBBox(60)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list1,
...                                                 input_columns=["image"])
>>> # randomly resize image with bounding boxes to portrait style
>>> transforms_list2 = [c_vision.Decode(), c_vision.RandomResizeWithBBox((80, 60))]
>>> image_folder_dataset_1 = image_folder_dataset_1.map(operations=transforms_list2,
...                                                     input_columns=["image"])
class tinyms.vision.RandomRotation(**kwargs)[source]

Rotate the input image randomly within a specified range of degrees.

Parameters:
  • degrees (Union[int, float, sequence]) – Range of random rotation degrees. If degrees is a number, the range will be converted to (-degrees, degrees). If degrees is a sequence, it should be (min, max).

  • resample (Inter, optional) –

    An optional resampling filter. Default: Inter.NEAREST. It can be any of [Inter.BILINEAR, Inter.NEAREST, Inter.BICUBIC, Inter.AREA].

    • Inter.BILINEAR, means resample method is bilinear interpolation.

    • Inter.NEAREST, means resample method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means resample method is bicubic interpolation.

    • Inter.AREA, means resample method is pixel area interpolation.

  • expand (bool, optional) – Optional expansion flag. Default: False. If set to True, expand the output image to make it large enough to hold the entire rotated image. If set to False or omitted, make the output image the same size as the input. Note that the expand flag assumes rotation around the center and no translation.

  • center (tuple, optional) – Optional center of rotation (a 2-tuple). Default: None. The origin is the top left corner. If None, the center of the image is used.

  • fill_value (Union[int, tuple[int]], optional) – Optional fill color for the area outside the rotated image. If it is a 3-tuple, it is used to fill R, G, B channels respectively. If it is an integer, it is used for all RGB channels. The fill_value values must be in range [0, 255]. Default: 0.

Raises:
  • TypeError – If degrees is not of type int, float or sequence.

  • TypeError – If resample is not of type mindspore.dataset.vision.Inter .

  • TypeError – If expand is not of type boolean.

  • TypeError – If center is not of type tuple.

  • TypeError – If fill_value is not of type int or tuple[int].

  • ValueError – If fill_value is not in range [0, 255].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> transforms_list = [c_vision.Decode(),
...                    c_vision.RandomRotation(degrees=5.0,
...                    resample=Inter.NEAREST,
...                    expand=True)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomSelectSubpolicy(**kwargs)[source]

Choose a random sub-policy from a policy list to be applied on the input image.

Parameters:

policy (list[list[tuple[TensorOperation, float]]]) – List of sub-policies to choose from. A sub-policy is a list of tuple[operation, prob], where operation is a data processing operation and prob is the probability that this operation will be applied, and the prob values must be in range [0, 1]. Once a sub-policy is selected, each operation within the sub-policy will be applied in sequence according to its probability.

Raises:

TypeError – If policy contains invalid data processing operations.

Supported Platforms:

CPU

Examples

>>> policy = [[(c_vision.RandomRotation((45, 45)), 0.5),
...            (c_vision.RandomVerticalFlip(), 1),
...            (c_vision.RandomColorAdjust(), 0.8)],
...           [(c_vision.RandomRotation((90, 90)), 1),
...            (c_vision.RandomColorAdjust(), 0.2)]]
>>> image_folder_dataset = image_folder_dataset.map(operations=c_vision.RandomSelectSubpolicy(policy),
...                                                 input_columns=["image"])
class tinyms.vision.RandomSharpness(**kwargs)[source]

Adjust the sharpness of the input image by a fixed or random degree. Degree of 0.0 gives a blurred image, degree of 1.0 gives the original image, and degree of 2.0 gives a sharpened image.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:

degrees (Union[list, tuple], optional) – Range of random sharpness adjustment degrees, which must be non-negative. It should be in (min, max) format. If min=max, then it is a single fixed magnitude operation. Default: (0.1, 1.9).

Raises:
  • TypeError – If degrees is not of type list or tuple.

  • ValueError – If degrees is negative.

  • ValueError – If degrees is in (max, min) format instead of (min, max).

Supported Platforms:

Ascend GPU CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomSharpness(degrees=(0.2, 1.9))]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomSolarize(**kwargs)[source]

Randomly selects a subrange within the specified threshold range and sets the pixel value within the subrange to (255 - pixel).

Parameters:

threshold (tuple, optional) – Range of the random solarize threshold. Default: (0, 255). Threshold values should always be in (min, max) format, where min and max are integers in the range [0, 255] and min <= max. If min equals max, then all pixel values above that threshold are inverted.

Raises:
  • TypeError – If threshold is not of type tuple.

  • ValueError – If threshold is not in range of [0, 255].

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomSolarize(threshold=(10,100))]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
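
For instance, with threshold=(10, 100) a subrange such as [35, 72] may be selected at random, and only pixels whose values fall inside that subrange are replaced with (255 - pixel); all other pixels are left unchanged.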
class tinyms.vision.RandomVerticalFlip(**kwargs)[source]

Randomly flip the input image vertically with a given probability.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:

prob (float, optional) – Probability of the image being flipped. Default: 0.5.

Raises:
  • TypeError – If prob is not of type float.

  • ValueError – If prob is not in range [0, 1].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

Ascend GPU CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomVerticalFlip(0.25)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.RandomVerticalFlipWithBBox(**kwargs)[source]

Randomly flip the input image vertically with a given probability and adjust bounding boxes accordingly.

Parameters:

prob (float, optional) – Probability of the image being flipped. Default: 0.5.

Raises:
  • TypeError – If prob is not of type float.

  • ValueError – If prob is not in range [0, 1].

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.RandomVerticalFlipWithBBox(0.20)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.Rescale(**kwargs)[source]

Rescale the input image with the given rescale and shift. This operation will rescale the input image with: output = image * rescale + shift.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:
  • rescale (float) – Rescale factor.

  • shift (float) – Shift factor.

Raises:
  • TypeError – If rescale is not of type float.

  • TypeError – If shift is not of type float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> transforms_list = [c_vision.Decode(), c_vision.Rescale(1.0 / 255.0, -1.0)]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
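
In the example above, output = image * (1.0 / 255.0) - 1.0 maps pixel values from [0, 255] into [-1.0, 0.0]; to map into [0.0, 1.0] instead, use rescale=1.0 / 255.0 with shift=0.0.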
class tinyms.vision.Resize(**kwargs)[source]

Resize the input image to the given size with a given interpolation mode.

Parameters:
  • size (Union[int, Sequence[int]]) – The output size of the resized image. The size value(s) must be positive. If size is an integer, the smaller edge of the image will be resized to this value with the same image aspect ratio. If size is a sequence of length 2, it should be (height, width).

  • interpolation (Inter, optional) –

    Image interpolation mode. Default: Inter.LINEAR. It can be any of [Inter.LINEAR, Inter.NEAREST, Inter.BICUBIC, Inter.AREA, Inter.PILCUBIC].

    • Inter.LINEAR, means interpolation method is bilinear interpolation.

    • Inter.NEAREST, means interpolation method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means interpolation method is bicubic interpolation.

    • Inter.AREA, means interpolation method is pixel area interpolation.

    • Inter.PILCUBIC, means interpolation method is bicubic interpolation like implemented in pillow, input should be in 3 channels format.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If interpolation is not of type mindspore.dataset.vision.Inter .

  • ValueError – If size is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> decode_op = c_vision.Decode()
>>> resize_op = c_vision.Resize([100, 75], Inter.BICUBIC)
>>> transforms_list = [decode_op, resize_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.ResizeWithBBox(**kwargs)[source]

Resize the input image to the given size and adjust bounding boxes accordingly.

Parameters:
  • size (Union[int, Sequence[int]]) – The output size of the resized image. If size is an integer, the smaller edge of the image will be resized to this value with the same image aspect ratio. If size is a sequence of length 2, it should be (height, width).

  • interpolation (Inter, optional) –

    Image interpolation mode. Default: Inter.LINEAR. It can be any of [Inter.LINEAR, Inter.NEAREST, Inter.BICUBIC].

    • Inter.LINEAR, means interpolation method is bilinear interpolation.

    • Inter.NEAREST, means interpolation method is nearest-neighbor interpolation.

    • Inter.BICUBIC, means interpolation method is bicubic interpolation.

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • TypeError – If interpolation is not of type mindspore.dataset.vision.Inter .

  • ValueError – If size is not positive.

  • RuntimeError – If given tensor shape is not <H, W> or <H, W, C>.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.vision import Inter
>>> decode_op = c_vision.Decode()
>>> bbox_op = c_vision.ResizeWithBBox(50, Inter.NEAREST)
>>> transforms_list = [decode_op, bbox_op]
>>> image_folder_dataset = image_folder_dataset.map(operations=transforms_list,
...                                                 input_columns=["image"])
class tinyms.vision.SoftDvppDecodeRandomCropResizeJpeg(size, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), max_attempts=10)[source]

A combination of Crop , Decode and Resize using the simulation algorithm of Ascend series chip DVPP module.

The usage scenario is consistent with SoftDvppDecodeResizeJpeg. The input image size should be in range [32*32, 8192*8192]. The zoom-out and zoom-in multiples of the image length and width should be in the range [1/32, 16]. Only images with an even resolution can be output; odd resolutions are not supported.

Note

SoftDvppDecodeRandomCropResizeJpeg is not supported as of version 1.8. Please use RandomCropDecodeResize instead.

Parameters:
  • size (Union[int, Sequence[int]]) – The size of the output image. The size value(s) must be positive. If size is an integer, a square crop of size (size, size) is returned. If size is a sequence of length 2, an image of size (height, width) will be cropped.

  • scale (Union[list, tuple], optional) – Range [min, max) of respective size of the original size to be cropped, which must be non-negative. Default: (0.08, 1.0).

  • ratio (Union[list, tuple], optional) – Range [min, max) of aspect ratio to be cropped, which must be non-negative. Default: (3. / 4., 4. / 3.).

  • max_attempts (int, optional) – The maximum number of attempts to propose a valid crop_area. Default: 10. If exceeded, fall back to use center_crop instead. The max_attempts value must be positive.

Supported Platforms:

CPU

class tinyms.vision.SoftDvppDecodeResizeJpeg(size)[source]

Decode and resize JPEG image using the simulation algorithm of Ascend series chip DVPP module.

It is recommended to use this algorithm when the DVPP of the Ascend chip is not used during training but is used during inference, and the inference accuracy is lower than the training accuracy. The input image size should be in range [32*32, 8192*8192]. The zoom-out and zoom-in multiples of the image length and width should be in the range [1/32, 16]. Only images with an even resolution can be output; odd resolutions are not supported.

Note

SoftDvppDecodeResizeJpeg is not supported as of version 1.8. Please use Decode and Resize instead.

Parameters:

size (Union[int, Sequence[int]]) – The output size of the resized image. The size value(s) must be positive. If size is an integer, the smaller edge of the image will be resized to this value with the same image aspect ratio. If size is a sequence of length 2, the image will be resized to (height, width).

Raises:
  • TypeError – If size is not of type int or Sequence[int].

  • ValueError – If size is not positive.

  • RuntimeError – If given tensor is not a 1D sequence.

Supported Platforms:

CPU

class tinyms.vision.UniformAugment(**kwargs)[source]

Perform a randomly selected augmentation on the input image.

Parameters:
  • transforms (TensorOperation) – List of C++ transformation operations to be randomly selected from and applied on the input image (Python operations are not accepted).

  • num_ops (int, optional) – Number of operations to be selected and applied, which must be positive. Default: 2.

Raises:
  • TypeError – If transform is not an image processing operation in mindspore.dataset.vision.c_transforms .

  • TypeError – If num_ops is not of type int.

  • ValueError – If num_ops is not positive.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.vision.py_transforms as py_vision
>>> transforms_list = [c_vision.RandomHorizontalFlip(),
...                    c_vision.RandomVerticalFlip(),
...                    c_vision.RandomColorAdjust(),
...                    c_vision.RandomRotation(degrees=45)]
>>> uni_aug_op = c_vision.UniformAugment(transforms=transforms_list, num_ops=2)
>>> transforms_all = [c_vision.Decode(), c_vision.Resize(size=[224, 224]),
...                   uni_aug_op]
>>> image_folder_dataset_1 = image_folder_dataset.map(operations=transforms_all,
...                                                   input_columns="image",
...                                                   num_parallel_workers=1)
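
In this example, num_ops=2 means that for each image, 2 of the 4 listed operations are randomly selected and applied in sequence.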
class tinyms.vision.Compose(**kwargs)[source]

Compose a list of transforms into a single transform.

Parameters:

transforms (list) – List of transformations to be applied.

Raises:
  • TypeError – If transforms is not of type list.

  • ValueError – If transforms is empty.

  • TypeError – If elements of transforms are neither Python callable objects nor data processing operations in c_transforms.

Supported Platforms:

CPU

Examples

>>> compose = c_transforms.Compose([c_vision.Decode(), c_vision.RandomCrop(512)])
>>> image_folder_dataset = image_folder_dataset.map(operations=compose)
class tinyms.vision.Concatenate(**kwargs)[source]

Tensor operation that concatenates all columns into a single tensor.

Parameters:
  • axis (int, optional) – Concatenate the tensors along given axis. Default: 0.

  • prepend (numpy.array, optional) – NumPy array to be prepended to the already concatenated tensors. Default: None.

  • append (numpy.array, optional) – NumPy array to be appended to the already concatenated tensors. Default: None.

Raises:
  • TypeError – If axis is not of type int.

  • TypeError – If prepend is not of type numpy.ndarray.

  • TypeError – If append is not of type numpy.ndarray.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> # concatenate string
>>> prepend_tensor = np.array(["dw", "df"], dtype='S')
>>> append_tensor = np.array(["dwsdf", "df"], dtype='S')
>>> concatenate_op = c_transforms.Concatenate(0, prepend_tensor, append_tensor)
>>> data = [["This","is","a","string"]]
>>> dataset = ds.NumpySlicesDataset(data)
>>> dataset = dataset.map(operations=concatenate_op)
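
With these settings, the prepend and append arrays are joined around each row, so ["This", "is", "a", "string"] becomes ["dw", "df", "This", "is", "a", "string", "dwsdf", "df"].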
class tinyms.vision.Duplicate(**kwargs)[source]

Duplicate the input tensor into two output columns. Only one input column is supported at a time.

Raises:

RuntimeError – If the input has more than one column.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |  x      |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1,2,3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["x"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.Duplicate(),
...                                                 input_columns=["x"],
...                                                 output_columns=["x", "y"])
>>> # Data after
>>> # |  x      |  y      |
>>> # +---------+---------+
>>> # | [1,2,3] | [1,2,3] |
>>> # +---------+---------+
class tinyms.vision.Fill(**kwargs)[source]

Tensor operation to fill all elements in the tensor with the specified value. The output tensor will have the same shape and type as the input tensor.

Parameters:

fill_value (Union[str, bytes, int, float, bool]) – Scalar value to fill the tensor with.

Raises:

TypeError – If fill_value is not of type str, float, bool, int or bytes.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> # generate a 1D integer numpy array from 0 to 4
>>> def generator_1d():
...     for i in range(5):
...         yield (np.array([i]),)
>>> generator_dataset = ds.GeneratorDataset(generator_1d, column_names="col1")
>>> # [[0], [1], [2], [3], [4]]
>>> fill_op = c_transforms.Fill(3)
>>> generator_dataset = generator_dataset.map(operations=fill_op)
>>> # [[3], [3], [3], [3], [3]]
class tinyms.vision.Mask(**kwargs)[source]

Mask content of the input tensor with the given predicate. Any element of the tensor that matches the predicate will be evaluated to True, otherwise False.

Parameters:
  • operator (Relational) – One of the relational operators, which can be any of [Relational.EQ, Relational.NE, Relational.LT, Relational.GT, Relational.LE, Relational.GE]. Taking Relational.EQ as an example, EQ refers to equal.

  • constant (Union[str, int, float, bool]) – Constant to be compared to.

  • dtype (mindspore.dtype, optional) – Type of the generated mask. Default: mstype.bool_.

Raises:
  • TypeError – If operator is not of type Relational.

  • TypeError – If constant is not of type str, int, float or bool.

  • TypeError – If dtype is not of type mindspore.dtype.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.transforms.c_transforms import Relational
>>> # Data before
>>> # |   col   |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1, 2, 3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["col"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.Mask(Relational.EQ, 2))
>>> # Data after
>>> # |        col         |
>>> # +--------------------+
>>> # | [False,True,False] |
>>> # +--------------------+
class tinyms.vision.OneHot(**kwargs)[source]

Tensor operation to apply one hot encoding.

Parameters:

num_classes (int) – Number of classes of objects in dataset. It should be larger than the largest label number in the dataset.

Supported Platforms:

CPU

Examples

>>> # Assume that dataset has 10 classes, thus the label ranges from 0 to 9
>>> onehot_op = c_transforms.OneHot(num_classes=10)
>>> mnist_dataset = mnist_dataset.map(operations=onehot_op, input_columns=["label"])
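
As a worked example, with num_classes=10 a label of 3 is encoded as [0, 0, 0, 1, 0, 0, 0, 0, 0, 0].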
class tinyms.vision.PadEnd(**kwargs)[source]

Pad the input tensor according to pad_shape; the input tensor needs to have the same rank as pad_shape.

Parameters:
  • pad_shape (list(int)) – List of integers representing the target shape. Dimensions that are set to None will not be padded (i.e., the original dimension is used). Shorter dimensions will truncate the values.

  • pad_value (Union[str, bytes, int, float, bool], optional) – Value used to pad. Default to 0 or empty string in case of tensors of strings.

Raises:
  • TypeError – If pad_shape is not of type list.

  • TypeError – If pad_value is not of type str, float, bool, int or bytes.

  • TypeError – If an element of pad_shape is not of type int.

  • ValueError – If an element of pad_shape is not positive.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |   col   |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1, 2, 3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["col"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.PadEnd(pad_shape=[4],
...                                                                                pad_value=10))
>>> # Data after
>>> # |    col     |
>>> # +------------+
>>> # | [1,2,3,10] |
>>> # +------------+
class tinyms.vision.RandomApply(**kwargs)[source]

Randomly perform a series of transforms with a given probability.

Parameters:
  • transforms (list) – List of transformations to be applied.

  • prob (float, optional) – The probability to apply the transformation list. Default: 0.5.

Raises:
  • TypeError – If transforms is not of type list.

  • ValueError – If transforms is empty.

  • TypeError – If elements of transforms are neither Python callable objects nor data processing operations in c_transforms.

  • TypeError – If prob is not of type float.

  • ValueError – If prob is not in range [0.0, 1.0].

Supported Platforms:

CPU

Examples

>>> rand_apply = c_transforms.RandomApply([c_vision.RandomCrop(512)])
>>> image_folder_dataset = image_folder_dataset.map(operations=rand_apply)
class tinyms.vision.RandomChoice(**kwargs)[source]

Randomly select one transform from a list of transforms to perform operation.

Parameters:

transforms (list) – List of transformations to be chosen from to apply.

Raises:
  • TypeError – If transforms is not of type list.

  • ValueError – If transforms is empty.

  • TypeError – If elements of transforms are neither Python callable objects nor data processing operations in c_transforms.

Supported Platforms:

CPU

Examples

>>> rand_choice = c_transforms.RandomChoice([c_vision.CenterCrop(50), c_vision.RandomCrop(512)])
>>> image_folder_dataset = image_folder_dataset.map(operations=rand_choice)
class tinyms.vision.Slice(**kwargs)[source]

Slice operation to extract a tensor using the given n slices.

The functionality of Slice is similar to NumPy’s indexing feature (currently only rank-1 tensors are supported).

Parameters:

slices (Union[int, list[int], slice, None, Ellipsis]) –

At most n slice arguments for a tensor of rank n. Each object in slices can be one of:

  1. int: Slice this index only along the first dimension. Negative index is supported.

  2. list(int): Slice these indices along the first dimension. Negative indices are supported.

  3. slice: Slice the generated indices from the slice object along the first dimension. Similar to start:stop:step.

  4. None: Slice the whole dimension. Similar to [:] in Python indexing.

  5. Ellipsis: Slice the whole dimension, same result with None .

Raises:

TypeError – If slices is not of type int, list[int], slice, None or Ellipsis.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |   col   |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1, 2, 3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["col"])
>>> # slice indices 1 and 2 only
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.Slice(slice(1,3)))
>>> # Data after
>>> # |   col   |
>>> # +---------+
>>> # |  [2,3]  |
>>> # +---------+
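
A minimal sketch of the other accepted slices forms listed above (the index values are only illustrative):

>>> slice_index_op = c_transforms.Slice(0)       # keep index 0 only
>>> slice_list_op = c_transforms.Slice([0, 2])   # keep indices 0 and 2
>>> slice_all_op = c_transforms.Slice(None)      # keep the whole dimension, like [:]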
class tinyms.vision.TypeCast(**kwargs)[source]

Tensor operation to cast to a given MindSpore data type.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:

data_type (mindspore.dtype) – mindspore.dtype to be cast to.

Raises:

TypeError – If data_type is not a MindSpore data type (bool, int, float or string).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import dtype as mstype
>>>
>>> # Generate 1d int numpy array from 0 - 63
>>> def generator_1d():
...     for i in range(64):
...         yield (np.array([i]),)
>>>
>>> dataset = ds.GeneratorDataset(generator_1d, column_names='col')
>>> type_cast_op = c_transforms.TypeCast(mstype.int32)
>>> dataset = dataset.map(operations=type_cast_op)
class tinyms.vision.Unique(**kwargs)[source]

Perform the unique operation on the input tensor. Only one input column is supported at a time.

Return 3 tensors: a unique output tensor, an index tensor and a count tensor.

  • Output tensor contains all the unique elements of the input tensor in the same order that they occur in the input tensor.

  • Index tensor that contains the index of each element of the input tensor in the unique output tensor.

  • Count tensor that contains the count of each element of the output tensor in the input tensor.

Note

Call batch op before calling this function.

Raises:

RuntimeError – If the input tensor has more than one column.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |  x                 |
>>> # +--------------------+
>>> # | [[0,1,2], [1,2,3]] |
>>> # +--------------------+
>>> data = [[[0,1,2], [1,2,3]]]
>>> dataset = ds.NumpySlicesDataset(data, ["x"])
>>> dataset = dataset.map(operations=c_transforms.Unique(),
...                       input_columns=["x"],
...                       output_columns=["x", "y", "z"])
>>> # Data after
>>> # |     x     |       y       |     z     |
>>> # +-----------+---------------+-----------+
>>> # | [0,1,2,3] | [0,1,2,1,2,3] | [1,2,2,1] |
>>> # +-----------+---------------+-----------+

tinyms.text

This module supports text processing for NLP tasks. It is a high-performance text processing module developed with ICU4C and cppjieba.

class tinyms.text.BertDatasetTransform[source]

Apply preprocessing operations to a GeneratorDataset instance.

class tinyms.text.Lookup(vocab, unknown_token=None, data_type=mindspore.int32)[source]

Look up a word in the given vocabulary table and return its id.

Parameters:
  • vocab (Vocab) – A vocabulary object.

  • unknown_token (str, optional) – Word used in place of an out-of-vocabulary (OOV) word. If a looked-up word is OOV, the result of the lookup will be the id of unknown_token. If unknown_token is not specified, or is itself OOV, a runtime error will be thrown. Default: None, means no unknown_token is specified.

  • data_type (mindspore.dtype, optional) – The data type that lookup operation maps string to. Default: mindspore.int32.

Raises:
  • TypeError – If vocab is not of type text.Vocab.

  • TypeError – If unknown_token is not of type string.

  • TypeError – If data_type is not of type mindspore.dtype.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> # Load vocabulary from list
>>> vocab = text.Vocab.from_list(['深', '圳', '欢', '迎', '您'])
>>> # Use Lookup operation to map tokens to ids
>>> lookup = text.Lookup(vocab)
>>> text_file_dataset = text_file_dataset.map(operations=[lookup])
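
A minimal sketch of the unknown_token behavior described above (the vocab contents below are illustrative assumptions):

>>> # Map out-of-vocabulary tokens to the id of "<unk>" instead of raising a runtime error
>>> vocab_with_unk = text.Vocab.from_list(['深', '圳', '欢', '迎', '您', '<unk>'])
>>> lookup_with_unk = text.Lookup(vocab_with_unk, unknown_token='<unk>')
>>> text_file_dataset = text_file_dataset.map(operations=[lookup_with_unk])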
class tinyms.text.JiebaTokenizer(hmm_path, mp_path, mode=<JiebaMode.MIX: 0>, with_offsets=False)[source]

Tokenize a Chinese string into words based on a dictionary.

Note

The integrity of the HMMSegment and MPSegment algorithm files must be confirmed.

Parameters:
  • hmm_path (str) – Path to the dictionary file used by the HMMSegment algorithm. The dictionary can be obtained from the official cppjieba website.

  • mp_path (str) – Path to the dictionary file used by the MPSegment algorithm. The dictionary can be obtained from the official cppjieba website.

  • mode (JiebaMode, optional) –

    Valid values can be any of [JiebaMode.MP, JiebaMode.HMM, JiebaMode.MIX]. Default: JiebaMode.MIX.

    • JiebaMode.MP, tokenize with MPSegment algorithm.

    • JiebaMode.HMM, tokenize with Hidden Markov Model Segment algorithm.

    • JiebaMode.MIX, tokenize with a mix of MPSegment and HMMSegment algorithm.

  • with_offsets (bool, optional) – Whether to output the offsets of tokens. Default: False.

Raises:
  • ValueError – If path of HMMSegment dict is not provided.

  • ValueError – If path of MPSegment dict is not provided.

  • TypeError – If hmm_path or mp_path is not of type string.

  • TypeError – If with_offsets is not of type bool.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> from mindspore.dataset.text import JiebaMode
>>> # If with_offsets=False, default output one column {["text", dtype=str]}
>>> jieba_hmm_file = "/path/to/jieba/hmm/file"
>>> jieba_mp_file = "/path/to/jieba/mp/file"
>>> tokenizer_op = text.JiebaTokenizer(jieba_hmm_file, jieba_mp_file, mode=JiebaMode.MP, with_offsets=False)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op)
>>> # If with_offsets=True, then output three columns {["token", dtype=str], ["offsets_start", dtype=uint32],
>>> #                                                   ["offsets_limit", dtype=uint32]}
>>> tokenizer_op = text.JiebaTokenizer(jieba_hmm_file, jieba_mp_file, mode=JiebaMode.MP, with_offsets=True)
>>> text_file_dataset_1 = text_file_dataset_1.map(operations=tokenizer_op, input_columns=["text"],
...                                               output_columns=["token", "offsets_start", "offsets_limit"])
add_dict(user_dict)[source]

Add a user defined word to JiebaTokenizer’s dictionary.

Parameters:

user_dict (Union[str, dict]) –

Two loading methods are supported: a file path (str) following the Jieba dictionary format, or a Python dictionary (dict) of the form {word1: freq1, word2: freq2, …}. In the Jieba dictionary format, each row contains word (required) and freq (optional), such as:

word1 freq1
word2 None
word3 freq3

Only valid word-freq pairs in the user-provided file will be added into the dictionary. Rows containing invalid input will be ignored; no error or warning status is returned.

Examples

>>> import mindspore.dataset.text as text
>>> from mindspore.dataset.text import JiebaMode
>>> jieba_hmm_file = "/path/to/jieba/hmm/file"
>>> jieba_mp_file = "/path/to/jieba/mp/file"
>>> user_dict = {"男默女泪": 10}
>>> jieba_op = text.JiebaTokenizer(jieba_hmm_file, jieba_mp_file, mode=JiebaMode.MP)
>>> jieba_op.add_dict(user_dict)
>>> text_file_dataset = text_file_dataset.map(operations=jieba_op, input_columns=["text"])
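
The file-path loading method can be sketched as follows (the path is an illustrative assumption; the file contents should follow the Jieba dictionary format shown above):

>>> # Load a user-defined dictionary from a file instead of a Python dict
>>> user_dict_file = "/path/to/user/dict/file"
>>> jieba_op.add_dict(user_dict_file)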
add_word(word, freq=None)[source]

Add a user defined word to JiebaTokenizer’s dictionary.

Parameters:
  • word (str) – The word to be added to the JiebaTokenizer instance. The added word will not be written into the built-in dictionary on disk.

  • freq (int, optional) – The frequency of the word to be added. The higher the frequency, the better chance the word will be tokenized. Default: None, use default frequency.

Examples

>>> import mindspore.dataset.text as text
>>> from mindspore.dataset.text import JiebaMode
>>> jieba_hmm_file = "/path/to/jieba/hmm/file"
>>> jieba_mp_file = "/path/to/jieba/mp/file"
>>> jieba_op = text.JiebaTokenizer(jieba_hmm_file, jieba_mp_file, mode=JiebaMode.MP)
>>> sentence_piece_vocab_file = "/path/to/sentence/piece/vocab/file"
>>> with open(sentence_piece_vocab_file, 'r') as f:
...     for line in f:
...         word = line.split(',')[0]
...         jieba_op.add_word(word)
>>> text_file_dataset = text_file_dataset.map(operations=jieba_op, input_columns=["text"])
class tinyms.text.UnicodeCharTokenizer(with_offsets=False)[source]

Tokenize a scalar tensor of UTF-8 string to Unicode characters.

Parameters:

with_offsets (bool, optional) – Whether to output the offsets of tokens. Default: False.

Raises:

TypeError – If with_offsets is not of type bool.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> # If with_offsets=False, default output one column {["text", dtype=str]}
>>> tokenizer_op = text.UnicodeCharTokenizer(with_offsets=False)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op)
>>> # If with_offsets=True, then output three columns {["token", dtype=str], ["offsets_start", dtype=uint32],
>>> #                                                   ["offsets_limit", dtype=uint32]}
>>> tokenizer_op = text.UnicodeCharTokenizer(with_offsets=True)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op, input_columns=["text"],
...                                           output_columns=["token", "offsets_start", "offsets_limit"])
class tinyms.text.Ngram(n, left_pad=('', 0), right_pad=('', 0), separator=' ')[source]

Generate n-gram from a 1-D string Tensor.

Refer to N-gram for an overview of what n-gram is and how it works.

Parameters:
  • n (list[int]) – n in n-gram, which is a list of positive integers. For example, if n=[4, 3], then the result would be a 4-gram followed by a 3-gram in the same tensor. If the number of words is not enough to make up an n-gram, an empty string will be returned. For example, a 3-gram on [“mindspore”, “best”] will produce an empty string.

  • left_pad (tuple, optional) – Padding performed on left side of the sequence shaped like (“pad_token”, pad_width). pad_width will be capped at n-1. For example, specifying left_pad=(“_”, 2) would pad left side of the sequence with “__”. Default: (‘’, 0).

  • right_pad (tuple, optional) – Padding performed on right side of the sequence shaped like (“pad_token”, pad_width). pad_width will be capped at n-1. For example, specifying right_pad=(“_”, 2) would pad right side of the sequence with “__”. Default: (‘’, 0).

  • separator (str, optional) – Symbol used to join strings together. For example, if 2-gram is [“mindspore”, “amazing”] with separator=”-”, the result would be [“mindspore-amazing”]. Default: ‘ ‘, which will use whitespace as separator.

Raises:
  • TypeError – If values of n are not of type int.

  • ValueError – If values of n are not positive.

  • ValueError – If left_pad is not a tuple of length 2.

  • ValueError – If right_pad is not a tuple of length 2.

  • TypeError – If separator is not of type string.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> ngram_op = text.Ngram(3, separator="-")
>>> output = ngram_op(["WildRose Country", "Canada's Ocean Playground", "Land of Living Skies"])
>>> # output
>>> # ["WildRose Country-Canada's Ocean Playground-Land of Living Skies"]
>>> # same ngram_op called through map
>>> text_file_dataset = text_file_dataset.map(operations=ngram_op)
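
A hedged sketch of the padding parameters described above (the expected output shown in the comment follows from the parameter descriptions):

>>> # 2-grams with one "_" pad token on each side
>>> padded_ngram_op = text.Ngram([2], left_pad=("_", 1), right_pad=("_", 1), separator="-")
>>> output = padded_ngram_op(["mindspore", "amazing"])
>>> # expected: ["_-mindspore", "mindspore-amazing", "amazing-_"]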
class tinyms.text.WordpieceTokenizer(vocab, suffix_indicator='##', max_bytes_per_token=100, unknown_token='[UNK]', with_offsets=False)[source]

Tokenize the input text to subword tokens.

Parameters:
  • vocab (Vocab) – Vocabulary used to look up words.

  • suffix_indicator (str, optional) – Prefix flags used to indicate subword suffixes. Default: ‘##’.

  • max_bytes_per_token (int, optional) – The maximum length of tokenization, words exceeding this length will not be split. Default: 100.

  • unknown_token (str, optional) – The output for unknown words. When set to an empty string, the corresponding unknown word will be directly returned as the output. Otherwise, the set string will be returned as the output. Default: ‘[UNK]’.

  • with_offsets (bool, optional) – Whether to return the offsets of tokens. Default: False.

Raises:
  • TypeError – If vocab is not of type mindspore.dataset.text.Vocab .

  • TypeError – If suffix_indicator is not of type str.

  • TypeError – If max_bytes_per_token is not of type int.

  • TypeError – If unknown_token is not of type str.

  • TypeError – If with_offsets is not of type bool.

  • ValueError – If max_bytes_per_token is negative.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> vocab_list = ["book", "cholera", "era", "favor", "##ite", "my", "is", "love", "dur", "##ing", "the"]
>>> vocab = text.Vocab.from_list(vocab_list)
>>> # If with_offsets=False, default output one column {["text", dtype=str]}
>>> tokenizer_op = text.WordpieceTokenizer(vocab=vocab, unknown_token='[UNK]',
...                                        max_bytes_per_token=100, with_offsets=False)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op)
>>> # If with_offsets=True, then output three columns {["token", dtype=str], ["offsets_start", dtype=uint32],
>>> #                                                   ["offsets_limit", dtype=uint32]}
>>> tokenizer_op = text.WordpieceTokenizer(vocab=vocab, unknown_token='[UNK]',
...                                       max_bytes_per_token=100, with_offsets=True)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op, input_columns=["text"],
...                                           output_columns=["token", "offsets_start", "offsets_limit"])
class tinyms.text.TruncateSequencePair(max_length)[source]

Truncate a pair of rank-1 tensors such that the total length is less than max_length.

This operation takes two input tensors and returns two output Tensors.

Parameters:

max_length (int) – Maximum length required.

Raises:

TypeError – If max_length is not of type int.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> dataset = ds.NumpySlicesDataset(data={"col1": [[1, 2, 3]], "col2": [[4, 5]]})
>>> # Data before
>>> # |   col1    |   col2    |
>>> # +-----------+-----------|
>>> # | [1, 2, 3] |  [4, 5]   |
>>> # +-----------+-----------+
>>> truncate_sequence_pair_op = text.TruncateSequencePair(max_length=4)
>>> dataset = dataset.map(operations=truncate_sequence_pair_op)
>>> # Data after
>>> # |   col1    |   col2    |
>>> # +-----------+-----------+
>>> # |  [1, 2]   |  [4, 5]   |
>>> # +-----------+-----------+
class tinyms.text.ToNumber(data_type)[source]

Tensor operation to convert every element of a string tensor to a number.

Strings are cast according to the rules specified in the following links, except that any string representing a negative number cannot be cast to an unsigned integer type: https://en.cppreference.com/w/cpp/string/basic_string/stof, https://en.cppreference.com/w/cpp/string/basic_string/stoul.

Parameters:

data_type (mindspore.dtype) – Type to be cast to. Must be a numeric type in mindspore.dtype.

Raises:
  • TypeError – If data_type is not of type mindspore.dtype.

  • RuntimeError – If strings are invalid to cast, or are out of range after being cast.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>> from mindspore import dtype as mstype
>>> data = [["1", "2", "3"]]
>>> dataset = ds.NumpySlicesDataset(data)
>>> to_number_op = text.ToNumber(mstype.int8)
>>> dataset = dataset.map(operations=to_number_op)
class tinyms.text.SlidingWindow(width, axis=0)[source]

Construct a tensor from the given data (only 1-D data is supported for now), where each element along the dimension axis is a slice of the data starting at the corresponding position, with the specified width.

Parameters:
  • width (int) – The width of the window. It must be an integer and greater than zero.

  • axis (int, optional) – The axis along which the sliding window is computed. Default: 0.

Raises:
  • TypeError – If width is not of type int.

  • ValueError – If width is not positive.

  • TypeError – If axis is not of type int.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset as ds
>>> dataset = ds.NumpySlicesDataset(data=[[1, 2, 3, 4, 5]], column_names="col1")
>>> # Data before
>>> # |       col1        |
>>> # +-------------------+
>>> # | [[1, 2, 3, 4, 5]] |
>>> # +-------------------+
>>> dataset = dataset.map(operations=text.SlidingWindow(3, 0))
>>> # Data after
>>> # |     col1     |
>>> # +--------------+
>>> # |  [[1, 2, 3], |
>>> # |   [2, 3, 4], |
>>> # |   [3, 4, 5]] |
>>> # +--------------+
class tinyms.text.SentencePieceTokenizer(mode, out_type)[source]

Tokenize a scalar token or 1-D tokens into subword tokens with SentencePiece.

Parameters:
  • mode (Union[str, SentencePieceVocab]) – SentencePiece model. If the input parameter is a file, it represents the path of the SentencePiece model to be loaded. If the input parameter is a SentencePieceVocab object, it should be constructed in advance.

  • out_type (SPieceTokenizerOutType) –

    The type of output, it can be any of [SPieceTokenizerOutType.STRING, SPieceTokenizerOutType.INT].

    • SPieceTokenizerOutType.STRING, means the output type of the SentencePiece Tokenizer is string.

    • SPieceTokenizerOutType.INT, means the output type of the SentencePiece Tokenizer is int.

Raises:
  • TypeError – If mode is not of type string or SentencePieceVocab.

  • TypeError – If out_type is not of type SPieceTokenizerOutType.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>> from mindspore.dataset.text import SentencePieceModel, SPieceTokenizerOutType
>>> sentence_piece_vocab_file = "/path/to/sentence/piece/vocab/file"
>>> vocab = text.SentencePieceVocab.from_file([sentence_piece_vocab_file], 5000, 0.9995,
...                                           SentencePieceModel.UNIGRAM, {})
>>> tokenizer = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer)
class tinyms.text.PythonTokenizer(tokenizer)[source]

Class that applies a user-defined string tokenizer to the input string.

Parameters:

tokenizer (Callable) – Python function that takes a str and returns a list of str as tokens.

Raises:

TypeError – If tokenizer is not a callable Python function.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset.text as text
>>>
>>> def my_tokenizer(line):
...     return line.split()
>>> text_file_dataset = text_file_dataset.map(operations=text.PythonTokenizer(my_tokenizer))
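>>> # For an input line such as "Hello World", my_tokenizer is expected to
>>> # produce the tokens ["Hello", "World"] (whitespace split; illustrative).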
tinyms.text.to_str(array, encoding='utf8')[source]

Convert a NumPy array of bytes to an array of str by decoding each element based on the charset encoding.

Parameters:
  • array (numpy.ndarray) – Array of bytes type representing strings.

  • encoding (str) – Indicating the charset for decoding. Default: ‘utf8’.

Returns:

numpy.ndarray, NumPy array of str.

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> data = np.array([["1", "2", "3"]], dtype=np.bytes_)
>>> dataset = ds.NumpySlicesDataset(data, column_names=["text"])
>>> for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     str_data = text.to_str(item["text"])
tinyms.text.to_bytes(array, encoding='utf8')[source]

Convert a NumPy array of str to an array of bytes by encoding each element based on the charset encoding.

Parameters:
  • array (numpy.ndarray) – Array of str type representing strings.

  • encoding (str) – Indicating the charset for encoding. Default: ‘utf8’.

Returns:

numpy.ndarray, NumPy array of bytes.

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> data = np.array([["1", "2", "3"]], dtype=np.str_)
>>> dataset = ds.NumpySlicesDataset(data, column_names=["text"])
>>> for item in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     bytes_data = text.to_bytes(item["text"])
class tinyms.text.Vocab[source]

Vocab object that is used to save pairs of words and ids.

It contains a map that maps each word (str) to an id (int), and the reverse.

classmethod from_dataset(dataset, columns=None, freq_range=None, top_k=None, special_tokens=None, special_first=True)[source]

Build a Vocab from a dataset.

This collects all unique words in a dataset and returns a vocab within the frequency range specified by the user in freq_range. The user will be warned if no words fall into the frequency range. Words in the vocab are ordered from the highest frequency to the lowest frequency; words with the same frequency are ordered lexicographically.

Parameters:
  • dataset (Dataset) – dataset to build vocab from.

  • columns (list[str], optional) – column names to get words from. It can be a list of column names. Default: None.

  • freq_range (tuple, optional) – A tuple of integers (min_frequency, max_frequency). Words within the frequency range would be kept. 0 <= min_frequency <= max_frequency <= total_words. min_frequency=0 is the same as min_frequency=1. max_frequency > total_words is the same as max_frequency = total_words. min_frequency/max_frequency can be None, which corresponds to 0/total_words respectively. Default: None, all words are included.

  • top_k (int, optional) – Number of words to be built into the vocab; must be greater than 0. The top_k most frequent words are taken, after freq_range is applied. If there are fewer than top_k words, all words will be taken. Default: None, all words are included.

  • special_tokens (list, optional) – A list of strings, each one is a special token. For example special_tokens=[“<pad>”,”<unk>”]. Default: None, no special tokens will be added.

  • special_first (bool, optional) – Whether special_tokens will be prepended/appended to vocab. If special_tokens is specified and special_first is set to True, special_tokens will be prepended. Default: True.

Returns:

Vocab, Vocab object built from the dataset.

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>> dataset = ds.TextFileDataset("/path/to/sentence/piece/vocab/file", shuffle=False)
>>> vocab = text.Vocab.from_dataset(dataset, "text", freq_range=None, top_k=None,
...                                 special_tokens=["<pad>", "<unk>"],
...                                 special_first=True)
>>> dataset = dataset.map(operations=text.Lookup(vocab, "<unk>"), input_columns=["text"])
classmethod from_dict(word_dict)[source]

Build a vocab object from a dict.

Parameters:

word_dict (dict) – Dict containing word and id pairs, where word should be str and id should be int. It is recommended that ids start from 0 and be continuous. A ValueError will be raised if any id is negative.

Returns:

Vocab, Vocab object built from the dict.

Examples

>>> import mindspore.dataset.text as text
>>> vocab = text.Vocab.from_dict({"home": 3, "behind": 2, "the": 4, "world": 5, "<unk>": 6})
classmethod from_file(file_path, delimiter='', vocab_size=None, special_tokens=None, special_first=True)[source]

Build a vocab object from a file.

Parameters:
  • file_path (str) – Path to the file which contains the vocab list.

  • delimiter (str, optional) – A delimiter to break up each line in the file; the first element is taken to be the word. Default: ‘’, the whole line will be treated as a word.

  • vocab_size (int, optional) – Number of words to read from file_path. Default: None, all words are taken.

  • special_tokens (list, optional) – A list of strings, each one is a special token. For example special_tokens=[“<pad>”,”<unk>”]. Default: None, no special tokens will be added.

  • special_first (bool, optional) – Whether special_tokens will be prepended/appended to the vocab. If special_tokens is specified and special_first is set to True, special_tokens will be prepended. Default: True.

Returns:

Vocab, Vocab object built from the file.

Examples

>>> import mindspore.dataset.text as text
>>> # Assume vocab file contains the following content:
>>> # --- begin of file ---
>>> # apple,apple2
>>> # banana, 333
>>> # cat,00
>>> # --- end of file ---
>>>
>>> # Read file through this API and specify "," as delimiter.
>>> # The delimiter will break up each line in file, then the first element is taken to be the word.
>>> vocab = text.Vocab.from_file("/path/to/simple/vocab/file", ",", None, ["<pad>", "<unk>"], True)
>>>
>>> # Finally, there are 5 words in the vocab: "<pad>", "<unk>", "apple", "banana", "cat".
>>> vocabulary = vocab.vocab()
classmethod from_list(word_list, special_tokens=None, special_first=True)[source]

Build a vocab object from a list of words.

Parameters:
  • word_list (list) – A list of strings, where each element is a word of type string.

  • special_tokens (list, optional) – A list of strings, each one is a special token. For example special_tokens=[“<pad>”,”<unk>”]. Default: None, no special tokens will be added.

  • special_first (bool, optional) – Whether special_tokens is prepended or appended to vocab. If special_tokens is specified and special_first is set to True, special_tokens will be prepended. Default: True.

Returns:

Vocab, Vocab object built from the list.

Examples

>>> import mindspore.dataset.text as text
>>> vocab = text.Vocab.from_list(["w1", "w2", "w3"], special_tokens=["<unk>"], special_first=True)
ids_to_tokens(ids)[source]

Converts a single index or a sequence of indices into a token or a sequence of tokens. If an id does not exist, an empty string is returned.

Parameters:

ids (Union[int, list[int]]) – The token id (or token ids) to convert to tokens.

Returns:

The decoded token(s).

Examples

>>> import mindspore.dataset.text as text
>>> vocab = text.Vocab.from_list(["w1", "w2", "w3"], special_tokens=["<unk>"], special_first=True)
>>> token = vocab.ids_to_tokens(0)
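>>> # With special_first=True, "<unk>" is prepended at id 0, so token is
>>> # expected to be "<unk>" (an illustration inferred from the docs above).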
tokens_to_ids(tokens)[source]

Converts a token string or a sequence of tokens into a single integer id or a sequence of ids. If a token does not exist, the id -1 is returned.

Parameters:

tokens (Union[str, list[str]]) – One or several token(s) to convert to token id(s).

Returns:

The token id or list of token ids.

Examples

>>> import mindspore.dataset.text as text
>>> vocab = text.Vocab.from_list(["w1", "w2", "w3"], special_tokens=["<unk>"], special_first=True)
>>> ids = vocab.tokens_to_ids(["w1", "w3"])
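>>> # With "<unk>" prepended at id 0, ids is expected to be [1, 3]; an
>>> # out-of-vocabulary token would map to -1 (an illustration inferred from the docs above).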
vocab()[source]

Get the vocabulary table as a dict.

Returns:

A vocabulary consisting of word and id pairs.

Examples

>>> import mindspore.dataset.text as text
>>> vocab = text.Vocab.from_list(["word_1", "word_2", "word_3", "word_4"])
>>> vocabulary_dict = vocab.vocab()
class tinyms.text.SentencePieceVocab[source]

SentencePiece object that is used to do word segmentation.

classmethod from_dataset(dataset, col_names, vocab_size, character_coverage, model_type, params)[source]

Build a SentencePiece from a dataset.

Parameters:
  • dataset (Dataset) – Dataset to build SentencePiece.

  • col_names (list) – The list of column names.

  • vocab_size (int) – Vocabulary size.

  • character_coverage (float) – Amount of characters covered by the model. Good defaults are 0.9995 for languages with a rich character set like Japanese or Chinese, and 1.0 for other languages with a small character set.

  • model_type (SentencePieceModel) –

    It can be any of [SentencePieceModel.UNIGRAM, SentencePieceModel.BPE, SentencePieceModel.CHAR, SentencePieceModel.WORD], default is SentencePieceModel.UNIGRAM. The input sentence must be pre-tokenized when using SentencePieceModel.WORD type.

    • SentencePieceModel.UNIGRAM, Unigram Language Model means the next word in the sentence is assumed to be independent of the previous words generated by the model.

    • SentencePieceModel.BPE, refers to byte pair encoding algorithm, which replaces the most frequent pair of bytes in a sentence with a single, unused byte.

    • SentencePieceModel.CHAR, refers to the character-based SentencePiece model type.

    • SentencePieceModel.WORD, refers to the word-based SentencePiece model type.

  • params (dict) – A dictionary with no incoming parameters.

Returns:

SentencePieceVocab, vocab built from the dataset.

Examples

>>> import mindspore.dataset as ds
>>> from mindspore.dataset.text import SentencePieceVocab, SentencePieceModel
>>> dataset = ds.TextFileDataset("/path/to/sentence/piece/vocab/file", shuffle=False)
>>> vocab = SentencePieceVocab.from_dataset(dataset, ["text"], 5000, 0.9995,
...                                         SentencePieceModel.UNIGRAM, {})
classmethod from_file(file_path, vocab_size, character_coverage, model_type, params)[source]

Build a SentencePiece object from a file.

Parameters:
  • file_path (list) – A list of paths to the files which contain the SentencePiece vocabulary.

  • vocab_size (int) – Vocabulary size.

  • character_coverage (float) – Amount of characters covered by the model. Good defaults are 0.9995 for languages with a rich character set like Japanese or Chinese, and 1.0 for other languages with a small character set.

  • model_type (SentencePieceModel) –

    It can be any of [SentencePieceModel.UNIGRAM, SentencePieceModel.BPE, SentencePieceModel.CHAR, SentencePieceModel.WORD], default is SentencePieceModel.UNIGRAM. The input sentence must be pre-tokenized when using SentencePieceModel.WORD type.

    • SentencePieceModel.UNIGRAM, Unigram Language Model means the next word in the sentence is assumed to be independent of the previous words generated by the model.

    • SentencePieceModel.BPE, refers to byte pair encoding algorithm, which replaces the most frequent pair of bytes in a sentence with a single, unused byte.

    • SentencePieceModel.CHAR, refers to the character-based SentencePiece model type.

    • SentencePieceModel.WORD, refers to the word-based SentencePiece model type.

  • params (dict) – A dictionary with no incoming parameters (the parameters are derived from the SentencePiece library).

Returns:

SentencePieceVocab, vocab built from the file.

Examples

>>> from mindspore.dataset.text import SentencePieceVocab, SentencePieceModel
>>> vocab = SentencePieceVocab.from_file(["/path/to/sentence/piece/vocab/file"], 5000, 0.9995,
...                                      SentencePieceModel.UNIGRAM, {})
classmethod save_model(vocab, path, filename)[source]

Save the model to the given file path.

Parameters:
  • vocab (SentencePieceVocab) – A SentencePiece object.

  • path (str) – Path to store model.

  • filename (str) – The name of the file.

Examples

>>> from mindspore.dataset.text import SentencePieceVocab, SentencePieceModel
>>> vocab = SentencePieceVocab.from_file(["/path/to/sentence/piece/vocab/file"], 5000, 0.9995,
...                                      SentencePieceModel.UNIGRAM, {})
>>> SentencePieceVocab.save_model(vocab, "./", "m.model")
class tinyms.text.SentencePieceModel[source]

An enumeration for SentencePieceModel.

Possible enumeration values are: SentencePieceModel.UNIGRAM, SentencePieceModel.BPE, SentencePieceModel.CHAR, SentencePieceModel.WORD.

  • SentencePieceModel.UNIGRAM: Unigram Language Model means the next word in the sentence is assumed to be independent of the previous words generated by the model.

  • SentencePieceModel.BPE: refers to byte pair encoding algorithm, which replaces the most frequent pair of bytes in a sentence with a single, unused byte.

  • SentencePieceModel.CHAR: refers to the character-based SentencePiece model type.

  • SentencePieceModel.WORD: refers to the word-based SentencePiece model type.
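
As a brief sketch, the enumeration is passed when building a SentencePieceVocab, for example to train a BPE model instead of a unigram model (the file path is an illustrative assumption):

>>> from mindspore.dataset.text import SentencePieceVocab, SentencePieceModel
>>> vocab = SentencePieceVocab.from_file(["/path/to/sentence/piece/vocab/file"], 5000, 0.9995,
...                                      SentencePieceModel.BPE, {})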

class tinyms.text.SPieceTokenizerOutType[source]

An enumeration for mindspore.dataset.text.SentencePieceTokenizer.

Possible enumeration values are: SPieceTokenizerOutType.STRING, SPieceTokenizerOutType.INT.

  • SPieceTokenizerOutType.STRING: means output type of SentencePiece Tokenizer is string.

  • SPieceTokenizerOutType.INT: means output type of SentencePiece Tokenizer is int.

class tinyms.text.SPieceTokenizerLoadType[source]

An enumeration for the loading type of mindspore.dataset.text.SentencePieceTokenizer.

Possible enumeration values are: SPieceTokenizerLoadType.FILE, SPieceTokenizerLoadType.MODEL.

  • SPieceTokenizerLoadType.FILE: Load SentencePiece tokenizer from a Vocab file.

  • SPieceTokenizerLoadType.MODEL: Load SentencePiece tokenizer from a SentencePieceVocab object.

class tinyms.text.Compose(**kwargs)[source]

Compose a list of transforms into a single transform.

Parameters:

transforms (list) – List of transformations to be applied.

Raises:
  • TypeError – If transforms is not of type list.

  • ValueError – If transforms is empty.

  • TypeError – If elements of transforms are neither Python callable objects nor data processing operations in c_transforms.

Supported Platforms:

CPU

Examples

>>> compose = c_transforms.Compose([c_vision.Decode(), c_vision.RandomCrop(512)])
>>> image_folder_dataset = image_folder_dataset.map(operations=compose)
class tinyms.text.Concatenate(**kwargs)[source]

Tensor operation that concatenates all columns into a single tensor.

Parameters:
  • axis (int, optional) – Concatenate the tensors along given axis. Default: 0.

  • prepend (numpy.array, optional) – NumPy array to be prepended to the already concatenated tensors. Default: None.

  • append (numpy.array, optional) – NumPy array to be appended to the already concatenated tensors. Default: None.

Raises:
  • TypeError – If axis is not of type int.

  • TypeError – If prepend is not of type numpy.ndarray.

  • TypeError – If append is not of type numpy.ndarray.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> # concatenate string
>>> prepend_tensor = np.array(["dw", "df"], dtype='S')
>>> append_tensor = np.array(["dwsdf", "df"], dtype='S')
>>> concatenate_op = c_transforms.Concatenate(0, prepend_tensor, append_tensor)
>>> data = [["This","is","a","string"]]
>>> dataset = ds.NumpySlicesDataset(data)
>>> dataset = dataset.map(operations=concatenate_op)
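>>> # Each row is expected to become
>>> # ["dw", "df", "This", "is", "a", "string", "dwsdf", "df"]
>>> # (prepend + original + append along axis 0; an illustration inferred from the docs above).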
class tinyms.text.Duplicate(**kwargs)[source]

Duplicate the input tensor to the output; only one column can be transformed at a time.

Raises:

RuntimeError – If given tensor has two columns.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |  x      |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1,2,3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["x"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.Duplicate(),
...                                                 input_columns=["x"],
...                                                 output_columns=["x", "y"])
>>> # Data after
>>> # |  x      |  y      |
>>> # +---------+---------+
>>> # | [1,2,3] | [1,2,3] |
>>> # +---------+---------+
class tinyms.text.Fill(**kwargs)[source]

Tensor operation to fill all elements in the tensor with the specified value. The output tensor will have the same shape and type as the input tensor.

Parameters:

fill_value (Union[str, bytes, int, float, bool]) – scalar value to fill the tensor with.

Raises:

TypeError – If fill_value is not of type str, float, bool, int or bytes.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> # generate a 1D integer numpy array from 0 to 4
>>> def generator_1d():
...     for i in range(5):
...         yield (np.array([i]),)
>>> generator_dataset = ds.GeneratorDataset(generator_1d, column_names="col1")
>>> # [[0], [1], [2], [3], [4]]
>>> fill_op = c_transforms.Fill(3)
>>> generator_dataset = generator_dataset.map(operations=fill_op)
>>> # [[3], [3], [3], [3], [3]]
class tinyms.text.Mask(**kwargs)[source]

Mask the content of the input tensor with the given predicate. Any element of the tensor that matches the predicate will evaluate to True, otherwise False.

Parameters:
  • operator (Relational) – One of the relational operators [Relational.EQ, Relational.NE, Relational.LT, Relational.GT, Relational.LE, Relational.GE]. Take Relational.EQ as an example: EQ refers to equal.

  • constant (Union[str, int, float, bool]) – Constant to be compared to.

  • dtype (mindspore.dtype, optional) – Type of the generated mask. Default: mstype.bool_.

Raises:
  • TypeError – If operator is not of type Relational.

  • TypeError – If constant is not of type str, int, float or bool.

  • TypeError – If dtype is not of type mindspore.dtype.

Supported Platforms:

CPU

Examples

>>> from mindspore.dataset.transforms.c_transforms import Relational
>>> # Data before
>>> # |   col   |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1, 2, 3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["col"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.Mask(Relational.EQ, 2))
>>> # Data after
>>> # |       col         |
>>> # +--------------------+
>>> # | [False,True,False] |
>>> # +--------------------+
class tinyms.text.OneHot(**kwargs)[source]

Tensor operation to apply one hot encoding.

Parameters:

num_classes (int) – Number of classes of objects in dataset. It should be larger than the largest label number in the dataset.

Raises:
  • TypeError – If num_classes is not of type int.

Supported Platforms:

CPU

Examples

>>> # Assume that dataset has 10 classes, thus the label ranges from 0 to 9
>>> onehot_op = c_transforms.OneHot(num_classes=10)
>>> mnist_dataset = mnist_dataset.map(operations=onehot_op, input_columns=["label"])
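>>> # A hedged illustration: with num_classes=10, a scalar label 2 becomes the
>>> # one-hot vector [0, 0, 1, 0, 0, 0, 0, 0, 0, 0].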
class tinyms.text.PadEnd(**kwargs)[source]

Pad the input tensor according to pad_shape; the input tensor needs to have the same rank as pad_shape.

Parameters:
  • pad_shape (list(int)) – List of integers representing the shape needed. Dimensions that are set to None will not be padded (i.e., the original dim will be used). Shorter dimensions will truncate the values.

  • pad_value (Union[str, bytes, int, float, bool], optional) – Value used to pad. Default: 0, or an empty string in the case of tensors of strings.

Raises:
  • TypeError – If pad_shape is not of type list.

  • TypeError – If pad_value is not of type str, float, bool, int or bytes.

  • TypeError – If elements of pad_shape are not of type int.

  • ValueError – If elements of pad_shape are not positive.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |   col   |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1, 2, 3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["col"])
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.PadEnd(pad_shape=[4],
...                                                                                pad_value=10))
>>> # Data after
>>> # |    col     |
>>> # +------------+
>>> # | [1,2,3,10] |
>>> # +------------+
class tinyms.text.RandomApply(**kwargs)[source]

Randomly perform a series of transforms with a given probability.

Parameters:
  • transforms (list) – List of transformations to be applied.

  • prob (float, optional) – The probability to apply the transformation list. Default: 0.5.

Raises:
  • TypeError – If transforms is not of type list.

  • ValueError – If transforms is empty.

  • TypeError – If elements of transforms are neither Python callable objects nor data processing operations in c_transforms.

  • TypeError – If prob is not of type float.

  • ValueError – If prob is not in range [0.0, 1.0].

Supported Platforms:

CPU

Examples

>>> rand_apply = c_transforms.RandomApply([c_vision.RandomCrop(512)])
>>> image_folder_dataset = image_folder_dataset.map(operations=rand_apply)
class tinyms.text.RandomChoice(**kwargs)[source]

Randomly select one transform from a list of transforms to perform the operation.

Parameters:

transforms (list) – List of transformations to be chosen from to apply.

Raises:
  • TypeError – If transforms is not of type list.

  • ValueError – If transforms is empty.

  • TypeError – If elements of transforms are neither Python callable objects nor data processing operations in c_transforms.

Supported Platforms:

CPU

Examples

>>> rand_choice = c_transforms.RandomChoice([c_vision.CenterCrop(50), c_vision.RandomCrop(512)])
>>> image_folder_dataset = image_folder_dataset.map(operations=rand_choice)
class tinyms.text.Slice(**kwargs)[source]

Slice operation to extract a tensor using the given n slices.

The functionality of Slice is similar to NumPy’s indexing feature (currently only rank-1 tensors are supported).

Parameters:

slices (Union[int, list[int], slice, None, Ellipsis]) –

At most n objects are accepted to slice a tensor of rank n. Each object in slices can be one of:

  1. int: Slice this index only along the first dimension. Negative index is supported.

  2. list(int): Slice these indices along the first dimension. Negative indices are supported.

  3. slice: Slice the generated indices from the slice object along the first dimension. Similar to start:stop:step.

  4. None: Slice the whole dimension. Similar to [:] in Python indexing.

  5. Ellipsis: Slice the whole dimension, same result as None.

Raises:

TypeError – If slices is not of type int, list[int], slice, None or Ellipsis.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |   col   |
>>> # +---------+
>>> # | [1,2,3] |
>>> # +---------+
>>> data = [[1, 2, 3]]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data, ["col"])
>>> # slice indices 1 and 2 only
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=c_transforms.Slice(slice(1,3)))
>>> # Data after
>>> # |   col   |
>>> # +---------+
>>> # |  [2,3]  |
>>> # +---------+
class tinyms.text.TypeCast(**kwargs)[source]

Tensor operation to cast to a given MindSpore data type.

Note

This operation supports running on Ascend or GPU platforms by Offload.

Parameters:

data_type (mindspore.dtype) – mindspore.dtype to be cast to.

Raises:

TypeError – If data_type is not of type bool, int, float or string.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import dtype as mstype
>>>
>>> # Generate 1d int numpy array from 0 - 63
>>> def generator_1d():
...     for i in range(64):
...         yield (np.array([i]),)
>>>
>>> dataset = ds.GeneratorDataset(generator_1d, column_names='col')
>>> type_cast_op = c_transforms.TypeCast(mstype.int32)
>>> dataset = dataset.map(operations=type_cast_op)
class tinyms.text.Unique(**kwargs)[source]

Perform the unique operation on the input tensor; only one column can be transformed at a time.

Returns 3 tensors: a unique output tensor, an index tensor and a count tensor.

  • Output tensor contains all the unique elements of the input tensor in the same order that they occur in the input tensor.

  • Index tensor that contains the index of each element of the input tensor in the unique output tensor.

  • Count tensor that contains the count of each element of the output tensor in the input tensor.

Note

Call batch op before calling this function.

Raises:

RuntimeError – If given Tensor has two columns.

Supported Platforms:

CPU

Examples

>>> # Data before
>>> # |  x                 |
>>> # +--------------------+
>>> # | [[0,1,2], [1,2,3]] |
>>> # +--------------------+
>>> data = [[[0,1,2], [1,2,3]]]
>>> dataset = ds.NumpySlicesDataset(data, ["x"])
>>> dataset = dataset.map(operations=c_transforms.Unique(),
...                       input_columns=["x"],
...                       output_columns=["x", "y", "z"])
>>> # Data after
>>> # |     x     |       y       |     z     |
>>> # +-----------+---------------+-----------+
>>> # | [0,1,2,3] | [0,1,2,1,2,3] | [1,2,2,1] |
>>> # +-----------+---------------+-----------+

tinyms.primitives

Primitives module. Operators can be used in the construct function of Layer.

Examples

>>> import tinyms as ts
>>> from tinyms.primitives import tensor_add
>>>
>>> x = ts.ones([2, 3])
>>> y = ts.ones([2, 3])
>>> print(tensor_add(x, y))
[[2. 2. 2.]
 [2. 2. 2.]]
tinyms.primitives.add_flags(fn=None, **flags)[source]

A decorator that adds a flag to the function.

Note

Only supports bool value.

Parameters:
  • fn (Function) – Function or cell to add flag. Default: None.

  • flags (dict) – Flags passed as keyword arguments. Default: None.

Returns:

Function, the function with added flags.

Examples

>>> net = Net()
>>> net = add_flags(net, predict=True)
>>> print(hasattr(net, '_func_graph_flags'))
True
class tinyms.primitives.Map(ops=None, reverse=False)[source]

Map will apply the set operation on input sequences.

Apply the operations to every element of the sequence.

Parameters:
  • ops (Union[MultitypeFuncGraph, None]) – ops is the operation to apply. If ops is None, the operations should be put in the first input of the instance. Default: None

  • reverse (bool) – The optimizer needs to be inverted in some scenarios to improve parallel performance; general users can ignore it. reverse is the flag that decides whether to apply the operation in reverse. Only supported in graph mode. Default: False.

Inputs:
  • args (Tuple[sequence]) - If ops is not None, all the inputs should be sequences of the same length, and each row of the sequences will be the input of the operation. e.g. if the length of args is 2, then for each i within the length of each sequence, (args[0][i], args[1][i]) will be the input of the operation.

    If ops is None, the first input is the operation, and the others are the inputs.

Outputs:

Sequence, the sequence of output after applying the function. e.g. operation(args[0][i], args[1][i]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import dtype as mstype
>>> from mindspore import Tensor, ops
>>> from mindspore.ops import MultitypeFuncGraph, Map
>>> tensor_list = (Tensor(1, mstype.float32), Tensor(2, mstype.float32), Tensor(3, mstype.float32))
>>> # square all the tensor in the list
>>>
>>> square = MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
...     return ops.square(x)
>>>
>>> common_map = Map()
>>> output = common_map(square, tensor_list)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4),
Tensor(shape=[], dtype=Float32, value= 9))
>>> square_map = Map(square, False)
>>> output = square_map(tensor_list)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4),
Tensor(shape=[], dtype=Float32, value= 9))
class tinyms.primitives.MultitypeFuncGraph(name, read_value=False, doc_url='')[source]

MultitypeFuncGraph is a class used to generate overloaded functions that consider different types as inputs. Initialize a MultitypeFuncGraph object with a name, and use register with input types as the decorator for the function to be registered. The object can then be called with different types of inputs, and works with HyperMap and Map.

Parameters:
  • name (str) – Operator name.

  • read_value (bool, optional) – If the registered function does not need to set values on Parameters and all inputs are passed by value, set read_value to True. Default: False.

  • doc_url (str, optional) – The official document link corresponding to the registered function. Default: ''.

Raises:

ValueError – If it fails to find a matching function for the given arguments.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # `add` is a metagraph object which will add two objects according to
>>> # input type using ".register" decorator.
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import dtype as mstype
>>>
>>> tensor_add = ops.Add()
>>> add = ops.MultitypeFuncGraph('add')
>>> @add.register("Number", "Number")
... def add_scala(x, y):
...     return x + y
>>> @add.register("Tensor", "Tensor")
... def add_tensor(x, y):
...     return tensor_add(x, y)
>>> output = add(1, 2)
>>> print(output)
3
>>> output = add(Tensor([0.1, 0.6, 1.2], dtype=mstype.float32), Tensor([0.1, 0.6, 1.2], dtype=mstype.float32))
>>> print(output)
[0.2 1.2 2.4]
register(*type_names)[source]

Register a function for the given type string.

Parameters:

type_names (Union[str, mindspore.dtype]) – The input type names or types, given as a list.

Returns:

decorator, a decorator to register the function to run, when called under the types described in type_names.
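
A minimal sketch of register usage on its own (the operator name and function below are illustrative; see the class example above for the full pattern):

>>> from mindspore import ops
>>> mul = ops.MultitypeFuncGraph('mul')
>>> @mul.register("Number", "Number")
... def mul_number(x, y):
...     return x * y
>>> print(mul(3, 4))
12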

class tinyms.primitives.GradOperation(get_all=False, get_by_list=False, sens_param=False)[source]

A higher-order function which is used to generate the gradient function for the input function.

The gradient function generated by GradOperation higher-order function can be customized by construction arguments.

For example, given an input function net = Net() that takes x and y as inputs, and has a parameter z, see Net in Examples.

  • Used to get the derivative of the input:

    1. Returns gradients with respect to the first input (see GradNetWrtX in Examples).

      1. Construct a GradOperation higher-order function with default arguments: grad_op = GradOperation().

      2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

      3. Call the gradient function with input function’s inputs to get the gradients with respect to the first input: gradient_function(x, y).

    2. Returns gradients with respect to all inputs (see GradNetWrtXY in Examples).

      1. Construct a GradOperation higher-order function with get_all=True which indicates getting gradients with respect to all inputs, they are x and y in example function Net(): grad_op = GradOperation(get_all=True).

      2. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

      3. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs: gradient_function(x, y).

  • Used to get the derivative of the parameters:

    Returns gradients with respect to given parameters (see GradNetWithWrtParams in Examples).

    1. Construct a GradOperation higher-order function with get_by_list=True: grad_op = GradOperation(get_by_list=True).

    2. Construct a ParameterTuple that will be passed to the input function when constructing the GradOperation higher-order function; it will be used as a parameter filter that determines which gradients to return: params = ParameterTuple(net.trainable_params()).

    3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

    4. Call the gradient function with input function’s inputs to get the gradients with respect to given parameters: gradient_function(x, y).

  • Used to get the derivative of the inputs and parameters at the same time: Returns gradients with respect to all inputs and given parameters in the format of ((dx, dy), (dz)) (see GradNetWrtInputsAndParams in Examples).

    1. Construct a GradOperation higher-order function with get_all=True and get_by_list=True: grad_op = GradOperation(get_all=True, get_by_list=True).

    2. Construct a ParameterTuple that will be passed along input function when constructing GradOperation higher-order function: params = ParameterTuple(net.trainable_params()).

    3. Call it with input function and params as arguments to get the gradient function: gradient_function = grad_op(net, params).

    4. Call the gradient function with input function’s inputs to get the gradients with respect to all inputs and given parameters: gradient_function(x, y).

  • We can configure the sensitivity (gradient with respect to output) by setting sens_param as True and passing an extra sensitivity input to the gradient function; the sensitivity input should have the same shape and type as the input function’s output (see GradNetWrtXYWithSensParam in Examples).

    1. Construct a GradOperation higher-order function with get_all=True and sens_param=True: grad_op = GradOperation(get_all=True, sens_param=True).

    2. Define grad_wrt_output as sens_param which works as the gradient with respect to output: grad_wrt_output = Tensor(np.ones([2, 2]).astype(np.float32)).

    3. Call it with input function as argument to get the gradient function: gradient_function = grad_op(net).

    4. Call the gradient function with input function’s inputs and sens_param to get the gradients with respect to all inputs: gradient_function(x, y, grad_wrt_output).

Note

For the above gradient functions, the form of the returned gradient result depends on the number of result elements:

  • Return a single value if only one result.

  • Return a tuple for multiple results.

  • Return an empty tuple for no result.

Parameters:
  • get_all (bool) – If True, get all the gradients with respect to inputs. Default: False.

  • get_by_list (bool) – If True, get all the gradients with respect to Parameter free variables. If get_all and get_by_list are both False, get the gradient with respect to first input. If get_all and get_by_list are both True, get the gradients with respect to inputs and Parameter free variables at the same time in the form of (“gradients with respect to inputs”, “gradients with respect to parameter free variables”). Default: False.

  • sens_param (bool) – Whether to append sensitivity (gradient with respect to output) as input. If sens_param is False, a ‘ones_like(outputs)’ sensitivity will be attached automatically. Default: False. If sens_param is True, a sensitivity (gradient with respect to output) needs to be passed as a positional argument or a keyword argument; if it is passed as a keyword argument, the key must be sens.

Returns:

The higher-order function which takes a function as argument and returns gradient function for it.

Raises:

TypeError – If get_all, get_by_list or sens_param is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ParameterTuple, ops
>>> from mindspore import dtype as mstype
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...         self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
...     def construct(self, x, y):
...         x = x * self.z
...         out = self.matmul(x, y)
...         return out
...
>>> class GradNetWrtX(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtX, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation()
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> output = GradNetWrtX(Net())(x, y)
>>> print(output)
[[1.4100001 1.5999999 6.6      ]
 [1.4100001 1.5999999 6.6      ]]
>>>
>>> class GradNetWrtXY(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXY, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation(get_all=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.1, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXY(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 4.50000000e+00,  2.70000005e+00,  3.60000014e+00],
 [ 4.50000000e+00,  2.70000005e+00,  3.60000014e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 2.59999990e+00,  2.59999990e+00,  2.59999990e+00],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]]))
>>>
>>> class GradNetWrtXYWithSensParam(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtXYWithSensParam, self).__init__()
...         self.net = net
...         self.grad_op = ops.GradOperation(get_all=True, sens_param=True)
...         self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net)
...         return gradient_function(x, y, self.grad_wrt_output)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtXYWithSensParam(Net())(x, y)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 2.21099997e+00,  5.09999990e-01,  1.49000001e+00],
 [ 5.58800030e+00,  2.68000007e+00,  4.07000017e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 1.51999998e+00,  2.81999993e+00,  2.14000010e+00],
 [ 1.09999990e+00,  2.04999995e+00,  1.54999995e+00],
 [ 9.00000036e-01,  1.54999995e+00,  1.25000000e+00]]))
>>>
>>> class GradNetWithWrtParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWithWrtParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = ops.GradOperation(get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWithWrtParams(Net())(x, y)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
>>>
>>> class GradNetWrtInputsAndParams(nn.Cell):
...     def __init__(self, net):
...         super(GradNetWrtInputsAndParams, self).__init__()
...         self.net = net
...         self.params = ParameterTuple(net.trainable_params())
...         self.grad_op = ops.GradOperation(get_all=True, get_by_list=True)
...     def construct(self, x, y):
...         gradient_function = self.grad_op(self.net, self.params)
...         return gradient_function(x, y)
>>>
>>> x = Tensor([[0.1, 0.6, 1.2], [0.5, 1.3, 0.1]], dtype=mstype.float32)
>>> y = Tensor([[0.12, 2.3, 1.1], [1.3, 0.2, 2.4], [0.1, 2.2, 0.3]], dtype=mstype.float32)
>>> output = GradNetWrtInputsAndParams(Net())(x, y)
>>> print(output)
((Tensor(shape=[2, 3], dtype=Float32, value=
[[ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00],
 [ 3.51999998e+00,  3.90000010e+00,  2.59999990e+00]]), Tensor(shape=[3, 3], dtype=Float32, value=
[[ 6.00000024e-01,  6.00000024e-01,  6.00000024e-01],
 [ 1.89999998e+00,  1.89999998e+00,  1.89999998e+00],
 [ 1.30000007e+00,  1.30000007e+00,  1.30000007e+00]])), (Tensor(shape=[1], dtype=Float32, value=
 [ 1.29020004e+01]),))
class tinyms.primitives.HyperMap(ops=None, reverse=False)[source]

Hypermap will apply the set operation to input sequences.

Apply the operations to every element of the sequence or nested sequence. Different from mindspore.ops.Map, HyperMap supports applying operations on nested structures.

Parameters:
  • ops (Union[MultitypeFuncGraph, None]) – ops is the operation to apply. If ops is None, the operations should be put in the first input of the instance. Default is None.

  • reverse (bool) – The optimizer needs to be inverted in some scenarios to improve parallel performance; general users can ignore it. reverse is the flag that decides whether to apply the operation in reverse. Only supported in graph mode. Default: False.

Inputs:
  • args (Tuple[sequence]) -

    • If ops is not None, all the inputs should be sequences with the same length. And each row of the sequences will be the inputs of the operation.

    • If ops is None, the first input is the operation, and the others are inputs.

Note

Except for the operation input, the number of inputs should be equal to the number of inputs to ops.

Outputs:

Sequence or nested sequence, the sequence of output after applying the function. e.g. operation(args[0][i], args[1][i]).

Raises:
  • TypeError – If ops is neither MultitypeFuncGraph nor None.

  • TypeError – If args is not a Tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> nest_tensor_list = ((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),
...                     (Tensor(3, mstype.float32), Tensor(4, mstype.float32)))
>>> # square all the tensor in the nested list
>>>
>>> square = ops.MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
...     return ops.square(x)
>>>
>>> common_map = ops.HyperMap()
>>> output = common_map(square, nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
>>> square_map = ops.HyperMap(square, False)
>>> output = square_map(nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
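When ops is fixed at construction time, HyperMap can also zip several sequences together, feeding the i-th element of every sequence to the operation. A minimal sketch, reusing the imports above and assuming a hypothetical two-argument MultitypeFuncGraph named add (not part of the API, registered here only for illustration):

>>> add = ops.MultitypeFuncGraph('add')
>>> @add.register("Tensor", "Tensor")
... def add_tensors(x, y):
...     return x + y
>>>
>>> add_map = ops.HyperMap(add)
>>> output = add_map((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),
...                  (Tensor(10, mstype.float32), Tensor(20, mstype.float32)))
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 11), Tensor(shape=[], dtype=Float32, value= 22))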
tinyms.primitives.zeros_like(input, *, dtype=None)[source]

Creates a tensor filled with 0, with the same size as input, and the given dtype.

If dtype = None, the tensor will have the same dtype as input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, filled with 0.

Raises:

TypeError – If dtype is not a MindSpore dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(4).reshape(2, 2))
>>> output = ops.zeros_like(x, dtype=mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
tinyms.primitives.ones_like(input, *, dtype=None)[source]

Returns a Tensor filled with ones, with the same shape as the input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, has the same shape as input but filled with ones.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = ops.ones_like(x)
>>> print(output)
[[1 1]
 [1 1]]
tinyms.primitives.normal(shape, mean, stddev, seed=None)[source]

Generates random numbers according to the Normal (or Gaussian) random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Union[Tensor, int, float]) – The mean μ distribution parameter, which specifies the location of the peak, with data type in [int8, int16, int32, int64, float16, float32].

  • stddev (Union[Tensor, int, float]) – The standard deviation σ distribution parameter. It should be greater than 0, with data type in [int8, int16, int32, int64, float16, float32].

  • seed (int) – Seed is used as entropy source for the Random number engines to generate pseudo-random numbers. The value must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of mean and stddev. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> shape = (3, 1, 2)
>>> mean = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[1, 2, 3], [3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 3, 3)
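The output shape follows ordinary NumPy-style broadcasting between shape and the shapes of mean and stddev, so NumPy can be used to predict it; a hedged sketch of the first two cases above (np.broadcast_shapes is assumed to be available, i.e. NumPy >= 1.20):

>>> print(np.broadcast_shapes((3, 1, 2), (2, 2)))   # shape vs. mean.shape in the first case
(3, 2, 2)
>>> print(np.broadcast_shapes((3, 1, 3), (2, 3)))   # second case
(3, 2, 3)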
tinyms.primitives.laplace(shape, mean, lambda_param, seed=None)[source]

Generates random numbers according to the Laplace random number distribution. It is defined as:

\[\text{f}(x;μ,λ) = \frac{1}{2λ}\exp(-\frac{|x-μ|}{λ}),\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak. With float32 data type.

  • lambda_param (Tensor) – The parameter used for controlling the variance of this random distribution. The variance of Laplace distribution is equal to twice the square of lambda_param. With float32 data type.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be the broadcasted shape of input shape and shapes of mean and lambda_param. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops as ops
>>> shape = (2, 3)
>>> mean = Tensor(1.0, mindspore.float32)
>>> lambda_param = Tensor(1.0, mindspore.float32)
>>> output = ops.laplace(shape, mean, lambda_param, seed=5)
>>> print(output.shape)
(2, 3)
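As a sanity check on the variance claim above (Var = 2 * lambda_param ** 2), one can draw a large sample and compare the empirical variance with the closed form; a hedged sketch reusing the imports above, where the sample size and tolerance are illustrative:

>>> big = ops.laplace((100000,), Tensor(0.0, mindspore.float32),
...                   Tensor(1.0, mindspore.float32), seed=5)
>>> print(abs(big.asnumpy().var() - 2.0) < 0.1)   # Var = 2 * 1.0 ** 2 = 2
True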
tinyms.primitives.uniform(shape, minval, maxval, seed=None, dtype=mindspore.float32)[source]

Generates random numbers according to the Uniform random number distribution.

Note

The number in tensor minval should be strictly less than maxval at any position after broadcasting.

Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated.

  • minval (Tensor) – The distribution parameter a. It defines the minimum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • maxval (Tensor) – The distribution parameter b. It defines the maximum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – Type of the Uniform distribution. If it is int32, it generates numbers from discrete uniform distribution; if it is float32, it generates numbers from continuous uniform distribution. It only supports these two data types. Default: mindspore.float32.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of minval and maxval. The dtype is designated as the input dtype.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • TypeError – If the dtype of minval or maxval is neither int32 nor float32, or if the dtype of minval is not the same as that of maxval.

  • TypeError – If seed is not an int.

  • TypeError – If ‘dtype’ is neither int32 nor float32.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> # For discrete uniform distribution, only one number is allowed for both minval and maxval:
>>> shape = (4, 2)
>>> minval = Tensor(1, mindspore.int32)
>>> maxval = Tensor(2, mindspore.int32)
>>> output = ops.uniform(shape, minval, maxval, seed=5, dtype=mindspore.int32)
>>> print(output.shape)
(4, 2)
>>>
>>> # For continuous uniform distribution, minval and maxval can be multi-dimensional:
>>> shape = (3, 1, 2)
>>> minval = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> maxval = Tensor([8.0, 10.0], mindspore.float32)
>>> output = ops.uniform(shape, minval, maxval, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.gamma(shape, alpha, beta, seed=None)[source]

Generates random numbers according to the Gamma random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated.

  • alpha (Tensor) – The \(\alpha\) distribution parameter. It should be greater than 0 with float32 data type.

  • beta (Tensor) – The \(\beta\) distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of alpha and beta. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If alpha or beta is not a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha and beta is not float32.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: alpha_shape is (2, 2)
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> # case 2: alpha_shape is (2, 3), so shape is (3, 1, 3)
>>> shape = (3, 1, 3)
>>> alpha = Tensor(np.array([[1, 3, 4], [2, 5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> # case 3: beta_shape is (1, 2), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0, 2]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> print(output)
[[[ 2.2132034  5.8855834]
  [ 3.3981476  7.5805717]]
 [[ 3.3981476  7.5805717]
  [ 3.7190282 19.941492 ]]
 [[ 2.9512358  2.5969937]
  [ 3.786061   5.160872 ]]]
>>> # case 4: beta_shape is (2, 1), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([[1.0], [2.0]]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> print(output)
[[[ 5.6085486  7.8280783]
  [15.97684   16.116285 ]]
 [[ 1.8347423  1.713663 ]
  [ 3.2434065 15.667398 ]]
 [[ 4.2922077  7.3365674]
  [ 5.3876944 13.159832 ]]]
tinyms.primitives.poisson(shape, mean, seed=None)[source]

The ops.poisson interface is deprecated; please use mindspore.ops.random_poisson instead (see the sketch after the examples below).

Generates random numbers according to the Poisson random number distribution.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input “shape” and shapes of mean. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

  • TypeError – If seed is not an int.

Supported Platforms:

deprecated

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> # case 1: It can be broadcast.
>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(4, 2)
>>> # case 2: It can not be broadcast. It is recommended to use the same shape.
>>> shape = (2, 2)
>>> mean = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(2, 2)
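A rough sketch of the same draw with the recommended replacement; the mindspore.ops.random_poisson signature (a 1-D integer Tensor shape, then rate, seed and dtype) is assumed from the current MindSpore API, and note that it appends the shape of rate to shape rather than broadcasting against it:

>>> shape = Tensor(np.array([4, 1]), mindspore.int32)
>>> rate = Tensor(np.array([5.0, 10.0]), mindspore.float32)
>>> output = ops.random_poisson(shape, rate, seed=5)
>>> print(output.shape)
(4, 1, 2)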
tinyms.primitives.multinomial(input, num_samples, replacement=True, seed=None)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • input (Tensor) – The input tensor containing probabilities, must be 1 or 2 dimensions, with float32 data type.

  • num_samples (int) – Number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not, default: True.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None.

Returns:

Tensor, with the same number of rows as input. Each row contains num_samples sampled indices. The dtype is int32.

Raises:
  • TypeError – If input is not a Tensor or its dtype is not float32.

  • TypeError – If num_samples is not an int.

  • TypeError – If seed is neither an int nor None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> # case 1: The output is random, and the length of the output is the same as num_samples.
>>> input = Tensor([0, 9, 4, 0], mindspore.float32)
>>> output = ops.multinomial(input, 2)
>>> # print(output)
>>> # [1 2] or [2 1]
>>> # Indices are drawn in proportion to their weights, so index 1 (weight 9)
>>> # is drawn most often, and indices 0 and 3 (weight 0) are never drawn.
>>> print(len(output))
2
>>> # case 2: The output is random, and the length of the output is the same as num_samples.
>>> # replacement is True (the default), so the same index can be drawn more than once.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4)
>>> print(output)
[1 1 2 1]
>>> # case 3: The output is random, num_samples == input length == 4, and replacement is True,
>>> # so the same elements can be extracted repeatedly.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4, True)
>>> print(output)
[1 1 2 2]
tinyms.primitives.count_nonzero(x, axis=(), keep_dims=False, dtype=mindspore.int32)[source]

Count number of nonzero elements across axis of input tensor.

Parameters:
  • x (Tensor) – Input data is used to count non-zero numbers. With shape \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Default: (), reduce all dimensions.

  • keep_dims (bool, optional) – Whether to maintain dimensions specified by axis. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

  • dtype (Union[Number, mindspore.bool_], optional) – The data type of the output tensor. Default: mindspore.int32.

Returns:

Tensor, number of nonzero element across axis specified by axis. The data type is specified by dtype.

Raises:
  • TypeError – If axis is not int, tuple or list.

  • ValueError – If any value in axis is not in range [-x.ndim, x.ndim).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: each value specified.
>>> x = Tensor(np.array([[0, 1, 0], [1, 1, 0]]).astype(np.float32))
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0, 1], keep_dims=True, dtype=mindspore.int32)
>>> print(nonzero_num)
[[3]]
>>> # case 2: all value is default.
>>> nonzero_num = ops.count_nonzero(x=x)
>>> print(nonzero_num)
3
>>> # case 3: axis value was specified 0.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0,])
>>> print(nonzero_num)
[1 2 0]
>>> # case 4: axis value was specified 1.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[1,])
>>> print(nonzero_num)
[1 2]
>>> # case 5: keep_dims value was specified.
>>> nonzero_num = ops.count_nonzero(x=x,  keep_dims=True)
>>> print(nonzero_num)
[[3]]
>>> # case 6: keep_dims and axis value was specified.
>>> nonzero_num = ops.count_nonzero(x=x, axis=[0,], keep_dims=True)
>>> print(nonzero_num)
[[1 2 0]]
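For intuition, the operator matches a plain NumPy reduction over a boolean mask; a small cross-check of case 3 on the same data:

>>> x_np = np.array([[0, 1, 0], [1, 1, 0]]).astype(np.float32)
>>> print((x_np != 0).sum(axis=0))   # same result as case 3 above
[1 2 0]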
tinyms.primitives.cummin(input, axis)[source]

Returns a tuple (values, indices) where values is the cumulative minimum of the input Tensor along the dimension axis, and indices is the index location of each minimum value.

\[\begin{split}\begin{array}{ll} \\ y_{i} = min(x_{1}, x_{2}, ... , x_{i}) \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input Tensor, rank of input > 0.

  • axis (int) – The dimension to do the operation over. The value of axis must be in the range [-input.ndim, input.ndim - 1].

Returns:

tuple [Tensor], tuple of 2 Tensors, containing the cumulative minimum of elements and the indices. The shape of each output tensor is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If axis is out the range of [-input.ndim, input.ndim - 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> a = Tensor([-0.2284, -0.6628,  0.0975,  0.2680, -1.3298, -0.4220], mindspore.float32)
>>> output = ops.cummin(a, axis=0)
>>> print(output[0])
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]
>>> print(output[1])
[0 1 1 1 4 4]
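The values part coincides with NumPy's running minimum, which makes a quick cross-check possible; a hedged sketch (the indices part has no one-line NumPy equivalent and is omitted):

>>> import numpy as np
>>> print(np.minimum.accumulate(a.asnumpy()))
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]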
tinyms.primitives.tensor_dot(x1, x2, axes)[source]

Computation of tensor contraction on arbitrary axes between tensors x1 and x2.

Contraction allows for the summation of products of elements of x1 and x2 on specified axes. The same number of axes must be specified for both x1 and x2, and the values must be within the range of the number of dims of both x1 and x2.

Selected dims in both inputs must also match.

axes = 0 leads to an outer product. axes = 1 leads to normal matrix multiplication when both inputs are 2D; it is the same as axes = ((1,),(0,)) for 2D x1 and x2. axes = 2 is the same as axes = ((1,2),(0,1)) for 3D x1 and x2. A concrete sketch follows the examples below.

Parameters:
  • x1 (Tensor) – First tensor in tensor_dot with datatype float16 or float32

  • x2 (Tensor) – Second tensor in tensor_dot with datatype float16 or float32

  • axes (Union[int, tuple(int), tuple(tuple(int)), list(list(int))]) – Single value or tuple/list of length 2 with dimensions specified for x1 and x2 each. If a single value N is passed, the last N dims of x1's shape and the first N dims of x2's shape are automatically picked up in order as the axes for each respectively.

Returns:

Tensor, whose shape consists of the free (non-contracted) axes of x1 followed by those of x2, i.e. \(N + M\) dimensions, where \(N\) and \(M\) are the numbers of free axes in x1 and x2.

Raises:
  • TypeError – If x1 or x2 is not a Tensor.

  • TypeError – If axes is not one of the following: int, tuple, list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> input_x1 = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 1, 2]), mindspore.float32)
>>> output = ops.tensor_dot(input_x1, input_x2, ((0,1),(1,2)))
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]]
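To make the axes shorthand concrete: with 2-D inputs, axes = 1 contracts the last axis of x1 with the first axis of x2, which is exactly a matrix product. A minimal sketch reusing the imports above:

>>> a = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> b = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> print(ops.tensor_dot(a, b, 1).shape)
(2, 4)
>>> print(ops.matmul(a, b).shape)   # same shape as the ordinary matrix product
(2, 4)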
tinyms.primitives.dot(input, other)[source]

Computes the dot product between samples in two tensors.

Parameters:
  • input (Tensor) – First tensor in the Dot op with datatype float16 or float32. The rank must be greater than or equal to 2.

  • other (Tensor) – Second tensor in the Dot op with datatype float16 or float32. The rank must be greater than or equal to 2.

Returns:

Tensor, dot product of input and other.

Raises:
  • TypeError – If type of input and other are not the same.

  • TypeError – If dtype of input or other is not float16 or float32.

  • ValueError – If the rank of input or other is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[3. 3.]]
 [[3. 3.]]]
>>> print(output.shape)
(2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]]
  [[3. 3.]]]]
>>> print(output.shape)
(1, 2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]
   [3. 3.]]
  [[3. 3.]
   [3. 3.]]]]
>>> print(output.shape)
(1, 2, 2, 2)
>>> input = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]]
>>> print(output.shape)
(3, 2, 2, 1, 2)
tinyms.primitives.batch_dot(x1, x2, axes=None)[source]

Computation of batch dot product between samples in two tensors containing batch dims.

\[output = x1[batch, :] * x2[batch, :]\]
Parameters:
  • x1 (Tensor) – First tensor in Batch Dot op with datatype float32 and the rank of x1 must be greater than or equal to 2.

  • x2 (Tensor) – Second tensor in Batch Dot op with datatype float32. The datatype of x2 should be same as x1 and the rank of x2 must be greater than or equal to 2.

  • axes (Union[int, tuple(int), list(int)]) – Single value or tuple/list of length 2 with dimensions specified for x1 and x2 each. If a single value N is passed, the last N dims of x1's shape and the last N dims of x2's shape are automatically picked up in order as the axes for each respectively. Default: None.

Returns:

Tensor, batch dot product of x1 and x2. For example: if x1 has shape (batch, d1, axes, d2) and x2 has shape (batch, d3, axes, d4), the output shape is (batch, d1, d2, d3, d4), where d1, d2, d3 and d4 are any numbers.

Raises:
  • TypeError – If type of x1 and x2 are not the same.

  • TypeError – If dtype of x1 or x2 is not float32.

  • ValueError – If rank of x1 or x2 less than 2.

  • ValueError – If batch dim used in axes.

  • ValueError – If len(axes) less than 2.

  • ValueError – If axes is not one of those: None, int, (int, int).

  • ValueError – If axes reversed from negative int is too low for dimensions of input arrays.

  • ValueError – If axes value is too high for dimensions of input arrays.

  • ValueError – If batch size of x1 and x2 are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.ones(shape=[2, 2, 3]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> axes = (-1, -2)
>>> output = ops.batch_dot(x1, x2, axes)
>>> print(output)
[[[3. 3.]
  [3. 3.]]
 [[3. 3.]
  [3. 3.]]]
>>> x1 = Tensor(np.ones(shape=[2, 2]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> axes = (1, 2)
>>> output = ops.batch_dot(x1, x2, axes)
>>> print(output)
[[2. 2. 2.]
 [2. 2. 2.]]
>>> print(output.shape)
(2, 3)
>>> x1 = Tensor(np.ones(shape=[6, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[6, 5, 4, 8]), mindspore.float32)
>>> output = ops.batch_dot(x1, x2)
>>> print(output.shape)
(6, 2, 3, 5, 8)
>>> x1 = Tensor(np.ones(shape=[2, 2, 4]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[2, 5, 4, 5]), mindspore.float32)
>>> output = ops.batch_dot(x1, x2)
>>> print(output.shape)
(2, 2, 5, 5)
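The first example above is just a per-batch matrix product, so it can be cross-checked in plain NumPy; a hedged sketch via einsum:

>>> a = np.ones(shape=[2, 2, 3], dtype=np.float32)
>>> b = np.ones(shape=[2, 3, 2], dtype=np.float32)
>>> ref = np.einsum('bij,bjk->bik', a, b)   # contract axis -1 of a with axis -2 of b
>>> print(ref.shape, ref[0, 0, 0])
(2, 2, 2) 3.0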
tinyms.primitives.repeat_elements(x, rep, axis=0)[source]

Repeat elements of a tensor along an axis, like np.repeat .

Parameters:
  • x (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • rep (int) – The number of times to repeat, must be positive.

  • axis (int) – The axis along which to repeat, default 0.

Returns:

One tensor with values repeated along the specified axis. If x has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * rep, ..., sn)\). The output type will be the same as the type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1 : repeat on axis 0
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
>>> # case 2 : repeat on axis 1
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 1)
>>> print(output)
[[0 0 1 1 2 2]
 [3 3 4 4 5 5]]
tinyms.primitives.repeat_interleave(input, repeats, axis=None)[source]

Repeat elements of a tensor along an axis, like numpy.repeat.

Parameters:
  • input (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • repeats (int) – The number of times to repeat, must be positive.

  • axis (int, optional) – The axis along which to repeat, default: None. If axis is None, the input Tensor will be flattened and the output will also be flattened.

Returns:

One tensor with values repeated along the specified axis. If input has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * repeats, ..., sn)\). The output type will be the same as the type of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_interleave(input, repeats=2, axis=0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
tinyms.primitives.sequence_mask(lengths, maxlen=None)[source]

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape \((d_1, d_2, ..., d_n)\), then the resulting tensor mask has type and shape \((d_1, d_2, ..., d_n, maxlen)\), with mask \([i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])\).

Parameters:
  • lengths (Tensor) – Tensor to calculate the mask for. All values in this tensor should be less than or equal to maxlen. Values greater than maxlen will be treated as maxlen.

  • maxlen (int) – size of the last dimension of returned tensor. Must be positive and same type as elements in lengths. Default is None.

Returns:

One mask tensor of shape lengths.shape + (maxlen,) .

Raises:
  • TypeError – If lengths is not a Tensor.

  • TypeError – If maxlen is not an int.

  • TypeError – If dtype of lengths is neither int32 nor int64.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: When maxlen is assigned
>>> x = Tensor(np.array([1, 2, 3, 4]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[ True False False False False]
 [ True  True False False False]
 [ True  True  True False False]
 [ True  True  True  True False]]
>>> # case 2: When there is 0 in x
>>> x = Tensor(np.array([[1, 3], [2, 0]]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[[ True False False False False]
  [ True  True  True False False]]
 [[ True  True False False False]
  [False False False False False]]]
>>> # case 3: when the maxlen is not assigned
>>> x = Tensor(np.array([[1, 3], [2, 4]]))
>>> output = ops.sequence_mask(x)
>>> print(output)
[[[ True False False False]
  [ True  True  True False]]
 [[ True  True False False]
  [ True  True  True  True]]]
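The defining comparison mask[..., j] = (j < lengths[...]) can be reproduced directly in NumPy, which is a handy mental model; a sketch of case 1:

>>> lengths = np.array([1, 2, 3, 4])
>>> print(np.arange(5) < lengths[:, None])   # broadcast position j against lengths
[[ True False False False False]
 [ True  True False False False]
 [ True  True  True False False]
 [ True  True  True  True False]]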
tinyms.primitives.matmul(input, other)[source]

Returns the matrix product of two tensors.

Note

Numpy arguments out, casting, order, subok, signature, and extobj are not supported. On both GPU and CPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • input (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

  • other (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

Returns:

Tensor or scalar, the matrix product of the inputs. This is a scalar only when both input and other are 1-D vectors.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of other, or if a scalar value is passed in.

  • ValueError – If the shape of input and other could not broadcast together.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1 : Reasonable application of broadcast mechanism
>>> input = Tensor(np.arange(2*3*4).reshape(2, 3, 4), mindspore.float32)
>>> other = Tensor(np.arange(4*5).reshape(4, 5), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[[[  70.   76.   82.   88.   94.]
  [ 190.  212.  234.  256.  278.]
  [ 310.  348.  386.  424.  462.]]
 [[ 430.  484.  538.  592.  646.]
  [ 550.  620.  690.  760.  830.]
  [ 670.  756.  842.  928. 1014.]]]
>>> print(output.shape)
(2, 3, 5)
>>> # case 2 : the rank of `other` is 1
>>> input = Tensor(np.ones([1, 2]), mindspore.float32)
>>> other = Tensor(np.ones([2,]), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[2.]
>>> print(output.shape)
(1,)
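The broadcasting rule mirrors numpy.matmul, so NumPy can be used to predict shapes and values; a small cross-check of case 1:

>>> a = np.arange(2*3*4).reshape(2, 3, 4).astype(np.float32)
>>> b = np.arange(4*5).reshape(4, 5).astype(np.float32)
>>> print(np.matmul(a, b).shape)
(2, 3, 5)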
tinyms.primitives.mm(input, mat2)[source]

Returns the matrix product of two arrays. If input is a \((n \times m)\) Tensor, mat2 is a \((m \times p)\) Tensor, out will be a \((n \times p)\) Tensor.

Note

This function cannot support broadcasting. Refer to mindspore.ops.matmul() instead if you need a broadcastable function.

Parameters:
  • input (Tensor) – The first matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

  • mat2 (Tensor) – The second matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

Returns:

Tensor or scalar, the matrix product of the inputs.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of mat2.

  • ValueError – If input or mat2 is not a matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> x1 = ms.Tensor(np.random.rand(2, 3))
>>> x2 = ms.Tensor(np.random.rand(3, 4))
>>> out = ops.mm(x1, x2)
>>> print(out.shape)
(2, 4)
class tinyms.primitives.ACos[source]

Computes arccosine of input tensors element-wise.

Refer to mindspore.ops.acos() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> acos = ops.ACos()
>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = acos(x)
>>> print(output)
[0.737726  1.5307857 1.2661036 0.9764105]
class tinyms.primitives.Abs[source]

Returns absolute value of a tensor element-wise.

Refer to mindspore.ops.abs() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> abs = ops.Abs()
>>> output = abs(x)
>>> print(output)
[1. 1. 0.]
class tinyms.primitives.AccumulateNV2[source]

Computes accumulation of all input tensors element-wise.

Refer to mindspore.ops.accumulate_n() for more details.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> class NetAccumulateNV2(nn.Cell):
...     def __init__(self):
...         super(NetAccumulateNV2, self).__init__()
...         self.accumulateNV2 = ops.AccumulateNV2()
...
...     def construct(self, *z):
...         return self.accumulateNV2(z)
...
>>> net = NetAccumulateNV2()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(x, y, x, y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.Acosh[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

Refer to mindspore.ops.acosh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, dtype
>>> acosh = ops.Acosh()
>>> x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), dtype.float32)
>>> output = acosh(x)
>>> print(output)
[0.        0.9624237 1.7627472 5.298292 ]
class tinyms.primitives.Adam(use_locking=False, use_nesterov=False)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

For more details, please refer to mindspore.nn.Adam.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t(\beta_1^{t})\) and \(beta_2^t(\beta_2^{t})\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type can be float16 or float32.

  • m (Parameter) - The 1st moment vector in the updating formula, the shape should be the same as var.

  • v (Parameter) - the 2nd moment vector in the updating formula, the shape should be the same as var.

  • beta1_power (float) - \(beta_1^t(\beta_1^{t})\) in the updating formula.

  • beta2_power (float) - \(beta_2^t(\beta_2^{t})\) in the updating formula.

  • lr (float) - \(l\) in the updating formula, derived from the learning rate \(\alpha\). The paper suggested value for the learning rate is \(0.001\).

  • beta1 (float) - The exponential decay rate for the 1st moment estimations. The paper suggested value is \(0.9\).

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations. The paper suggested value is \(0.999\).

  • epsilon (float) - Term added to the denominator to improve numerical stability. The paper suggested value is \(10^{-8}\).

  • gradient (Tensor) - Gradient, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as Inputs var.

  • m (Tensor) - The same shape and data type as Inputs m.

  • v (Tensor) - The same shape and data type as Inputs v.

Raises:
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If var, m or v is not a Parameter.

  • TypeError – If beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adam = ops.Adam()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                               epsilon, grad)
...         return out
...
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, gradient)
>>> print(net.var.asnumpy())
[[0.9996838 0.9996838]
 [0.9996838 0.9996838]]
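The printed var can be checked against the updating formulas by hand. A hedged scalar sketch with the values from the example (every entry of var, m, v and the gradient is 1):

>>> m = 0.9 * 1 + (1 - 0.9) * 1                       # m = beta1 * m + (1 - beta1) * g
>>> v = 0.999 * 1 + (1 - 0.999) * 1 * 1               # v = beta2 * v + (1 - beta2) * g * g
>>> l = 0.001 * np.sqrt(1 - 0.999) / (1 - 0.9)        # l = lr * sqrt(1 - beta2_power) / (1 - beta1_power)
>>> print(round(1 - l * m / (np.sqrt(v) + 1e-8), 7))  # w = w - l * m / (sqrt(v) + eps)
0.9996838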
class tinyms.primitives.AdamNoUpdateParam(use_locking=False, use_nesterov=False)[source]

Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. This operator does not update the parameter; instead, it calculates the value that should be added to the parameter.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ \Delta{w} = - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(beta_1^t(\beta_1^{t})\) and \(beta_2^t(\beta_2^{t})\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents the parameter to be updated, \(\epsilon\) represents epsilon.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • m (Tensor) - The 1st moment vector in the updating formula. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type must be float32.

  • v (Tensor) - the 2nd moment vector in the updating formula. The shape must be the same as m. The data type must be float32.

  • beta1_power (Tensor) - \(beta_1^t(\beta_1^{t})\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • beta2_power (Tensor) - \(beta_2^t(\beta_2^{t})\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • lr (Tensor) - \(l\) in the updating formula. The shape is \((1, )\) and the data type must be float32.

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations. The shape is \((1, )\) and the data type must be float32.

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations. The shape is \((1, )\) and the data type must be float32.

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability. The shape is \((1, )\) and the data type must be float32.

  • gradient (Tensor) - Gradient, the shape must be the same as m, the data type must be float32.

Outputs:

Tensor, whose shape and data type are the same with Inputs gradient, is a value that should be added to the parameter to be updated.

Raises:
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam = ops.AdamNoUpdateParam()
...         self.m = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="m")
...         self.v = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.adam(self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad)
...         return out
>>> net = Net()
>>> beta1_power = Tensor(0.9, ms.float32)
>>> beta2_power = Tensor(0.999, ms.float32)
>>> lr = Tensor(0.001, ms.float32)
>>> beta1 = Tensor(0.9, ms.float32)
>>> beta2 = Tensor(0.999, ms.float32)
>>> epsilon = Tensor(1e-8, ms.float32)
>>> gradient = Tensor(np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]).astype(np.float32))
>>> result = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient)
>>> print(result)
[[-0.00010004 -0.00010004 -0.00010004]
[-0.00013441 -0.00013441 -0.00013441]]
class tinyms.primitives.AdamWeightDecay(use_locking=False)[source]

Updates gradients by the Adaptive Moment Estimation algorithm with weight decay (AdamWeightDecay).

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. The AdamWeightDecay variant was proposed in Decoupled Weight Decay Regularization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ update = \frac{m}{\sqrt{v} + \epsilon} \\ update = \begin{cases} update + weight\_decay * w & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w = w - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(\beta_1, \beta_2\) represent beta1 and beta2, \(lr\) represents learning_rate, \(w\) represents var, \(decay\) represents weight_decay, \(\epsilon\) represents epsilon.

Parameters:

use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type can be float16 or float32.

  • m (Parameter) - The 1st moment vector in the updating formula, it should have the same shape as var. The data type can be float16 or float32.

  • v (Parameter) - The 2nd moment vector in the updating formula, it should have the same shape and dtype as m.

  • lr (float) - \(lr\) in the updating formula (the learning rate); the data type should be float32. The paper suggested value is \(0.001\).

  • beta1 (float) - The exponential decay rate for the 1st moment estimations, the data type should be float32. The paper suggested value is \(0.9\)

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations, the data type should be float32. The paper suggested value is \(0.999\)

  • epsilon (float) - Term added to the denominator to improve numerical stability, the data type should be float32.

  • decay (float) - The weight decay value, must be a scalar tensor with float32 data type. Default: 0.0.

  • gradient (Tensor) - Gradient, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If lr, beta1, beta2, epsilon or decay is not a float32.

  • TypeError – If var, m or v is not a Parameter with dtype float16 or float32.

  • TypeError – If gradient is not a Tensor.

  • ValueError – If epsilon <= 0.

  • ValueError – If beta1 or beta2 is not in range (0.0, 1.0).

  • ValueError – If decay < 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam_weight_decay = ops.AdamWeightDecay()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, lr, beta1, beta2, epsilon, decay, grad):
...         out = self.adam_weight_decay(self.var, self.m, self.v, lr, beta1, beta2,
...                               epsilon, decay, grad)
...         return out
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.001, 0.9, 0.999, 1e-8, 0.0, gradient)
>>> print(net.var.asnumpy())
[[0.999 0.999]
 [0.999 0.999]]
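With decay = 0.0 as above, the step reduces to w = w - lr * m / (sqrt(v) + eps) = 1 - 0.001 = 0.999, matching the printout. With a positive decay the extra weight_decay * w term enters the update; a hedged scalar sketch with decay = 0.01:

>>> m = 0.9 * 1 + (1 - 0.9) * 1                      # 1.0
>>> v = 0.999 * 1 + (1 - 0.999) * 1 * 1              # 1.0
>>> update = m / (np.sqrt(v) + 1e-8) + 0.01 * 1.0    # update + weight_decay * w
>>> print(round(1 - 0.001 * update, 7))              # w = w - lr * update
0.99899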
class tinyms.primitives.AdaptiveAvgPool2D(output_size)[source]

AdaptiveAvgPool2D operation.

Refer to mindspore.ops.adaptive_avg_pool2d() for more details.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D((None, 2))
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]]
>>> # case 2: output_size=2
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D(2)
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_avg_pool_2d = ops.AdaptiveAvgPool2D((1, 2))
>>> output = adaptive_avg_pool_2d(input_x)
>>> print(output)
[[[4.5 5.5]]
 [[4.5 5.5]]
 [[4.5 5.5]]]
class tinyms.primitives.AdaptiveAvgPool3D(output_size)[source]

AdaptiveAvgPool3D operation.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.adaptive_avg_pool3d() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import nn, Tensor
>>> from mindspore.ops import AdaptiveAvgPool3D
>>> class AdaptiveAvgPool3DNet(nn.Cell):
...     def __init__(self, output_size):
...         super(AdaptiveAvgPool3DNet, self).__init__()
...         self.output_size_ = output_size
...         self.adaptive_avg_pool_3d = AdaptiveAvgPool3D(self.output_size_)
...     def construct(self, x_):
...         return self.adaptive_avg_pool_3d(x_)
...
>>> output_size=(1,1,1)
>>> input_x_val = np.zeros((1,1,2,2,2))
>>> input_x_val[:,:,0,:,:]  += 1
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> adaptive_avg_pool_3d = AdaptiveAvgPool3DNet(output_size)
>>> output = adaptive_avg_pool_3d(input_x)
>>> print(output)
[[[[[0.5]]]]]
class tinyms.primitives.AdaptiveMaxPool2D(output_size)[source]

Performs 2D adaptive max pooling on a multi-plane input signal.

Refer to mindspore.ops.adaptive_max_pool2d() for more details.

Parameters:

output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If it is None, it means the output size is the same as the input size.

Inputs:
  • input_x (Tensor) - The input of AdaptiveMaxPool2D, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

Outputs:

Tensor, with the same type as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((None, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D(2)
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((1, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
class tinyms.primitives.AdaptiveMaxPool3D[source]

Performs 3D adaptive max pooling on a multi-plane input signal.

Refer to mindspore.ops.adaptive_max_pool3d() for more details.

Inputs:
  • x (Tensor) - Tensor, with shape \((C, D, H, W)\) or \((N, C, D, H, W)\).

  • output_size (Union[int, tuple]) - The specified output size, which is an integer that represents depth, height and width, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer. If it is None, the output size and input size of the corresponding dimension are the same.

Outputs:
  • y (Tensor) - Tensor, with the same number of dims and data type as the input.

  • argmax (Tensor) - Tensor, the indices of the max values, which has the same shape as y and whose data type is int32.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> class AdaptiveMaxPool3DNet(nn.Cell):
...     def __init__(self):
...         super(AdaptiveMaxPool3DNet, self).__init__()
...         self.adaptive_max_pool_3d = ops.AdaptiveMaxPool3D()
...     def construct(self, x_, output_size_):
...         return self.adaptive_max_pool_3d(x_, output_size_)
>>> x = np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32)
>>> output_size = np.array([1, 1, 2], dtype=np.int32)
>>> net = AdaptiveMaxPool3DNet()
>>> output = net(Tensor(x), Tensor(output_size))
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
[[[[33 35]]]]
class tinyms.primitives.Add[source]

Adds two input tensors element-wise.

Refer to mindspore.ops.add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: x and y are both Tensor.
>>> add = ops.Add()
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(x, y)
>>> print(output)
[5. 7. 9.]
>>> # case 2: x is a scalar and y is a Tensor
>>> add = ops.Add()
>>> x = Tensor(1, mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = add(x, y)
>>> print(output)
[5. 6. 7.]
>>> # the data type of x is int32, the data type of y is float32,
>>> # and the output is the data format of higher precision float32.
>>> print(output.dtype)
Float32
class tinyms.primitives.AddN[source]

Computes addition of all input tensors element-wise.

Refer to mindspore.ops.addn() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> class NetAddN(nn.Cell):
...     def __init__(self):
...         super(NetAddN, self).__init__()
...         self.addN = ops.AddN()
...
...     def construct(self, *z):
...         return self.addN(z)
...
>>> net = NetAddN()
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = net(x, y, x, y)
>>> print(output)
[10. 14. 18.]
class tinyms.primitives.Addcdiv[source]

Performs the element-wise division of tensor x1 by tensor x2, multiplies the result by the scalar value, and adds it to input_data.

\[y[i] = input\_data[i] + value[i] * (x1[i] / x2[i])\]
Inputs:
  • input_data (Tensor) - The tensor to be added.

  • x1 (Tensor) - The numerator tensor.

  • x2 (Tensor) - The denominator tensor.

  • value (Tensor) - The multiplier for tensor x1/x2.

Outputs:

Tensor, has the same shape and dtype as x1/x2.

Raises:
  • TypeError – If x1, x2, value or input_data is not a Tensor.

  • TypeError – If the dtypes of x1, x2, value and input_data are not the same.

  • ValueError – If x1 could not be broadcast to x2.

  • ValueError – If value could not be broadcast to x1/x2.

  • ValueError – If input_data could not be broadcast to value*(x1/x2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_data = Tensor(np.array([1, 1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([1, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.array([4, 3, 2, 1]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> addcdiv = ops.Addcdiv()
>>> y = addcdiv(input_data, x1, x2, value)
>>> print(y)
[1.25      1.6666667 2.5       5.       ]
class tinyms.primitives.Addcmul[source]

Performs the element-wise product of tensor x1 and tensor x2, multiplies the result by the scalar value, and adds it to input_data.

\[output[i] = input\_data[i] + value[i] * (x1[i] * x2[i])\]
Inputs:
  • input_data (Tensor) - The tensor to be added.

  • x1 (Tensor) - The tensor to be multiplied.

  • x2 (Tensor) - The tensor to be multiplied.

  • value (Tensor) - The multiplier for tensor x1*x2.

Outputs:

Tensor, has the same shape and dtype as x1*x2.

Raises:
  • TypeError – If x1, x2, value or input_data is not a Tensor.

  • TypeError – If the dtypes of x1, x2, value and input_data are not the same.

  • ValueError – If x1 could not be broadcast to x2.

  • ValueError – If value could not be broadcast to x1 * x2.

  • ValueError – If input_data could not be broadcast to value*(x1*x2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_data = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> x2 = Tensor(np.array([[1, 2, 3]]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> addcmul = ops.Addcmul()
>>> y = addcmul(input_data, x1, x2, value)
>>> print(y)
[[ 2.  3.  4.]
 [ 3.  5.  7.]
 [ 4.  7. 10.]]
class tinyms.primitives.AdjustHue[source]

Adjust hue of RGB images.

Note

A convenience method that transforms an RGB image to float representation. The image is adjusted by converting it to HSV, shifting the intensities in the hue channel, and converting back to the original data type. It is recommended to minimize the number of redundant transformations when several adjustments are chained.

Inputs:
  • image (Tensor): RGB image or images, a Tensor has at least 3-D. The last dimension is interpreted as channels whose size must be three. the dtype is float16 or float32.

  • delta (Tensor): How much to add to the hue channel, the dtype is float32. Must be 0-D.

Outputs:

Adjusted image(s), same shape and dtype as image.

Raises:
  • TypeError – If image or delta is not a Tensor.

  • TypeError – If the dtype of image is neither float16 nor float32.

  • TypeError – If the dtype of delta not float32.

  • ValueError – If the dimension of image is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> class AdjustHue(nn.Cell):
...   def __init__(self):
...     super(AdjustHue, self).__init__()
...     self.adjustHue = ops.AdjustHue()
...   def construct(self, image, delta):
...     return self.adjustHue(image, delta)
...
>>> image = np.array([[[1, 2, 3], [4, 5, 6]],
...                   [[7, 8, 9], [10, 11, 12]],
...                   [[13, 14, 15], [16, 17, 18]]]).astype(np.float32)
>>> delta = 0.2
>>> adjust_hue = AdjustHue()
>>> output = adjust_hue(Tensor(image), Tensor(delta))
>>> print("output", output)
output [[[ 2.3999996  1.         3.       ]
         [ 5.3999996  4.         6.       ]]
        [[ 8.4        7.         9.       ]
         [11.4       10.        12.       ]]
        [[14.4       13.        15.       ]
         [17.4       16.        18.       ]]]
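The HSV round trip described in the note can be reproduced per pixel with Python's standard colorsys module; a hedged sketch for the first pixel of the example (the op's float32 arithmetic may differ in the last digits):

>>> import colorsys
>>> h, s, v = colorsys.rgb_to_hsv(1.0, 2.0, 3.0)
>>> r, g, b = colorsys.hsv_to_rgb((h + 0.2) % 1.0, s, v)   # shift the hue, convert back
>>> print(round(r, 4), round(g, 4), round(b, 4))
2.4 1.0 3.0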
class tinyms.primitives.AdjustSaturation[source]

Adjust saturation of RGB images.

Note

This is a convenience method that converts RGB images to float representation, converts them to HSV, adds an offset to the saturation channel, converts back to RGB and then back to the original data type. If several adjustments are chained it is advisable to minimize the number of redundant conversions.

Inputs:
  • image (Tensor) - Images to adjust. Must be one of the following types: float16, float32. At least 3-D. The last dimension is interpreted as channels, and must be three.

  • scale (Tensor) - A scale factor determines the amount of saturation adjustment to apply to the image. A value greater than 1.0 increases the saturation, while a value less than 1.0 decreases the saturation. A value of 1.0 leaves the saturation unchanged. Must be 0-D Tensor of type float32.

Outputs:

Adjusted image(s), same shape and dtype as image.

Raises:
  • TypeError – If any input is not a Tensor.

  • TypeError – If the type of image is not one of the following dtype: float16, float32.

  • TypeError – If the type of scale is not float32.

  • ValueError – If the dimension of the ‘image’ is less than 3.

  • ValueError – If the last dimension of the ‘image’ is not 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> x = Tensor([[[1.0, 2.0, 3.0],
...       [4.0, 5.0, 6.0]],
...     [[7.0, 8.0, 9.0],
...       [10.0, 11.0, 12.0]]])
>>> scale = Tensor(float(0.5))
>>> adjustsaturation = ops.AdjustSaturation()
>>> output = adjustsaturation(x, scale)
>>> print(output)
[[[ 2.         2.4999998  3.       ]
  [ 5.         5.5        6.       ]]
 [[ 8.         8.5        9.       ]
  [11.        11.5       12.       ]]]
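The same kind of HSV round-trip cross-check works here, scaling the saturation channel instead of shifting hue. Again a NumPy/colorsys sketch of the documented behavior, not the operator's implementation:

>>> import colorsys
>>> import numpy as np
>>> def adjust_saturation_ref(image, scale):
...     flat = image.reshape(-1, 3)
...     out = np.empty_like(flat)
...     for i, (r, g, b) in enumerate(flat):
...         h, s, v = colorsys.rgb_to_hsv(r, g, b)
...         out[i] = colorsys.hsv_to_rgb(h, s * scale, v)
...     return out.reshape(image.shape)
...
>>> x = np.arange(1.0, 13.0, dtype=np.float32).reshape(2, 2, 3)  # same values as above
>>> print(adjust_saturation_ref(x, 0.5)[0, 0])  # ~[2. 2.5 3.], matching the output above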
class tinyms.primitives.AffineGrid(align_corners=False)[source]

Creates a 2D or 3D flow field (sampling grid) based on a batch of affine matrices theta.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.affine_grid() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> affinegrid = ops.AffineGrid(align_corners=False)
>>> theta = Tensor([[[0.8, 0.5, 0],[-0.5, 0.8, 0]]], mindspore.float32)
>>> out_size = (1, 3, 2, 3)
>>> output = affinegrid(theta, out_size)
>>> print(output)
[[[[-0.78333336 -0.06666666]
   [-0.25       -0.4       ]
   [ 0.28333336 -0.73333335]]
  [[-0.28333336  0.73333335]
   [ 0.25        0.4       ]
   [ 0.78333336  0.06666666]]]]
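The grid can be re-derived by hand: build the normalized sampling coordinates (with align_corners=False they sit at pixel centers) and multiply by theta. A NumPy sketch of that math, assuming the standard affine_grid convention; the channel count C does not affect the grid itself:

>>> import numpy as np
>>> theta = np.array([[0.8, 0.5, 0.0], [-0.5, 0.8, 0.0]], dtype=np.float32)
>>> N, C, H, W = 1, 3, 2, 3
>>> xs = (2.0 * np.arange(W) + 1.0) / W - 1.0   # pixel-center x coords in [-1, 1]
>>> ys = (2.0 * np.arange(H) + 1.0) / H - 1.0   # pixel-center y coords in [-1, 1]
>>> gx, gy = np.meshgrid(xs, ys)                # (H, W) coordinate planes
>>> homo = np.stack([gx, gy, np.ones_like(gx)], axis=-1)  # (H, W, 3) homogeneous coords
>>> grid = homo @ theta.T                       # (H, W, 2), matches the output above
>>> print(grid[0, 0])  # ~[-0.78333336 -0.06666666]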
class tinyms.primitives.AllGather(group='hccl_world_group')[source]

Gathers tensors from the specified communication group.

Note

  • The tensors must have the same shape and format in all processes of the collection.

  • Currently, only GRAPH_MODE is supported, and the operator should be called inside a Cell.

Parameters:

group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the number of devices in the group is N, then the shape of the output is \((N \cdot x_1, x_2, ..., x_R)\), i.e., the tensors from all devices are concatenated along the first dimension.

Raises:
  • TypeError – If group is not a str.

  • ValueError – If the local rank id of the calling process in the group is larger than the group’s rank size.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import mindspore.nn as nn
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allgather = ops.AllGather()
...
...     def construct(self, x):
...         return self.allgather(x)
...
>>> input_x = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
[[1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1.]]
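Shape-wise, the result can be pictured as a concatenation along the first axis: each of the N ranks contributes its local tensor, so a [2, 8] input on 2 devices becomes a [4, 8] output everywhere. A tiny NumPy sketch of just the shape semantics, not actual communication:

>>> import numpy as np
>>> world = 2
>>> local = np.ones([2, 8], np.float32)             # what each rank holds in this example
>>> gathered = np.concatenate([local] * world, 0)   # simulated result on every rank
>>> print(gathered.shape)
(4, 8)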
class tinyms.primitives.AllReduce(op='sum', group='hccl_world_group')[source]

Reduces the tensor data across all devices in such a way that all devices will get the same final result.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters:
  • op (str) – Specifies an operation used for element-wise reductions, like sum, prod, max, and min. On the CPU, only ‘sum’ is supported. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape of the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.

Raises:

TypeError – If op or group is not a str, if fusion is not an integer, or if the input’s dtype is bool.

Supported Platforms:

Ascend GPU CPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with multiple devices.

>>> import numpy as np
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.AlltoAll(split_count, split_dim, concat_dim, group='hccl_world_group')[source]

AlltoAll is a collective operation.

AlltoAll sends data from all processes to all processes in the specified group. It has two phases:

  • The scatter phase: On each process, the operand is split into split_count blocks along split_dim, and the blocks are scattered to all processes, e.g., the i-th block is sent to the i-th process.

  • The gather phase: Each process concatenates the received blocks along concat_dim.

Note

This operator requires a full-mesh network topology: each device must have the same VLAN ID, and the IP and mask must be in the same subnet. Please check the details.

Parameters:
  • split_count (int) – On each process, the number of blocks the input is divided into.

  • split_dim (int) – On each process, split blocks along the split_dim.

  • concat_dim (int) – On each process, concatenate the received blocks along concat_dim.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the shape of input tensor is \((x_1, x_2, ..., x_R)\), then the shape of output tensor is \((y_1, y_2, ..., y_R)\), where:

  • \(y_{split\_dim} = x_{split\_dim} / split\_count\)

  • \(y_{concat\_dim} = x_{concat\_dim} * split\_count\)

  • \(y_{other} = x_{other}\).

Raises:

TypeError – If group is not a string.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 8 devices.

>>> import os
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.alltoall = ops.AlltoAll(split_count = 8, split_dim = -2, concat_dim = -1)
...
...     def construct(self, x):
...         out = self.alltoall(x)
...         return out
...
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend')
>>> init()
>>> net = Net()
>>> rank_id = int(os.getenv("RANK_ID"))
>>> input_x = Tensor(np.ones([1, 1, 8, 1]) * rank_id, dtype = ms.float32)
>>> output = net(input_x)
>>> print(output)
[[[[0. 1. 2. 3. 4. 5. 6. 7.]]]]
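The scatter/gather phases can be simulated in a single process to see why each rank ends up with [0 … 7]: rank r's i-th block (along split_dim) goes to rank i, and each rank concatenates what it received along concat_dim. A NumPy sketch of the 8-device example above, not actual communication code:

>>> import numpy as np
>>> world = 8
>>> inputs = [np.ones([1, 1, 8, 1], np.float32) * r for r in range(world)]
>>> blocks = [np.split(x, world, axis=-2) for x in inputs]   # scatter phase
>>> outputs = [np.concatenate([blocks[src][r] for src in range(world)], axis=-1)
...            for r in range(world)]                        # gather phase
>>> print(outputs[0])
[[[[0. 1. 2. 3. 4. 5. 6. 7.]]]]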
class tinyms.primitives.Angle[source]

Returns the element-wise argument of a complex tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.angle() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor([-1.5 + 7.8j, 3 + 5.75j], mindspore.complex64)
>>> angle = ops.Angle()
>>> output = angle(input)
>>> print(output)
[1.7607845 1.0899091]
class tinyms.primitives.ApplyAdaMax[source]

Updates relevant entries according to the adamax scheme.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta_1 * m_{t} + (1 - \beta_1) * g \\ v_{t+1} = \max(\beta_2 * v_{t}, \left| g \right|) \\ var = var - \frac{l}{1 - \beta_1^{t+1}} * \frac{m_{t+1}}{v_{t+1} + \epsilon} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(v\) represents the 2nd moment vector, \(v_{t}\) is the last moment of \(v_{t+1}\), \(l\) represents scaling factor lr, \(g\) represents grad, \(\beta_1, \beta_2\) represent beta1 and beta2, \(\beta_1^{t+1}\) represents beta1_power, \(var\) represents the variable to be updated, \(\epsilon\) represents epsilon.

Inputs of var, m, v and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Variable to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and type as var. With float32 or float16 data type.

  • v (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients with the same shape and type as var. With float32 or float16 data type.

  • beta1_power (Union[Number, Tensor]) - \(beta_1^t\) in the updating formula, must be a scalar. With float32 or float16 data type.

  • lr (Union[Number, Tensor]) - Learning rate, \(l\) in the updating formula, must be a scalar. With float32 or float16 data type.

  • beta1 (Union[Number, Tensor]) - The exponential decay rate for the 1st moment estimations, must be a scalar. With float32 or float16 data type.

  • beta2 (Union[Number, Tensor]) - The exponential decay rate for the 2nd moment estimations, must be a scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient, has the same shape and type as var. With float32 or float16 data type.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Raises:
  • TypeError – If dtype of var, m, v, beta1_power, lr, beta1, beta2, epsilon or grad is neither float16 nor float32.

  • TypeError – If beta1_power, lr, beta1, beta2 or epsilon is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of var, m, v and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_ada_max = ops.ApplyAdaMax()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.array([[0.9, 0.1],
...                                             [0.7, 0.8]]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_ada_max(self.var, self.m, self.v, beta1_power, lr, beta1, beta2, epsilon, grad)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.99, mindspore.float32)
>>> epsilon = Tensor(1e-10, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(beta1_power, lr, beta1, beta2, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.93602717e-01,  3.92571449e-01],
 [ 9.72582996e-02,  4.92249995e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.69999993e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000005e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 8.90999973e-01,  6.99999988e-01],
 [ 6.93000019e-01,  8.00000012e-01]]))
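The printed tuple can be re-derived directly from the update formulas above. The following is a plain NumPy cross-check of one AdaMax step with the example's values, not the operator itself:

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> m = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> v = np.array([[0.9, 0.1], [0.7, 0.8]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> beta1_power, lr, beta1, beta2, eps = 0.9, 0.001, 0.9, 0.99, 1e-10
>>> m = beta1 * m + (1 - beta1) * grad                  # 1st moment
>>> v = np.maximum(beta2 * v, np.abs(grad))             # infinity-norm 2nd moment
>>> var = var - lr / (1 - beta1_power) * m / (v + eps)  # parameter update
>>> print(var)  # ~[[0.5936027 0.3925714] [0.0972583 0.49225  ]]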
class tinyms.primitives.ApplyAdadelta[source]

Updates relevant entries according to the adadelta scheme.

The Adadelta algorithm is proposed in ADADELTA: AN ADAPTIVE LEARNING RATE METHOD.

\[\begin{split}\begin{array}{ll} \\ \text{accum} = \rho * \text{accum} + (1 - \rho) * \text{grad}^2 \\ \text{update} = \sqrt{\text{accum_update} + \epsilon} * \frac{\text{grad}}{\sqrt{\text{accum} + \epsilon}} \\ \text{accum_update} = \rho * \text{accum_update} + (1 - \rho) * \text{update}^2 \\ \text{var} = \text{var} - \text{lr} * \text{update} \end{array}\end{split}\]

where \(\rho\) represents rho, \(\epsilon\) represents epsilon.

Inputs of var, accum, accum_update and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Weights to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated, has the same shape and data type as var.

  • accum_update (Parameter) - Accum_update to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - Learning rate, must be a scalar. With float32 or float16 data type.

  • rho (Union[Number, Tensor]) - Decay rate, must be a scalar. With float32 or float16 data type.

  • epsilon (Union[Number, Tensor]) - A small value added for numerical stability, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - Gradients, has the same shape and data type as var.

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

  • accum_update (Tensor) - The same shape and data type as accum_update.

Raises:
  • TypeError – If dtype of var, accum, accum_update, lr, rho, epsilon or grad is neither float16 nor float32.

  • TypeError – If accum_update, lr, rho or epsilon is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var, accum, accum_update and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adadelta = ops.ApplyAdadelta()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.accum_update = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                        [0.7, 0.8]]).astype(np.float32)),
...                                                             name="accum_update")
...     def construct(self, lr, rho, epsilon, grad):
...         out = self.apply_adadelta(self.var, self.accum, self.accum_update, lr, rho, epsilon, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> rho = Tensor(0.0, mindspore.float32)
>>> epsilon = Tensor(1e-6, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, rho, epsilon, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99051356e-01,  3.99683774e-01],
 [ 9.91633832e-02,  4.99105573e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.00000036e-02,  4.89999980e-01],
 [ 1.00000007e-02,  6.40000045e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 8.99990857e-01,  1.00000791e-01],
 [ 6.99930906e-01,  7.99999774e-01]]))
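As above, the outputs follow directly from the formulas; with rho = 0 the accumulators simply take the current step's values. A NumPy re-derivation of the example, for cross-checking only:

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> accum = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> accum_update = np.array([[0.9, 0.1], [0.7, 0.8]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> lr, rho, eps = 0.001, 0.0, 1e-6
>>> accum = rho * accum + (1 - rho) * grad ** 2
>>> update = np.sqrt(accum_update + eps) * grad / np.sqrt(accum + eps)
>>> accum_update = rho * accum_update + (1 - rho) * update ** 2
>>> var = var - lr * update
>>> print(var)  # ~[[0.5990514 0.3996838] [0.0991634 0.4991056]]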
class tinyms.primitives.ApplyAdagrad(update_slots=True)[source]

Updates relevant entries according to the adagrad scheme. The Adagrad algorithm was proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. This module can adaptively assign different learning rates for each parameter in view of the uneven number of samples for different parameters.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum}} \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float or complex data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float or complex data type.

  • grad (Tensor) - A tensor for gradient. The shape and data type must be the same as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If dtype of var, accum, lr or grad is neither float nor complex.

  • TypeError – If lr is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad = ops.ApplyAdagrad()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad(self.var, self.accum, lr, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99638879e-01,  3.99296492e-01],
 [ 9.97817814e-02,  4.99281585e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
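The two returned tensors can be checked by hand from the formula: accum grows by grad squared, and var shrinks by lr * grad / sqrt(accum). A NumPy cross-check of the example step:

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> accum = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> lr = 0.001
>>> accum += grad * grad
>>> var -= lr * grad / np.sqrt(accum)
>>> print(accum)  # [[0.69 0.99] [0.21 1.24]]
>>> print(var)    # ~[[0.5996389 0.3992965] [0.0997818 0.4992816]]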
class tinyms.primitives.ApplyAdagradDA(use_locking=False)[source]

Update var according to the proximal adagrad scheme. The Adagrad algorithm was proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.

\[\begin{split}\begin{array}{ll} \\ grad\_accum += grad \\ grad\_squared\_accum += grad * grad \\ tmp\_val= \begin{cases} sign(grad\_accum) * max\left \{|grad\_accum|-l1*global\_step, 0\right \} & \text{ if } l1>0 \\ grad\_accum & \text{ otherwise } \\ \end{cases} \\ x\_value = -1 * lr * tmp\_val \\ y\_value = l2 * global\_step * lr + \sqrt{grad\_squared\_accum} \\ var = \frac{ x\_value }{ y\_value } \end{array}\end{split}\]

Inputs of var, gradient_accumulator, gradient_squared_accumulator and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If True, updating of the var and accum tensors will be protected by a lock. Otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • gradient_accumulator (Parameter) - The accumulator \(grad\_accum\) to be updated. Must have the same shape and dtype as var.

  • gradient_squared_accumulator (Parameter) - The accumulator \(grad\_squared\_accum\) to be updated. Must have the same shape and dtype as var.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • lr ([Number, Tensor]) - Scaling factor. Must be a scalar. With float32 or float16 data type.

  • l1 ([Number, Tensor]) - L1 regularization. Must be a scalar. With float32 or float16 data type.

  • l2 ([Number, Tensor]) - L2 regularization. Must be a scalar. With float32 or float16 data type.

  • global_step ([Number, Tensor]) - Training step number. Must be a scalar. With int32 or int64 data type.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • gradient_accumulator (Tensor) - The same shape and data type as gradient_accumulator.

  • gradient_squared_accumulator (Tensor) - The same shape and data type as gradient_squared_accumulator.

Raises:
  • TypeError – If var, gradient_accumulator or gradient_squared_accumulator is not a Parameter.

  • TypeError – If grad is not a Tensor.

  • TypeError – If lr, l1, l2 or global_step is neither a Number nor a Tensor.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, gradient_accumulator, gradient_squared_accumulator, grad, lr, l1 or l2 is neither float16 nor float32.

  • TypeError – If dtype of gradient_accumulator, gradient_squared_accumulator or grad is not same as var.

  • TypeError – If dtype of global_step is neither int32 nor int64.

  • ValueError – If the shape size of lr, l1, l2 and global_step is not 0.

  • RuntimeError – If the data type of var, gradient_accumulator, gradient_squared_accumulator and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> from mindspore import dtype as mstype
>>> class ApplyAdagradDANet(nn.Cell):
...     def __init__(self, use_locking=False):
...         super(ApplyAdagradDANet, self).__init__()
...         self.apply_adagrad_d_a = ops.ApplyAdagradDA(use_locking)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4], [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.gradient_accumulator = Parameter(Tensor(np.array([[0.1, 0.3],
...                                                                [0.1, 0.5]]).astype(np.float32)),
...                                               name="gradient_accumulator")
...         self.gradient_squared_accumulator = Parameter(Tensor(np.array([[0.2, 0.1],
...                                                                        [0.1, 0.2]]).astype(np.float32)),
...                                                       name="gradient_squared_accumulator")
...     def construct(self, grad, lr, l1, l2, global_step):
...         out = self.apply_adagrad_d_a(self.var, self.gradient_accumulator,
...                                      self.gradient_squared_accumulator, grad, lr, l1, l2, global_step)
...         return out
...
>>> net = ApplyAdagradDANet()
>>> grad = Tensor(np.array([[0.3, 0.4], [0.1, 0.2]]).astype(np.float32))
>>> lr = Tensor(0.001, mstype.float32)
>>> l1 = Tensor(0.001, mstype.float32)
>>> l2 = Tensor(0.001, mstype.float32)
>>> global_step = Tensor(2, mstype.int32)
>>> output = net(grad, lr, l1, l2, global_step)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[-7.39064650e-04, -1.36888528e-03],
 [-5.96988888e-04, -1.42478070e-03]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.00000006e-01,  7.00000048e-01],
 [ 2.00000003e-01,  6.99999988e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.90000021e-01,  2.60000020e-01],
 [ 1.09999999e-01,  2.40000010e-01]]))
class tinyms.primitives.ApplyAdagradV2(epsilon, update_slots=True)[source]

Updates relevant entries according to the adagradv2 scheme.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon} \end{array}\end{split}\]

where \(\epsilon\) represents epsilon.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Note

The difference is that ApplyAdagradV2 has one more small constant value \(\epsilon\) than ApplyAdagrad.

Parameters:
  • epsilon (float) – A small value added for numerical stability.

  • update_slots (bool) – If True, accum will be updated. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. With float16 or float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. The shape and data type must be the same as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If dtype of var, accum, lr or grad is neither float16 nor float32.

  • TypeError – If lr is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adagrad_v2 = ops.ApplyAdagradV2(epsilon=1e-6)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad):
...         out = self.apply_adagrad_v2(self.var, self.accum, lr, grad)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.001, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99638879e-01,  3.99296492e-01],
 [ 9.97817814e-02,  4.99281585e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
class tinyms.primitives.ApplyAdamWithAmsgrad(beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False)[source]

Update var according to the Adam algorithm.

\[\begin{split}\begin{array}{ll} \\ lr_t:=learning\_rate*\sqrt{1-\beta_2^t}/(1-\beta_1^t) \\ m_t:=\beta_1*m_{t-1}+(1-\beta_1)*g \\ v_t:=\beta_2*v_{t-1}+(1-\beta_2)*g*g \\ \hat v_t:=\max(\hat v_{t-1}, v_t) \\ var:=var-lr_t*m_t/(\sqrt{\hat v_t}+\epsilon) \\ \end{array}\end{split}\]
Parameters:
  • beta1 (float) – The exponential decay rate for the 1st moment estimates. Must be a scalar.

  • beta2 (float) – The exponential decay rate for the 2nd moment estimates. Must be a scalar.

  • epsilon (float) – A small value added for numerical stability (ridge term). Must be a scalar.

  • use_locking (bool) – If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type can be float16 or float32.

  • m (Parameter) - The 1st moment vector in the updating formula, the shape and data type value should be the same as var.

  • v (Parameter) - the 2nd moment vector in the updating formula, the shape and data type value should be the same as var.

  • vhat (Parameter) - \(\hat v_t\) in the updating formula, the shape and data type value should be the same as var.

  • beta1_power (Union[float, Tensor]) - \(beta_1^t(\beta_1^{t})\) in the updating formula, a scalar tensor with float16 or float32 data type.

  • beta2_power (Union[float, Tensor]) - \(beta_2^t(\beta_2^{t})\) in the updating formula, a scalar tensor with float16 or float32 data type.

  • lr (Union[float, Tensor]) - Scaling factor, a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - The gradient, has the same shape and data type as var.

Outputs:

Tuple of 4 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

  • vhat (Tensor) - The same shape and data type as vhat.

Raises:
  • TypeError – If var, m, v or vhat is not a Parameter.

  • TypeError – If beta1_power, beta2_power or lr is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • TypeError – If dtype of var, m, v, vhat, beta1_power, beta2_power, lr or grad is neither float32 nor float16.

  • ValueError – If m, v, vhat or grad doesn’t have the same shape as var.

  • ValueError – If the shape size of beta1_power, beta2_power or lr is not 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> from mindspore import dtype as mstype
>>> class ApplyAdamWithAmsgradNet(nn.Cell):
...     def __init__(self, beta1=0.9, beta2=0.999, epsilon=1e-8, use_locking=False):
...         super(ApplyAdamWithAmsgradNet, self).__init__()
...         self.apply_adam_with_amsgrad = ops.ApplyAdamWithAmsgrad(beta1, beta2, epsilon, use_locking)
...         self.var = Parameter(Tensor(np.array([[0.2, 0.2], [0.2, 0.2]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.1, 0.2], [0.4, 0.3]]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.array([[0.2, 0.1], [0.3, 0.4]]).astype(np.float32)), name="v")
...         self.vhat = Parameter(Tensor(np.array([[0.1, 0.2], [0.6, 0.2]]).astype(np.float32)), name="vhat")
...     def construct(self, beta1_power, beta2_power, lr, grad):
...         out = self.apply_adam_with_amsgrad(self.var, self.m, self.v, self.vhat,
...                                            beta1_power, beta2_power, lr, grad)
...         return out
>>> net = ApplyAdamWithAmsgradNet()
>>> grad = Tensor(np.array([[0.4, 0.2], [0.2, 0.3]]).astype(np.float32))
>>> output = net(Tensor(0.9, mstype.float32), Tensor(0.999, mstype.float32), Tensor(0.01, mstype.float32), grad)
>>> print(net.var.asnumpy())
[[0.19908068 0.1985858 ]
 [0.19844866 0.19849943]]
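The updated var can be reproduced step by step from the formulas above (note the bias-corrected learning rate lr_t and the running maximum vhat). A NumPy cross-check with the example's values:

>>> import numpy as np
>>> var = np.array([[0.2, 0.2], [0.2, 0.2]], np.float32)
>>> m = np.array([[0.1, 0.2], [0.4, 0.3]], np.float32)
>>> v = np.array([[0.2, 0.1], [0.3, 0.4]], np.float32)
>>> vhat = np.array([[0.1, 0.2], [0.6, 0.2]], np.float32)
>>> grad = np.array([[0.4, 0.2], [0.2, 0.3]], np.float32)
>>> beta1, beta2, eps = 0.9, 0.999, 1e-8
>>> beta1_power, beta2_power, lr = 0.9, 0.999, 0.01
>>> lr_t = lr * np.sqrt(1 - beta2_power) / (1 - beta1_power)  # bias correction
>>> m = beta1 * m + (1 - beta1) * grad
>>> v = beta2 * v + (1 - beta2) * grad * grad
>>> vhat = np.maximum(vhat, v)                                # AMSGrad running max
>>> var = var - lr_t * m / (np.sqrt(vhat) + eps)
>>> print(var)  # ~[[0.19908068 0.1985858 ] [0.19844866 0.19849943]]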
class tinyms.primitives.ApplyAddSign[source]

Updates relevant entries according to the AddSign algorithm.

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ \text{update} = (\alpha + \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t+1} * \text{update} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(lr\) represents scaling factor lr, \(g\) represents grad, \(\alpha\) represents alpha, \(\beta\) represents beta.

Inputs of var, m and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. The data type of inputs must be float16 or float32 on Ascend, and float16, float32 or float64 on CPU and GPU.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float16, float32 or float64 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. With float16, float32 or float64 data type.

  • alpha (Union[Number, Tensor]) - Must be a scalar. With float16, float32 or float64 data type.

  • sign_decay (Union[Number, Tensor]) - Must be a scalar. With float16, float32 or float64 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, must be a scalar. With float16, float32 or float64 data type.

  • grad (Tensor) - A tensor of the same shape and data type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Raises:
  • TypeError – If dtype of var, lr, alpha, sign_decay or beta is not float16, float32 or float64.

  • TypeError – If lr, alpha or sign_decay is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of var, m and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_add_sign = ops.ApplyAddSign()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.alpha = 1.0
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_add_sign(self.var, self.m, self.lr, self.alpha, self.sign_decay, self.beta, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.99403024e-01,  3.98607016e-01],
 [ 9.98010039e-02,  4.98407990e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.70000052e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000064e-01]]))
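Since every grad entry and every updated m entry is positive in this example, the sign product is 1 and the update reduces to (alpha + sign_decay) * grad. A NumPy cross-check of the step:

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> m = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> lr, alpha, sign_decay, beta = 0.001, 1.0, 0.99, 0.9
>>> m = beta * m + (1 - beta) * grad
>>> update = (alpha + sign_decay * np.sign(grad) * np.sign(m)) * grad
>>> var = var - lr * update
>>> print(var)  # ~[[0.599403 0.398607] [0.099801 0.498408]]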
class tinyms.primitives.ApplyCenteredRMSProp(use_locking=False)[source]

Optimizer that implements the centered RMSProp algorithm. Please refer to the usage in source code of mindspore.nn.RMSProp.

The updating formulas of ApplyCenteredRMSProp algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ g_{t+1} = \rho g_{t} + (1 - \rho)\nabla Q_{i}(w) \\ s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2 \\ m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} - g_{t+1}^2 + \epsilon}} \nabla Q_{i}(w) \\ w = w - m_{t+1} \end{array}\end{split}\]

where \(w\) represents var, which will be updated. \(g_{t+1}\) represents mean_gradient, \(g_{t}\) is the last moment of \(g_{t+1}\). \(s_{t+1}\) represents mean_square, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) represents moment, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Note

The difference between ApplyCenteredRMSProp and ApplyRMSProp is that the former uses the centered RMSProp algorithm, which normalizes by an estimate of the centered second moment (i.e., the variance), as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Warning

In dense implementation of this algorithm, mean_gradient, mean_square, and moment will update even if the grad is zero. But in this sparse implementation, mean_gradient, mean_square, and moment will not update in iterations during which the grad is zero.

Parameters:

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated.

  • mean_gradient (Tensor) - Mean gradients, must be the same type as var.

  • mean_square (Tensor) - Mean square gradients, must be the same type as var.

  • moment (Tensor) - Delta of var, must be the same type as var.

  • grad (Tensor) - Gradient, must be the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • decay (float) - Decay rate.

  • momentum (float) - Momentum.

  • epsilon (float) - Ridge term.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_gradient, mean_square, moment or grad is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • TypeError – If decay, momentum or epsilon is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_centered_rms_prop = ops.ApplyCenteredRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_grad, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_centered_rms_prop(self.var, mean_grad, mean_square, moment, grad,
...                                            lr, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_grad, mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
>>> print(net.var.asnumpy())
[[0.68377227 0.68377227]
 [0.68377227 0.68377227]]
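With ones everywhere, decay = 0.0 and momentum = 1e-10, the centered estimate s - g² vanishes and the step is essentially lr / sqrt(epsilon), which is where 0.68377 comes from. A NumPy re-derivation (note the example passes decay=0.0, momentum=1e-10, epsilon=0.001, lr=0.01):

>>> import numpy as np
>>> var = np.ones([2, 2], np.float32)
>>> mean_grad = np.ones([2, 2], np.float32)
>>> mean_square = np.ones([2, 2], np.float32)
>>> moment = np.ones([2, 2], np.float32)
>>> grad = np.ones([2, 2], np.float32)
>>> lr, decay, momentum, eps = 0.01, 0.0, 1e-10, 0.001
>>> mean_grad = decay * mean_grad + (1 - decay) * grad
>>> mean_square = decay * mean_square + (1 - decay) * grad ** 2
>>> moment = momentum * moment + lr / np.sqrt(mean_square - mean_grad ** 2 + eps) * grad
>>> var = var - moment
>>> print(var)  # ~0.68377227 in every entry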
class tinyms.primitives.ApplyFtrl(use_locking=False)[source]

Updates relevant entries according to the FTRL scheme.

For more details, please refer to mindspore.nn.FTRL.

Note

Currently, only positive numbers are supported on the Ascend platform, and the calculation results for other scenarios are not defined.

Parameters:

use_locking (bool) – If True, use locks for the update operation. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same shape and data type as var.

  • linear (Parameter) - The linear coefficient to be updated, must be same shape and data type as var.

  • grad (Tensor) - Gradient. The data type must be float16 or float32.

  • lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0. It must be a float number or a scalar tensor with float16 or float32 data type.

  • lr_power (Union[Number, Tensor]) - Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero. Default: -0.5. It must be a float number or a scalar tensor with float16 or float32 data type.

Outputs:
  • var (Tensor) - Represents the updated var. As the input parameters have been updated in-place, this value is always zero when the platform is GPU.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, grad, lr, l1, l2 or lr_power is neither float16 nor float32.

  • TypeError – If lr, l1, l2 or lr_power is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the parameter types of var, accum and linear are inconsistent.

  • RuntimeError – If the parameter types of grad, lr, l1, l2, lr_power are inconsistent with var and the precision is greater than var.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class ApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(ApplyFtrlNet, self).__init__()
...         self.apply_ftrl = ops.ApplyFtrl()
...         self.lr = 0.001
...         self.l1 = 0.0
...         self.l2 = 0.0
...         self.lr_power = -0.5
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.9, 0.1],
...                                                  [0.7, 0.8]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad):
...         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
...                               self.lr_power)
...         return out
...
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(input_x)
>>> print(net.var.asnumpy())
[[ 0.0390525  0.11492836]
 [ 0.00066425 0.15075898]]
class tinyms.primitives.ApplyGradientDescent[source]

Updates var by subtracting alpha * delta from it.

\[var = var - \alpha * \delta\]

where \(\alpha\) represents alpha, \(\delta\) represents delta.

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change, has the same shape and data type as var.

Outputs:

Tensor, represents the updated var.

Raises:
  • TypeError – If dtype of var or alpha is neither float16 nor float32.

  • TypeError – If delta is not a Tensor.

  • TypeError – If alpha is neither a Number nor a Tensor.

  • RuntimeError – If the data type of var and delta conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_gradient_descent = ops.ApplyGradientDescent()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.alpha = 0.001
...     def construct(self, delta):
...         out = self.apply_gradient_descent(self.var, self.alpha, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.array([[0.1, 0.1], [0.1, 0.1]]).astype(np.float32))
>>> output = net(delta)
>>> print(output)
[[0.9999 0.9999]
 [0.9999 0.9999]]
class tinyms.primitives.ApplyKerasMomentum(use_locking=False, use_nesterov=False)[source]

Update var according to the momentum scheme.

\[\begin{split}\begin{array}{ll} \\ accum = accum * momentum - grad * lr \\ var = \begin{cases} var + accum * momentum - grad * lr, &\text{if use_nesterov} \\ var + accum, &\text{else} \end{cases} \end{array}\end{split}\]

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Parameters:
  • use_locking (bool) – If True, updating of the var and accum tensors will be protected by a lock; Otherwise the behavior is undefined, but may exhibit less contention. Default: False.

  • use_nesterov (bool) – If True, the tensor passed to compute grad will be var + momentum * accum, so in the end, the var you get is actually var + momentum * accum. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. With float16 or float32 data type.

  • accum (Parameter) - Must have the same shape and type as var. With float16 or float32 data type.

  • lr (Union[Number, Tensor]) - Scaling factor. Must be a scalar. With float16 or float32 data type.

  • grad (Tensor) - The gradient. Must have the same shape and type as var. With float16 or float32 data type.

  • momentum (Union[Number, Tensor]) - Momentum. Must be a scalar. With float16 or float32 data type.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If the use_locking or use_nesterov is not a bool.

  • TypeError – If var or accum is not a Parameter.

  • TypeError – If lr is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • TypeError – If momentum is neither a Number nor a Tensor.

  • TypeError – If dtype of var, accum, lr, grad, momentum is neither float16 nor float32.

  • ValueError – If accum or grad doesn’t have the same shape as var.

  • ValueError – If the shape size of lr, momentum is not 0.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> from mindspore import dtype as mstype
>>> class ApplyKerasMomentumNet(nn.Cell):
...     def __init__(self, use_locking=False, use_nesterov=False):
...         super(ApplyKerasMomentumNet, self).__init__()
...         self.apply_keras_momentum = ops.ApplyKerasMomentum(use_locking, use_nesterov)
...         self.var = Parameter(Tensor(np.array([[0.2, 0.3], [0.1, 0.4]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.2, 0.3], [0.1, 0.4]]).astype(np.float32)), name="accum")
...     def construct(self, lr, grad, momentum):
...         out = self.apply_keras_momentum(self.var, self.accum, lr, grad, momentum)
...         return out
...
>>> net = ApplyKerasMomentumNet()
>>> lr = Tensor(0.001, mstype.float32)
>>> grad = Tensor(np.array([[0.3, 0.2], [0.4, 0.1]]).astype(np.float32))
>>> momentum = Tensor(0.99, mstype.float32)
>>> output = net(lr, grad, momentum)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.97700012e-01,  5.96800029e-01],
 [ 1.98599994e-01,  7.95899987e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.97699994e-01,  2.96800017e-01],
 [ 9.86000001e-02,  3.95900011e-01]]))
class tinyms.primitives.ApplyMomentum(use_nesterov=False, use_locking=False, gradient_scale=1.0)[source]

Optimizer that implements the Momentum algorithm.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

  • gradient_scale (float) – The scale of the gradient. Default: 1.0.

Inputs:
  • variable (Parameter) - Weights to be updated. Data type must be float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128.

  • accumulation (Parameter) - Accumulated gradient value by moment weight, has the same data type with variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 number or a scalar tensor with float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 number or a scalar tensor with float64, int64, float, float16, int16, int32, int8, uint16, uint32, uint64, uint8, complex64, complex128 data type.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If the use_locking or use_nesterov is not a bool or gradient_scale is not a float.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_momentum = ops.ApplyMomentum()
...         self.variable = Parameter(Tensor(np.array([[0.6, 0.4],
...                                                    [0.1, 0.5]]).astype(np.float32)), name="variable")
...         self.accumulate = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                      [0.2, 0.6]]).astype(np.float32)), name="accumulate")
...     def construct(self, lr, grad, moment):
...         out = self.apply_momentum(self.variable, self.accumulate, lr, grad, moment)
...         return out
...
>>> net = Net()
>>> lr = Tensor(0.1, mindspore.float32)
>>> moment = Tensor(0.9, mindspore.float32)
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, grad, moment)
>>> print(output)
[[0.51600003 0.285     ]
 [0.072      0.366     ]]
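Following the mindspore.nn.Momentum formula referenced above (accum = accum * momentum + grad, then var -= lr * accum), the printed result can be checked with plain NumPy:

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> accum = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> lr, momentum = 0.1, 0.9
>>> accum = accum * momentum + grad
>>> var = var - lr * accum
>>> print(var)  # [[0.516 0.285] [0.072 0.366]]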
class tinyms.primitives.ApplyPowerSign[source]

Updates relevant entries according to the PowerSign algorithm.

The PowerSign algorithm was proposed in Neural Optimizer Search with Reinforcement Learning.

\[\begin{split}\begin{array}{ll} \\ m_{t+1} = \beta * m_{t} + (1 - \beta) * g \\ \text{update} = \exp(\text{logbase} * \text{sign_decay} * sign(g) * sign(m)) * g \\ var = var - lr_{t+1} * \text{update} \end{array}\end{split}\]

\(t\) represents updating step while \(m\) represents the 1st moment vector, \(m_{t}\) is the last moment of \(m_{t+1}\), \(lr\) represents scaling factor lr, \(g\) represents grad, \(\beta\) represents beta.

All of inputs comply with the implicit type conversion rules to make the data types consistent. If lr, logbase, sign_decay or beta is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation. If inputs are tensors and have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Note

On Ascend, input data type of float64 is currently not supported.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float64, float32 or float16 data type. If data type of var is float16, all inputs must have the same data type as var. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Union[Number, Tensor]) - The learning rate value, should be a scalar or Tensor with float64, float32 or float16 data type.

  • logbase (Union[Number, Tensor]) - Should be a scalar or Tensor with float64, float32 or float16 data type.

  • sign_decay (Union[Number, Tensor]) - Should be a scalar or Tensor with float64, float32 or float16 data type.

  • beta (Union[Number, Tensor]) - The exponential decay rate, should be a scalar or Tensor with float64, float32 or float16 data type.

  • grad (Tensor) - A tensor of the same shape and data type as var, for the gradient.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

Raises:
  • TypeError – If dtype of var, lr, logbase, sign_decay, beta or grad is not one of float16, float32 or float64.

  • TypeError – If lr, logbase, sign_decay or beta is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of lr, logbase, sign_decay and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_power_sign = ops.ApplyPowerSign()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.array([[0.6, 0.5],
...                                             [0.2, 0.6]]).astype(np.float32)), name="m")
...         self.lr = 0.001
...         self.logbase = np.e
...         self.sign_decay = 0.99
...         self.beta = 0.9
...     def construct(self, grad):
...         out = self.apply_power_sign(self.var, self.m, self.lr, self.logbase,
...                                        self.sign_decay, self.beta, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.95575690e-01,  3.89676481e-01],
 [ 9.85252112e-02,  4.88201708e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.70000052e-01,  5.19999981e-01],
 [ 1.89999998e-01,  6.20000064e-01]]))
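With every grad and updated m entry positive, the sign product is 1, so the scale factor is exp(logbase * sign_decay), roughly e^2.691 or about 14.75 applied to grad. A NumPy cross-check of the example step:

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]], np.float32)
>>> m = np.array([[0.6, 0.5], [0.2, 0.6]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> lr, logbase, sign_decay, beta = 0.001, np.e, 0.99, 0.9
>>> m = beta * m + (1 - beta) * grad
>>> update = np.exp(logbase * sign_decay * np.sign(grad) * np.sign(m)) * grad
>>> var = var - lr * update
>>> print(var)  # ~[[0.5955757 0.3896765] [0.0985252 0.4882017]]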
class tinyms.primitives.ApplyProximalAdagrad(use_locking=False)[source]

Updates relevant entries according to the proximal adagrad algorithm. The proximal adagrad algorithm was proposed in Efficient Learning using Forward-Backward Splitting.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If True, the var and accum tensors will be protected by a lock while being updated. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated, must have the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a scalar. The data type must be float16 or float32.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a scalar. The data type must be float16 or float32.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a scalar. The data type must be float16 or float32.

  • grad (Tensor) - Gradient with the same shape and dtype as var.

Outputs:

Tuple of 2 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, lr, l1 or l2 is neither float16 nor float32.

  • TypeError – If lr, l1 or l2 is neither a Number nor a Tensor.

  • TypeError – If grad is not a Tensor.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor, ops, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_adagrad = ops.ApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.array([[0.6, 0.4],
...                                               [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.6, 0.5],
...                                                 [0.2, 0.6]]).astype(np.float32)), name="accum")
...         self.lr = 0.01
...         self.l1 = 0.0
...         self.l2 = 0.0
...     def construct(self, grad):
...         out = self.apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1, self.l2, grad)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(grad)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.96388459e-01,  3.92964751e-01],
 [ 9.78178233e-02,  4.92815793e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 6.90000057e-01,  9.90000010e-01],
 [ 2.10000008e-01,  1.24000001e+00]]))
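
As a quick cross-check of the formulas above, the example's result can be reproduced with plain NumPy (an illustrative sketch, not part of the operator API; with l1 = l2 = 0 the proximal step reduces to var = prox_v):

>>> import numpy as np
>>> var = np.array([[0.6, 0.4], [0.1, 0.5]])
>>> accum = np.array([[0.6, 0.5], [0.2, 0.6]])
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]])
>>> accum += grad * grad                            # accum update
>>> prox_v = var - 0.01 * grad / np.sqrt(accum)     # lr = 0.01; l1 = l2 = 0, so var = prox_v
>>> print(np.round(prox_v, 6))
[[0.596388 0.392965]
 [0.097818 0.492816]]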
class tinyms.primitives.ApplyProximalGradientDescent[source]

Updates relevant entries according to the FOBOS(Forward Backward Splitting) algorithm. Refer to the paper Efficient Learning using Forward-Backward Splitting for more details.

\[\begin{split}\begin{array}{ll} \\ \text{prox_v} = var - \alpha * \delta \\ var = \frac{sign(\text{prox_v})}{1 + \alpha * l2} * \max(\left| \text{prox_v} \right| - \alpha * l1, 0) \end{array}\end{split}\]

where \(\alpha\) represents alpha, \(\delta\) represents delta.

Inputs of var and delta comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • var (Parameter) - Variable tensor to be updated. With float32 or float16 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • alpha (Union[Number, Tensor]) - Scaling factor, must be a scalar. With float32 or float16 data type.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a scalar. With float32 or float16 data type.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a scalar. With float32 or float16 data type.

  • delta (Tensor) - A tensor for the change.

Outputs:

Tensor, represents the updated var.

Raises:
  • TypeError – If dtype of var, alpha, l1 or l2 is neither float16 nor float32.

  • TypeError – If alpha, l1 or l2 is neither a Number nor a Tensor.

  • TypeError – If delta is not a Tensor.

  • RuntimeError – If the data type conversion of var and delta is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_gradient_descent = ops.ApplyProximalGradientDescent()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.alpha = 0.001
...         self.l1 = 0.1
...         self.l2 = 0.1
...     def construct(self, delta):
...         out = self.apply_proximal_gradient_descent(self.var, self.alpha, self.l1, self.l2, delta)
...         return out
...
>>> net = Net()
>>> delta = Tensor(np.array([[0.1, 0.1], [0.1, 0.1]]).astype(np.float32))
>>> output = net(delta)
>>> print(output)
[[0.99969995 0.99969995]
 [0.99969995 0.99969995]]
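
The scalar case can be checked directly against the FOBOS formulas above (a minimal NumPy sketch using the example's values):

>>> import numpy as np
>>> var, alpha, l1, l2, delta = 1.0, 0.001, 0.1, 0.1, 0.1
>>> prox_v = var - alpha * delta
>>> out = np.sign(prox_v) / (1 + alpha * l2) * max(abs(prox_v) - alpha * l1, 0)
>>> print(np.round(out, 6))
0.9997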
class tinyms.primitives.ApplyRMSProp(use_locking=False)[source]

Optimizer that implements the Root Mean Square prop(RMSProp) algorithm. Please refer to the usage in source code of mindspore.nn.RMSProp.

The updating formulas of ApplyRMSProp algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ s_{t+1} = \rho s_{t} + (1 - \rho)(\nabla Q_{i}(w))^2 \\ m_{t+1} = \beta m_{t} + \frac{\eta} {\sqrt{s_{t+1} + \epsilon}} \nabla Q_{i}(w) \\ w = w - m_{t+1} \end{array}\end{split}\]

where \(w\) represents var, which will be updated. \(s_{t+1}\) represents mean_square, \(s_{t}\) is the last moment of \(s_{t+1}\), \(m_{t+1}\) represents moment, \(m_{t}\) is the last moment of \(m_{t+1}\). \(\rho\) represents decay. \(\beta\) is the momentum term, represents momentum. \(\epsilon\) is a smoothing term to avoid division by zero, represents epsilon. \(\eta\) represents learning_rate. \(\nabla Q_{i}(w)\) represents grad.

Warning

Note that in dense implementation of this algorithm, “mean_square” and “moment” will update even if “grad” is 0, but in this sparse implementation, “mean_square” and “moment” will not update in iterations during which “grad” is 0.

Parameters:

use_locking (bool) – Whether to enable a lock to protect the variable and accumulation tensors from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated.

  • mean_square (Tensor) - Mean square gradients, must be the same type as var.

  • moment (Tensor) - Delta of var, must be the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate. Must be a float number or a scalar tensor with float16 or float32 data type.

  • grad (Tensor) - Gradient, must be the same type as var.

  • decay (float) - Decay rate. Only constant value is allowed.

  • momentum (float) - Momentum. Only constant value is allowed.

  • epsilon (float) - Ridge term. Only constant value is allowed.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If var, mean_square, moment or decay is not a Tensor.

  • TypeError – If learning_rate is neither a Number nor a Tensor.

  • TypeError – If dtype of decay, momentum or epsilon is not float.

  • TypeError – If dtype of learning_rate is neither float16 nor float32.

  • ValueError – If decay, momentum or epsilon is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_rms_prop = ops.ApplyRMSProp()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...
...     def construct(self, mean_square, moment, grad, decay, momentum, epsilon, lr):
...         out = self.apply_rms_prop(self.var, mean_square, moment, lr, grad, decay, momentum, epsilon)
...         return out
...
>>> net = Net()
>>> mean_square = Tensor(np.ones([2, 2]).astype(np.float32))
>>> moment = Tensor(np.ones([2, 2]).astype(np.float32))
>>> grad = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(mean_square, moment, grad, 0.0, 1e-10, 0.001, 0.01)
>>> print(net.var.asnumpy())
[[0.990005  0.990005]
 [0.990005  0.990005]]
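
Stepping through the update formulas above with the example's scalar values reproduces the result (an illustrative NumPy sketch, not the operator implementation):

>>> import numpy as np
>>> w = s = m = g = 1.0
>>> decay, momentum, epsilon, lr = 0.0, 1e-10, 0.001, 0.01
>>> s = decay * s + (1 - decay) * g ** 2                 # mean_square update
>>> m = momentum * m + lr / np.sqrt(s + epsilon) * g     # moment update
>>> print(np.round(w - m, 6))                            # var update
0.990005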
class tinyms.primitives.ApproximateEqual(tolerance=1e-05)[source]

Returns True if abs(x-y) is smaller than tolerance element-wise, otherwise False.

\[\begin{split}out_i = \begin{cases} & \text{ if } \left | x_{i} - y_{i} \right | < \text{tolerance},\ \ True \\ & \text{ if } \left | x_{i} - y_{i} \right | \ge \text{tolerance},\ \ False \end{cases}\end{split}\]

where tolerance indicates the maximum acceptable deviation between two elements.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower precision data type will be converted to the relatively highest precision data type.

Parameters:

tolerance (float) – The maximum deviation that two elements can be considered equal. Default: 1e-05.

Inputs:
  • x (Tensor) - A tensor. Must be one of the following types: float32, float16. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • y (Tensor) - A tensor of the same type and shape as x.

Outputs:

Tensor, the shape is the same as the shape of x, and the data type is bool.

Raises:
  • TypeError – If tolerance is not a float.

  • RuntimeError – If the data type conversion of x and y is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([2, 3, 6]), mindspore.float32)
>>> approximate_equal = ops.ApproximateEqual(2.)
>>> output = approximate_equal(x, y)
>>> print(output)
[ True  True  False]
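
The operator is equivalent to an element-wise |x - y| < tolerance comparison, which can be mirrored in NumPy (a sketch using the example's inputs):

>>> import numpy as np
>>> x = np.array([1, 2, 3], np.float32)
>>> y = np.array([2, 3, 6], np.float32)
>>> print(np.abs(x - y) < 2.0)
[ True  True False]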
class tinyms.primitives.ArgMaxWithValue(axis=0, keep_dims=False)[source]

Calculates the maximum value along with the given axis for the input tensor, and returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple maximum values, the index of the first maximum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “x”.

Also see mindspore.ops.max().

Parameters:
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true, the output will keep same dimension with the input, the output will reduce dimension if false. Default: False.

Inputs:
  • x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input tensor.

  • index (Tensor) - The index for the maximum value of the input tensor, with dtype int32. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

  • values (Tensor) - The maximum value of input tensor, with the same shape as index, and same dtype as x.

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> index, output = ops.ArgMaxWithValue()(input_x)
>>> print(index, output)
3 0.7
>>> index, output = ops.ArgMaxWithValue(keep_dims=True)(input_x)
>>> print(index, output)
[3] [0.7]
class tinyms.primitives.ArgMinWithValue(axis=0, keep_dims=False)[source]

Calculates the minimum value along with the given axis for the input tensor, and returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple minimum values, the index of the first minimum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “x”.

Also see mindspore.ops.min().

Parameters:
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Inputs:
  • x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\). Complex tensors are not supported.

Outputs:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input tensor.

  • index (Tensor) - The index for the minimum value of the input tensor, with dtype int32. If keep_dims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

  • values (Tensor) - The minimum value of input tensor, with the same shape as index, and same dtype as x.

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> index, output = ops.ArgMinWithValue()(x)
>>> print(index, output)
0 0.0
>>> index, output = ops.ArgMinWithValue(keep_dims=True)(x)
>>> print(index, output)
[0] [0.0]
class tinyms.primitives.Argmax(axis=-1, output_type=mindspore.int32)[source]

Returns the indices of the maximum value of a tensor across the axis.

Refer to mindspore.ops.argmax() for more details.

Parameters:
  • axis (int) – Axis where the Argmax operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions. Support data type list as follows:

    • Ascend: Float16, Float32.

    • GPU: Float16, Float32.

    • CPU: Float16, Float32, Float64.

Outputs:

Tensor, indices of the max value of input tensor across the axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
>>> output = ops.Argmax(output_type=mindspore.int32)(input_x)
>>> print(output)
[1 0 0]
class tinyms.primitives.Argmin(axis=-1, output_type=mindspore.int32)[source]

Returns the indices of the minimum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters:
  • axis (int) – Axis where the Argmin operation applies to. Default: -1.

  • output_type (mindspore.dtype) – An optional data type of mindspore.dtype.int32 and mindspore.dtype.int64. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

    • Ascend: Float16, Float32, Float64, Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64.

Outputs:

Tensor, whose dtype is determined by output_type.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If output_type is neither int32 nor int64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = ops.Argmin()(input_x)
>>> print(index)
2
class tinyms.primitives.Asin[source]

Computes arcsine of input tensors element-wise.

Refer to mindspore.ops.asin() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> asin = ops.Asin()
>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = asin(x)
>>> print(output)
[0.8330704  0.04001067 0.30469266 0.5943858 ]
class tinyms.primitives.Asinh[source]

Computes inverse hyperbolic sine of the input element-wise.

Refer to mindspore.ops.asinh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> asinh = ops.Asinh()
>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = asinh(x)
>>> print(output)
[-2.3124382  1.1947632  1.8184465  5.298342 ]
class tinyms.primitives.Assert(summarize=3)[source]

Asserts whether the given condition is True. If the condition evaluates to false, prints the tensors in input_data.

Parameters:

summarize (int, optional) – The number of entries to be printed in each tensor while the given condition is identified to be False. Default: 3.

Inputs:
  • condition (Union[Tensor[bool], bool]) - The condition to be identified.

  • input_data (Union[tuple[Tensor], list[Tensor]]) - The tensors to be printed out when the condition is false.

Raises:
  • TypeError – If summarize is not an int.

  • TypeError – If condition is neither a Tensor nor a bool.

  • TypeError – If input_data is neither a tuple nor a list.

Supported Platforms:

GPU CPU

Examples

>>> a = Tensor(np.array([-1, 0, 1, 2, 3]).astype(np.int32))
>>> b = Tensor(np.array([1, 2, 3, 4, 5]).astype(np.float32))
>>> assert1 = ops.Assert(3)
>>> assert1(False, [a, b])
For 'Assert' condition is false.
input data: [-1 0 1]
input data: [1 2 3]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mindspore/ops/primitive.py", line 294, in __call__
    return _run_op(self, self.name, args)
  File "mindspore/common/api.py", line 99, in wrapper
    results = fn(*arg, **kwargs)
  File "mindspore/ops/primitive.py", line 743, in _run_op
    output = real_run_op(obj, op_name, args)
RuntimeError: assert failed
class tinyms.primitives.Assign[source]

Assigns Parameter with a value.

Refer to mindspore.ops.assign() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> value = Tensor([2.0], mindspore.float32)
>>> variable = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="variable")
>>> assign = ops.Assign()
>>> x = assign(variable, value)
>>> print(variable.asnumpy())
[2.]
class tinyms.primitives.AssignAdd[source]

Updates a Parameter by adding a value to it.

Refer to mindspore.ops.assign_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignAdd = ops.AssignAdd()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int64), name="global_step")
...
...     def construct(self, x):
...         self.AssignAdd(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int64)*100)
>>> output = net(value)
>>> print(net.variable.asnumpy())
[101]
class tinyms.primitives.AssignSub[source]

Updates a Parameter by subtracting a value from it.

Refer to mindspore.ops.assign_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.AssignSub = ops.AssignSub()
...         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
...
...     def construct(self, x):
...         self.AssignSub(self.variable, x)
...         return self.variable
...
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int32)*100)
>>> output = net(value)
>>> print(net.variable.asnumpy())
[-99]
class tinyms.primitives.Atan[source]

Computes the trigonometric inverse tangent of the input element-wise.

Refer to mindspore.ops.atan() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> atan = ops.Atan()
>>> output = atan(x)
>>> print(output)
[0.7853982 0.       ]
class tinyms.primitives.Atan2[source]

Returns arctangent of x/y element-wise.

Refer to mindspore.ops.atan2() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 1]), mindspore.float32)
>>> y = Tensor(np.array([1, 1]), mindspore.float32)
>>> atan2 = ops.Atan2()
>>> output = atan2(x, y)
>>> print(output)
[0.        0.7853982]
class tinyms.primitives.Atanh[source]

Computes inverse hyperbolic tangent of the input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.atanh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -0.5]), mindspore.float32)
>>> atanh = ops.Atanh()
>>> output = atanh(x)
>>> print(output)
[ 0.         -0.54930615]
class tinyms.primitives.AvgPool(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

Average pooling operation.

Refer to mindspore.ops.avg_pool2d() for more details.

Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is ‘same’ or ‘valid’. Default: ‘valid’.

    • same: The height and width of the output equal the input height and width divided by strides, rounded up.

    • valid: Returns the output of the valid calculation without filling. Redundant pixels that do not satisfy the calculation will be discarded.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If kernel_size or strides is neither int nor tuple.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If pad_mode is neither ‘valid’ nor ‘same’ (case insensitive).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.avgpool_op = ops.AvgPool(pad_mode="VALID", kernel_size=2, strides=1)
...
...     def construct(self, x):
...         result = self.avgpool_op(x)
...         return result
...
>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> net = Net()
>>> output = net(x)
>>> print(output)
[[[[ 2.5   3.5   4.5]
   [ 6.5   7.5   8.5]]
  [[14.5  15.5  16.5]
   [18.5  19.5  20.5]]
  [[26.5  27.5  28.5]
   [30.5  31.5  32.5]]]]
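
Each output element is the mean of one kernel window, so any value of the example can be checked by hand; for instance, the first output element of channel 0 (an illustrative NumPy sketch):

>>> import numpy as np
>>> x = np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4).astype(np.float32)
>>> print(x[0, 0, 0:2, 0:2].mean())   # first 2x2 window of channel 0
2.5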
class tinyms.primitives.AvgPool3D(kernel_size=1, strides=1, pad_mode='valid', pad=0, ceil_mode=False, count_include_pad=True, divisor_override=0, data_format='NCDHW')[source]

3D Average pooling operation.

Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), AvgPool3D outputs regional average in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

Warning

“kernel_size” is in the range [1, 255]. “strides” is in the range [1, 63].

\[\text{output}(N_i, C_j, d, h, w) = \frac{1}{d_{ker} * h_{ker} * w_{ker}} \sum_{l=0}^{d_{ker}-1} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value, is an int number that represents the depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value for pad mode, is “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be the same as the input. The total number of padding will be calculated in depth, horizontal and vertical directions and evenly distributed to head and tail, top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the tail, bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height, width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int], list[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • ceil_mode (bool) – If True, ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Currently support float16 and float32 data type.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the same data type with x.

Raises:
  • TypeError – If kernel_size, strides or pad is neither an int nor a tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If pad_mode or data_format is not a string.

  • TypeError – If divisor_override is not an int.

  • ValueError – If numbers in kernel_size or strides are not positive.

  • ValueError – If kernel_size or strides is a tuple whose length is not equal to 3.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If element of pad is less than 0.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to 0 or (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float16)
>>> avg_pool3d = ops.AvgPool3D(kernel_size=2, strides=1, pad_mode="valid")
>>> output = avg_pool3d(x)
>>> print(output)
[[[[[ 5.  6.]]]
  [[[17. 18.]]]]]
class tinyms.primitives.BCEWithLogitsLoss(reduction='mean')[source]

Adds sigmoid activation function to input logits, and uses the given logits to compute binary cross entropy between the logits and the label.

Sets input logits as \(X\), input label as \(Y\), input weight as \(W\), output as \(L\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})] \end{array}\end{split}\]

\(i\) indicates the \(i^{th}\) sample, \(j\) indicates the category. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

\(\ell\) indicates the method of calculating the loss. There are three methods: the first method is to provide the loss value directly, the second method is to calculate the average value of all losses, and the third method is to calculate the sum of all losses.

This operator will multiply the output by the corresponding weight. The tensor weight assigns different weights to each piece of data in the batch, and the tensor pos_weight adds corresponding weights to the positive examples of each category.

In addition, it can trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:

\[\begin{split}\begin{array}{ll} \\ p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\ L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})] \end{array}\end{split}\]

where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification), n is the number of the sample in the batch and \(P_c\) is the weight of the positive answer for the class c. \(P_c>1\) increases the recall, \(P_c<1\) increases the precision.

Parameters:

reduction (str) – Type of reduction to be applied to loss. The optional values are ‘mean’, ‘sum’, and ‘none’, not case sensitive. If ‘none’, do not perform reduction. Default: ‘mean’.

Inputs:
  • logits (Tensor) - Input logits. Data type must be float16 or float32. Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions.

  • label (Tensor) - Ground truth label, has the same shape as logits. Data type must be float16 or float32.

  • weight (Tensor) - A rescaling weight applied to the loss of each batch element. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • pos_weight (Tensor) - A weight of positive examples. Must be a vector with length equal to the number of classes. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

Outputs:

Tensor or Scalar, if reduction is ‘none’, it’s a tensor with the same shape and type as input logits. Otherwise, the output is a scalar.

Raises:
  • TypeError – If any input is not Tensor.

  • TypeError – If data type of any input is neither float16 nor float32.

  • TypeError – If data type of reduction is not string.

  • ValueError – If weight or pos_weight can not be broadcast to a tensor with shape of logits.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> loss = ops.BCEWithLogitsLoss()
>>> output = loss(logits, label, weight, pos_weight)
>>> print(output)
0.3463612
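
With weight and pos_weight both all ones, the example reduces to the unweighted formula above and can be reproduced in NumPy (a sketch for verification, not the operator itself):

>>> import numpy as np
>>> X = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]])
>>> Y = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]])
>>> p = 1 / (1 + np.exp(-X))                        # sigmoid
>>> L = -(Y * np.log(p) + (1 - Y) * np.log(1 - p))
>>> print(np.round(L.mean(), 6))                    # 'mean' reduction
0.346361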
class tinyms.primitives.BNTrainingReduce(data_format='NCHW')[source]

The BNTrainingReduce interface is deprecated, please use the mindspore.ops.BatchNorm instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BNTrainingUpdate(isRef=True, epsilon=1e-05, factor=0.1, data_format='NCHW')[source]

The BNTrainingUpdate interface is deprecated, please use the mindspore.ops.BatchNorm instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BartlettWindow(periodic=True, dtype=mindspore.float32)[source]

Bartlett window function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bartlett_window() for more details.

Parameters:
  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Default: True.

  • dtype (mindspore.dtype, optional) – The desired datatype of returned tensor. Only float16, float32 and float64 are allowed. Default: mstype.float32.

Inputs:
  • window_length (Tensor) - The size of returned window, with data type int32, int64. The input data should be an integer with a value in [0, 1000000].

Outputs:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(5, mstype.int32)
>>> bartlett_window = ops.BartlettWindow(periodic=True, dtype=mstype.float32)
>>> output = bartlett_window(window_length)
>>> print(output)
[0.  0.4 0.8 0.8 0.4]
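
A periodic window of length N equals the symmetric window of length N + 1 with its last sample dropped, so the example can be cross-checked against NumPy's symmetric np.bartlett (an illustrative equivalence):

>>> import numpy as np
>>> print(np.bartlett(6)[:-1])   # periodic Bartlett window of length 5
[0.  0.4 0.8 0.8 0.4]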
class tinyms.primitives.BasicLSTMCell(keep_prob=1.0, forget_bias=1.0, state_is_tuple=True, activation='tanh')[source]

It’s similar to operator mindspore.ops.DynamicRNN. BasicLSTMCell will be deprecated in the future. Please use DynamicRNN instead.

Supported Platforms:

Deprecated

class tinyms.primitives.BatchMatMul(transpose_a=False, transpose_b=False)[source]

Computes matrix multiplication between two tensors by batch.

\[\text{output}[..., :, :] = \text{matrix}(x[..., :, :]) * \text{matrix}(y[..., :, :])\]

The rank of the first input tensor must not be less than 3, and the rank of the second input tensor must not be less than 2.

Parameters:
  • transpose_a (bool) – If true, the last two dimensions of x is transposed before multiplication. Default: False.

  • transpose_b (bool) – If true, the last two dimensions of y is transposed before multiplication. Default: False.

Inputs:
  • x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions. If transpose_a is True, its shape must be \((*B, C, N)\).

  • y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\). If transpose_b is True, its shape must be \((*B, M, C)\).

Outputs:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Raises:
  • TypeError – If transpose_a or transpose_b is not a bool.

  • ValueError – If length of shape of x is not equal to length of shape of y or length of shape of x is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones(shape=[2, 4, 1, 3]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul()
>>> output = batmatmul(x, y)
>>> print(output.shape)
(2, 4, 1, 4)
>>> x = Tensor(np.ones(shape=[2, 4, 3, 1]), mindspore.float32)
>>> y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = ops.BatchMatMul(transpose_a=True)
>>> output = batmatmul(x, y)
>>> print(output.shape)
(2, 4, 1, 4)
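
The batching behaviour matches NumPy's np.matmul, which also multiplies over the last two axes and treats all leading axes as batch dimensions (a quick shape sketch):

>>> import numpy as np
>>> x = np.ones([2, 4, 1, 3], np.float32)
>>> y = np.ones([2, 4, 3, 4], np.float32)
>>> print(np.matmul(x, y).shape)
(2, 4, 1, 4)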
class tinyms.primitives.BatchNorm(is_training=False, epsilon=1e-05, momentum=0.1, data_format='NCHW')[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over inputs to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon, \(mean\) is the mean of \(x\), \(variance\) is the variance of \(x\).

Warning

  • If the operation is used for inference, and outputs “reserve_space_1” and “reserve_space_2” are available, then “reserve_space_1” has the same value as “mean” and “reserve_space_2” has the same value as “variance”.

  • For Ascend 310, the result accuracy fails to reach 1‰ due to the square root instruction.

Parameters:
  • is_training (bool) – If is_training is True, mean and variance are computed during training. If is_training is False, they’re loaded from checkpoint during inference. Default: False.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • momentum (float) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = (1 - momentum) * running\_mean + momentum * current\_mean\)). Momentum value must be [0, 1]. Default: 0.1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’, and the ‘NHWC’ format is only supported in GPU target. Default: “NCHW”.

Inputs:

If is_training is False, inputs are Tensors.

  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Tensor) - Tensor of shape \((C,)\), with float16 or float32 data type.

  • bias (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • mean (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

  • variance (Tensor) - Tensor of shape \((C,)\), has the same data type with scale.

If is_training is True, scale, bias, mean and variance are Parameters.

  • input_x (Tensor) - Tensor of shape \((N, C)\), with float16 or float32 data type.

  • scale (Parameter) - Parameter of shape \((C,)\), with float16 or float32 data type.

  • bias (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

  • mean (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

  • variance (Parameter) - Parameter of shape \((C,)\), has the same data type with scale.

Outputs:

Tuple of 5 Tensors, the normalized inputs and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

  • batch_mean (Tensor) - Tensor of shape \((C,)\).

  • batch_variance (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_1 (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_2 (Tensor) - Tensor of shape \((C,)\).

Raises:
  • TypeError – If is_training is not a bool.

  • TypeError – If dtype of epsilon or momentum is not float.

  • TypeError – If data_format is not a str.

  • TypeError – If input_x, scale, bias, mean or variance is not a Tensor.

  • TypeError – If dtype of input_x, scale is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones([2, 2]), mindspore.float32)
>>> scale = Tensor(np.ones([2]), mindspore.float32)
>>> bias = Tensor(np.ones([2]), mindspore.float32)
>>> mean = Tensor(np.ones([2]), mindspore.float32)
>>> variance = Tensor(np.ones([2]), mindspore.float32)
>>> batch_norm = ops.BatchNorm()
>>> output = batch_norm(input_x, scale, bias, mean, variance)
>>> print(output[0])
[[1. 1.]
 [1. 1.]]
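
In the inference case the formula above can be applied directly; with the example's mean, variance, scale and bias, every element normalizes back to 1 (an illustrative NumPy sketch):

>>> import numpy as np
>>> x = np.ones([2, 2], np.float32)
>>> scale, bias, mean, variance = 1.0, 1.0, 1.0, 1.0
>>> y = (x - mean) / np.sqrt(variance + 1e-5) * scale + bias
>>> print(y)
[[1. 1.]
 [1. 1.]]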
class tinyms.primitives.BatchToSpace(block_size, crops)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation divides batch dimension N by block_size * block_size; the output tensor’s N dimension is the resulting number of blocks. The output tensor’s H and W dimensions are the products of the original H and W dimensions and block_size, minus the corresponding crop amounts.

Parameters:
  • block_size (int) – The block size of division, has the value not less than 2.

  • crops (Union[list(int), tuple(int)]) – The crop value for H and W dimension, containing 2 subtraction lists. Each list contains 2 integers. All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which corresponds to the input dimension i+2. It is required that \(input\_shape[i+2]*block\_size > crops[i][0]+crops[i][1]\) .

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor, and dimension 0 must be divisible by the square of block_size. The data type is float16 or float32.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is \((n, c, h, w)\) with block_size and crops. The output shape will be \((n', c', h', w')\), where

\(n' = n//(block\_size*block\_size)\)

\(c' = c\)

\(h' = h*block\_size-crops[0][0]-crops[0][1]\)

\(w' = w*block\_size-crops[1][0]-crops[1][1]\)

Raises:
  • TypeError – If block_size or element of crops is not an int.

  • TypeError – If crops is neither list nor tuple.

  • ValueError – If block_size is less than 2.

Supported Platforms:

Ascend GPU

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space = ops.BatchToSpace(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BatchToSpaceND(block_shape, crops)[source]

ops.BatchToSpaceND is deprecated from version 2.0 and will be removed in a future version, use ops.batch_to_space_nd instead.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> batch_to_space = ops.BatchToSpaceND(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = batch_to_space(input_x)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
class tinyms.primitives.BatchToSpaceNDV2[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

Refer to mindspore.ops.batch_to_space_nd() for more details.

Supported Platforms:

Ascend

class tinyms.primitives.Bernoulli(seed=-1, offset=0)[source]

Randomly sets the elements of the output to 0 or 1 with probability p, following the Bernoulli distribution.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bernoulli() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor([0.1, 0.2, 0.3], mindspore.float32)
>>> bernoulli = ops.Bernoulli()
>>> output = bernoulli(input_x, Tensor([1.0]))
>>> print(output)
[1. 1. 1.]
>>> input_p = Tensor([0.0, 1.0, 1.0], mindspore.float32)
>>> output = bernoulli(input_x, input_p)
>>> print(output)
[0. 1. 1.]
class tinyms.primitives.BesselI0[source]

Computes BesselI0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bessel_i0() for more details.

Supported Platforms:

GPU CPU

Examples

>>> bessel_i0 = ops.BesselI0()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i0(x)
>>> print(output)
[1.0144521 1.1797839 1.0241698 1.0020262]
class tinyms.primitives.BesselI0e[source]

Computes BesselI0e of input element-wise.

The formula is defined as:

\[BesselI0e(x) = \exp(-|x|) * bessel\_i0(x)\]

where bessel_i0 is Bessel function of the first kind with 0 order.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bessel_i0e = ops.BesselI0e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i0e(x)
>>> print(output)
[0.7979961  0.5144438  0.75117415  0.9157829 ]
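
The exponential scaling can be verified against the BesselI0 example above, which uses the same input (an illustrative NumPy cross-check of the formula):

>>> import numpy as np
>>> x = np.array([0.24, 0.83, 0.31, 0.09])
>>> i0 = np.array([1.0144521, 1.1797839, 1.0241698, 1.0020262])   # BesselI0(x) from the example above
>>> print(np.round(np.exp(-np.abs(x)) * i0, 6))
[0.797996 0.514444 0.751174 0.915783]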
class tinyms.primitives.BesselI1[source]

Computes BesselI1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.bessel_i1() for more details.

Supported Platforms:

GPU CPU

Examples

>>> bessel_i1 = ops.BesselI1()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i1(x)
>>> print(output)
[0.1208661  0.45177728 0.1568694  0.04504559]
class tinyms.primitives.BesselI1e[source]

Computes BesselI1e of input element-wise.

The formula is defined as:

\[BesselI1e(x) = \exp(-|x|) * bessel\_i1(x)\]

where bessel_i1 is Bessel function of the first kind with 1 order.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bessel_i1e = ops.BesselI1e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_i1e(x)
>>> print(output)
[0.09507662 0.19699717 0.11505538 0.04116856]
class tinyms.primitives.BesselJ0[source]

Computes BesselJ0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_j0 = ops.BesselJ0()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_j0(x)
>>> print(output)
[ 0.93846981  0.76519769  0.22389078 -0.39714981]
class tinyms.primitives.BesselJ1[source]

Computes BesselJ1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_j1 = ops.BesselJ1()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_j1(x)
>>> print(output)
[ 0.24226846  0.44005059  0.57672481 -0.06604333]
class tinyms.primitives.BesselK0[source]

Computes BesselK0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32, float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k0 = ops.BesselK0()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k0(x)
>>> print(output)
[1.579826  0.5402144 1.3424659 2.5310173]
class tinyms.primitives.BesselK0e[source]

Computes BesselK0e of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32, float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k0e = ops.BesselK0e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k0e(x)
>>> print(output)
[2.0083523 1.2388839 1.8303517 2.769374 ]
class tinyms.primitives.BesselK1[source]

Computes BesselK1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32, float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k1 = ops.BesselK1()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k1(x)
>>> print(output)
[3.9190812  0.8143549  2.9440577 10.974864]
class tinyms.primitives.BesselK1e[source]

Computes BesselK1e of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32, float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_k1e = ops.BesselK1e()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = bessel_k1e(x)
>>> print(output)
[ 4.9821286  1.8675754  4.0140023 12.008413 ]
class tinyms.primitives.BesselY0[source]

Computes BesselY0 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_y0 = ops.BesselY0()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_y0(x)
>>> print(output)
[-0.44451873  0.08825696  0.51037567 -0.01694074]
class tinyms.primitives.BesselY1[source]

Computes BesselY1 of input element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. Data type must be float16, float32 or float64.

Outputs:

Tensor, has the same shape as x.

Raises:

TypeError – If x is not a Tensor of float16, float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> bessel_y1 = ops.BesselY1()
>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = bessel_y1(x)
>>> print(output)
[-1.47147239 -0.78121282 -0.10703243  0.39792571]
class tinyms.primitives.Betainc[source]

Calculates the regularized incomplete beta function \(I_{x}(a, b)\). It is defined as the ratio of the incomplete beta function to the complete beta function:

\[I_{x}(a, b)=\frac{B(x ; a, b)}{B(a, b)}\]

where

\[B(x ; a, b)=\int_{0}^{x} t^{a-1}(1-t)^{b-1} dt\]

is the incomplete beta function and

\[B(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1} dt\]

is the complete beta function.

Inputs:
  • a (Tensor) - Peak location of beta distribution. A Tensor of types: float32, float64.

  • b (Tensor) - Spread of the beta distribution. A Tensor, must have the same dtype and shape as a .

  • x (Tensor) - Upper limit of integration of the incomplete beta function. A Tensor, must have the same dtype and shape as a .

Outputs:

A Tensor, has the same dtype and shape as a .

Raises:
  • TypeError – If dtype of a is neither float32 nor float64.

  • TypeError – If the dtype of b or x is not the same as that of a.

  • ValueError – If the shape of b or x is not the same as that of a.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([0.3, 0.1, 0.4]), mindspore.float32)
>>> b = Tensor(np.array([0.4, 0.5, 0.9]), mindspore.float32)
>>> x = Tensor(np.array([0.2, 0.6, 0.5]), mindspore.float32)
>>> betainc = ops.Betainc()
>>> print(betainc(a, b, x))
[0.41462693 0.8706035  0.7298298 ]
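
Assuming SciPy is available, scipy.special.betainc(a, b, x) computes the same regularized incomplete beta function and can serve as a reference (values rounded for comparison):

>>> import numpy as np
>>> from scipy import special
>>> a = np.array([0.3, 0.1, 0.4])
>>> b = np.array([0.4, 0.5, 0.9])
>>> x = np.array([0.2, 0.6, 0.5])
>>> print(np.round(special.betainc(a, b, x), 4))
[0.4146 0.8706 0.7298]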
class tinyms.primitives.BiasAdd(data_format='NCHW')[source]

Returns the sum of the input Tensor and the bias Tensor. Before adding, the bias Tensor will be broadcasted to be consistent with the shape of the input Tensor.

Parameters:

data_format (str) – The format of input and output data. It should be ‘NHWC’, ‘NCHW’ or ‘NCDHW’. Default is ‘NCHW’.

Inputs:
  • input_x (Tensor) - The input tensor. The shape can be 2-5 dimensions.

  • bias (Tensor) - The bias tensor, with shape \((C)\). C must be the same as channel dimension C of input_x.

Outputs:

Tensor, with the same shape and data type as input_x.

Raises:
  • TypeError – If data_format is not a str.

  • ValueError – If value of data_format is not in the range of [‘NHWC’,’NCHW’,’NCDHW’].

  • TypeError – If input_x or bias is not a Tensor.

  • TypeError – If dtype of input_x or bias is neither float16 nor float32.

  • TypeError – If dtype of input_x or bias is inconsistent.

  • TypeError – If dimension of input_x is not in the range [2, 5].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3,)), mindspore.float32)
>>> bias_add = ops.BiasAdd()
>>> output = bias_add(input_x, bias)
>>> print(output.shape)
(2, 3)
class tinyms.primitives.BinaryCrossEntropy(reduction='mean')[source]

Computes the binary cross entropy between the logits and the labels.

Sets logits as \(x\), labels as \(y\), output as \(\ell(x, y)\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

In which, \(L\) indicates the loss of all batch_sizes, \(l\) indicates the loss of one batch_size, and n indicates one batch_size in the 1-N range, \(w_n\) indicates the weight of \(n\)-th batch of binary cross entropy. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Warning

  • The value of \(x\) must range from 0 to 1.

Parameters:

reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’ or ‘sum’. Default: ‘mean’.

Inputs:
  • logits (Tensor) - The predictive value whose data type must be float16 or float32, The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • labels (Tensor) - The target value which has the same shape and data type as logits.

  • weight (Tensor, optional) - A rescaling weight applied to the loss of each batch element. And it must have the same shape and data type as logits. Default: None.

Outputs:

Tensor or Scalar. Returns Tensor that has the same dtype and shape as logits if reduction is ‘none’. Otherwise, returns a scalar Tensor.

Raises:
  • TypeError – If dtype of logits, labels or weight (if given) is neither float16 nor float32.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

  • ValueError – If shape of labels is not the same as logits or weight (if given).

  • TypeError – If logits, labels or weight is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.binary_cross_entropy = ops.BinaryCrossEntropy()
...     def construct(self, logits, labels, weight):
...         result = self.binary_cross_entropy(logits, labels, weight)
...         return result
...
>>> net = Net()
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = net(logits, labels, weight)
>>> print(output)
0.38240486
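
The example's scalar result follows directly from the weighted formula above (a NumPy sketch for verification):

>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1])    # logits, already in (0, 1)
>>> y = np.array([0., 1., 0.])
>>> w = np.array([1., 2., 2.])
>>> L = -w * (y * np.log(x) + (1 - y) * np.log(1 - x))
>>> print(np.round(L.mean(), 6))     # 'mean' reduction
0.382405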
class tinyms.primitives.Bincount[source]

Counts the number of occurrences of each value in an integer array.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • array (Tensor) - A Tensor of type int32, whose value can not be less than zero.

  • size (Tensor) - A non-negative Tensor of type int32.

  • weights (Tensor) - A Tensor with the same shape as array, or a length-0 Tensor, in which case it acts as all weights equal to 1. Must be one of the following types: int32, int64, float32, float64.

Outputs:

A Tensor. Has the same type as weights.

Raises:
  • TypeError – If dtype of array is not int32.

  • TypeError – If dtype of size is not int32.

  • ValueError – If size is negative.

  • ValueError – If weights are empty.

  • ValueError – If size of weights is not zero and the shape of weights is different with the shape of array.

  • TypeError – If dtype of weights is not one of int32, int64, float32, float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> array = Tensor(np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]), mindspore.int32)
>>> size = Tensor(5, mindspore.int32)
>>> weights = Tensor(np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), mindspore.float32)
>>> bincount = ops.Bincount()
>>> bins = bincount(array, size, weights)
>>> print(bins)
[0. 1. 2. 3. 4.]
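
The behaviour mirrors np.bincount with a weights argument and a minimum output length (an illustrative equivalence using the example's data):

>>> import numpy as np
>>> array = np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4])
>>> print(np.bincount(array, weights=np.ones(10), minlength=5))
[0. 1. 2. 3. 4.]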
class tinyms.primitives.BitwiseAnd[source]

Returns bitwise and of two tensors element-wise.

Refer to mindspore.ops.bitwise_and() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_and = ops.BitwiseAnd()
>>> output = bitwise_and(x, y)
>>> print(output)
[ 0  0  1 -1  1  0  1]
class tinyms.primitives.BitwiseOr[source]

Returns bitwise or of two tensors element-wise.

Refer to mindspore.ops.bitwise_or() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_or = ops.BitwiseOr()
>>> output = bitwise_or(x, y)
>>> print(output)
[ 0  1  1 -1 -1  3  3]
class tinyms.primitives.BitwiseXor[source]

Returns bitwise xor of two tensors element-wise.

Refer to mindspore.ops.bitwise_xor() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> y = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> bitwise_xor = ops.BitwiseXor()
>>> output = bitwise_xor(x, y)
>>> print(output)
[ 0  1  0  0 -2  3  2]
class tinyms.primitives.BlackmanWindow(periodic=True, dtype=mindspore.float32)[source]

Blackman window function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.blackman_window() for more details.

Parameters:
  • periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, return a symmetric window. Default: True.

  • dtype (mindspore.dtype, optional) – the desired data type of returned tensor. Only float16, float32 and float64 is allowed. Default: mstype.float32.

Inputs:
  • window_length (Tensor) - The size of returned window, with data type int32, int64. The input data should be an integer with a value in [0, 1000000].

Outputs:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(10, mindspore.int32)
>>> blackman_window = ops.BlackmanWindow(periodic = True, dtype = mindspore.float32)
>>> output = blackman_window(window_length)
>>> print(output)
[-2.9802322e-08  4.0212840e-02  2.0077014e-01  5.0978714e-01
  8.4922993e-01  1.0000000e+00  8.4922981e-01  5.0978690e-01
  2.0077008e-01  4.0212870e-02]
class tinyms.primitives.BoundingBoxDecode(max_shape, means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)[source]

Decodes bounding boxes locations.

The function of the operator is to calculate the offset, and this operator converts the offset into a Bbox, which is used to mark the target in the subsequent images, etc.

Parameters:
  • max_shape (tuple) – The max size limit for decoding box calculation.

  • means (tuple) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • wh_ratio_clip (float) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • deltas (Tensor) - Delta of boxes. Which has the same shape with anchor_box.

Outputs:

Tensor, decoded boxes. It has the same data type and shape as anchor_box.

Raises:
  • TypeError – If means, stds or max_shape is not a tuple.

  • TypeError – If wh_ratio_clip is not a float.

  • TypeError – If anchor_box or deltas is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[4, 1, 2, 1], [2, 2, 2, 3]], mindspore.float32)
>>> deltas = Tensor([[3, 1, 2, 2], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_decode = ops.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0),
...                                          max_shape=(768, 1280), wh_ratio_clip=0.016)
>>> output = boundingbox_decode(anchor_box, deltas)
>>> print(output)
[[ 4.1953125  0.         0.         5.1953125]
 [ 2.140625   0.         3.859375  60.59375  ]]
class tinyms.primitives.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))[source]

Encodes bounding boxes locations.

This operator will calculate the offset between the predicted bounding boxes and the real bounding boxes, and this offset will be used as a variable for the loss.

Parameters:
  • means (tuple) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Inputs:
  • anchor_box (Tensor) - Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • groundtruth_box (Tensor) - Ground truth boxes, with the same shape as anchor_box.

Outputs:

Tensor, encoded bounding boxes. It has the same data type and shape as input anchor_box.

Raises:
  • TypeError – If means or stds is not a tuple.

  • TypeError – If anchor_box or groundtruth_box is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]], mindspore.float32)
>>> groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]], mindspore.float32)
>>> boundingbox_encode = ops.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
>>> output = boundingbox_encode(anchor_box, groundtruth_box)
>>> print(output)
[[ -1.  0.25  0.  0.40551758]
 [ -1.  0.25  0.  0.40551758]]
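
The output above can be reproduced with a NumPy sketch of the standard R-CNN-style delta encoding, assuming corner-format boxes (x0, y0, x1, y1) and a +1 width/height convention; the tiny difference in the last value comes from the operator's lower internal precision:

>>> # assumed encoding: center/size deltas normalized by anchor size
>>> a, g = np.array([2., 2., 2., 3.]), np.array([1., 2., 1., 4.])
>>> aw, ah = a[2] - a[0] + 1, a[3] - a[1] + 1
>>> gw, gh = g[2] - g[0] + 1, g[3] - g[1] + 1
>>> acx, acy = a[0] + 0.5 * (aw - 1), a[1] + 0.5 * (ah - 1)
>>> gcx, gcy = g[0] + 0.5 * (gw - 1), g[1] + 0.5 * (gh - 1)
>>> print((gcx - acx) / aw, (gcy - acy) / ah, np.log(gw / aw), np.log(gh / ah))
-1.0 0.25 0.0 0.4054651081081644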
class tinyms.primitives.Broadcast(root_rank, group='hccl_world_group')[source]

Broadcasts the tensor to the whole group.

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters:
  • root_rank (int) – Source rank. Required in all processes except the one that is sending the data.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (tuple[Tensor]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], the Tensor has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the data of the root_rank device.

Raises:

TypeError – If root_rank is not an integer or group is not a string.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi. Please see the GPU tutorial.

This example should be run with multiple devices.

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.broadcast = ops.Broadcast(1)
...
...     def construct(self, x):
...         return self.broadcast((x,))
...
>>> input_x = Tensor(np.ones([2, 4]).astype(np.int32))
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1],
 [1, 1, 1, 1]]),)
class tinyms.primitives.BroadcastTo(shape)[source]

Broadcasts input tensor to a given shape.

Refer to mindspore.ops.broadcast_to() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 3)
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> output = ops.BroadcastTo(shape=shape)(x)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>>
>>> shape = (-1, 2)
>>> x = Tensor(np.array([[1], [2]]).astype(np.float32))
>>> output = ops.BroadcastTo(shape=shape)(x)
>>> print(output)
[[1. 1.]
 [2. 2.]]
class tinyms.primitives.Bucketize(boundaries)[source]

Bucketizes input based on boundaries.

Parameters:

boundaries (list[float]) – A sorted list of floats giving the boundaries of the buckets. No default value.

Inputs:
  • input (Tensor) - A tensor containing the search value(s).

Outputs:

Tensor, with the same shape as the input, and data type is int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Bucketize(nn.Cell):
...     def __init__(self, boundaries):
...         super().__init__()
...         self.bucketize = ops.Bucketize(boundaries=boundaries)
...     def construct(self, input):
...         return self.bucketize(input)
>>> input = Tensor(np.array([[3, 6, 9], [3, 6, 9]]).astype(np.int32))
>>> boundaries = list(np.array([1., 3., 5., 7., 9.]))
>>> net = Bucketize(boundaries)
>>> output = net(input)
>>> print(output)
[[2 3 5]
 [2 3 5]]
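
Judging from this output, the bucket index counts the boundaries that are less than or equal to each value, i.e. a right-sided binary search; a NumPy sketch of that reading:

>>> print(np.searchsorted([1., 3., 5., 7., 9.], [[3, 6, 9], [3, 6, 9]], side='right'))
[[2 3 5]
 [2 3 5]]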
class tinyms.primitives.BufferAppend(capacity, buffer_shape, buffer_dtype)[source]

In reinforcement learning, experience data is collected at each step. BufferAppend pushes data to the tail of the buffer under a First-In-First-Out rule.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • buffer_shape (tuple(shape)) – The shape of each tensor in the buffer.

  • buffer_dtype (tuple(type)) – The data type of each tensor in the buffer.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • exp (tuple(Parameter(Tensor))) - The tuple(Tensor) represents one list of experience data; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The count means the real available size of the buffer, data type: int32.

  • head (Parameter) - The position of the first data in buffer, data type: int32.

Outputs:

None.

Raises:
  • ValueError – If count or head is not an integer.

  • ValueError – If capacity is not a positive integer.

  • ValueError – If length of data is not equal to length of exp.

  • ValueError – If the dim of data equals the dim of exp, but the shape of data[1:] does not match the shape in exp.

  • ValueError – If the shape of data[1:] is not equal to the shape in exp.

  • TypeError – If the type in exp is not the same with data.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> exp = [Tensor(np.array([2, 2, 2, 2]), ms.float32), Tensor(np.array([0, 0]), ms.int32),
...        Tensor(np.array([0]), ms.int32), Tensor(np.array([3, 3, 3, 3]), ms.float32)]
>>> batch_exp = [Tensor(np.array([[2, 2, 2, 2], [2, 2, 2, 2]]), ms.float32),
...              Tensor(np.array([[0, 0], [0, 0]]), ms.int32),
...              Tensor(np.array([[0], [0]]), ms.int32),
...              Tensor(np.array([[3, 3, 3, 3], [3, 3, 3, 3]]), ms.float32)]
>>> buffer_append = ops.BufferAppend(capacity, shapes, types)
>>> buffer_append(buffer, exp, count, head)
>>> buffer_append(buffer, batch_exp, count, head)
class tinyms.primitives.BufferGetItem(capacity, buffer_shape, buffer_dtype)[source]

Gets the data from the buffer at the position of the input index.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • buffer_shape (tuple(shape)) – The shape of each tensor in the buffer.

  • buffer_dtype (tuple(type)) – The data type of each tensor in the buffer.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The count means the real available size of the buffer, data type: int32.

  • head (Parameter) - The position of the first data in buffer, data type: int32.

  • index (int64) - The position of the data in buffer.

Outputs:

tuple(Tensor). The shape is buffer_shape. The dtype is buffer_dtype.

Raises:
  • ValueError – If count or head is not an integer.

  • ValueError – If capacity is not a positive integer.

  • TypeError – If buffer_shape is not a tuple.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> index = 3
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> buffer_get = ops.BufferGetItem(capacity, shapes, types)
>>> output = buffer_get(buffer, count, head, index)
>>> print(output)
    (Tensor(shape=[4], dtype=Float32, value=
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01]),
     Tensor(shape=[2], dtype=Int32, value= [6, 7]),
     Tensor(shape=[1], dtype=Int32, value= [1]),
     Tensor(shape=[4], dtype=Float32, value=
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01]))
class tinyms.primitives.BufferSample(capacity, batch_size, buffer_shape, buffer_dtype, seed=0, unique=False)[source]

In reinforcement learning, data is randomly sampled from the replay buffer.

Returns a tuple of tensors with the given shape, decided by the given batch_size.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • capacity (int64) – Capacity of the buffer, must be non-negative.

  • batch_size (int64) – The size of the sampled data, less than or equal to capacity.

  • buffer_shape (tuple(shape)) – The shape of each tensor in the buffer.

  • buffer_dtype (tuple(type)) – The data type of each tensor in the buffer.

  • seed (int64) – Random seed for sampling. Default: 0. If the default seed 0 is used, a random one is generated in the kernel. Set a number other than 0 to fix a specific seed.

  • unique (bool) – Whether the sampled data is strictly unique. Setting it to False yields better performance. Default: False.

Inputs:
  • data (tuple(Parameter(Tensor))) - The tuple(Tensor) represents the replay buffer; each tensor is described by buffer_shape and buffer_dtype.

  • count (Parameter) - The count means the real available size of the buffer, data type: int32.

  • head (Parameter) - The position of the first data in buffer, data type: int32.

Outputs:

tuple(Tensor). The shape is batch_size * buffer_shape. The dtype is buffer_dtype.

Raises:
  • TypeError – If buffer_shape is not a tuple.

  • ValueError – If batch_size is larger than capacity.

  • ValueError – If capacity is not a positive integer.

Supported Platforms:

GPU CPU

Examples

>>> capacity = 100
>>> batch_size = 5
>>> count = Parameter(Tensor(5, ms.int32), name="count")
>>> head = Parameter(Tensor(0, ms.int32), name="head")
>>> shapes = [(4,), (2,), (1,), (4,)]
>>> types = [ms.float32, ms.int32, ms.int32, ms.float32]
>>> buffer = [Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="states"),
...           Parameter(Tensor(np.arange(100 * 2).reshape(100, 2).astype(np.int32)), name="action"),
...           Parameter(Tensor(np.ones((100, 1)).astype(np.int32)), name="reward"),
...           Parameter(Tensor(np.arange(100 * 4).reshape(100, 4).astype(np.float32)), name="state_")]
>>> buffer_sample = ops.BufferSample(capacity, batch_size, shapes, types)
>>> output = buffer_sample(buffer, count, head)
>>> print(output)
    (Tensor(shape=[5, 4], dtype=Float32, value=
        [[ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
        [ 8.00000000e+00, 9.00000000e+00, 1.00000000e+01, 1.10000000e+01],
        [ 1.60000000e+01, 1.70000000e+01, 1.80000000e+01, 1.90000000e+01],
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01],
        [ 3.20000000e+01, 3.30000000e+01, 3.40000000e+01, 3.50000000e+01]]),
     Tensor(shape=[5, 2], dtype=Int32, value=
        [[ 0, 1],
        [ 4, 5],
        [ 8, 9],
        [ 6, 7],
        [16, 17]]),
     Tensor(shape=[5, 1], dtype=Int32, value=
        [[1],
        [1],
        [1],
        [1],
        [1]]),
     Tensor(shape=[5, 4], dtype=Float32, value=
        [[ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, 3.00000000e+00],
        [ 8.00000000e+00, 9.00000000e+00, 1.00000000e+01, 1.10000000e+01],
        [ 1.60000000e+01, 1.70000000e+01, 1.80000000e+01, 1.90000000e+01],
        [ 1.20000000e+01, 1.30000000e+01, 1.40000000e+01, 1.50000000e+01],
        [ 3.20000000e+01, 3.30000000e+01, 3.40000000e+01, 3.50000000e+01]]))
class tinyms.primitives.CTCGreedyDecoder(merge_repeated=True)[source]

Performs greedy decoding on the logits given in inputs.

Refer to mindspore.ops.ctc_greedy_decoder() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = Tensor(np.array([[[0.6, 0.4, 0.2], [0.8, 0.6, 0.3]],
...                           [[0.0, 0.6, 0.0], [0.5, 0.4, 0.5]]]), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> decoded_indices, decoded_values, decoded_shape, log_probability = ops.CTCGreedyDecoder()(inputs,
...                                                                                          sequence_length)
>>> print(decoded_indices)
[[0 0]
 [0 1]
 [1 0]]
>>> print(decoded_values)
[0 1 0]
>>> print(decoded_shape)
[2 2]
>>> print(log_probability)
[[-1.2]
 [-1.3]]
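
A hedged sketch of the greedy rule behind these results, assuming the blank label is num_classes - 1 and merge_repeated=True: take the argmax class at each time step, merge consecutive repeats, then drop blanks.

>>> logits = inputs.asnumpy()  # shape (max_time, batch_size, num_classes)
>>> for b in range(logits.shape[1]):
...     path = logits[:, b, :].argmax(axis=-1)
...     print([int(p) for i, p in enumerate(path) if (i == 0 or p != path[i - 1]) and p != 2])
[0, 1]
[0]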
class tinyms.primitives.CTCLoss(preprocess_collapse_repeated=False, ctc_merge_repeated=True, ignore_longer_outputs_than_inputs=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

Under the hood, this interface calls the third-party implementation baidu-research::warp-ctc. The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

CTCLoss calculates loss between a continuous time series and a target sequence. CTCLoss sums over the probability of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, such that the length of target series must be less than or equal to the length of input.

Parameters:
  • preprocess_collapse_repeated (bool) – If true, repeated labels will be collapsed prior to the CTC calculation. Default: False.

  • ctc_merge_repeated (bool) – If false, during CTC calculation, repeated non-blank labels will not be merged and these labels will be interpreted as individual ones. This is a simplified version of CTC. Default: True.

  • ignore_longer_outputs_than_inputs (bool) – If true, sequences with longer outputs than inputs will be ignored. Default: False.

Inputs:
  • x (Tensor) - The input Tensor must be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_classes)\). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float16, float32 or float64.

  • labels_indices (Tensor) - The indices of labels. labels_indices[i, :] = [b, t] means labels_values[i] stores the id for (batch b, time t). The type must be int64 and rank must be 2.

  • labels_values (Tensor) - A 1-D input tensor. The values are associated with the given batch and time. The type must be int32. labels_values[i] must be in the range of [0, num_classes).

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of \((batch\_size, )\). The type must be int32. Each value in the tensor must not be greater than max_time.

Outputs:
  • loss (Tensor) - A tensor containing log-probabilities, the shape is \((batch\_size, )\). The tensor has the same data type as x.

  • gradient (Tensor) - The gradient of loss, has the same shape and data type as x.

Raises:
  • TypeError – If preprocess_collapse_repeated, ctc_merge_repeated or ignore_longer_outputs_than_inputs is not a bool.

  • TypeError – If x, labels_indices, labels_values or sequence_length is not a Tensor.

  • ValueError – If rank of labels_indices is not equal to 2.

  • TypeError – If dtype of x is not one of the following: float16, float32, float64.

  • TypeError – If dtype of labels_indices is not int64.

  • TypeError – If dtype of labels_values or sequence_length is not int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[0.3, 0.6, 0.6],
...                       [0.4, 0.3, 0.9]],
...
...                      [[0.9, 0.4, 0.2],
...                       [0.9, 0.9, 0.1]]]).astype(np.float32))
>>> labels_indices = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int64)
>>> labels_values = Tensor(np.array([2, 2]), mindspore.int32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> ctc_loss = ops.CTCLoss()
>>> loss, gradient = ctc_loss(x, labels_indices, labels_values, sequence_length)
>>> print(loss)
[ 0.79628  0.5995158 ]
>>> print(gradient)
[[[ 0.27029088  0.36485454  -0.6351454  ]
  [ 0.28140804  0.25462854  -0.5360366 ]]
 [[ 0.47548494  0.2883962    0.04510255 ]
  [ 0.4082751   0.4082751    0.02843709 ]]]
class tinyms.primitives.CTCLossV2(blank=0, reduction='none', zero_infinity=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • blank (int, optional) – The blank label. Default: 0.

  • reduction (str, optional) – Apply specific reduction method to the output. Currently only support ‘none’, not case sensitive. Default: “none”.

  • zero_infinity (bool, optional) – If loss is infinite, this parameter determines whether to set that loss and its correlated gradient to zero. Default: False.

Inputs:
  • log_probs (Tensor) - A tensor of shape \((T, N, C)\), where \(T\) is input length, \(N\) is batch size and \(C\) is number of classes (including blank).

  • targets (Tensor) - A tensor of shape \((N, S)\), where \(S\) is max target length, means the target sequences.

  • input_lengths (Union(Tuple, Tensor)) - A tuple or Tensor of shape \((N)\). It means the lengths of the input.

  • target_lengths (Union(Tuple, Tensor)) - A tuple or Tensor of shape \((N)\). It means the lengths of the target.

Outputs:
  • neg_log_likelihood (Tensor) - A loss value which is differentiable with respect to each input node.

  • log_alpha (Tensor) - The probability of possible trace of input to target.

Raises:
  • TypeError – If zero_infinity is not a bool.

  • TypeError – If reduction is not a string.

  • TypeError – If the dtype of log_probs is not float or double.

  • TypeError – If the dtype of targets, input_lengths or target_lengths is not int32 or int64.

  • ValueError – If the rank of log_probs is not 3.

  • ValueError – If the rank of targets is not 2.

  • ValueError – If the shape of input_lengths does not match batch_size \(N\).

  • ValueError – If the shape of target_lengths does not match batch_size \(N\).

  • TypeError – If the types of targets, input_lengths or target_lengths are different.

  • ValueError – If the value of blank is not in range [0, num_labels|C).

  • RuntimeError – If any value of input_lengths is larger than (num_labels|C).

  • RuntimeError – If any target_lengths[i] is not in range [0, input_length[i]].

Supported Platforms:

Ascend GPU CPU

Examples

>>> log_probs = Tensor(np.array([[[0.3, 0.6, 0.6]],
...                              [[0.9, 0.4, 0.2]]]).astype(np.float32))
>>> targets = Tensor(np.array([[0, 1]]), mstype.int32)
>>> input_lengths = Tensor(np.array([2]), mstype.int32)
>>> target_lengths = Tensor(np.array([1]), mstype.int32)
>>> CTCLossV2 = ops.CTCLossV2(blank=0, reduction='none', zero_infinity=False)
>>> neg_log_hood, log_alpha = CTCLossV2(
...     log_probs, targets, input_lengths, target_lengths)
>>> print(neg_log_hood)
[-2.2986124]
>>> print(log_alpha)
[[[0.3       0.3            -inf      -inf      -inf]
  [1.2       1.8931472 1.2            -inf      -inf]]]
class tinyms.primitives.Cast[source]

Returns a tensor with the new specified data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The tensor to be cast.

  • type (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is the same as input_x, \((x_1, x_2, ..., x_R)\).

Raises:
  • TypeError – If input_x is neither Tensor nor Number.

  • TypeError – If type is not a Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input_x = Tensor(input_np)
>>> type_dst = mindspore.int32
>>> cast = ops.Cast()
>>> output = cast(input_x, type_dst)
>>> print(output.dtype)
Int32
>>> print(output.shape)
(2, 3, 4, 5)
class tinyms.primitives.Cauchy(size, median=0.0, sigma=1.0)[source]

Creates a tensor of shape size with random numbers drawn from the Cauchy distribution. It is defined as follows:

\[f(x)= \frac{1}{\pi} \frac{\sigma}{(x-median)^2 +\sigma^2}\]
Parameters:
  • size (list[int]) – The size of tensor.

  • sigma (float, optional) – the scale parameter, specifying the half-width at half-maximum. Default: 1.0.

  • median (float, optional) – the location parameter, specifying the location of the peak of the distribution. Default: 0.0.

Outputs:

Tensor with Cauchy-distributed data. The tensor shape is size, and the data type is float32.

Supported Platforms:

Ascend CPU

Examples

>>> size = [1]
>>> net = ops.Cauchy(size)
>>> y = net()
>>> print(y)
[0.03128606]
class tinyms.primitives.Cdist(p=2.0)[source]

Computes the batched p-norm distance between each pair of row vectors in the two collections.

Refer to mindspore.ops.cdist() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[[1.0, 1.0], [2.0, 2.0]]]).astype(np.float32))
>>> input_y = Tensor(np.array([[[3.0, 3.0], [3.0, 3.0]]]).astype(np.float32))
>>> op = ops.Cdist(p=2.0)
>>> output = op(input_x, input_y)
>>> print(output)
[[[2.8284273 2.8284273]
  [1.4142137 1.4142137]]]
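
An equivalent NumPy sketch of the same batched pairwise 2-norm, built by broadcasting the row-vector differences:

>>> a = np.array([[[1.0, 1.0], [2.0, 2.0]]])
>>> b = np.array([[[3.0, 3.0], [3.0, 3.0]]])
>>> print(np.linalg.norm(a[:, :, None, :] - b[:, None, :, :], ord=2, axis=-1))
[[[2.82842712 2.82842712]
  [1.41421356 1.41421356]]]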
class tinyms.primitives.CeLU(alpha=1.0)[source]

Computes CeLU (Continuously differentiable exponential linear units) of input tensors element-wise.

Refer to mindspore.ops.celu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
>>> celu = ops.CeLU(alpha=1.0)
>>> output = celu(input_x)
>>> print(output)
[-0.86466473 -0.63212055  1.          2.        ]
class tinyms.primitives.Ceil[source]

Rounds a tensor up to the closest integer element-wise.

Refer to mindspore.ops.ceil() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> ceil_op = ops.Ceil()
>>> output = ceil_op(x)
>>> print(output)
[ 2.  3. -1.]
class tinyms.primitives.ChannelShuffle(group)[source]

Divides the channels in a tensor of shape \((*, C, H, W)\) into \(g\) groups and rearranges them as \((*, \frac{C}{g}, g, H \times W)\), while keeping the original tensor shape in the final output.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.channel_shuffle() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> group = 2
>>> x = Tensor(np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2).astype(np.int16))
>>> channel_shuffle_func = ops.ChannelShuffle(group)
>>> y = channel_shuffle_func(x)
>>> print(y)
[[[[ 0  1]
   [ 2  3]]
  [[ 8  9]
   [10 11]]
  [[ 4  5]
   [ 6  7]]
  [[12 13]
   [14 15]]]]
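
The same shuffle can be sketched in NumPy with the usual reshape/transpose formulation (an assumption about the underlying layout, checked against the output above):

>>> a = np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2)
>>> n, c, h, w = a.shape
>>> ref = a.reshape(n, group, c // group, h, w).swapaxes(1, 2).reshape(n, c, h, w)
>>> print((ref == y.asnumpy()).all())
True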
class tinyms.primitives.CheckNumerics[source]

Checks a tensor for NaN and Inf values. A runtime error is raised if input has NaN or Inf values.

Inputs:
  • x (Tensor) - Input Tensor of any dimension. The data type is float16, float32 or float64.

Outputs:

Tensor, has the same shape and data type as x if x has no NaN or Inf values.

Raises:
  • TypeError – If the data type of x is not float16, float32 or float64.

  • RuntimeError – If x has NaN or Inf values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 3], [2, 4]], dtype=np.float32))
>>> checknumerics = ops.CheckNumerics()
>>> output = checknumerics(x)
>>> print(output)
[[1. 3.]
 [2. 4.]]
class tinyms.primitives.CheckValid[source]

Checks bounding box.

Checks whether the bounding boxes specified by bboxes are valid. Returns True if a box is within the borders specified by img_metas, False if not.

Inputs:
  • bboxes (Tensor) - Bounding boxes tensor with shape \((N, 4)\). \(N\) indicates the number of bounding boxes, and the value “4” indicates “x0”, “y0”, “x1”, and “y1”. Data type must be float16 or float32.

  • img_metas (Tensor) - Raw image size information with the format of \((height, width, ratio)\), specifying the valid boundary \((height * ratio, width * ratio)\). Data type must be float16 or float32.

Outputs:

Tensor, with shape of \((N,)\) and dtype of bool, specifying whether the bounding boxes are in the image. “True” indicates valid, while “False” indicates invalid.

Raises:
  • TypeError – If bboxes or img_metas is not a Tensor.

  • TypeError – If dtype of bboxes or img_metas is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.check_valid = ops.CheckValid()
...     def construct(self, x, y):
...         valid_result = self.check_valid(x, y)
...         return valid_result
...
>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> net = Net()
>>> output = net(bboxes, img_metas)
>>> print(output)
[ True False False]
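
A hedged NumPy sketch of the validity rule inferred from this output: a box is valid when it lies within [0, width * ratio - 1] horizontally and [0, height * ratio - 1] vertically.

>>> b = bboxes.asnumpy()
>>> h, w, r = img_metas.asnumpy()
>>> print((b[:, 0] >= 0) & (b[:, 1] >= 0) & (b[:, 2] <= w * r - 1) & (b[:, 3] <= h * r - 1))
[ True False False]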
class tinyms.primitives.Cholesky(upper=False)[source]

Performs the Cholesky decomposition on a single or a batch of symmetric positive-definite matrices.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.cholesky() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 1.0], [1.0, 2.0]]), mindspore.float32)
>>> cholesky = ops.Cholesky(upper=False)
>>> output = cholesky(input_x)
>>> print(output)
[[1. 0.]
 [1. 1.]]
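
A quick check that the returned lower-triangular factor reproduces the input, i.e. L @ L.T equals input_x:

>>> L = output.asnumpy()
>>> print(L @ L.T)
[[1. 1.]
 [1. 2.]]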
class tinyms.primitives.CholeskyInverse(upper=False)[source]

Returns the inverse of a positive definite matrix, given its Cholesky factor.

Refer to mindspore.ops.cholesky_inverse() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([[2,0,0], [4,1,0], [-1,1,2]]), mindspore.float32)
>>> net = ops.CholeskyInverse()
>>> y = net(x)
>>> print(y)
[[ 5.8125 -2.625   0.625 ]
 [-2.625   1.25   -0.25  ]
 [ 0.625  -0.25    0.25  ]]
class tinyms.primitives.CholeskySolve(upper=False)[source]

Computes the solution c of a set of linear equations with a positive definite matrix, given its Cholesky decomposition factor u.

If upper is set to True, u is upper triangular and c is returned such that:

\[c = (u^{T}u)^{{-1}}b\]

If upper is set to False, u is lower triangular and c is returned such that:

\[c = (uu^{T})^{{-1}}b\]
Parameters:

upper (bool, optional) – A flag indicates whether to treat the Cholesky factor as an upper or a lower triangular matrix. Default: False.

Inputs:
  • x1 (Tensor) - Tensor of shape \((*, N, M)\), indicating 2D or 3D matrices, with float32 or float64 data type.

  • x2 (Tensor) - Tensor of shape \((*, N, N)\), indicating 2D or 3D square matrices composed of upper or lower triangular Cholesky factor, with float32 or float64 data type. x1 and x2 must have the same type.

Outputs:

Tensor, has the same shape and data type as x1.

Raises:
  • TypeError – If upper is not a bool.

  • TypeError – If dtype of x1 and x2 is not one of: float64, float32.

  • TypeError – If x1 is not a Tensor.

  • TypeError – If x2 is not a Tensor.

  • ValueError – If x1 and x2 have different batch size.

  • ValueError – If x1 and x2 have different row numbers.

  • ValueError – If x1 is not 2D or 3D matrices.

  • ValueError – If x2 is not 2D or 3D square matrices.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), mindspore.float32)
>>> x2 = Tensor(np.array([[2, 0, 0], [4, 1, 0], [-1, 1, 2]]), mindspore.float32)
>>> net = ops.CholeskySolve()
>>> y = net(x1, x2)
>>> print(y)
[[ 5.8125 -2.625   0.625 ]
 [-2.625   1.25   -0.25  ]
 [ 0.625  -0.25    0.25  ]]
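
Since x1 is the identity here, the result is simply \((uu^{T})^{-1}\) for the lower-triangular factor u = x2, which a NumPy cross-check confirms:

>>> u = x2.asnumpy()
>>> print(np.allclose(np.linalg.inv(u @ u.T), y.asnumpy(), atol=1e-4))
True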
class tinyms.primitives.Coalesce[source]

Returns the coalesced sparse tensor of the input.

Inputs:
  • x_indices (Tensor) - A 2-D Tensor, represents the indices of the nonzero elements of the sparse tensor. Supported data type is int64. Its elements should be non-negative. The shape is \((y, x)\).

  • x_values (Tensor) - A 1-D Tensor, represents the values corresponding to the indices in x_indices. Supported data types are float16 and float32. The shape is \((x,)\).

  • x_shape (Tensor) - A 1-D Tensor, specifies the shape of the sparse tensor. Supported data type is int64. The shape is \((y,)\).

Outputs:
  • y_indices (Tensor) - A 2-D Tensor, represents the indices of the nonzero elements of the sparse tensor. Data type is int64. Its elements are non-negative. The shape is \((y, z)\). z represents the number of different indices in x_indices.

  • y_values (Tensor) - A 1-D Tensor, represents the values corresponding to the indices in y_indices. Data type is the same as x_values’s. The shape is \((z,)\).

  • y_shape (Tensor) - A 1-D Tensor, specifies the shape of the sparse tensor. Data type is int64. The shape is \((y,)\).

Raises:
  • TypeError – If the data type of x_values is neither float32 nor float16.

  • TypeError – If any of the data types of x_indices and x_shape is not int64.

  • ValueError – If any of x_values and x_shape is not a 1-D tensor.

  • ValueError – If x_indices is not a 2-D tensor.

  • ValueError – If sizes of second dimension of x_indices and first dimension of x_values are not the same.

  • ValueError – If sizes of first dimension of x_indices and first dimension of x_shape are not the same.

  • ValueError – If any of the values of elements of x_indices is negative.

  • ValueError – If any of the values of elements of x_indices exceed the limit set by x_shape.

Supported Platforms:

GPU CPU

Examples

>>> x_indices = Tensor([[0, 0, 1], [1, 1, 2]], dtype=mstype.int64)
>>> x_values = Tensor([1, 5, 4], dtype=mstype.float32)
>>> x_shape = Tensor([3, 3], dtype=mstype.int64)
>>> coalesce = ops.Coalesce()
>>> y_indices, y_values, y_shape = coalesce(x_indices, x_values, x_shape)
>>> print(y_indices)
[[0 1]
 [1 2]]
>>> print(y_values)
[6. 4.]
>>> print(y_shape)
[3 3]
class tinyms.primitives.Col2Im(kernel_size, dilation=1, padding=0, stride=1)[source]

Combines an array of sliding local blocks into a large containing tensor. It is usually used to reconstruct an image from a set of image patches (or sliding local blocks).

Consider a batched input tensor containing sliding local blocks, e.g., patches of images, of shape \((N, C, \prod(\text{kernel_size}), L)\), where \(N\) is batch dimension, \(C\) is channel dimension, \(\prod(\text{kernel_size})\) is the block size, and \(L\) is the total number of blocks. This operation combines these local blocks into the large output tensor of shape \((N, C, \text{output_size}[0], \text{output_size}[1], \dots)\) by summing the overlapping values.

\[L = \prod_d \left\lfloor\frac{\text{output_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times (\text{kernel_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor\]

where \(d\) is over all spatial dimensions. The padding, stride and dilation arguments specify how the sliding blocks are retrieved.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two positive ints for height and width. If the type is int, the height equals the width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]], optional) – The size of the dilation, should be two positive ints for height and width. If the type is int, the height equals the width. Default: 1.

  • padding (Union[int, tuple[int], list[int]], optional) – The size of the padding, should be two ints for height and width. If the type is int, the height equals the width. Default: 0.

  • stride (Union[int, tuple[int], list[int]], optional) – The size of the stride, should be two positive ints for height and width. If the type is int, the height equals the width. Default: 1.

Inputs:
  • x (Tensor) - 4D tensor with data type float16 or float32.

  • output_size (Tensor) - 1D tensor with 2 elements of data type int32.

Outputs:

Tensor, a 4-D Tensor with the same type as input x.

Raises:
  • TypeError – If dtype of kernel_size , dilation , padding or stride is not in Union[int, tuple[int], list[int]].

  • ValueError – If values in kernel_size , dilation , padding or stride are not greater than zero or any one of them has more than 2 elements.

  • ValueError – If x.shape[2] != kernel_size[0] * kernel_size[1].

  • ValueError – If x.shape[3] does not match the calculated number of sliding blocks.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)
>>> output_size = Tensor(input_data=[8, 8], dtype=mstype.int32)
>>> col2im = ops.Col2Im(kernel_size=[2, 2], dilation=[2, 2], padding=[2, 2], stride=[2, 2])
>>> y = col2im(x, output_size)
>>> print(y.shape)
(16, 16, 8, 8)
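
A quick arithmetic check of the formula for \(L\) above against this example: per spatial dimension, with output_size 8, padding 2, dilation 2, kernel_size 2 and stride 2,

>>> import math
>>> L_d = math.floor((8 + 2 * 2 - 2 * (2 - 1) - 1) / 2 + 1)
>>> print(L_d * L_d)  # matches x.shape[3] == 25
25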
class tinyms.primitives.CombinedNonMaxSuppression(pad_per_class=False, clip_boxes=True)[source]

Applies a greedy approach to select a subset of bounding boxes from a list of candidates using NonMaxSuppression, where the boxes are sorted in descending order of their confidence score.

Parameters:
  • clip_boxes (bool, optional) –

    Determines whether to apply bounding box normalization to ensure the coordinates are within [0, 1] range. Default: True.

    • If True, clip the boxes that fall outside this range.

    • If False, return the box coordinates as they are without any modifications.

  • pad_per_class (bool, optional) –

    Determines whether the output of the non-maximum suppression (NMS) algorithm should be padded or clipped to meet the maximum size constraints. Default: False.

    • If False, the output is clipped to the maximum size of max_total_size.

    • If True, the output is padded up to max_size_per_class * num_classes and clipped if it exceeds max_total_size.

Inputs:
  • boxes (Tensor) - A float32 Tensor with shape \((batch_size, num_boxes, q, 4)\) representing the bounding box coordinates. q indicates mapping relationship between boxes and classes. If q is 1, all classes use the same bounding box. If q is equal to the number of classes, class-specific boxes are applied.

  • scores (Tensor) - A 3-D Tensor of float32 type with the shape \((batch_size, num_boxes, num_classes)\). It contains a score value for each box, with each row of boxes represented by a single score.

  • max_output_size_per_class (Tensor) - The maximum number of boxes that can be selected for each class by the non-maximum suppression algorithm, represented by a scalar Tensor of type int32.

  • max_total_size (Tensor) - A scalar Tensor of type int32 that represents the maximum number of boxes that are kept for all classes.

  • iou_threshold (Tensor) - A scalar Tensor of float32 type that represents the threshold for determining if the IOU overlap between boxes is too high. iou_threshold must be equal or greater than 0 and be equal or smaller than 1.

  • score_threshold (Tensor) - A scalar Tensor of type float32 that represents the threshold for determining when to remove boxes based on their scores.

Outputs:
  • nmsed_boxes - A Tensor of float32 with shape of (batch_size, num_detection, 4), which contains the non-max suppressed boxes.

  • nmsed_scores - A Tensor of float32 with shape of (batch_size, num_detection), which contains score of boxes.

  • nmsed_classes - A Tensor of float32 with shape of (batch_size, num_detection), which contains classes of boxes.

  • valid_detections - A Tensor of int32 with shape of (batch_size,), which indicates the number of valid detections of each batch.

Raises:
  • TypeError – If the dtype of boxes, scores, iou_threshold or score_threshold is not float32.

  • TypeError – If the dtype of max_output_size_per_class or max_total_size is not int32.

  • ValueError – If boxes is not 4D.

  • ValueError – If max_output_size_per_class, max_total_size, iou_threshold or score_threshold is not 0D.

  • ValueError – If scores is not 3D.

  • ValueError – If shape[0] or shape[1] of boxes is not the same as that of scores.

  • ValueError – If shape[2] of boxes is not the same as shape[2] of scores or 1.

  • ValueError – If max_total_size < 0.

  • ValueError – If max_output_size_per_class < 0.

  • ValueError – If iou_threshold is not in [0, 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> boxes = Tensor(np.array([[[[200, 100, 150, 100]],
...                           [[220, 120, 150, 100]],
...                           [[190, 110, 150, 100]],
...                           [[210, 112, 150, 100]]]])).astype('float32')
>>> scores = Tensor(np.array([[[0.2000, 0.7000, 0.1000], [0.1000, 0.8000, 0.1000], [0.3000, 0.6000, 0.1000],
...                            [0.0500, 0.9000, 0.0500]]])).astype('float32')
>>> max_output_size_per_class = Tensor(4, mstype.int32)
>>> max_total_size = Tensor(1, mstype.int32)
>>> iou_threshold = Tensor(0, mstype.float32)
>>> score_threshold = Tensor(0, mstype.float32)
>>> net = ops.CombinedNonMaxSuppression()
>>> out = net(boxes, scores, max_output_size_per_class, max_total_size, iou_threshold, score_threshold)
>>> print(out)
(Tensor(shape=[1, 1, 4], dtype=Float32, value= [[[1.00000000e+00, 1.00000000e+00, 1.00000000e+00,
                                                  1.00000000e+00]]]),
Tensor(shape=[1, 1], dtype=Float32, value= [[ 8.99999976e-01]]),
Tensor(shape=[1, 1], dtype=Float32, value= [[ 1.00000000e+00]]),
Tensor(shape=[1], dtype=Int32, value= [1]))
class tinyms.primitives.CompareAndBitpack[source]

Compares values of x to threshold and packs the resulting bits into a uint8.

Each comparison returns true if x_value > threshold, and false otherwise.

Given an x shaped \((s_0, s_1, ..., s_n)\), the output is a uint8 Tensor shaped \((s_0, s_1, ..., s_n / 8)\).

Inputs:
  • x (Tensor) - Input tensor. Values to compare against threshold and bitpack. The data type must be bool, float16, float32, float64, int8, int16, int32, int64. Note: Currently, the innermost dimension of the tensor must be divisible by 8.

  • threshold (Tensor) - A 0D Tensor, whose data type is the same as x.

Outputs:

Tensor, has the uint8 type.

Raises:
  • TypeError – If x or threshold is not a Tensor.

  • TypeError – If the dtype of ‘x’ is not one of: bool, float16, float32, float64, int8, int16, int32, int64.

  • TypeError – If threshold’s type is not the same as x’s.

  • ValueError – If threshold is not a 0D Tensor.

  • ValueError – If x is a 0D Tensor.

  • ValueError – If the innermost dimension of x’s shape is not divisible by 8.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32)
>>> threshold = Tensor(6, mindspore.float32)
>>> net = ops.CompareAndBitpack()
>>> output = net(x, threshold)
>>> print(output)
[3]
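
The packing order suggested by this output is big-endian within each group of 8: the first element maps to the most significant bit. A NumPy sketch of that reading:

>>> bits = (np.array([1, 2, 3, 4, 5, 6, 7, 8]) > 6).astype(np.uint8)
>>> print(np.packbits(bits))  # big-endian bit order by default
[3]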
class tinyms.primitives.Complex[source]

Returns a complex Tensor from the real part and the imag part.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • real (Tensor) - The real input tensor. types: float32, float64.

  • imag (Tensor) - The imag input tensor. types: float32, float64.

Outputs:

Tensor, has the complex type.

Raises:
  • TypeError – If the dtype of input is not one of: float32, float64.

  • TypeError – If the dtypes of two inputs are not same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> real = Tensor(np.array([1]), mindspore.float32)
>>> imag = Tensor(np.array([2]), mindspore.float32)
>>> complex = ops.Complex()
>>> output = complex(real, imag)
>>> print(output)
[1.+2.j]
class tinyms.primitives.ComplexAbs[source]

Returns a Tensor that contains the magnitudes of the input.

The complex numbers in input must be of the form \(a + bj\), where \(a\) is the real part and \(b\) is the imaginary part.

\[y = \sqrt{a^2+b^2}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - A Tensor, types: complex64, complex128.

Outputs:

Tensor, has the same shape as x. If the type of x is complex64, the type of output is float32. If the type of x is complex128, the type of output is float64.

Raises:
  • TypeError – If the input is not a Tensor.

  • TypeError – If the input type is not complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(3+4j), mindspore.complex64)
>>> complex_abs = ops.ComplexAbs()
>>> output = complex_abs(x)
>>> print(output)
5.0
class tinyms.primitives.ComputeAccidentalHits(num_true=1)[source]

Computes accidental hits of sampled classes which match target classes.

When a target class matches the sample class, we call it an “accidental hit”. The result of calculating accidental hits contains three parts (index, id, weight): index represents the row number in true_classes, id represents the position in sampled_candidates, and the weight is -FLOAT_MAX, where FLOAT_MAX indicates the maximum value of the float32 type.

Parameters:

num_true (int) – The number of target classes per training example. Default: 1.

Inputs:
  • true_classes (Tensor) - The target classes. With data type of int32 or int64 and shape \((batch\_size, num\_true)\).

  • sampled_candidates (Tensor) - The Candidate sampling results of operators, types of training samples, with data type of int32 or int64 and shape \((num\_sampled, )\).

Outputs:

Tuple of 3 Tensors.

  • indices (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the same type as true_classes.

  • ids (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the same type as true_classes.

  • weights (Tensor) - A Tensor with shape \((num\_accidental\_hits, )\), with the type float32.

Raises:
  • TypeError – If dtype of num_true is not int.

  • TypeError – If true_classes or sampled_candidates is not a Tensor.

  • TypeError – If dtype of true_classes or sampled_candidates is neither int32 nor int64.

Supported Platforms:

Ascend

Examples

>>> true_classes = np.array([[1, 2], [0, 4], [3, 3]])
>>> sampled_candidates = np.array([0, 1, 2, 3, 4])
>>> sampler = ops.ComputeAccidentalHits(2)
>>> indices, ids, weights = sampler(Tensor(true_classes), Tensor(sampled_candidates))
>>> print(indices, ids, weights)
[0 0 1 1 2 2]
[1 2 0 4 3 3]
[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
class tinyms.primitives.Concat(axis=0)[source]

Connects tensors along the specified axis.

Refer to mindspore.ops.concat() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> input_x2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> op = ops.Concat()
>>> output = op((input_x1, input_x2))
>>> print(output)
[[0. 1.]
 [2. 1.]
 [0. 1.]
 [2. 1.]]
>>> op = ops.Concat(1)
>>> output = op((input_x1, input_x2))
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]
infer_value(input_x)[source]

Implements Concat infer value.

class tinyms.primitives.ConfusionMatrix(num_classes, dtype='int32')[source]

Calculates the confusion matrix from labels and predictions.

Parameters:
  • num_classes (int) – The number of classes.

  • dtype (str) – Data type of confusion matrix. Default: ‘int32’.

Inputs:
  • labels (Tensor) - Real labels, a 1-D tensor. The dtype must be a non-negative integer.

  • predictions (Tensor) - The labels from prediction, a 1-D tensor with the same shape as labels. The dtype must be a non-negative integer.

  • weights (Tensor) - A 1-D tensor with the same shape as predictions.

Outputs:

Tensor, the confusion matrix, with shape (num_classes, num_classes).

Raises:
  • TypeError – If num_classes is not an int.

  • TypeError – If dtype is not a str.

  • TypeError – If labels, predictions or weights is not a Tensor.

Examples

>>> confusion_matrix = ops.ConfusionMatrix(4)
>>> labels = Tensor([0, 1, 1, 3], mindspore.int32)
>>> predictions = Tensor([1, 2, 1, 3], mindspore.int32)
>>> output = confusion_matrix(labels, predictions)
>>> print(output)
[[0 1 0 0]
 [0 1 1 0]
 [0 0 0 0]
 [0 0 0 1]]
class tinyms.primitives.Conj[source]

Returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form a + bj, where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form a - bj.

If input is real, it is returned unchanged.

Inputs:
  • input (Tensor) - The input tensor to compute to. Must have numeric type.

Outputs:

Tensor, has the same dtype as the input.

Raises:
  • TypeError – If the dtype of input is not a numeric type.

  • TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3+0.4j), mindspore.complex64)
>>> conj = ops.Conj()
>>> output = conj(x)
>>> print(output)
(1.3-0.4j)
class tinyms.primitives.ConjugateTranspose[source]

Calculate the conjugate matrix of input x which has been transposed according to input perm.

\[y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])\]
Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • perm (tuple[int]) - The permutation to be converted. The elements in perm are composed of the indexes of each dimension of x. The length of perm and the shape of x must be the same. Only constant value is allowed. Must be in the range [0, rank(x)).

Outputs:

Tensor, the type of the output tensor is the same as x and the shape of the output tensor is decided by the shape of x and the value of perm:

\[y.shape[i] = x.shape[perm[i]]\]

where i is in range [0, rank(x) - 1].

Raises:
  • TypeError – If perm is not a tuple.

  • ValueError – If length of shape of x is not equal to length of shape of perm.

  • ValueError – If the same element exists in perm.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1 + 1j,2 + 2j], [3 + 3j, 4 + 4j]]), mindspore.complex64)
>>> perm = (1, 0)
>>> conjugate_transpose = ops.ConjugateTranspose()
>>> output = conjugate_transpose(x, perm)
>>> print(output)
[[1.-1.j 3.-3.j]
 [2.-2.j 4.-4.j]]
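
An equivalent NumPy check: conjugate the input, then transpose its axes by perm:

>>> print(x.asnumpy().conj().transpose(1, 0))
[[1.-1.j 3.-3.j]
 [2.-2.j 4.-4.j]]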
class tinyms.primitives.Conv2D(out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, data_format='NCHW')[source]

2D convolution layer.

Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is height, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel and it has shape \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where group is the group number to split the input in the channel dimension.

If the ‘pad_mode’ is set to be “pad”, the output height and width will be \(\left \lfloor{1 + \frac{H_{in} + \text{padding[0]} + \text{padding[1]} - \text{kernel_size[0]} - (\text{kernel_size[0]} - 1) \times (\text{dilation[0]} - 1) }{\text{stride[0]}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + \text{padding[2]} + \text{padding[3]} - \text{kernel_size[1]} - (\text{kernel_size[1]} - 1) \times (\text{dilation[1]} - 1) }{\text{stride[1]}}} \right \rfloor\) respectively. Where \(dilation\) is Spacing between kernel elements, \(stride\) is The step length of each step, \(padding\) is zero-padding added to both sides of the input.

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition.

Note

On Ascend platform, \(group = 1\) must be satisfied.

Parameters:
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means the value is for both the height and the width of the kernel. A tuple of 2 ints means the first value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. The value is currently not used. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be equal to the input x divided by stride. The padding will be distributed as evenly as possible between top and bottom, left and right; if the total padding is odd, the extra padding goes to the bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – Implicit paddings on both sides of the input x. If pad is one integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple with four integers, the paddings of top, bottom, left and right will be equal to pad[0], pad[1], pad[2], and pad[3] accordingly. Default: 0.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: “NCHW”.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set size of kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\), then the shape is \((C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]})\).

Outputs:

Tensor, the value that applied 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> conv2d = ops.Conv2D(out_channel=32, kernel_size=3)
>>> output = conv2d(x, weight)
>>> print(output.shape)
(10, 32, 30, 30)
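
A quick check of the output spatial size for this example, assuming the usual ‘valid’-mode formula floor((H - dilation * (kernel_size - 1) - 1) / stride + 1):

>>> import math
>>> print(math.floor((32 - 1 * (3 - 1) - 1) / 1 + 1))
30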
class tinyms.primitives.Conv2DBackpropInput(out_channel, kernel_size, pad_mode='valid', pad=0, pad_list=None, mode=1, stride=1, dilation=1, group=1, data_format='NCHW')[source]

The Conv2DBackpropInput interface is deprecated. Please refer to mindspore.ops.Conv2DTranspose if you want to do upsampling.

Supported Platforms:

Deprecated

class tinyms.primitives.Conv2DTranspose(out_channel, kernel_size, pad_mode='valid', pad=0, pad_list=None, mode=1, stride=1, dilation=1, group=1, data_format='NCHW')[source]

Calculates a 2D transposed convolution, which can be regarded as Conv2d for the gradient of the input. It is also called deconvolution, although it is not an actual deconvolution: it cannot restore the original input data completely, but it can restore the shape of the original input.

Parameters:
  • out_channel (int) – The dimensionality of the output space.

  • kernel_size (Union[int, tuple[int]]) – The size of the convolution window.

  • pad_mode (str) – Modes to fill padding. It could be “valid”, “same”, or “pad”. Default: “valid”. Please refer to mindspore.nn.Conv2dTranspose for more specifications about pad_mode.

  • pad (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of top, bottom, left and right are the same, equal to pad. If pad is a tuple of four integers, the padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

  • pad_list (Union[str, None]) – The pad list like (top, bottom, left, right). Default: None.

  • mode (int) – Modes for different convolutions. The value is currently not used. Default: 1.

  • stride (Union[int, tuple[int]]) – The stride to be applied to the convolution filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to be used for the dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

  • data_format (str) – The format of input and output data. It should be ‘NHWC’ or ‘NCHW’. Default is ‘NCHW’.

Inputs:
  • dout (Tensor) - the gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{out}, H_{out}, W_{out})\).

  • weight (Tensor) - Set size of kernel is \((K_1, K_2)\), then the shape is \((C_{out}, C_{in}, K_1, K_2)\).

  • input_size (Tensor) - A tuple describes the shape of the input which conforms to the format \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the gradients with respect to the input of convolution. It has the same shape as the input.

Raises:
  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • TypeError – If out_channel or group is not an int.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> x = Tensor(np.ones([10, 32, 32, 32]))
>>> conv2d_transpose_input = ops.Conv2DTranspose(out_channel=32, kernel_size=3)
>>> output = conv2d_transpose_input(dout, weight, ops.shape(x))
>>> print(output.shape)
(10, 32, 32, 32)
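
Note how this mirrors the Conv2D example above: the forward convolution mapped a \((10, 32, 32, 32)\) input to a \((10, 32, 30, 30)\) output, and Conv2DTranspose maps gradients of that output back to the input’s shape.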
class tinyms.primitives.Conv3D(out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, data_format='NCDHW')[source]

3D convolution layer.

Applies a 3D convolution over an input tensor which is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\) and output shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, and \(H, W\) are the feature height and width respectively. The output value of a layer is calculated as:

\[\operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right), \operatorname{input}\left(N_{i}, k\right))\]

where \(k\) is kernel, \(ccor\) is the cross-correlation , \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the jth channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{X}\) is the input tensor. The shape of full convolution kernel is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where groups is the number of groups to split input in the channel dimension.

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition .

If pad_mode is set to “valid”, the output depth, height and width will be \(\left \lfloor{1 + \frac{D_{in} + 2 \times \text{padding} - \text{ks_d} - (\text{ks_d} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\), \(\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} - (\text{ks_h} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} - (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) respectively, where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input.
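
As a concrete check against the Examples section below: with \(D_{in} = 10\), \(H_{in} = W_{in} = 32\), kernel_size (4, 3, 3), stride 1, dilation 1 and zero padding, the formula gives an output depth of \(\left \lfloor{1 + \frac{10 - 4}{1}} \right \rfloor = 7\) and an output height and width of \(\left \lfloor{1 + \frac{32 - 3}{1}} \right \rfloor = 30\), matching the printed shape (16, 32, 7, 30, 30).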

Parameters:
  • out_channel (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – Specifies the depth, height and width of the 3D convolution window. It can be a single int or a tuple of 3 integers. Single int means the value is for the depth, height and width of the kernel. A tuple of 3 ints corresponds to the depth, height and width of the kernel respectively.

  • mode (int) – Modes for different convolutions. It is currently not used. Default: 1.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, it can be an int number that represents the depth, height and width of movement or a tuple of three int numbers that represent depth, height and width movement respectively. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to those of the input x divided by stride. The padding will be evenly distributed between head and tail, top and bottom, and left and right whenever possible. Otherwise, the last extra padding will be applied from the tail, bottom and right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • dilation (Union[int, tuple[int]], optional) – The data type is int or a tuple of 3 integers \((dilation_d, dilation_h, dilation_w)\). Currently, dilation on depth only supports the case of 1 on Ascend backend. Specifies the dilation rate to use for dilated convolution. If set \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. The value ranges for the depth, height, and width dimensions are [1, D], [1, H], and [1, W], respectively. Default: 1.

  • group (int, optional) – The number of groups into which the filter is divided. in_channels and out_channels must be divisible by group. Default: 1.

  • data_format (str) – The optional value for data format. Currently only “NCDHW” is supported.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\). Currently, the input data type only supports float16 and float32.

  • weight (Tensor) - If the size of the kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{out}, C_{in}/groups, K_d, K_h, K_w)\). Currently, the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \(C_{in}\). Currently, only None is supported.

Outputs:

Tensor, the result of the 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • ValueError – If out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> conv3d = ops.Conv3D(out_channel=32, kernel_size=(4, 3, 3))
>>> output = conv3d(x, weight)
>>> print(output.shape)
(16, 32, 7, 30, 30)
class tinyms.primitives.Conv3DTranspose(in_channel, out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, output_padding=0, data_format='NCDHW')[source]

Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution).

Input is typically of shape \((N, C, D, H, W)\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, \(H\) is height, \(W\) is width.

If pad_mode is set to “pad”, the depth, height and width of the output are defined as:

\[ \begin{align}\begin{aligned}D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times \text{pad}[0] + \text{dilation}[0] \times (\text{kernel_size}[0] - 1) + \text{output_padding}[0] + 1\\H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times \text{pad}[1] + \text{dilation}[1] \times (\text{kernel_size}[1] - 1) + \text{output_padding}[1] + 1\\W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times \text{pad}[2] + \text{dilation}[2] \times (\text{kernel_size}[2] - 1) + \text{output_padding}[2] + 1\end{aligned}\end{align} \]
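
Although these formulas are stated for pad_mode “pad”, with pad and output_padding equal to 0 the same arithmetic covers the default “valid” case of the Examples section below: \(D_{out} = (10 - 1) \times 1 + 1 \times (4 - 1) + 1 = 13\), \(H_{out} = (32 - 1) \times 1 + 1 \times (6 - 1) + 1 = 37\) and \(W_{out} = (32 - 1) \times 1 + 1 \times (2 - 1) + 1 = 33\), matching the printed shape (32, 3, 13, 37, 33).
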
Parameters:
  • in_channel (int) – The number of channels of the input x.

  • out_channel (int) – The number of output channels.

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 3 integers. Specifies the depth, height and width of the 3D convolution window. Single int means the value is for the depth, height and width of the kernel. A tuple of 3 ints means the first value is for the depth, the second value is for the height and the other is for the width of the kernel.

  • mode (int) – Modes for different convolutions. Default is 1. It is currently not used.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to those of the input x divided by stride. The padding will be evenly distributed between head and tail, top and bottom, and left and right whenever possible. Otherwise, the last extra padding will be applied from the tail, bottom and right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad and output_padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad. If pad is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] correspondingly.

  • stride (Union(int, tuple[int])) – The distance of kernel moving: an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – Specifies the space to use between kernel elements. Default: 1.

  • group (int) – Splits input into groups. Default: 1. Only 1 is currently supported.

  • output_padding (Union(int, tuple[int])) – Add extra size to each dimension of the output. Default: 0.

  • data_format (str) – The optional value for data format. Currently only ‘NCDHW’ is supported.

Inputs:
  • dout (Tensor) - The gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{in}, D_{out}, H_{out}, W_{out})\). Currently, the dout data type only supports float16 and float32.

  • weight (Tensor) - If the size of the kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{in}, C_{out}//group, K_d, K_h, K_w)\), where \(group\) is the Args parameter and \(//\) denotes integer division. Currently, the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \(C_{out}\). Currently, only None is supported. Default: None.

Outputs:

Tensor, the gradients with respect to the input of convolution 3D. Tensor of shape \((N, C_{out}//group, D_{out}, H_{out}, W_{out})\), where \(group\) is the Args parameter.

Raises:
  • TypeError – If in_channel, out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad, dilation or output_padding is neither an int nor a tuple.

  • ValueError – If in_channel, out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

  • TypeError – If data type of dout and weight is not float16.

  • ValueError – If bias is not None, or if the rank of dout or weight is not 5.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([16, 3, 4, 6, 2]), mindspore.float16)
>>> conv3d_transpose = ops.Conv3DTranspose(in_channel=16, out_channel=3, kernel_size=(4, 6, 2))
>>> output = conv3d_transpose(dout, weight)
>>> print(output.shape)
(32, 3, 13, 37, 33)
class tinyms.primitives.Cos[source]

Computes cosine of input element-wise.

Refer to mindspore.ops.cos() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> cos = ops.Cos()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cos(x)
>>> print(output)
[0.971338 0.6748758 0.95233357 0.9959527]
class tinyms.primitives.Cosh[source]

Computes hyperbolic cosine of input element-wise.

Refer to mindspore.ops.cosh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> cosh = ops.Cosh()
>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cosh(x)
>>> print(output)
[1.0289385 1.364684 1.048436 1.0040528]
class tinyms.primitives.CountNonZero(dims=None)[source]

Calculates the total number of non-zero entries in the input tensor along the specified dimensions.

Refer to mindspore.ops.count_nonzero() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([[0, 0, 1], [1, 1, 2], [0, 0, 1]], dtype=mindspore.int64)
>>> countnonzero = ops.CountNonZero(dims=[1])
>>> y = countnonzero(x)
>>> print(y)
[1 3 1]
class tinyms.primitives.CropAndResize(method='bilinear', extrapolation_value=0.0)[source]

Extracts crops from the input image tensor and resizes them.

Note

In the case that the output shape depends on crop_size, the crop_size must be constant. For now, the backward of the operator only supports the bilinear method; for other methods, it will return 0.

Parameters:
  • method (str, optional) – An optional string that specifies the sampling method for resizing. It can be “bilinear”, “nearest” or “bilinear_v2”. The option “bilinear” stands for the standard bilinear interpolation algorithm, while “bilinear_v2” may produce better results in some cases. Default: “bilinear”.

  • extrapolation_value (float, optional) – An optional float value used for extrapolation, if applicable. Default: 0.0.

Inputs:
  • x (Tensor) - The input image must be a 4-D tensor of shape \((batch, image\_height, image\_width, depth)\). Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • boxes (Tensor) - A 2-D tensor of shape \((num\_boxes, 4)\). The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values. Types allowed: float32.

  • box_index (Tensor) - A 1-D tensor of shape \((num\_boxes)\) with int32 values in [0, batch). The value of box_index[i] specifies the image that the i-th box refers to. Types allowed: int32.

  • crop_size (Tuple[int]) - A tuple of two int32 elements: (crop_height, crop_width). Only constant value is allowed. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both crop_height and crop_width need to be positive.

Outputs:

A 4-D tensor of shape \((num\_boxes, crop\_height, crop\_width, depth)\) with type: float32.

Raises:
  • TypeError – If x or boxes or box_index is not a Tensor.

  • TypeError – If crop_size is not a Tuple with two int32 elements.

  • TypeError – If dtype of boxes is not float or that of box_index is not int.

  • TypeError – If method is not a str.

  • TypeError – If extrapolation_value is not a float.

  • ValueError – If the shape rank of x is not 4.

  • ValueError – If the shape rank of boxes is not 2.

  • ValueError – If the second dim of boxes is not 4.

  • ValueError – If the shape rank of box_index is not 1.

  • ValueError – If the first dim of box_index is not equal to that of boxes.

  • ValueError – If any element in box_index is out of the range [0, batch).

  • ValueError – If the data of crop_size is not positive.

  • ValueError – If method is not one of ‘bilinear’, ‘nearest’, ‘bilinear_v2’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class CropAndResizeNet(nn.Cell):
...     def __init__(self, crop_size):
...         super(CropAndResizeNet, self).__init__()
...         self.crop_and_resize = ops.CropAndResize()
...         self.crop_size = crop_size
...
...     def construct(self, x, boxes, box_index):
...         return self.crop_and_resize(x, boxes, box_index, self.crop_size)
...
>>> BATCH_SIZE = 1
>>> NUM_BOXES = 5
>>> IMAGE_HEIGHT = 256
>>> IMAGE_WIDTH = 256
>>> CHANNELS = 3
>>> image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
>>> boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
>>> box_index = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
>>> crop_size = (24, 24)
>>> crop_and_resize = CropAndResizeNet(crop_size=crop_size)
>>> output = crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_index))
>>> print(output.shape)
(5, 24, 24, 3)
class tinyms.primitives.Cross(dim=-65530)[source]

Returns the cross product of vectors in dimension dim of x1 and x2.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.cross() for more details.

Parameters:

dim (int) – The specified dimension along which to compute the cross product. Default: -65530.

Inputs:
  • x1 (Tensor) - Input Tensor.

  • x2 (Tensor) - Another input Tensor, must have the same shape and the same type as x1, and the size of their dim dimension should be 3.

Outputs:

Tensor, has the same shape and type as inputs.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.common import dtype as mstype
>>> import mindspore.ops as ops
>>> cross = ops.Cross(dim = 0)
>>> x1 = Tensor([1, 2, 3], mstype.int8)
>>> x2 = Tensor([1, 2, 3], mstype.int8)
>>> output = cross(x1, x2)
>>> print(output)
[0 0 0]
class tinyms.primitives.CumProd(exclusive=False, reverse=False)[source]

Computes the cumulative product of the tensor x along axis. For example, if the input is a vector of size N, the result will also be a vector of size N, with elements:

\[y_i = x_1 * x_2 * x_3 * ... * x_i\]
Parameters:
  • exclusive (bool) – If true, perform exclusive cumulative product. Default: False.

  • reverse (bool) – If true, reverse the result along axis. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. \((N, *)\) where \(*\) means any number of additional dimensions; its rank should be less than 8.

  • axis (int) - The dimension along which to compute the cumulative product. Only constant value is allowed.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If exclusive or reverse is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a, b, c, = 1, 2, 3
>>> x = Tensor(np.array([a, b, c]).astype(np.float32))
>>> op0 = ops.CumProd()
>>> output0 = op0(x, 0) # output=[a, a * b, a * b * c]
>>> op1 = ops.CumProd(exclusive=True)
>>> output1 = op1(x, 0) # output=[1, a, a * b]
>>> op2 = ops.CumProd(reverse=True)
>>> output2 = op2(x, 0) # output=[a * b * c, b * c, c]
>>> op3 = ops.CumProd(exclusive=True, reverse=True)
>>> output3 = op3(x, 0) # output=[b * c, c, 1]
>>> print(output0)
[1. 2. 6.]
>>> print(output1)
[1. 1. 2.]
>>> print(output2)
[6. 6. 3.]
>>> print(output3)
[6. 3. 1.]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [5, 3, 5]]).astype(np.float32))
>>> output4 = op0(x, 0)
>>> output5 = op0(x, 1)
>>> print(output4)
[[ 1.  2.  3.]
 [ 4. 10. 18.]
 [20. 30. 90.]]
>>> print(output5)
[[  1.   2.   6.]
 [  4.  20. 120.]
 [  5.  15.  75.]]
class tinyms.primitives.CumSum(exclusive=False, reverse=False)[source]

Computes the cumulative sum of input tensor along axis.

\[y_i = x_1 + x_2 + x_3 + ... + x_i\]
Parameters:
  • exclusive (bool) – By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output. Default: False.

  • reverse (bool) – If true, perform inverse cumulative sum. Default: False.

Inputs:
  • input (Tensor) - The input tensor to accumulate.

  • axis (int) - The axis to accumulate the tensor’s value. Only constant value is allowed. Must be in the range [-rank(input), rank(input)).

Outputs:

Tensor, the shape of the output tensor is consistent with the input tensor’s.

Raises:
  • TypeError – If exclusive or reverse is not a bool.

  • TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> cumsum = ops.CumSum()
>>> # case 1: along the axis 0
>>> y = cumsum(x, 0)
>>> print(y)
[[ 3.  4.  6. 10.]
 [ 4. 10. 13. 19.]
 [ 8. 13. 21. 26.]
 [ 9. 16. 28. 35.]]
>>> # case 2: along the axis 1
>>> y = cumsum(x, 1)
>>> print(y)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
>>> # Next demonstrate exclusive and reverse, along axis 1
>>> # case 3: exclusive = True
>>> cumsum = ops.CumSum(exclusive=True)
>>> y = cumsum(x, 1)
>>> print(y)
[[ 0.  3.  7. 13.]
 [ 0.  1.  7. 14.]
 [ 0.  4.  7. 15.]
 [ 0.  1.  4. 11.]]
>>> # case 4: reverse = True
>>> cumsum = ops.CumSum(reverse=True)
>>> y = cumsum(x, 1)
>>> print(y)
[[23. 20. 16. 10.]
 [23. 22. 16.  9.]
 [22. 18. 15.  7.]
 [20. 19. 16.  9.]]
class tinyms.primitives.Cummax(axis)[source]

Returns the cumulative maximum of elements and the index.

Refer to mindspore.ops.cummax() for more details.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> cummax = ops.Cummax(axis=0)
>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> output = cummax(x)
>>> print(output[0])
[[ 3.  4.  6. 10.]
 [ 3.  6.  7. 10.]
 [ 4.  6.  8. 10.]
 [ 4.  6.  8. 10.]]
>>> print(output[1])
[[0 0 0 0]
 [0 1 1 0]
 [2 1 2 0]
 [2 1 2 0]]
class tinyms.primitives.Cummin(axis)[source]

Returns the cumulative minimum of elements and the index.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.cummin() for more detail.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> a = Tensor([-0.2284, -0.6628,  0.0975,  0.2680, -1.3298, -0.4220], mindspore.float32)
>>> func = ops.Cummin(axis=0)
>>> output = func(a)
>>> print(output[0])
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]
>>> print(output[1])
[0 1 1 1 4 4]
class tinyms.primitives.CumulativeLogsumexp(exclusive=False, reverse=False)[source]

Computes the cumulative log-sum-exp of the input tensor x along axis. For example, with all parameters at default values, if the input x is a tensor [a, b, c], the output will be [a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))].

Parameters:
  • exclusive (bool, optional) – If true, the last element will be skipped during the calculation and thus an exclusive cumulative log-sum-exp will be performed. For example, this operation will output [-inf, a, log(exp(a) + exp(b))] with tensor [a, b, c] as the input. Note that, for performance reasons, -inf is represented by the minimal value of the floating point type. Default: False.

  • reverse (bool, optional) – If true, the accumulation is performed after the elements of x along axis are flipped, and the calculation result is flipped back afterwards. For example, this operation will output [log(exp(c) + exp(b) + exp(a)), log(exp(c) + exp(b)), c] with tensor [a, b, c] as the input. Default: False.

Inputs:
  • x (Tensor) - The input tensor. Must be one of the following types: float16, float32, float64. The dimension of x must be greater than 0.

  • axis (Tensor) - A 0-D tensor describing the dimension along which to compute the cumulative log-sum-exp. Must be one of the following types: int64, int32, int16. Must be in the range [-rank(x), rank(x)). Default: 0.

Outputs:

Tensor, has the same dtype and shape as the x.

Raises:
  • TypeError – If x or axis is not a Tensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • TypeError – If dtype of axis is not in [int16, int32, int64].

  • TypeError – If exclusive or reverse is not a bool.

  • ValueError – If the dimension of x is not greater than 0.

  • RuntimeError – If axis is out of range [-rank(x), rank(x)).

Supported Platforms:

Ascend CPU GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=False, reverse=False)
>>> output = op(x, Tensor(0))
>>> print(output)
[1.        2.3132617 3.407606 ]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=True, reverse=False)
>>> output = op(x, Tensor(0))
>>> print(output)
[-3.4028235e+38  1.0000000e+00  2.3132617e+00]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=False, reverse=True)
>>> output = op(x, Tensor(0))
>>> print(output)
[3.407606  3.3132617 3.       ]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
>>> op = ops.CumulativeLogsumexp(exclusive=True, reverse=True)
>>> output = op(x, Tensor(0))
>>> print(output)
[ 3.3132617e+00  3.0000000e+00 -3.4028235e+38]
class tinyms.primitives.Custom(func, out_shape=None, out_dtype=None, func_type='hybrid', bprop=None, reg_info=None)[source]

Custom primitive is used for user-defined operators and is meant to enhance the expressive ability of built-in primitives. You can construct a Custom object with a predefined function, which describes the computation logic of a user-defined operator. You can also construct another Custom object with another predefined function if needed. These Custom objects can then be directly used in neural networks. For a detailed description and introduction of user-defined operators, including how to write parameters correctly, please refer to the Custom Operators Tutorial .

Warning

This is an experimental API that is subject to change.

Note

The supported platforms are determined by the input func_type. The supported platforms are as follows:

  • “hybrid”: supports [“Ascend”, “GPU”, “CPU”].

  • “akg”: supports [“Ascend”, “GPU”, “CPU”].

  • “tbe”: supports [“Ascend”].

  • “aot”: supports [“GPU”, “CPU”].

  • “pyfunc”: supports [“CPU”].

  • “julia”: supports [“CPU”].

  • “aicpu”: supports [“Ascend”].

Parameters:
  • func (Union[function, str]) –

    • function: If func is of function type, then func should be a Python function which describes the computation logic of a user defined operator. The function can be one of the following:

      1. An AKG operator implementation function, which can use ir builder/tvm compute/hybrid grammar.

      2. A TBE operator implementation function.

      3. A pure Python function.

      4. A kernel-decorated function written in the Hybrid DSL.

    • str: If func is of str type, then func should be a file path along with a function name. This can be used when func_type is “aot” or “julia”.

      1. for “aot”:

        Currently “aot” supports the GPU/CPU platform (Linux only). “aot” means ahead of time, in which case Custom directly launches a user-defined “xxx.so” file as an operator. Users need to compile a handwritten “xxx.cu”/“xxx.cc” file into “xxx.so” ahead of time, and offer the path of the file along with a function name.

        • ”xxx.so” file generation:

          1) GPU Platform: Given a user-defined “xxx.cu” file (ex. “{path}/add.cu”), use the nvcc command to compile it (ex. “nvcc --shared -Xcompiler -fPIC -o add.so add.cu”).

          2) CPU Platform: Given a user-defined “xxx.cc” file (ex. “{path}/add.cc”), use the g++/gcc command to compile it (ex. “g++ --shared -fPIC -o add.so add.cc”).

        • Define a “xxx.cc”/”xxx.cu” file:

          ”aot” is a cross-platform identity. The functions defined in “xxx.cc” or “xxx.cu” share the same arguments. Typically, the function signature should be:

          int func(int nparam, void **params, int *ndims, int64_t **shapes, const char **dtypes,
                  void *stream, void *extra)
          

          Parameters:

          • nparam(int): total number of inputs plus outputs; suppose the operator has 2 inputs and 3 outputs, then nparam=5

          • params(void **): a pointer to the array of inputs and outputs’ pointer; the pointer type of inputs and outputs is void * ; suppose the operator has 2 inputs and 3 outputs, then the first input’s pointer is params[0] and the second output’s pointer is params[3]

          • ndims(int *): a pointer to the array of inputs and outputs’ dimension num; suppose params[i] is a 1024x1024 tensor and params[j] is a 77x83x4 tensor, then ndims[i]=2, ndims[j]=3.

          • shapes(int64_t **): a pointer to the array of inputs and outputs’ shapes(int64_t *); the ith input’s jth dimension’s size is shapes[i][j](0<=j<ndims[i]); suppose params[i] is a 2x3 tensor and params[j] is a 3x3x4 tensor, then shapes[i][0]=2, shapes[j][2]=4.

          • dtypes(const char **): a pointer to the array of inputs and outputs’ types(const char *); (ex. “float32”, “float16”, “float”, “float64”, “int”, “int8”, “int16”, “int32”, “int64”, “uint”, “uint8”, “uint16”, “uint32”, “uint64”, “bool”)

          • stream(void *): stream pointer, only used in cuda file

          • extra(void *): used for further extension

          Return Value(int):

          • 0: MindSpore will continue to run if this aot kernel is successfully executed

          • others: MindSpore will raise an exception and exit

          Examples: see details in tests/st/ops/graph_kernel/custom/aot_test_files/

        • Use it in Custom:

          Custom(func="{dir_path}/{file_name}:{func_name}",...)
          (ex. Custom(func="./reorganize.so:CustomReorganize", out_shape=[1], out_dtype=mstype.float32,
          "aot"))
          
      2. for “julia”:

        Currently “julia” supports the CPU platform (Linux only). Since Julia uses a JIT compiler and provides a C API for calling Julia code, Custom can directly launch a user-defined “xxx.jl” file as an operator. Users need to write a “xxx.jl” file which includes modules and functions, and offer the path of the file along with a module name and function name.

        Examples: see details in tests/st/ops/graph_kernel/custom/julia_test_files/

        • Use it in Custom:

          Custom(func="{dir_path}/{file_name}:{module_name}:{func_name}",...)
          (ex. Custom(func="./add.jl:Add:add", out_shape=[1], out_dtype=mstype.float32, "julia"))
          

  • out_shape (Union[function, list, tuple]) –

    The output shape infer function or the value of output shape of func. Default: None.

    If func has a single output, then the value of the output shape is a list or tuple of int.

    If func has multiple outputs, then the value of the output shape is a tuple, where each item represents the shape of one output.

    The input can be None only when the func_type input is “hybrid”. In this case, the automatic shape inference mechanism will be enabled.

  • out_dtype (Union[function, mindspore.dtype, tuple[mindspore.dtype]]) –

    The output data type infer function or the value of output data type of func. Default: None.

    If func has a single output, then the value of the output data type is a mindspore.dtype.

    If func has multiple outputs, then the value of the output data type is a tuple of mindspore.dtype, where each item represents the data type of one output.

    The input can be None only when the func_type input is “hybrid”. In this case, the automatic data type inference mechanism will be enabled.

  • func_type (str) –

    The implementation type of func, should be one of

    [“hybrid”, “akg”, “tbe”, “aot”, “pyfunc”, “julia”, “aicpu”].

    Each func_type only supports specific platforms(targets). Default: “hybrid”. The supported platforms of func_type:

    • ”hybrid”: supports [“Ascend”, “GPU”, “CPU”].

    • ”akg”: supports [“Ascend”, “GPU”, “CPU”].

    • ”tbe”: supports [“Ascend”].

    • ”aot”: supports [“GPU”, “CPU”].

    • ”pyfunc”: supports [“CPU”].

    • ”julia”: supports [“CPU”].

    • ”aicpu”: supports [“Ascend”].

  • bprop (function) – The back propagation function of func. Default: None.

  • reg_info (Union[str, dict, list, tuple]) –

    Represents the registration information(reg info) of func with json format of type str or dict. The reg info specifies supported data types and formats of inputs and outputs, attributes and target of func. Default: None.

    If reg info is a list or tuple, then each item should be with json format of type str or dict, which represents the registration information of func in a specific target. You need to invoke CustomRegOp or the subclass of RegOp to generate the reg info for func. Then you can invoke custom_info_register to bind the reg info to func or just pass the reg info to reg_info parameter. The reg_info parameter takes higher priority than custom_info_register and the reg info in a specific target will be registered only once.

    If reg info is not set, then we will infer the data types and formats from the inputs of Custom operator.

    Please note that, if func_type is “tbe” or the func only supports some specified data types and formats, or it has attribute inputs, then you should set the reg info for func.

Inputs:
  • input (Union(tuple, list)) - The input tuple or list is made up of multiple tensors and attribute values (optional).

Outputs:

Tensor or tuple[Tensor], execution results.

Raises:
  • TypeError – If the type of func is invalid or the type of register information for func is invalid.

  • ValueError – If func_type is invalid.

  • ValueError – If the register information is invalid, including when the target is not supported, or when the input numbers or the attributes of func differ across targets.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore.ops import CustomRegOp, custom_info_register, DataType, kernel
>>> from mindspore import dtype as mstype
>>> from mindspore.nn import Cell
>>> input_x = Tensor(np.ones([16, 16]).astype(np.float32))
>>> input_y = Tensor(np.ones([16, 16]).astype(np.float32))
>>>
>>> # Example, func_type = "hybrid"
>>> # This is the default func_type in Custom,
>>> # and both out_shape and out_dtype can be None(default value).
>>> # In this case, the input func must be a function written in the Hybrid DSL
>>> # and decorated by @kernel.
>>> @kernel
... def add_script(a, b):
...     c = output_tensor(a.shape, a.dtype)
...     for i0 in range(a.shape[0]):
...         for i1 in range(a.shape[1]):
...             c[i0, i1] = a[i0, i1] + b[i0, i1]
...     return c
>>>
>>> test_op_hybrid = ops.Custom(add_script)
>>> output = test_op_hybrid(input_x, input_y)
>>> # the result will be a 16 * 16 tensor with all elements 2
>>> print(output.shape)
(16, 16)
>>> # Example, func_type = "tbe"
>>> square_with_bias_op_info = CustomRegOp() \
...     .fusion_type("OPAQUE") \
...     .attr("bias", "required", "float") \
...     .input(0, "x") \
...     .output(0, "y") \
...     .dtype_format(DataType.F32_Default, DataType.F32_Default) \
...     .dtype_format(DataType.F16_Default, DataType.F16_Default) \
...     .target("Ascend") \
...     .get_op_info()
>>>
>>> @custom_info_register(square_with_bias_op_info)
... def square_with_bias(input_x, output_y, bias=0.0, kernel_name="square_with_bias"):
...     import te.lang.cce
...     from te import tvm
...     from topi.cce import util
...
...     shape = input_x.get("shape")
...     dtype = input_x.get("dtype").lower()
...
...     shape = util.shape_refine(shape)
...     data = tvm.placeholder(shape, name="data", dtype=dtype)
...
...     with tvm.target.cce():
...         res0 = te.lang.cce.vmul(data, data)
...         res = te.lang.cce.vadds(res0, bias)
...         sch = te.lang.cce.auto_schedule(res)
...
...     config = {"print_ir": False,
...               "name": kernel_name,
...               "tensor_list": [data, res]}
...
...     te.lang.cce.cce_build_code(sch, config)
>>>
>>> def test_tbe():
...     square_with_bias_op = ops.Custom(square_with_bias, out_shape=lambda x, _: x, \
...                                      out_dtype=lambda x, _: x, func_type="tbe")
...     res = square_with_bias_op(input_x, 1.0)
...     return res
>>>
>>> # Example, func_type = "aicpu"
>>> resize_bilinear_op_info = CustomRegOp("ResizeBilinear") \
...     .fusion_type("OPAQUE") \
...     .input(0, "input", "required") \
...     .output(1, "output", "required") \
...     .attr("align_corners", "required", "bool") \
...     .attr("cust_aicpu", "optional", "str", "aicpu_kernels") \
...     .dtype_format(DataType.F32_Default, DataType.F32_Default) \
...     .dtype_format(DataType.F16_Default, DataType.F32_Default) \
...     .target("Ascend") \
...     .get_op_info()
>>>
>>> @custom_info_register(resize_bilinear_op_info)
... def resize_bilinear_aicpu():
...     return
>>>
>>> def test_aicpu(x):
...     resize_bilinear_op = ops.Custom(resize_bilinear_aicpu, out_shape=[1, 1, 9, 9], \
...                                     out_dtype=mstype.float32, func_type="aicpu")
...     res = resize_bilinear_op(x, True, "aicpu_kernels")
...     return res
>>>
>>> # Example, func_type = "aot"
>>> def test_aot(x, y, out_shapes, out_types):
...     program = ops.Custom("./reorganize.so:CustomReorganize", out_shapes, out_types, "aot")
...     out = program(x, y)
...     return out
>>>
>>> # Example, func_type = "pyfunc"
>>> def func_multi_output(x1, x2):
...     return (x1 + x2), (x1 - x2)
>>>
>>> test_pyfunc = ops.Custom(func_multi_output, lambda x, _: (x, x), lambda x, _: (x, x), "pyfunc")
>>> output = test_pyfunc(input_x, input_y)
>>>
>>> # Example, func_type = "julia"
>>> # julia code:
>>> # add.jl
>>> # module Add
>>> # function add(x, y, z)
>>> #   z .= x + y
>>> #   return z
>>> # end
>>> # end
>>> def test_julia(x, y, out_shapes, out_types):
...     program = ops.Custom("./add.jl:Add:add", out_shapes, out_types, "julia")
...     out = program(x, y)
...     return out
get_bprop()[source]

Get the bprop of the custom op

class tinyms.primitives.DType[source]

Returns the data type of the input tensor as mindspore.dtype.

Inputs:
  • input_x (Tensor) - Input Tensor.

Outputs:

mindspore.dtype, the data type of a tensor.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.DType()(input_tensor)
>>> print(output)
Float32
class tinyms.primitives.DataFormatDimMap(src_format='NHWC', dst_format='NCHW')[source]

Returns the dimension index in the destination data format given the one in the source data format.

Parameters:
  • src_format (str) – An optional value for source data format. The format can be ‘NHWC’ and ‘NCHW’. Default: ‘NHWC’.

  • dst_format (str) – An optional value for destination data format. The format can be ‘NHWC’ and ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • input_x (Tensor) - A Tensor, each element is used as a dimension index of the source data format. The suggested values are in the range [-4, 4). Only supports int32.

Outputs:

Tensor, the dimension index in the given target data format, with the same data type and shape as the input_x.

Raises:
  • TypeError – If src_format or dst_format is not a str.

  • TypeError – If input_x is not a Tensor or its dtype is not int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([0, 1, 2, 3], mindspore.int32)
>>> dfdm = ops.DataFormatDimMap()
>>> output = dfdm(input_x)
>>> print(output)
[0 3 1 2]
class tinyms.primitives.DataFormatVecPermute(src_format='NHWC', dst_format='NCHW')[source]

Converts the input tensor from the src_format to the dst_format by permuting its dimensions.

Parameters:
  • src_format (str, optional) – the source data format, which can be ‘NHWC’ and ‘NCHW’. Default: ‘NHWC’.

  • dst_format (str, optional) – the target data format, which can be ‘NHWC’ and ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • input_x (Tensor) - A Tensor of shape \((4, )\) or \((4, 2)\) in source data format. Supports int32 and int64 datatype.

Outputs:

Tensor, has the same data type and shape as the input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither int32 nor int64.

  • ValueError – If src_format or dst_format is not a str in [‘NHWC’, ‘NCHW’].

  • ValueError – If input_x shape is not \((4, )\) or \((4, 2)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self, src_format="NHWC", dst_format="NCHW"):
...         super().__init__()
...         self.op = ops.DataFormatVecPermute(src_format, dst_format)
...     def construct(self, x):
...         return self.op(x)
...
>>> net = Net()
>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> output = net(x)
>>> print(output)
[1 4 2 3]
class tinyms.primitives.DeformableOffsets(strides, pads, ksize, dilations=(1, 1, 1, 1), data_format='NCHW', deformable_groups=1, modulated=True)[source]

Computes the deformed convolution output with the expected input.

Refer to mindspore.ops.deformable_conv2d() for more details.

Supported Platforms:

Ascend GPU CPU
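
Examples

A minimal sketch via the functional interface referenced above; the exact mindspore.ops.deformable_conv2d() calling convention (x, weight, offsets, kernel_size, strides, padding) and the offsets layout of 3 * deformable_groups * kernel_height * kernel_width channels are assumptions taken from that function’s documentation rather than from this entry:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> kh, kw = 3, 3
>>> x = Tensor(np.ones((4, 3, 10, 10)), mindspore.float32)
>>> weight = Tensor(np.ones((5, 3, kh, kw)), mindspore.float32)
>>> # offsets holds 3 * deformable_groups * kh * kw channels: x/y offsets plus the modulation mask
>>> offsets = Tensor(np.ones((4, 3 * kh * kw, 8, 8)), mindspore.float32)
>>> output = ops.deformable_conv2d(x, weight, offsets, (kh, kw), (1, 1, 1, 1), (0, 0, 0, 0))
>>> print(output.shape)
(4, 5, 8, 8)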

class tinyms.primitives.Depend[source]

Depend is used for processing dependency operations.

In most scenarios, if operators have IO side effects or memory side effects, they will be executed according to the user’s semantics. In some scenarios, if the two operators A and B have no order dependency but A must be executed before B, we recommend using Depend to specify their execution order. The usage method is as follows:

a = A(x)                --->        a = A(x)
b = B(y)                --->        y = Depend(y, a)
                        --->        b = B(y)
Inputs:
  • value (Tensor) - The real value to return for the Depend operator.

  • expr (Expression) - The expression to execute, with no outputs.

Outputs:

Tensor, the value passed by the last operator.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.softmax = ops.Softmax()
...         self.depend = ops.Depend()
...
...     def construct(self, x, y):
...         mul = x * y
...         y = self.depend(y, mul)
...         ret = self.softmax(y)
...         return ret
...
>>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
[[0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]
 [0.2 0.2 0.2 0.2 0.2]]
class tinyms.primitives.DepthToSpace(block_size)[source]

Rearranges blocks of depth data into spatial dimensions.

This is the reverse operation of SpaceToDepth.

The depth of output tensor is \(input\_depth / (block\_size * block\_size)\).

The output tensor’s height dimension is \(height * block\_size\).

The output tensor’s width dimension is \(width * block\_size\).

The input tensor’s depth must be divisible by block_size * block_size. The data format is “NCHW”.

Parameters:

block_size (int) – The block size used to divide depth data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. It must be a 4-D tensor with shape \((N, C_{in}, H_{in}, W_{in})\). The data type is Number.

Outputs:

Tensor of shape \((N, C_{in} / \text{block_size} ^ 2, H_{in} * \text{block_size}, W_{in} * \text{block_size})\).

Raises:
  • TypeError – If block_size is not an int.

  • ValueError – If block_size is less than 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(1, 12, 1, 1), mindspore.float32)
>>> block_size = 2
>>> depth_to_space = ops.DepthToSpace(block_size)
>>> output = depth_to_space(x)
>>> print(output.shape)
(1, 3, 2, 2)
class tinyms.primitives.DepthwiseConv2dNative(channel_multiplier, kernel_size, mode=3, pad_mode='valid', pad=0, stride=1, dilation=1, group=1)[source]

DepthwiseConv2dNative will be deprecated in the future. Please use mindspore.nn.Conv2d instead.

Supported Platforms:

Deprecated

class tinyms.primitives.Diag[source]

Constructs a diagonal tensor with given diagonal values.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.diag() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([1, 2, 3, 4]).astype('int32')
>>> diag = ops.Diag()
>>> output = diag(input_x)
>>> print(output)
[[1 0 0 0]
 [0 2 0 0]
 [0 0 3 0]
 [0 0 0 4]]
class tinyms.primitives.DiagPart[source]

Extracts the diagonal elements from the given Tensor.

If the input_x is a Tensor of shape \([D_1,..., D_k, D_1,..., D_k]\), then the output will be a Tensor of rank k of shape \([D_1,..., D_k]\) where: \(output[i_1,..., i_k] = input_x[i_1,..., i_k, i_1,..., i_k]\).

Inputs:
  • input_x (Tensor) - The rank of input tensor is 2k(k > 0).

Outputs:

Tensor, the extracted diagonal has the same dtype as the input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • ValueError – If rank of input_x is not even or zero.

  • ValueError – If input_shape[i] is not equal to input_shape[i + len(input_shape)/2].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[1, 0, 0, 0],
...                   [0, 2, 0, 0],
...                   [0, 0, 3, 0],
...                   [0, 0, 0, 4]])
>>> diag_part = ops.DiagPart()
>>> output = diag_part(input_x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.Digamma[source]

Computes the gradient of the lgamma function with respect to the input.

\[P(x) = grad(ln(gamma(x)))\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor, with type float16, float32 or float64.

Outputs:

Tensor, has the same dtype as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of input x is not float16 or float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1.5, 0.5, 9]).astype(np.float16))
>>> digamma = ops.Digamma()
>>> output = digamma(x)
>>> print(output)
[ 0.0365 -1.964   2.14  ]
class tinyms.primitives.Dilation2D(stride, dilation, pad_mode='SAME', data_format='NCHW')[source]

Computes the grayscale dilation of 4-D input and 3-D filters tensors.

Applies a 2D dilation over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(H\) is height, \(W\) is width, \(C\) is channel number. Given kernel size \(ks = (h_{ker}, w_{ker})\), stride \(s = (s_0, s_1)\) and dilation \(d = (d_0, d_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + d_0 \times m, s_1 \times w + d_1 \times n) + \text{filter}(C_j, m, n)\]
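
As a sanity check using the Examples section below: for an all-ones input and an all-ones \(3 \times 3\) filter, every output element equals \(\max(1) + 1 = 2\), and in “valid” mode with stride 1 the output spatial size is \(32 - 3 + 1 = 30\), giving the printed shape (10, 5, 30, 30).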

Warning

This is an experimental API that is subject to change or deletion.

Note

If the input data type is float32, this operator is still executed in float16 mode.

Parameters:
  • stride (Union(int, tuple[int])) – The distance of kernel moving: an int number that represents both the height and width of movement, a tuple of two int numbers that represent height and width of movement respectively, or a tuple of four int numbers when data_format is ‘NCHW’, representing [1, 1, stride_height, stride_width].

  • dilation (Union(int, tuple[int])) – The data type is int or a tuple of 2 integers or a tuple of 4 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the height and width of the input x.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid”. Default: “same”. Both upper and lower case are supported.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input x.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str, optional) – The value for data format, only ‘NCHW’ is supported at present. Default: “NCHW”.

Inputs:
  • x (Tensor) - Input data. A 4-D Tensor, its shape must be \((N, C_{in}, H_{in}, W_{in})\).

  • filter (Tensor) - A three-dimensional tensor with the same type as input. The shape must be \((C_{in}, H_{filter}, W_{filter})\).

Outputs:

Tensor, the result of the 2D dilation. The shape is \((N, C_{out}, H_{out}, W_{out})\), which is not necessarily the same as that of the input x; the type is the same as that of the input x.

Raises:
  • TypeError – If type of x or filter is not the type in [uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64].

  • TypeError – If stride or dilation is not an int number or a tuple of two or four int numbers.

  • ValueError – If the length of stride or dilation is neither two nor four when they are tuple.

  • ValueError – If stride or dilation shape is not (1, 1, height, width) when it is a tuple of four int numbers.

  • ValueError – If stride is not in the range of [1, 255].

  • ValueError – If dilation is less than 1.

  • ValueError – If pad_mode is not a str of ‘same’, ‘valid’, ‘SAME’ or ‘VALID’.

  • ValueError – If data_format is not the str of ‘NCHW’.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.ones([10, 5, 32, 32]), mindspore.float16)
>>> filter = Tensor(np.ones([5, 3, 3]), mindspore.float16)
>>> dilation2d = ops.Dilation2D(stride=1, dilation=1, pad_mode='VALID')
>>> output = dilation2d(x, filter)
>>> print(output.shape)
(10, 5, 30, 30)
class tinyms.primitives.Div[source]

Computes the quotient of dividing the first input tensor by the second input tensor element-wise.

\[out_{i} = \frac{x_i}{y_i}\]

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, number.Number, bool]) - The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • y (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Outputs:

Tensor, the shape is the same as the one of the input x , y after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If neither x nor y is a Tensor.

  • TypeError – If data types of x and y are both Tensor with bool_.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 :has same data type and shape of the two inputs
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> div = ops.Div()
>>> output = div(x, y)
>>> print(output)
[-1.3333334  2.5        2.        ]
>>> # case 2 : different data type and shape of the two inputs
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(2, mindspore.int32)
>>> output = div(x, y)
>>> print(output)
[-2.  2.5  3.]
>>> print(output.dtype)
Float32
class tinyms.primitives.DivNoNan[source]

Performs a safe division between x1 and x2 element-wise, returning 0 if an element of x2 is zero.

Inputs of x1 and x2 comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}output_{i} = \begin{cases} 0, & \text{ if } x2_{i} = 0\\ x1_{i} / x2_{i}, & \text{ if } x2_{i} \ne 0 \end{cases}\end{split}\]
Inputs:
  • x1 (Union[Tensor, number.Number, bool]) - The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • x2 (Union[Tensor, number.Number, bool]) - The second input: when the first input is a Tensor, the second input should be a number.Number or a bool, or a Tensor whose data type is number or bool_. When the first input is a Scalar, the second input must be a Tensor whose data type is number or bool_.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If neither x1 nor x2 is a number.Number, a bool or a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([-1.0, 0., 1.0, 5.0, 6.0]), mindspore.float32)
>>> x2 = Tensor(np.array([0., 0., 0., 2.0, 3.0]), mindspore.float32)
>>> div_no_nan = ops.DivNoNan()
>>> output = div_no_nan(x1, x2)
>>> print(output)
[0.  0.  0.  2.5 2. ]
class tinyms.primitives.Dropout(keep_prob=0.5, Seed0=0, Seed1=0)[source]

During training, randomly zeroes some of the elements of the input tensor with probability 1-keep_prob from a Bernoulli distribution. It plays the role of reducing neuron correlation and avoiding overfitting.

Refer to mindspore.ops.dropout() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout(keep_prob=0.5)
>>> x = Tensor(np.ones([1, 2, 3, 4, 5]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape, mask.shape, mask.dtype)
(1, 2, 3, 4, 5) (16,) UInt8
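
Note that the mask is not returned element-for-element here: judging from the shapes above, the \(1 \times 2 \times 3 \times 4 \times 5 = 120\) dropout bits appear to be packed into a 128-bit aligned uint8 buffer, hence the (16,) mask shape.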
class tinyms.primitives.Dropout2D(keep_prob=0.5)[source]

During training, randomly zeroes some channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 4-dimensional tensor with a shape of NCHW, the channel feature map refers to a 2-dimensional feature map with the shape of HW).

Dropout2D can improve the independence between channel feature maps.

Note

The keep probability \(keep\_prob\) is equal to \(1 - p\) in mindspore.ops.dropout2d().

Parameters:

keep_prob (float, optional) – The keep probability of a channel, between 0 and 1, e.g. keep_prob = 0.8 means dropping out 20% of channels. Default: 0.5.

Inputs:
  • x (Tensor) - A 4-D tensor with shape \((N, C, H, W)\), where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width. The data type should be int8, int16, int32, int64, float16 or float32.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x and the data type is bool.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not int8, int16, int32, int64, float16, float32 or float64.

  • TypeError – If the data type of keep_prob is not float.

  • ValueError – If keep_prob is out of the range [0.0, 1.0].

  • ValueError – If x shape is not 4D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout2D(keep_prob=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 1, 2, 3)
class tinyms.primitives.Dropout3D(keep_prob=0.5)[source]

During training, randomly zeroes some channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 5-dimensional tensor with a shape of NCDHW, the channel feature map refers to a 3-dimensional feature map with a shape of DHW).

Note

The keep probability \(keep\_prob\) is equal to \(1 - p\) in mindspore.ops.dropout3d().

Dropout3D can improve the independence between channel feature maps.

Parameters:

keep_prob (float) – The keep probability of a channel, between 0 and 1, e.g. keep_prob = 0.8, means dropping out 20% of channels. Default: 0.5.

Inputs:
  • x (Tensor) - A 5-D tensor with shape \((N, C, D, H, W)\), where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. The data type should be int8, int16, int32, int64, float16 or float32.

Outputs:
  • output (Tensor) - With the same shape and data type as x.

  • mask (Tensor) - With the same shape as x and the data type is bool.

Raises:
  • TypeError – If the data type of keep_prob is not float.

  • ValueError – If keep_prob is out of the range [0.0, 1.0]; or if the dim of input is not 5-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = ops.Dropout3D(keep_prob=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
>>> output, mask = dropout(x)
>>> print(output.shape)
(2, 1, 2, 1, 2)
class tinyms.primitives.DropoutDoMask[source]

The DropoutDoMask interface is deprecated, please use the mindspore.ops.Dropout instead.

Supported Platforms:

Deprecated

class tinyms.primitives.DropoutGenMask(Seed0=0, Seed1=0)[source]

The DropoutGenMask interface is deprecated, please use the mindspore.ops.Dropout instead.

Supported Platforms:

Deprecated

class tinyms.primitives.DynamicGRUV2(direction='UNIDIRECTIONAL', cell_depth=1, keep_prob=1.0, cell_clip=-1.0, num_proj=0, time_major=True, activation='tanh', gate_order='rzh', reset_after=True, is_training=True)[source]

Applies a single-layer gated recurrent unit (GRU) to an input sequence.

\[\begin{split}\begin{array}{ll} r_{t+1} = \sigma(W_{ir} x_{t+1} + b_{ir} + W_{hr} h_{(t)} + b_{hr}) \\ z_{t+1} = \sigma(W_{iz} x_{t+1} + b_{iz} + W_{hz} h_{(t)} + b_{hz}) \\ n_{t+1} = \tanh(W_{in} x_{t+1} + b_{in} + r_{t+1} * (W_{hn} h_{(t)}+ b_{hn})) \\ h_{t+1} = (1 - z_{t+1}) * n_{t+1} + z_{t+1} * h_{(t)} \end{array}\end{split}\]

where \(h_{t+1}\) is the hidden state at time t+1, \(x_{t+1}\) is the input at time t+1, and \(h_{t}\) is the hidden state of the layer at time t or the initial hidden state at time 0. \(r_{t+1}\), \(z_{t+1}\), \(n_{t+1}\) are the reset, update, and new gates, respectively. \(W\) and \(b\) are the weight and bias parameters, respectively. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.

Parameters:
  • direction (str) – A string identifying the direction in the operator. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the operator. Default: 1.

  • keep_prob (float) – A float identifying the keep prob in the operator. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the operator. Default: -1.0.

  • num_proj (int) – An integer identifying the number of projections in the operator. Default: 0.

  • time_major (bool) – A bool identifying the time major in the operator. Default: True.

  • activation (str) – A string identifying the type of activation function in the operator. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • gate_order (str) – A string identifying the gate order in weight and bias. Default: ‘rzh’. ‘zrh’ is another option. Here, ‘rzh’ means the gate order is: reset gate, update gate, hidden gate. ‘zrh’ means the gate order is: update gate, reset gate, hidden gate.

  • reset_after (bool) – A bool identifying whether to apply reset gate after matrix multiplication. Default: True.

  • is_training (bool) – A bool identifying whether the operator is in training mode. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((\text{num_step}, \text{batch_size}, \text{input_size})\). The data type must be float16.

  • weight_input (Tensor) - Input-hidden weight \(W_{\{ir,iz,in\}}\). Tensor of shape \((\text{input_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • weight_hidden (Tensor) - Hidden-hidden weight \(W_{\{hr,hz,hn\}}\). Tensor of shape \((\text{hidden_size}, 3 \times \text{hidden_size})\). The data type must be float16.

  • bias_input (Tensor) - Input-hidden bias \(b_{\{ir,iz,in\}}\). Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type as input init_h.

  • bias_hidden (Tensor) - Hidden-hidden bias \(b_{\{hr,hz,hn\}}\). Tensor of shape \((3 \times \text{hidden_size})\), or None. Has the same data type as input init_h.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((\text{batch_size})\). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((\text{batch_size}, \text{hidden_size})\). The data type must be float16 or float32.

Outputs:
  • y (Tensor) - A Tensor of shape:

    • y_shape = \((num\_step, batch\_size, min(hidden\_size, num\_proj))\) if num_proj > 0,

    • y_shape = \((num\_step, batch\_size, hidden\_size)\) if num_proj = 0.

    Has the same data type as bias_type.

  • output_h (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • update (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • reset (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

  • hidden_new (Tensor) - A Tensor of shape \((\text{num_step}, \text{batch_size}, \text{hidden_size})\). Has the same data type as bias_type.

A note about the bias_type:

  • If bias_input and bias_hidden both are None, bias_type is the data type of init_h.

  • If bias_input is not None, bias_type is the data type of bias_input.

  • If bias_input is None and bias_hidden is not None, bias_type is the data type of bias_hidden.

Raises:
  • TypeError – If direction, activation or gate_order is not a str.

  • TypeError – If cell_depth or num_proj is not an int.

  • TypeError – If keep_prob or cell_clip is not a float.

  • TypeError – If time_major, reset_after or is_training is not a bool.

  • TypeError – If x, weight_input, weight_hidden, bias_input, bias_hidden, seq_length or init_h is not a Tensor.

  • TypeError – If dtype of x, weight_input or weight_hidden is not float16.

  • TypeError – If dtype of init_h is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 8, 64).astype(np.float16))
>>> weight_i = Tensor(np.random.rand(64, 48).astype(np.float16))
>>> weight_h = Tensor(np.random.rand(16, 48).astype(np.float16))
>>> bias_i = Tensor(np.random.rand(48).astype(np.float16))
>>> bias_h = Tensor(np.random.rand(48).astype(np.float16))
>>> init_h = Tensor(np.random.rand(8, 16).astype(np.float16))
>>> dynamic_gru_v2 = ops.DynamicGRUV2()
>>> output = dynamic_gru_v2(x, weight_i, weight_h, bias_i, bias_h, None, init_h)
>>> print(output[0].shape)
(2, 8, 16)
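
The shapes in the example above follow the rules listed in Inputs and Outputs; a quick bookkeeping check in plain Python (reusing the example's variables):

>>> num_step, batch_size, input_size, hidden_size = 2, 8, 64, 16
>>> assert x.shape == (num_step, batch_size, input_size)
>>> assert weight_i.shape == (input_size, 3 * hidden_size)   # three gates: r, z, h
>>> assert weight_h.shape == (hidden_size, 3 * hidden_size)
>>> assert init_h.shape == (batch_size, hidden_size)
>>> assert output[0].shape == (num_step, batch_size, hidden_size)  # num_proj == 0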
class tinyms.primitives.DynamicRNN(cell_type='LSTM', direction='UNIDIRECTIONAL', cell_depth=1, use_peephole=False, keep_prob=1.0, cell_clip=-1.0, num_proj=0, time_major=True, activation='tanh', forget_bias=0.0, is_training=True)[source]

Applies a recurrent neural network to the input. Only long short-term memory (LSTM) is supported currently.

\[\begin{split}\begin{array}{ll} \\ i_{t+1} = \sigma(W_{ix} x_{t+1} + b_{ix} + W_{ih} h_{(t)} + b_{ih}) \\ f_{t+1} = \sigma(W_{fx} x_{t+1} + b_{fx} + W_{fh} h_{(t)} + b_{fh}) \\ \tilde{c}_{t+1} = \tanh(W_{cx} x_{t+1} + b_{cx} + W_{ch} h_{(t)} + b_{ch}) \\ o_{t+1} = \sigma(W_{ox} x_{t+1} + b_{ox} + W_{oh} h_{(t)} + b_{oh}) \\ c_{t+1} = f_{t+1} * c_{(t)} + i_t * \tilde{c}_{t+1} \\ h_{t+1} = o_{t+1} * \tanh(c_{t+1}) \\ \end{array}\end{split}\]

\(h_{t+1}\) is the hidden state at time t+1. \(x_{t+1}\) is the input at time t+1. \(h_{t}\) is the hidden state of the layer at time t or the initial hidden state at time 0. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ix}, b_{ix}\) are the weight and bias used to transform from input \(x\) to \(i\).

Parameters:
  • cell_type (str) – A string identifying the cell type in the operator. Default: ‘LSTM’. Only ‘LSTM’ is currently supported.

  • direction (str) – A string identifying the direction in the operator. Default: ‘UNIDIRECTIONAL’. Only ‘UNIDIRECTIONAL’ is currently supported.

  • cell_depth (int) – An integer identifying the cell depth in the operator. Default: 1.

  • use_peephole (bool) – A bool identifying if use peephole in the operator. Default: False.

  • keep_prob (float) – A float identifying the keep prob in the operator. Default: 1.0.

  • cell_clip (float) – A float identifying the cell clip in the operator. Default: -1.0.

  • num_proj (int) – An integer identifying the number of projections in the operator. Default: 0.

  • time_major (bool) – A bool specifying the data format of x. If it is set to True, the format is \((num\_step, batch\_size, input\_size)\); if it is set to False, the format is \((batch\_size, num\_step, input\_size)\). Default: True. Only True is supported at present.

  • activation (str) – A string identifying the type of activation function in the operator. Default: ‘tanh’. Only ‘tanh’ is currently supported.

  • forget_bias (float) – A float identifying the forget bias in the operator. Default: 0.0.

  • is_training (bool) – A bool identifying whether the operator is in training mode. Default: True.

Inputs:
  • x (Tensor) - Current words. Tensor of shape \((num\_step, batch\_size, input\_size)\). The data type must be float16.

  • w (Tensor) - Weight. Tensor of shape \((input\_size + hidden\_size, 4 * hidden\_size)\). The data type must be float16.

  • b (Tensor) - Bias. Tensor of shape \((4 * hidden\_size)\). The data type must be float16 or float32.

  • seq_length (Tensor) - The length of each batch. Tensor of shape \((batch\_size, )\). Only None is currently supported.

  • init_h (Tensor) - Hidden state of initial time. Tensor of shape \((1, batch\_size, hidden\_size)\). The data type must be float16.

  • init_c (Tensor) - Cell state of initial time. Tensor of shape \((1, batch\_size, hidden\_size)\). The data type must be float16.

Outputs:
  • y (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

  • output_h (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). With data type of float16.

  • output_c (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

  • i (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

  • j (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

  • f (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

  • o (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

  • tanhct (Tensor) - A Tensor of shape \((num\_step, batch\_size, hidden\_size)\). Has the same type as input b.

Raises:
  • TypeError – If cell_type, direction or activation is not a str.

  • TypeError – If cell_depth or num_proj is not an int.

  • TypeError – If keep_prob, cell_clip or forget_bias is not a float.

  • TypeError – If use_peephole, time_major or is_training is not a bool.

  • TypeError – If x, w, b, seq_length, init_h or init_c is not a Tensor.

  • TypeError – If dtype of x, w, init_h or init_c is not float16.

  • TypeError – If dtype of b is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.random.rand(2, 16, 64).astype(np.float16))
>>> w = Tensor(np.random.rand(96, 128).astype(np.float16))
>>> b = Tensor(np.random.rand(128).astype(np.float16))
>>> init_h = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> init_c = Tensor(np.random.rand(1, 16, 32).astype(np.float16))
>>> dynamic_rnn = ops.DynamicRNN()
>>> output = dynamic_rnn(x, w, b, None, init_h, init_c)
>>> print(output[0].shape)
(2, 16, 32)
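
Likewise, the example's shapes follow the packing rules above, with the input-hidden and hidden-hidden weights concatenated into a single w; a quick bookkeeping check (reusing the example's variables):

>>> num_step, batch_size, input_size, hidden_size = 2, 16, 64, 32
>>> assert w.shape == (input_size + hidden_size, 4 * hidden_size)  # packed weights
>>> assert b.shape == (4 * hidden_size,)                           # four gates: i, f, c, o
>>> assert output[0].shape == (num_step, batch_size, hidden_size)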
class tinyms.primitives.DynamicShape(dtype=9)[source]

Same as operator TensorShape. DynamicShape will be deprecated in the future. Please use TensorShape instead.

Supported Platforms:

Deprecated

class tinyms.primitives.EditDistance(normalize=True)[source]

Computes the Levenshtein Edit Distance. It is used to measure the similarity of two sequences. The inputs are variable-length sequences provided by SparseTensors (hypothesis_indices, hypothesis_values, hypothesis_shape) and (truth_indices, truth_values, truth_shape).

\[\begin{split}\operatorname{lev}_{a, b}(i, j)=\left\{\begin{array}{ll} \max (i, j) \qquad \qquad \qquad \qquad \qquad \quad \ \text { if } \min (i, j)=0 \\ \min \left\{\begin{array}{ll} \operatorname{lev}_{a, b}(i-1, j)+1 & \\ \operatorname{lev}_{a, b}(i, j-1)+1 & \text { otherwise. } \\ \operatorname{lev}_{a, b}(i-1, j-1)+1_{\left(a_{i} \neq b_{j}\right)} \end{array}\right. & \end{array}\right.\end{split}\]

Where \(a\) indicates the hypothesis and \(b\) indicates the truth. For ease of understanding, i and j here may be considered as the lengths of a and b.

Warning

Unordered truth_indices or hypothesis_indices might lead to unexpected results, so it is suggested to make sure truth_indices and hypothesis_indices are both in ascending order before calling this API.

Parameters:

normalize (bool) – If true, edit distances are normalized by length of truth. Default: True.

Inputs:
  • hypothesis_indices (Tensor) - The indices of the hypothesis list SparseTensor. With int64 data type. The shape of tensor is \((N, R)\).

  • hypothesis_values (Tensor) - The values of the hypothesis list SparseTensor. Must be 1-D vector with length of N.

  • hypothesis_shape (Tensor) - The shape of the hypothesis list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

  • truth_indices (Tensor) - The indices of the truth list SparseTensor. With int64 data type. The shape of tensor is \((M, R)\).

  • truth_values (Tensor) - The values of the truth list SparseTensor. Must be 1-D vector with length of M.

  • truth_shape (Tensor) - The shape of the truth list SparseTensor. Must be R-length vector with int64 data type. Only constant value is allowed.

Outputs:

Tensor, a dense tensor with rank R-1 and float32 data type.

Raises:

TypeError – If normalize is not a bool.

Supported Platforms:

Ascend CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> class EditDistance(nn.Cell):
...     def __init__(self, hypothesis_shape, truth_shape, normalize=True):
...         super(EditDistance, self).__init__()
...         self.edit_distance = ops.EditDistance(normalize)
...         self.hypothesis_shape = hypothesis_shape
...         self.truth_shape = truth_shape
...
...     def construct(self, hypothesis_indices, hypothesis_values, truth_indices, truth_values):
...         return self.edit_distance(hypothesis_indices, hypothesis_values, self.hypothesis_shape,
...                                   truth_indices, truth_values, self.truth_shape)
...
>>> hypothesis_indices = Tensor(np.array([[0, 0, 0], [1, 0, 1], [1, 1, 1]]).astype(np.int64))
>>> hypothesis_values = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> hypothesis_shape = Tensor(np.array([1, 1, 2]).astype(np.int64))
>>> truth_indices = Tensor(np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]).astype(np.int64))
>>> truth_values = Tensor(np.array([1, 3, 2, 1]).astype(np.float32))
>>> truth_shape = Tensor(np.array([2, 2, 2]).astype(np.int64))
>>> edit_distance = EditDistance(hypothesis_shape, truth_shape)
>>> output = edit_distance(hypothesis_indices, hypothesis_values, truth_indices, truth_values)
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.Eig(compute_v=False)[source]

Computes the eigenvalues and eigenvectors of a square matrix (or a batch of square matrices).

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

compute_v (bool, optional) – If True, compute both eigenvalues and eigenvectors; If False, just eigenvalues will be computed. Default: False.

Inputs:
  • x (Tensor) - Square matrices of shape \((*, N, N)\), with float32, float64, complex64 or complex128 data type.

Outputs:
  • eigen_values (Tensor) - Shape \((*, N)\). Each innermost vector represents the eigenvalues of the corresponding matrix. The eigenvalues may not have an order.

  • eigen_vectors (Tensor) - If compute_v is False, it’s an empty tensor. Otherwise, this tensor has shape \((*, N, N)\), whose columns represent normalized (unit length) eigenvectors of corresponding eigenvalues.

Raises:
  • TypeError – If compute_v is not a bool.

  • TypeError – If dtype of x is not one of: float64, float32, complex64 or complex128.

  • TypeError – If x is not a Tensor.

  • ValueError – If x is not square (or a batch of square matrices).

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 0.0], [0.0, 2.0]]), mindspore.float32)
>>> eig = ops.Eig(compute_v=True)
>>> u, v = eig(input_x)
>>> print(u)
[1.+0.j 2.+0.j]
>>> print(v)
[[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]]
class tinyms.primitives.Einsum(equation)[source]

Sums the product of the elements of the input Tensor along dimensions specified in the notation, based on the Einstein summation convention (Einsum). You can use this operator to perform diagonal/reducesum/transpose/matmul/mul/inner-product operations, etc.

The inputs must be a tuple of tensors. When the input is a single tensor, you can pass (tensor,). The dtypes of the tensors should be float16/float32/float64.

Parameters:

equation (str) – An attribute that represents the operation you want to perform. The value can contain only letters ([a-z][A-Z]), commas (,), ellipsis (…), and the arrow (->). The letters represent the input tensors' dimensions, commas (,) separate tensors, ellipsis (…) indicates tensor dimensions that you do not care about, the left of the arrow (->) indicates the input tensors, and the right of it indicates the desired output dimensions.

Inputs:
  • x (Tuple) - Input tensors used for calculation. All tensors must have the same data type.

Outputs:

Tensor, the shape of it can be obtained from the equation, and the data type is the same as input tensors.

Raises:

TypeError – If equation itself is invalid, or the equation does not match the input tensor.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> equation = "i->"
>>> einsum = ops.Einsum(equation)
>>> output = einsum([x])
>>> print(output)
[7.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> equation = "i,i->i"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x, y))
>>> print(output)
[ 2. 8. 12.]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> y = Tensor(np.array([[2.0, 3.0], [1.0, 2.0], [4.0, 5.0]]), mindspore.float32)
>>> equation = "ij,jk->ik"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x, y))
>>> print(output)
[[16. 22.]
[37. 52.]]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->ji"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x,))
>>> print(output)
[[1. 4.]
[2. 5.]
[3. 6.]]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->j"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x,))
>>> print(output)
[5. 7. 9.]
>>>
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "...->"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x,))
>>> print(output)
[21.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 1.0]), mindspore.float32)
>>> equation = "j,i->ji"
>>> einsum = ops.Einsum(equation)
>>> output = einsum((x, y))
>>> print(output)
[[ 2. 4. 1.]
[ 4. 8. 2.]
[ 6. 12. 3.]]
class tinyms.primitives.Elu(alpha=1.0)[source]

Exponential Linear Unit activation function.

Applies the exponential linear unit function element-wise. The activation function is defined as:

\[\begin{split}\text{ELU}(x)= \left\{ \begin{array}{align} \alpha(e^{x} - 1) & \text{if } x \le 0\\ x & \text{if } x \gt 0\\ \end{array}\right.\end{split}\]

Parameters:

alpha (float) – The alpha value of ELU; the data type is float. Only 1.0 is currently supported. Default: 1.0.

Inputs:
  • input_x (Tensor) - The input of ELU is a Tensor of any dimension with data type of float16, float32 or float64.

Outputs:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If alpha is not a float.

  • TypeError – If dtype of input_x is neither float16, float32 nor float64.

  • ValueError – If alpha is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> elu = ops.Elu()
>>> output = elu(input_x)
>>> print(output)
[[-0.63212055  4.         -0.99966455]
 [ 2.         -0.99326205  9.        ]]
class tinyms.primitives.EmbeddingLookup[source]

Returns a slice of input tensor based on the specified indices.

This Primitive has similar functionality to GatherV2 operating on axis = 0, but has one more input: offset.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). This represents a Tensor slice, instead of the entire Tensor. Currently, the dimension is restricted to be 2.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Values can be out of range of input_params, and the exceeding part will be filled with 0 in the output. Negative values are not supported; the result is undefined if values are negative. The data type should be int32 or int64.

  • offset (int) - Specifies the offset value of this input_params slice. Thus the real indices are equal to input_indices minus offset.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\). The data type is the same with input_params.

Raises:
  • TypeError – If dtype of input_indices is not int.

  • ValueError – If length of shape of input_params is greater than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_params = Tensor(np.array([[8, 9], [10, 11], [12, 13], [14, 15]]), mindspore.float32)
>>> input_indices = Tensor(np.array([[5, 2], [8, 5]]), mindspore.int32)
>>> offset = 4
>>> output = ops.EmbeddingLookup()(input_params, input_indices, offset)
>>> print(output)
[[[10. 11.]
  [ 0.  0.]]
 [[ 0.  0.]
  [10. 11.]]]
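
The offset arithmetic can be reproduced in plain numpy; this is an illustration of the semantics only, not the operator's implementation:

>>> params = np.array([[8, 9], [10, 11], [12, 13], [14, 15]], np.float32)
>>> indices = np.array([[5, 2], [8, 5]]) - 4                # real indices = input_indices - offset
>>> valid = (indices >= 0) & (indices < params.shape[0])    # out-of-range rows become zeros
>>> print(np.where(valid[..., None], params[np.clip(indices, 0, params.shape[0] - 1)], 0))
[[[10. 11.]
  [ 0.  0.]]
 [[ 0.  0.]
  [10. 11.]]]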
class tinyms.primitives.Eps[source]

Creates a Tensor with the same data type and shape as the input, where each element is the minimum value that the corresponding data type can express.

Inputs:
  • x (Tensor) - Tensor of any dimension used to obtain the minimum value that its data type can be expressed. The data type must be float16, float32 or float64.

Outputs:

Tensor, has the same type and shape as x, but filled with the minimum value expressible by the dtype of x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If data type of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([4, 1, 2, 3], mindspore.float32)
>>> output = ops.Eps()(x)
>>> print(output)
[1.5258789e-05 1.5258789e-05 1.5258789e-05 1.5258789e-05]
class tinyms.primitives.Equal[source]

Computes the equivalence between two tensors element-wise.

Refer to mindspore.ops.equal() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: The shapes of the two inputs are different
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> equal = ops.Equal()
>>> output = equal(x, 2.0)
>>> print(output)
[False True False]
>>> # case 2: The shapes of the two inputs are the same
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal = ops.Equal()
>>> output = equal(x, y)
>>> print(output)
[ True  True False]
class tinyms.primitives.EqualCount[source]

Computes the number of the same elements of two tensors.

The two input tensors must have the same data type and shape.

Inputs:
  • x (Tensor) - The first input tensor. If the data type and shape of y are determined, then x must be the same as y, and vice versa. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) - The second input tensor. If the data type and shape of x are determined, then y must be the same as x, and vice versa.

Outputs:

Tensor, with the type same as input tensor and shape as \((1,)\).

Raises:
  • TypeError – If x or y is not a Tensor.

  • ValueError – If shape of x is not equal to shape of y.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal_count = ops.EqualCount()
>>> output = equal_count(x, y)
>>> print(output)
[2]
class tinyms.primitives.Erf[source]

Computes the Gauss error function of x element-wise.

Refer to mindspore.ops.erf() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erf = ops.Erf()
>>> output = erf(x)
>>> print(output)
[-0.8427168   0.          0.8427168   0.99530876  0.99997765]
class tinyms.primitives.Erfc[source]

Computes the complementary error function of x element-wise.

Refer to mindspore.ops.erfc() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erfc = ops.Erfc()
>>> output = erfc(x)
>>> print(output)
[1.8427168e+00 1.0000000e+00 1.5728319e-01 4.6912432e-03 2.2351742e-05]
class tinyms.primitives.Erfinv[source]

Computes the inverse error function of input. The inverse error function is defined in the range (-1, 1).

The formula is defined as:

\[erfinv(erf(x)) = x\]
Inputs:
  • input_x (Tensor) - The input tensor to compute, with data type float16, float32 or float64.

Outputs:

Tensor, has the same shape and dtype as input_x.

Raises:

TypeError – If dtype of input_x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)
>>> erfinv = ops.Erfinv()
>>> output = erfinv(x)
>>> print(output)
[ 0.          0.47695306 -1.1630805 ]
class tinyms.primitives.EuclideanNorm(keep_dims=False)[source]

Calculates the Euclidean norm (aka the L2 norm) of a Tensor along the specified axes. The specified axes are removed by default.

Parameters:

keep_dims (bool, optional) – whether to retain the reduced dimensions. If true, retains them with length 1. If false, these dimensions are removed. Default: False.

Inputs:
  • x (Tensor) - The input Tensor to reduce.

  • axes (Tensor) - The axes to perform reduction on. Must be one of the following types: int32, int64. It must be in range \([-rank(x), rank(x))\).

Outputs:

Tensor, has the same type as x.
Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([[3, 5], [4, 12]]).astype(np.int32))
>>> axes = Tensor([0])
>>> op = ops.EuclideanNorm(keep_dims=True)
>>> output = op(x, axes)
>>> print(output)
[[5 13]]
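
For reference, the same reduction in numpy (an illustration only): the column norms are \(\sqrt{3^2 + 4^2} = 5\) and \(\sqrt{5^2 + 12^2} = 13\).

>>> print(np.linalg.norm(np.array([[3, 5], [4, 12]]), axis=0, keepdims=True))
[[ 5. 13.]]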
class tinyms.primitives.Exp[source]

Returns exponential of a tensor element-wise.

Refer to mindspore.ops.exp() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 1.0, 3.0]), mindspore.float32)
>>> exp = ops.Exp()
>>> output = exp(x)
>>> print(output)
[ 1.        2.718282 20.085537]
class tinyms.primitives.Expand[source]

Expands the Tensor along singleton dimensions (dims with size 1) to match the given desired shape.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.expand() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> shape = Tensor(np.array([3,4]), mindspore.int32)
>>> expand = ops.Expand()
>>> y = expand(x, shape)
>>> print(y)
[[1. 1. 1. 1.]
 [2. 2. 2. 2.]
 [3. 3. 3. 3.]]
class tinyms.primitives.ExpandDims[source]

Adds an additional dimension to input_x at the given axis.

Refer to mindspore.ops.expand_dims() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> expand_dims = ops.ExpandDims()
>>> output = expand_dims(input_tensor, 0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
class tinyms.primitives.Expm1[source]

Returns exponential then minus 1 of a tensor element-wise.

Refer to mindspore.ops.expm1() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 2.0, 3.0, 5.0]), mindspore.float32)
>>> expm1 = ops.Expm1()
>>> output = expm1(x)
>>> print(output)
[  0.         6.389056  19.085537 147.41316 ]
class tinyms.primitives.ExtractGlimpse(centered=True, normalized=True, uniform_noise=True, noise='uniform')[source]

Extracts glimpses (usually rectangular subareas) from the input image Tensor and returns them as windows.

Note

If the extracted windows and the input image only partially overlap, random noise is filled in the non-overlapping areas.

Parameters:
  • centered (bool, optional) – An optional bool. Indicates if the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0, 0) offset corresponds to the upper left corner of the input images. Defaults to True.

  • normalized (bool, optional) – An optional bool. Indicates if the offset coordinates are normalized. Defaults to True.

  • uniform_noise (bool, optional) – An optional bool. Indicates if the noise should be generated using a uniform distribution; if False, a Gaussian distribution is used instead. Defaults to True.

  • noise (str, optional) –

    An optional string that specifies the type of noise to fill. The window is determined by size and offsets. When the window and the input image tensor do not completely overlap, random noise is filled. The value can be ‘uniform’, ‘gaussian’ or ‘zero’. Default: ‘uniform’.

    • When noise is ‘uniform’ or ‘gaussian’, the result is variable.

    • When noise is ‘zero’, the value of uniform_noise must be False and the filling noise will be zero, so that the result is fixed.

    • When uniform_noise is True, the value of noise can only be ‘uniform’. When uniform_noise is False, the value of noise can be ‘uniform’, ‘gaussian’ or ‘zero’.

Inputs:
  • x (Tensor) - A 4-D float tensor of shape \((batch_size, height, width, channels)\). Types allowed: float32.

  • size (Tensor) - A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, following by the glimpse width. Types allowed: int32. The value of size must be greater than zero.

  • offsets (Tensor) - A 2-D tensor of shape \((batch_size, 2)\) containing the y, x locations of the center of each window. Types allowed: float32.

Outputs:

A 4-D tensor of shape \((batch_size, glimpse_height, glimpse_width, channels)\) with type: float32.

Raises:
  • TypeError – If centered is not a bool.

  • TypeError – If normalized is not a bool.

  • TypeError – If uniform_noise is not a bool.

  • ValueError – If noise is not uniform, gaussian or zero.

  • ValueError – If the value of size is not constant value.

  • ValueError – If the batch_size of input is inconsistent with the batch_size of offsets.

  • ValueError – If the value of offsets[1] is not 2.

  • ValueError – If the input is not Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[[0.0], [1.0], [2.0]], [[3.0], [4.0], [5.0]], [[6.0], [7.0], [8.0]]]], dtype=mindspore.float32)
>>> size = Tensor((2, 2), dtype=mindspore.int32)
>>> offsets = Tensor([[1, 1]], dtype=mindspore.float32)
>>> extract_glimpse = ops.ExtractGlimpse(centered=False, normalized=False,
...                                      uniform_noise=False, noise="uniform")
>>> output = extract_glimpse(x, size, offsets)
>>> print(output)
[[[[0.]
   [1.]]
  [[3.]
   [4.]]]]
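
Reading the output: with centered=False and normalized=False, offsets holds the pixel coordinates of the window center, so the 2x2 glimpse centered at (1, 1) covers rows 0-1 and columns 0-1 of the 3x3 image, i.e. the values [[0, 1], [3, 4]]. Since the window lies entirely inside the image, no noise filling occurs here.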
class tinyms.primitives.ExtractImagePatches(ksizes, strides, rates, padding='valid')[source]

Extracts patches from images. The input tensor must be a 4-D tensor and the data format is NCHW.

Parameters:
  • ksizes (Union[tuple[int], list[int]]) – The size of sliding window, must be a tuple or a list of integers, and the format is [1, 1, ksize_row, ksize_col].

  • strides (Union[tuple[int], list[int]]) – Distance between the centers of the two consecutive patches, must be a tuple or list of int, and the format is [1, 1, stride_row, stride_col].

  • rates (Union[tuple[int], list[int]]) – In each extracted patch, the gap between the corresponding dimension pixel positions, must be a tuple or a list of integers, and the format is [1, 1, rate_row, rate_col].

  • padding (str) –

    The type of padding algorithm, is a string whose value is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Means that the patch can take the part beyond the original image, and this part is filled with 0.

    • valid: Means that the taken patch area must be completely covered in the original image.

Inputs:
  • input_x (Tensor) - A 4-D tensor whose shape is [in_batch, in_depth, in_row, in_col] and data type is number.

Outputs:

Tensor, a 4-D tensor whose data type is the same as ‘input_x’, and whose shape is [out_batch, out_depth, out_row, out_col], where out_batch is the same as in_batch and

\[out\_depth = ksize\_row * ksize\_col * in\_depth\]

and if ‘padding’ is “valid”:

\[\begin{split}out\_row = floor((in\_row - (ksize\_row + (ksize\_row - 1) * (rate\_row - 1))) / stride\_row) + 1 \\ out\_col = floor((in\_col - (ksize\_col + (ksize\_col - 1) * (rate\_col - 1))) / stride\_col) + 1\end{split}\]

if ‘padding’ is “same”:

\[\begin{split}out\_row = floor((in\_row - 1) / stride\_row) + 1 \\ out\_col = floor((in\_col - 1) / stride\_col) + 1\end{split}\]
Supported Platforms:

Ascend GPU
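
Examples

This operator ships without a usage example here, so the following is a minimal shape-only sketch, assuming the operator is exposed as ops.ExtractImagePatches like the other primitives on this page. It illustrates the output-shape formulas above: out_depth = 2 * 2 * 1 = 4 and out_row = out_col = floor((4 - 2) / 2) + 1 = 2.

>>> input_x = Tensor(np.arange(16).reshape(1, 1, 4, 4), mindspore.float32)
>>> extract = ops.ExtractImagePatches(ksizes=[1, 1, 2, 2], strides=[1, 1, 2, 2],
...                                   rates=[1, 1, 1, 1], padding="valid")
>>> output = extract(input_x)
>>> print(output.shape)
(1, 4, 2, 2)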

class tinyms.primitives.ExtractVolumePatches(kernel_size, strides, padding)[source]

Extracts patches from the input and puts them in the “depth” output dimension, where the “depth” dimension is the second dimension of the output.

Parameters:
  • kernel_size (Union[int, tuple[int], list[int]]) – A list of ints whose length is 3 or 5. The size of the sliding window for each dimension of input. Must be: \([1, 1, k_d, k_h, k_w]\) or \([k_d, k_h, k_w]\). If \(k_d = k_h = k_w\), you can enter an integer.

  • strides (Union[int, tuple[int], list[int]]) – A list of ints whose length is 3 or 5. How far the centers of two consecutive patches are in input. Must be: \([1, 1, s_d, s_h, s_w]\) or \([s_d, s_h, s_w]\). If \(s_d = s_h = s_w\), you can enter an integer.

  • padding (str) – A string from: “SAME”, “VALID”. The type of padding algorithm to use.

Inputs:
  • input_x (Tensor) - A Tensor. 5-D Tensor with shape \((x_n, x_c, x_d, x_h, x_w)\).

Outputs:

Tensor, has the same type as input. If padding is “VALID”, the shape is \((x_n, k_d * k_h * k_w * x_c, 1 + (x_d - k_d) / s_d, 1 + (x_h - k_h) / s_h, 1 + (x_w - k_w) / s_w)\); if padding is “SAME”, the shape is \(( x_n, k_d * k_h * k_w * x_c, (x_d + s_d - 1) / s_d, (x_h + s_h - 1) / s_h, (x_w + s_w - 1) / s_w)\).

Raises:
  • TypeError – If kernel_size or strides is not a list, a tuple or an int.

  • TypeError – If input_x is not a tensor.

  • TypeError – If padding is not str.

  • ValueError – If the length of kernel_size is neither 3 nor 5 and kernel_size is not an integer.

  • ValueError – If the length of strides is neither 3 nor 5 and strides is not an integer.

  • ValueError – If padding is neither “VALID” nor “SAME”.

  • ValueError – If elements of kernel_size or strides are not positive integer.

  • ValueError – If input_x is not a tensor in dimension 5.

  • ValueError – If input_x’s shape has zero.

  • ValueError – If one of kernel_size or strides’ first two numbers is not 1.

  • ValueError – If padding = “VALID” and \(input\_x - kernel\_size\) is less than 0 in d, h or w dimension.

  • ValueError – If padding = “SAME” and \(padding\_needed = ((input\_x + strides - 1) / strides - 1) * strides + kernel\_size - input\_x\) is less than 0 in d, h or w dimension.

  • ValueError – If x_h is not 1 or x_w is not 1 and \(x_w + padding\_needed - k_w - s_w\) is less than 0.

  • ValueError – If \(x_d * x_h * x_w\) is greater than 2048.

Supported Platforms:

Ascend GPU CPU

Examples

>>> kernel_size = (1, 1, 2, 2, 2)
>>> strides = (1, 1, 1, 1, 1)
>>> padding = "VALID"
>>> input_x = Tensor(np.arange(1, 28).reshape(1, 1, 3, 3, 3), mindspore.float16)
>>> output_y = ops.ExtractVolumePatches(kernel_size, strides, padding)(input_x)
>>> print(output_y.shape)
(1, 8, 2, 2, 2)
class tinyms.primitives.Eye[source]

Creates a tensor with ones on the diagonal and zeros in the rest.

Refer to mindspore.ops.eye() for more details.

Inputs:
  • n (int) - The number of rows of returned tensor. Constant value only.

  • m (int) - The number of columns of returned tensor. Constant value only.

  • t (mindspore.dtype) - MindSpore’s dtype, the data type of the returned tensor. The data type can be bool or Number. Default: None, the data type of the returned tensor is mindspore.float32.

Outputs:

Tensor, a tensor with ones on the diagonal and the rest of the elements zero. The shape of the output depends on the inputs n and m, and the data type depends on the input t.

Supported Platforms:

Ascend GPU CPU

Examples

>>> eye = ops.Eye()
>>> output = eye(2, 2, mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = eye(1, 2, mindspore.float64)
>>> print(output)
[[1. 0.]]
>>> print(output.dtype)
Float64
class tinyms.primitives.FFTWithSize(signal_ndim, inverse, real, norm='backward', onesided=True, signal_sizes=())[source]

Fourier transform, can be adjusted by parameters to achieve FFT/IFFT/RFFT/IRFFT.

For fft, it computes the following expression:

\[X[\omega_1, \dots, \omega_d] = \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{-j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}},\]

where \(d\) = signal_ndim is number of dimensions for the signal, and \(N_i\) is the size of signal dimension \(i\).

For ifft, it computes the following expression:

\[X[\omega_1, \dots, \omega_d] = \frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{\ j\ 2 \pi \sum_{i=0}^d \frac{\omega_i n_i}{N_i}},\]

where \(d\) = signal_ndim is number of dimensions for the signal, and \(N_i\) is the size of signal dimension \(i\).

Note

  • FFT/IFFT requires complex64 or complex128 inputs, return complex64 or complex128 outputs.

  • RFFT requires float32 or float64 inputs, return complex64 or complex128 outputs.

  • IRFFT requires complex64 or complex128 inputs, return float32 or float64 outputs.

Parameters:
  • signal_ndim (int) – The number of dimensions in each signal, this controls how many dimensions of the fourier transform are realized, can only be 1, 2 or 3.

  • inverse (bool) – Whether it is the inverse transformation.

  • real (bool) –

    Whether it is the real transformation.

    • ”inverse:False real:False” corresponds to FFT.

    • ”inverse:True real:False” corresponds to IFFT.

    • ”inverse:False real:True” corresponds to RFFT.

    • ”inverse:True real:True” corresponds to IRFFT.

  • norm (str, optional) –

    The normalization, optional values: [“backward”, “forward”, “ortho”]. Default value: “backward”.

    • ”backward” has the direct transforms unscaled and the inverse transforms scaled by \(1/n\), where n is the input x’s element numbers.

    • ”ortho” has both direct and inverse transforms are scaled by \(1/\sqrt n\).

    • ”forward” has the direct transforms scaled by \(1/n\) and the inverse transforms unscaled.

  • onesided (bool, optional) – Controls whether the input is halved to avoid redundancy. Default: True.

  • signal_sizes (tuple, optional) –

    Size of the original signal (the signal before rfft, no batch dimension). This parameter is only required in IRFFT mode with onesided set to True, and the following conditions must be satisfied. Default: ().

    • The length of signal_sizes is equal to the signal_ndim of the IRFFT: \(len(signal_sizes)=signal_ndim\).

    • The last dimension of signal_sizes divided by 2, plus 1, equals the last dimension of the IRFFT input: \(signal\_sizes[-1]/2+1=x.shape[-1]\).

    • signal_sizes has exactly the same dimensions as the input shape except for the last dimension: \(signal_sizes[:-1]=x.shape[:-1]\).

Inputs:
  • x (Tensor) - The dimension of the input tensor must be greater than or equal to signal_ndim.

Outputs:

A tensor containing the complex-to-complex, real-to-complex or complex-to-real Fourier transform result.

Raises:
  • TypeError – If the input type of FFT/IFFT/IRFFT is not one of: complex64, complex128.

  • TypeError – If the input type of RFFT is not one of: float32, float64.

  • TypeError – If the input type is not Tensor.

  • ValueError – If x dimension is less than signal_ndim.

  • ValueError – If signal_ndim is greater than 3 or less than 1.

  • ValueError – If norm is none of “backward”, “forward” or “ortho”.

Supported Platforms:

GPU CPU

Examples

>>> # case FFT: signal_ndim: 1, inverse: False, real: False.
>>> fft_in = Tensor(np.array([2, 1, 2]), mindspore.complex64)
>>> fft_net = ops.FFTWithSize(signal_ndim=1, inverse=False, real=False)
>>> fft_output = fft_net(fft_in)
>>> print(fft_output)
[5.        +0.j         0.5       +0.86602545j 0.50000006-0.8660255j ]
>>> # case IFFT: signal_ndim: 1, inverse: True, real: False.
>>> ifft_in = fft_output
>>> ifft_net = ops.FFTWithSize(signal_ndim=1, inverse=True, real=False)
>>> ifft_output = ifft_net(ifft_in)
>>> print(ifft_output)
[2.        -1.9868216e-08j 0.99999994+0.0000000e+00j
 1.9999999 +7.9472862e-08j]
>>> # case RFFT2D: signal_ndim: 2, inverse: False, real: True.
>>> rfft_in = Tensor(np.array([[2, 1, 2], [3, 1, 6]]), mindspore.float32)
>>> rfft_net = ops.FFTWithSize(signal_ndim=2, inverse=False, real=True)
>>> rfft_output = rfft_net(rfft_in)
>>> print(rfft_output)
[[ 1.5000000e+01+1.1920929e-07j -2.3841858e-07+5.1961522e+00j]
 [-5.0000000e+00-2.9802322e-08j  9.9999988e-01-3.4641016e+00j]]
>>> # case IRFFT2D: signal_ndim: 2, inverse: True, real: True.
>>> irfft_in = rfft_output
>>> irfft_net = ops.FFTWithSize(signal_ndim=2, inverse=True, real=True, signal_sizes=rfft_in.shape)
>>> irfft_output = irfft_net(irfft_in)
>>> print(irfft_output)
[[2.         1.         2.        ]
 [3.         0.99999994 5.9999995 ]]
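
The first case can be cross-checked against numpy's FFT, which also uses the "backward" normalization by default (an illustration only):

>>> print(np.fft.fft(np.array([2.0, 1.0, 2.0])))
[5. +0.j        0.5+0.8660254j 0.5-0.8660254j]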
class tinyms.primitives.FastGeLU[source]

Fast Gaussian Error Linear Units activation function.

Refer to mindspore.ops.fast_gelu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> fast_gelu = ops.FastGeLU()
>>> output = fast_gelu(x)
>>> print(output)
[[-1.5418735e-01  3.9921875e+00 -9.7473649e-06]
 [ 1.9375000e+00 -1.0052517e-03  8.9824219e+00]]
class tinyms.primitives.FastGelu[source]

Same as operator FastGeLU. FastGelu will be deprecated in the future. Please use FastGeLU instead.

class tinyms.primitives.Fill[source]

The Fill interface is deprecated, please use the mindspore.ops.FillV2 instead.

Supported Platforms:

Deprecated

class tinyms.primitives.FillDiagonal(fill_value, wrap=False)[source]

Fills the main diagonal of a Tensor in-place with a specified value and returns the result. The input has at least 2 dimensions, and all dimensions of input must be equal in length when the dimension of input is greater than 2.

Parameters:
  • fill_value (float) – The value to fill the diagonal of input_x.

  • wrap (bool, optional) – Controls whether the diagonal elements continue onto the remaining rows in the case of a tall matrix (a matrix with more rows than columns). The example below demonstrates how it works when wrap is set to True. Default: False.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type must be float32, int32 or int64.

Outputs:
  • y (Tensor) - Tensor, has the same shape and data type as the input input_x.

Raises:
  • TypeError – If data type of input_x is not one of the following: float32, int32, int64.

  • ValueError – If the dimension of input_x is not greater than 1.

  • ValueError – If the size of each dimension is not equal, when the dimension is greater than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32))
>>> fill_value = 9.9
>>> fill_diagonal = ops.FillDiagonal(fill_value)
>>> y = fill_diagonal(x)
>>> print(y)
[[9.9 2.  3. ]
 [4.  9.9 6. ]
 [7.  8.  9.9]]
>>> x = Tensor(np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]).astype(np.int32))
>>> fill_value = 9.0
>>> fill_diagonal = ops.FillDiagonal(fill_value)
>>> y = fill_diagonal(x)
>>> print(y)
[[9 0 0]
 [1 9 1]
 [2 2 9]
 [3 3 3]
 [4 4 4]
 [5 5 5]]
>>> x = Tensor(np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3],
...                      [4, 4, 4], [5, 5, 5], [6, 6, 6]]).astype(np.int64))
>>> fill_value = 9.0
>>> wrap = True
>>> fill_diagonal = ops.FillDiagonal(fill_value, wrap)
>>> y = fill_diagonal(x)
>>> print(y)
[[9 0 0]
 [1 9 1]
 [2 2 9]
 [3 3 3]
 [9 4 4]
 [5 9 5]
 [6 6 9]]
class tinyms.primitives.FillV2[source]

Creates a tensor with the shape described by shape and fills it with the value given in value.

Inputs:
  • shape (Union[Tuple[int], Tensor[int]]) - 1-D Tensor or Tuple, specifying the shape of the output tensor. Its dtype must be int32 or int64.

  • value (Tensor) - A 0-D Tensor, the value to fill the output tensor y with.

Outputs:
  • y (Tensor) - A tensor, its shape and value are described above.

Raises:
  • TypeError – If shape is not a 1-D tensor or tuple.

  • TypeError – If the data type of shape is not int32 or int64.

  • ValueError – If value is not a 0-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> fillV2 = ops.FillV2()
>>> output = fillV2(Tensor([2, 3], mindspore.int32), Tensor(1, mindspore.float32))
>>> print(output)
[[1. 1. 1.]
 [1. 1. 1.]]
>>> output = fillV2(Tensor([3, 3], mindspore.int64), Tensor(0, mindspore.int32))
>>> print(output)
[[0 0 0]
 [0 0 0]
 [0 0 0]]
class tinyms.primitives.Fills[source]

The Fills primitive is deprecated. Please use mindspore.ops.fill() instead.

Supported Platforms:

Deprecated

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> a = Tensor(np.arange(4).reshape((2,2)).astype('float32'))
>>> fills = ops.Fills()
>>> output = fills(a, float(1))
>>> print(output)
[[1. 1.]
 [1. 1.]]
class tinyms.primitives.Flatten[source]

Flattens a tensor without changing its batch size on the 0-th axis.

Refer to mindspore.ops.flatten() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> flatten = ops.Flatten()
>>> output = flatten(input_x)
>>> print(output.shape)
(1, 24)
class tinyms.primitives.FloatStatus[source]

Determines if the elements contain Not a Number (NaN), positive infinity or negative infinity. 0 for normal, 1 for overflow.

Inputs:
  • x (Tensor) - The input tensor. The data type must be float16, float32 or float64. \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the shape of \((1,)\), and the dtype is mindspore.dtype.float32.

Raises:

TypeError – If dtype of x is not in [float16, float32, float64].

Supported Platforms:

GPU

Examples

>>> float_status = ops.FloatStatus()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = float_status(x)
>>> print(result)
[1.]
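
In this example, np.log(-1) evaluates to NaN and np.log(0) to -inf, so the status flag is raised and the result is [1.]; an all-finite input would yield [0.] instead.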
class tinyms.primitives.Floor[source]

Rounds a tensor down to the closest integer element-wise.

Refer to mindspore.ops.floor() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> floor = ops.Floor()
>>> output = floor(x)
>>> print(output)
[ 1.  2. -2.]
class tinyms.primitives.FloorDiv[source]

Divides the first input tensor by the second input tensor element-wise and round down to the closest integer.

Refer to mindspore.ops.floor_div() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_div = ops.FloorDiv()
>>> output = floor_div(x, y)
>>> print(output)
[ 0  1 -1]
class tinyms.primitives.FloorMod[source]

Computes the remainder of division element-wise, and it’s a flooring divide.

Refer to mindspore.ops.floor_mod() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_mod = ops.FloorMod()
>>> output = floor_mod(x, y)
>>> print(output)
[2 1 2]
class tinyms.primitives.Fmax[source]

Computes the maximum of input tensors element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.fmax() for more details.

Supported Platforms:

CPU

Examples

>>> x1 = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> x2 = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> fmax = ops.Fmax()
>>> output = fmax(x1, x2)
>>> print(output)
[4. 5. 6.]
class tinyms.primitives.Fmin[source]

Computes the minimum of input tensors element-wise.

Refer to mindspore.ops.fmin() for more details.

Supported Platforms:

CPU

Examples

>>> x1 = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> x2 = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> fmin = ops.Fmin()
>>> output = fmin(x1, x2)
>>> print(output)
[1. 2. 3.]
class tinyms.primitives.FractionalAvgPool(pooling_ratio, pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0)[source]

Performs fractional avg pooling on the input.

Fractional avg pooling is similar to regular avg pooling, but with the added flexibility of allowing the overall reduction ratio N to be a non-integer value. In regular avg pooling, an input set is reduced in size by taking the average value of N x N (usually 2x2) subsections of the set, with the goal of reducing the set by a factor of N, where N is an integer.

Warning

pooling_ratio currently only supports the row and col dimensions and should be >= 1.0; the first and last elements must be 1.0 because pooling on the batch and channels dimensions is not allowed.

Parameters:
  • pooling_ratio (list(float)) – Decides the shape of the output; a list of floats of length >= 4. The pooling ratio for each dimension of value should be >= 0; currently this is only supported for the row and col dimensions. The first and last elements must be 1.0 because pooling on the batch and channels dimensions is not allowed.

  • pseudo_random (bool, optional) – Generate the pooling sequence either randomly or pseudo-randomly. If the pseudo_random parameter is set to True, the sequence will be generated in a pseudo-random fashion, otherwise it will be generated randomly. Refer to Fractional Max-Pooling by Benjamin Graham to understand the distinction between the two. Default: False.

  • overlapping (bool, optional) – When set to True, the values at the boundary of adjacent pooling cells will be shared by both cells during pooling process. When set to False, the values are not reused. Default: False.

  • deterministic (bool, optional) – If deterministic is set to True, a fixed pooling region will be used in the computation graph, ensuring that the FractionalAvgPool is deterministic. This is often used in unit tests. When set to False, fixed pool regions will not be used. Default: False.

  • seed (int, optional) – If either seed or seed2 are set to a non-zero value, the random number generator will be seeded using the specified seed. If neither seed nor seed2 are set, the generator will be seeded by a random seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

Inputs:
  • x (Tensor) - The data type must be one of the following types: float32, float64, int32, int64. Tensor of shape \((N, H_{in}, W_{in}, C_{in})\).

Outputs:
  • y (Tensor) - A tensor, the output of FractionalAvgPool, has the same data type as x. Tensor of shape \((N, H_{out}, W_{out}, C_{out})\).

  • row_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary rows.

  • col_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary cols.

Raises:
  • TypeError – If data type of x is not float32, float64, int32, int64.

  • TypeError – If x is not a 4D tensor.

  • ValueError – If element of x equals 0 or is less than 0.

  • ValueError – If pooling_ratio is a list whose length is not equal to 4.

  • ValueError – If the first and last element of pooling_ratio is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]).reshape([1,4,4,1]).astype(np.int64)
>>> pooling_ratio=[1.0,1.5,1.5,1.0]
>>> fractionalavgpool_op = ops.FractionalAvgPool(pooling_ratio=pooling_ratio)
>>> output = fractionalavgpool_op(Tensor(x))
>>> print(output)
(Tensor(shape=[1, 2, 2, 1], dtype=Int64, value=
[[[[ 3],
   [ 5]],
  [[11],
   [13]]]]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]))
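
Reading the output: the returned pooling boundaries [0, 2, 4] for both rows and cols split the 4x4 input into four 2x2 blocks, and each output cell is the block average (truncated for integer inputs), e.g. (1 + 2 + 5 + 6) // 4 = 3 for the top-left block.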
class tinyms.primitives.FractionalMaxPool(pooling_ratio, pseudo_random=False, overlapping=False, deterministic=False, seed=0, seed2=0)[source]

Performs fractional max pooling on the input.

Fractional max pooling is similar to regular max pooling, but with the added flexibility of allowing the overall reduction ratio N to be a non-integer value. In regular max pooling, an input set is reduced in size by taking the maximum value of N x N (usually 2x2) subsections of the set, with the goal of reducing the set by a factor of N, where N is an integer.

In contrast, fractional max pooling uses randomly generated pool sizes that are fairly uniform in size.

Warning

pooling_ratio currently only supports the row and col dimensions and should be >= 1.0; the first and last elements must be 1.0 because pooling on the batch and channels dimensions is not allowed.

Parameters:
  • pooling_ratio (list(float)) – Decides the shape of the output; a list of floats of length >= 4. The pooling ratio for each dimension of value should not be less than 0; currently this is only supported for the row and col dimensions.

  • pseudo_random (bool, optional) –

    Generate the pooling sequence either randomly or pseudo-randomly. If the pseudo_random parameter is set to True, the sequence will be generated in a pseudo-random fashion, otherwise it will be generated randomly. Refer to Fractional Max-Pooling by Benjamin Graham to understand the distinction between the two. Default: False.

  • overlapping (bool, optional) – When set to True, the values at the boundary of adjacent pooling cells will be shared by both cells during pooling process. When set to False, the values are not reused. Default: False.

  • deterministic (bool, optional) – If deterministic is set to True, a fixed pooling region will be used in the computation graph, ensuring that the FractionalMaxPool is deterministic. This is often used in unit tests. When set to False, fixed pool regions will not be used. Default: False.

  • seed (int, optional) – If either seed or seed2 are set to a non-zero value, the random number generator will be seeded using the specified seed. If neither seed nor seed2 are set, the generator will be seeded by a random seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

Inputs:
  • x (Tensor) - The data type must be one of the following types: float32, float64, int32, int64. Tensor of shape \((N, H_{in}, W_{in}, C_{in})\).

Outputs:
  • y (Tensor) - A tensor, the output of FractionalMaxPool, with the same data type as x. Tensor of shape \((N, H_{out}, W_{out}, C_{out})\).

  • row_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary rows.

  • col_pooling_sequence (Tensor) - A tensor of type int64, the result list of pool boundary cols.

Raises:
  • TypeError – If data type of x is not float32, float64, int32, int64.

  • TypeError – If x is not a 4D tensor.

  • ValueError – If any element of x is less than or equal to 0.

  • ValueError – If pooling_ratio is a list whose length is not equal to 4.

  • ValueError – If the first and last elements of pooling_ratio are not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]).reshape([1,4,4,1]).astype(np.int64)
>>> pooling_ratio=[1.0,1.5,1.5,1.0]
>>> fractionalmaxpool_op = ops.FractionalMaxPool(pooling_ratio=pooling_ratio)
>>> output = fractionalmaxpool_op(Tensor(x))
>>> print(output)
(Tensor(shape=[1, 2, 2, 1], dtype=Int64, value=
[[[[ 6],
   [ 8]],
  [[14],
   [16]]]]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]), Tensor(shape=[3], dtype=Int64, value= [0, 2, 4]))
class tinyms.primitives.FractionalMaxPool3DWithFixedKsize(ksize, output_shape, data_format='NCDHW')[source]

Applies a 3D fractional max pooling to an input signal composed of multiple input planes. The max-pooling operation is applied in \((kD, kH, kW)\) regions by a stochastic step size determined by the target output size output_shape.

The number of output features is equal to the number of input planes.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

The input and output data formats can be “NCDHW” and “NDHWC”. N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width.

Parameters:
  • ksize (Union[float, tuple]) – Size of the pooling window. ksize can be a tuple of three values specifying a shape \((k_D, k_H, k_W)\), or a single value K for \((K, K, K)\).

  • output_shape (Union[int, tuple]) – The target output shape. output_shape can be a tuple of three values specifying a shape \((D_{out}, H_{out}, W_{out})\), or a single value S for \((S, S, S)\).

  • data_format (str, optional) – The optional value for data format. Currently ‘NCDHW’ and ‘NDHWC’ are supported. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - The input of FractionalMaxPool3DWithFixedKsize, which is a 4D or 5D tensor. Tensor of data type: float16, float32, double, int32, int64. Supported shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((N, D_{in}, H_{in}, W_{in}, C)\).

  • random_samples (Tensor) - The random step of FractionalMaxPool3DWithFixedKsize, which is a 3D tensor. Tensor of data type: float16, float32, double, with values in (0, 1). Supported shape \((N, C, 3)\).

Outputs:
  • y (Tensor) - A tensor, the output of FractionalMaxPool3DWithFixedKsize. Has the same data type with x. Tensor of shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((N, D_{out}, H_{out}, W_{out}, C)\).

  • argmax (Tensor) - A tensor, the indices along with the outputs. Has the same shape as the y and int32 or int64 data type.

Raises:
  • TypeError – If input_x is not a 4D or 5D tensor.

  • TypeError – If random_samples is not a 3D tensor.

  • TypeError – If data type of x is not float16, float32, double, int32, int64.

  • TypeError – If dtype of random_samples is not float16, float32, double.

  • TypeError – If dtype of argmax is not int32, int64.

  • ValueError – If output_shape is a tuple and if output_shape length is not 3.

  • ValueError – If ksize is a tuple and if ksize length is not 3.

  • ValueError – If numbers in output_shape or ksize is not positive.

  • ValueError – If data_format is neither ‘NCDHW’ nor ‘NDHWC’.

  • ValueError – If the first dimension size of input_x and random_samples is not equal.

  • ValueError – If the second dimension size of input_x and random_samples is not equal.

  • ValueError – If the third dimension size of random_samples is not 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...       .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> random_samples = Tensor(np.array([0.7, 0.7, 0.7]).reshape([1, 1, 3]), mstype.float32)
>>> ksize = (1, 1, 1)
>>> output_shape = (1, 1, 2)
>>> net = ops.FractionalMaxPool3DWithFixedKsize(ksize = ksize, output_shape = output_shape)
>>> output, argmax = net(x, random_samples)
>>> print(output)
[[[[[13. 16.]]]]]
>>> print(argmax)
[[[[[12 15]]]]]
class tinyms.primitives.FractionalMaxPoolWithFixedKsize(ksize, output_shape, data_format='NCHW')[source]

Applies a 2D fractional max pooling to an input signal composed of multiple input planes. The max-pooling operation is applied in \((kH, kW)\) regions by a stochastic step size determined by the target output size output_shape.

The number of output features is equal to the number of input planes.

Fractional MaxPooling is described in the paper Fractional Max-Pooling.

Parameters:
  • ksize (Union[int, tuple[int]]) – Size of the pooling window. ksize can be a tuple of two values specifying a shape \((k_H, k_W)\), or a single int K for \((K, K)\).

  • output_shape (Union[int, tuple[int]]) – The target output shape. output_shape can be a tuple of two values specifying a shape \((H_{out}, W_{out})\), or a single int S for \((S, S)\).

  • data_format (str, optional) – The optional value for data format; only ‘NCHW’ is supported. Default: “NCHW”.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, C, H_{in}, W_{in})\), with float16, float32, float64, int32, int64 data type.

  • random_samples (Tensor) - Tensor of shape \((N, C, 2)\). with float16, float32, float64 data type.

Outputs:
  • y (Tensor) - Has the same type as the input_x. Has the shape \((N, C, H_{out}, W_{out})\).

  • argmax (Tensor) - A tensor whose data type must be int64. Has the same shape as y.

Raises:
  • TypeError – If data type of input_x is not one of the following: float16, float32, float64, int32, int64.

  • TypeError – If data type of random_samples is not one of the following: float16, float32, float64.

  • ValueError – If ksize is not a number and ksize is not a tuple of length 2.

  • ValueError – If output_shape is not a number and output_shape is not a tuple of length 2.

  • ValueError – If the sum of ksize, output_shape and -1 is larger than the corresponding dimension of input_x.

  • ValueError – If the dimension of random_samples is not 3.

  • ValueError – If the first dimension size of input_x and random_samples is not equal.

  • ValueError – If the second dimension size of input_x and random_samples is not equal.

  • ValueError – If the third dimension size of random_samples is not 2.

Supported Platforms:

CPU

Examples

>>> # the ksize is an int number and the output_shape is a tuple.
>>> ksize = 2
>>> output_shape = (2,2)
>>> data_format = "NCHW"
>>> input_x = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.9581, 0.4760, 0.9014, 0.8522, 0.3664,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([1, 1, 5, 5]), mstype.float32)
>>> random_samples = Tensor(np.array([[[0.8, 0.8]]]), mstype.float32)
>>> net = ops.FractionalMaxPoolWithFixedKsize(ksize, output_shape, data_format)
>>> y, argmax = net(input_x, random_samples)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
class tinyms.primitives.FusedAdaFactor(enable_scale_parameter=False, enable_first_moment=False, enable_weight_decay=False)[source]

Updates gradients by the Adaptive Learning Rates with Sublinear Memory Cost (Adafactor) algorithm.

The Adafactor algorithm is proposed in Adafactor: Adaptive Learning Rates with Sublinear Memory Cost.

Warning

This is an experimental API that is subject to change or deletion.

The Adafactor update for a weight vector is as follows,

\[\begin{split}\begin{array}{l} \alpha_{t}=\max \left(\epsilon_{2}, \operatorname{RMS}\left(X_{t-1}\right)\right) \rho_{t} \\ G_{t}=\nabla f_{t}\left(X_{t-1}\right) \\ \hat{V}_{t}=\hat{\beta}_{2t} \hat{V}_{t-1}+\left(1-\hat{\beta}_{2t}\right)\left(G_{t}^{2}+\epsilon_{1} 1_{n}\right) \\ U_{t}=G_{t} / \sqrt{\hat{V}_{t}} \\ \hat{U}_{t}=U_{t} / \max \left(1, \operatorname{RMS}\left(U_{t}\right) / d\right) \\ X_{t}=X_{t-1}-\alpha_{t} \hat{U}_{t} \end{array}\end{split}\]

The Adafactor update for weight matrices is as follows,

\[\begin{split}\begin{array}{l} \alpha_{t}=\max \left(\epsilon_{2}, \operatorname{RMS}\left(X_{t-1}\right)\right) \rho_{t} \\ G_{t}=\nabla f_{t}\left(X_{t-1}\right) \\ R_{t}=\hat{\beta}_{2t} R_{t-1}+\left(1-\hat{\beta}_{2t}\right)\left(G_{t}^{2}+\epsilon_{1} 1_{n} 1_{m}^{\top}\right) 1_{m} \\ C_{t}=\hat{\beta}_{2t} C_{t-1}+\left(1-\hat{\beta}_{2t}\right) 1_{n}^{\top}\left(G_{t}^{2}+\epsilon_{1} 1_{n} 1_{m}^{\top}\right) \\ \hat{V}_{t}=R_{t} C_{t} / 1_{n}^{\top} R_{t} \\ U_{t}=G_{t} / \sqrt{\hat{V}_{t}} \\ \hat{U}_{t}=U_{t} / \max \left(1, \operatorname{RMS}\left(U_{t}\right) / d\right) \\ X_{t}=X_{t-1}-\alpha_{t} \hat{U}_{t} \end{array}\end{split}\]

Where RMS is:

\[\operatorname{RMS}\left(U_{t}\right)=\operatorname{RMS}_{x \in X}\left(u_{x t}\right)=\sqrt{\operatorname{Mean}_{x \in X}\left(\frac{\left(g_{x t}\right)^{2}}{\hat{v}_{x t}}\right)}\]

\(x\) is each individual parameter, \(t\) is the current step number, \(\alpha_{t}\) is the learning rate, \(f(X)\) is the loss function, \(\epsilon_{1}\) and \(\epsilon_{2}\) are small positive numbers to prevent numerical errors, \(d\) is the clipping threshold, \(\beta_{2}\) is the moment decay, \(\rho\) is the relative step size, \(R\) is the running average of the row sums of the squared gradient, and \(C\) is the running average of the column sums of the squared gradient.
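As an illustration, the factored (matrix) update above can be transcribed into a few lines of NumPy. This is a minimal sketch with hypothetical names, not the fused kernel:

>>> import numpy as np
>>> def adafactor_matrix_step(X, R, C, G, beta2, eps1, eps2, rho, d):
...     # Row and column running averages of the squared gradient.
...     R = beta2 * R + (1 - beta2) * (G * G + eps1).sum(axis=1)
...     C = beta2 * C + (1 - beta2) * (G * G + eps1).sum(axis=0)
...     # Rank-1 reconstruction of the second moment: V = R C / sum(R).
...     V = np.outer(R, C) / R.sum()
...     rms = lambda t: np.sqrt(np.mean(t * t))
...     U = G / np.sqrt(V)
...     U_hat = U / max(1.0, rms(U) / d)
...     alpha = max(eps2, rms(X)) * rho
...     return X - alpha * U_hat, R, C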

Parameters:
  • enable_weight_decay (bool) – If True, enable weight decay. Default: False.

  • enable_first_moment (bool) – If True, enable the first moment. Default: False.

  • enable_scale_parameter (bool) – If True, scale the learning rate by the parameter. Default: False.

Inputs:
  • epsilon (Tensor) - The input epsilon pair \((\epsilon_1, \epsilon_2)\) used in the updating formulas.

  • clip_threshold (float) - The threshold of root mean square of final gradient update.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations.

  • weight_decay (float) - The weight decay value, must be a scalar tensor with float data type.

  • learning_rate (float) - The learning rate value.

  • gradient (Tensor) - Gradient.

  • param (Tensor) - Weights to be updated.

  • exp_avg (Tensor) - The exponential moving average of 1st moment optimizer state.

  • exp_avg_sq_row (Tensor) - The exponential moving average of the squared gradient's row factor.

  • exp_avg_sq_col (Tensor) - The exponential moving average of the squared gradient's col factor.

  • exp_avg_sq (Tensor) - The exponential moving average of the squared gradient.

Outputs:
  • dummy_param (Tensor) - The same shape and data type as param.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> param_shape = [2, 3, 2]
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.opt = ops.FusedAdaFactor()
...         self.param = Parameter(Tensor(np.ones(param_shape), mstype.float32), name="param")
...         self.exp_avg = Parameter(Tensor(np.zeros(param_shape), mstype.float32), name="exp_avg")
...         self.exp_avg_sq = Parameter(Tensor(np.zeros(param_shape), mstype.float32), name="exp_avg_sq")
...         self.exp_avg_sq_row = Parameter(Tensor(np.zeros([2, 3]), mstype.float32), name="exp_avg_sq_row")
...         self.exp_avg_sq_col = Parameter(Tensor(np.zeros([2, 2]), mstype.float32), name="exp_avg_sq_col")
...
...     def construct(self, epsilon, clip_threshold, beta1, beta2, weight_decay, lr, grad):
...         out = self.opt(epsilon, clip_threshold, beta1, beta2, weight_decay, lr, grad, self.param,
...                        self.exp_avg, self.exp_avg_sq_row, self.exp_avg_sq_col, self.exp_avg_sq)
...         return out
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> gradient = Tensor(np.ones(param_shape), mstype.float32)
>>> output = net((1e-30, 1e-3), 1.0, 0.9, 0.8, 1e-2, 0.03, gradient)
class tinyms.primitives.FusedAdaFactorWithGlobalNorm(enable_scale_parameter=False, enable_first_moment=False, enable_weight_decay=False)[source]

Divides the gradient by the global norm before applying the FusedAdaFactor update. Refer to the superclass FusedAdaFactor for details.

class tinyms.primitives.FusedCastAdamWeightDecay(use_locking=False)[source]

Updates gradients by the Adaptive Moment Estimation (AdamWeightDecay) algorithm with weight decay. This operator incorporates type conversion when parameters are initialized with dtype of float16.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. The AdamWeightDecay variant was proposed in Decoupled Weight Decay Regularization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ update = \frac{m}{\sqrt{v} + \epsilon} \\ update = \begin{cases} update + weight\_decay * w & \text{ if } weight\_decay > 0 \\ update & \text{ otherwise } \end{cases} \\ w = w - lr * update \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(\beta_1, \beta_2\) represent beta1 and beta2, \(lr\) represents learning_rate, \(w\) represents var, \(decay\) represents weight_decay, \(\epsilon\) represents epsilon.
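The update rule can be transcribed directly into NumPy. The following helper is a minimal sketch for illustration (hypothetical name; it omits the float16 parameter casting that the fused operator performs):

>>> import numpy as np
>>> def adam_weight_decay_step(w, m, v, g, lr, beta1, beta2, eps, weight_decay):
...     # Plain transcription of the updating formulas above.
...     m = beta1 * m + (1 - beta1) * g
...     v = beta2 * v + (1 - beta2) * g * g
...     update = m / (np.sqrt(v) + eps)
...     if weight_decay > 0:
...         update = update + weight_decay * w
...     return w - lr * update, m, v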

Parameters:

use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated with the type float16 or float32.

  • m (Tensor) - The 1st moment vector in the updating formula with the type float32.

  • v (Tensor) - The 2nd moment vector in the updating formula with the type float32.

  • lr (float) - \(lr\) in the updating formula.

  • beta1 (float) - The exponential decay rate for the 1st moment estimations.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimations.

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • decay (float) - The weight decay value, must be a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the type float16.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter
>>> from mindspore import dtype as mstype
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.opt = ops.FusedCastAdamWeightDecay()
...         self.var = Parameter(Tensor(np.ones([2, 2]), mstype.float16), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]), mstype.float32), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]), mstype.float32), name="v")
...     def construct(self, lr, beta1, beta2, epsilon, decay, grad, norm):
...         out = self.opt(self.var, self.m, self.v, lr, beta1, beta2, epsilon, decay, grad, norm)
...         return out
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]), mstype.float16)
>>> output = net(0.001, 0.9, 0.999, 1e-8, 0.0, gradient, 1.0)
infer_dtype(var_dtype, m_dtype, v_dtype, lr_dtype, beta1_dtype, beta2_dtype, epsilon_dtype, decay_dtype, grad_dtype, global_norm)[source]

infer dtype

class tinyms.primitives.FusedSparseAdam(use_locking=False, use_nesterov=False)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. This operator is used when the gradient is sparse.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(\beta_1^t\) and \(\beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All of the inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.
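Here, "merges the duplicate value" means that gradient rows sharing the same index are summed before the Adam update is applied. A minimal NumPy sketch of that merge step (illustrative only, not the fused kernel):

>>> import numpy as np
>>> grad = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
>>> indices = np.array([0, 1, 0])      # index 0 appears twice
>>> merged = np.zeros((2, 2))
>>> np.add.at(merged, indices, grad)   # rows with equal indices are summed
>>> print(merged)
[[4. 4.]
 [2. 2.]]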

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and data type as var.

  • v (Parameter) - The 2nd moment vector in the updating formula (the mean square gradients), with the same shape as var and float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • lr (Tensor) - \(l\) in the updating formula. With float32 data type. The shape is \((1, )\).

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type. The shape is \((1, )\).

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type. The shape is \((1, )\).

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type. The shape is \((1, )\).

  • gradient (Tensor) - Gradient, with the same data type as var, and gradient.shape[1:] equals var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - Gradient indices with int32 data type and indices.shape[0] = gradient.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • m (Tensor) - A Tensor with shape \((1, )\).

  • v (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If dtype of var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient or indices is not float32.

  • RuntimeError – If the data type of all inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adam = ops.FusedSparseAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                                      epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> beta2_power = Tensor(0.999, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.999, mindspore.float32)
>>> epsilon = Tensor(1e-8, mindspore.float32)
>>> gradient = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]), mindspore.float32)
>>> indices = Tensor([0, 1], mindspore.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(net.var.asnumpy())
[[[0.9997121  0.9997121 ]]
 [[0.9997121  0.9997121 ]]
 [[0.99971527 0.99971527]]]
class tinyms.primitives.FusedSparseFtrl(lr, l1, l2, lr_power, use_locking=False)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the FTRL-proximal scheme.

All inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.
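For intuition, a textbook FTRL-proximal step can be written in NumPy as below. This is an illustrative sketch (the fused operator merges duplicate indices and may differ in implementation details); with the example values at the end of this entry it reproduces the updated rows of var up to float32 rounding:

>>> import numpy as np
>>> def ftrl_step(var, accum, linear, grad, lr, l1, l2, lr_power):
...     new_accum = accum + grad * grad
...     sigma = (new_accum ** -lr_power - accum ** -lr_power) / lr
...     linear = linear + grad - sigma * var
...     quadratic = new_accum ** -lr_power / lr + 2.0 * l2
...     var = np.where(np.abs(linear) > l1,
...                    (np.sign(linear) * l1 - linear) / quadratic, 0.0)
...     return var, new_accum, linear
>>> v, a, l = ftrl_step(np.ones(2), np.ones(2), np.ones(2), np.full(2, 0.1),
...                     lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
>>> print(np.round(v, 5))
[-0.00598 -0.00598]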

Parameters:
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool) – Use locks for updating operation if true . Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same type and shape as var.

  • linear (Parameter) - the linear coefficient to be updated, must be same type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, and grad.shape[1:] equals var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • accum (Tensor) - A Tensor with shape \((1, )\).

  • linear (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If lr, l1, l2 or lr_power is not a float.

  • ValueError – If lr_power is greater than zero.

  • TypeError – If dtype of var is not float32.

  • TypeError – If dtype of indices is not int32.

  • TypeError – If the shape of accum, linear or grad is not the same as that of var.

  • TypeError – If the shape of indices is not the same as the first dimension of grad.

  • RuntimeError – If the data type of all of inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.FusedSparseFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(net.var.asnumpy())
[[[-0.00598256 -0.00598256]]
 [[-0.00598256 -0.00598256]]
 [[ 1.          1.        ]]]
class tinyms.primitives.FusedSparseLazyAdam(use_locking=False, use_nesterov=False)[source]

Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. This operator is used when the gradient is sparse. The behavior is not equivalent to the original Adam algorithm, as only the parameters at the current indices will be updated.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(\beta_1^t\) and \(\beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

All of the inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.
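The "lazy" aspect can be illustrated with plain NumPy row indexing: only the rows of var selected by indices are touched, while the remaining rows keep their values (compare row 2 of var in the example below). A tiny sketch of the row selection:

>>> import numpy as np
>>> var = np.ones((3, 2), dtype=np.float32)
>>> indices = np.array([0, 1])
>>> delta = np.full((2, 2), 0.1, dtype=np.float32)
>>> var[indices] -= delta   # rows 0 and 1 are updated, row 2 stays untouched
>>> print(var)
[[0.9 0.9]
 [0.9 0.9]
 [1.  1. ]]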

Parameters:
  • use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, update the gradients using NAG. If false, update the gradients without using NAG. Default: False.

Inputs:
  • var (Parameter) - Parameters to be updated with float32 data type. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • m (Parameter) - The 1st moment vector in the updating formula, has the same shape and data type as var.

  • v (Parameter) - The 2nd moment vector in the updating formula (the mean square gradients), with the same shape as var and float32 data type.

  • beta1_power (Tensor) - \(beta_1^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta2_power (Tensor) - \(beta_2^t\) in the updating formula with float32 data type. The shape is \((1, )\).

  • lr (Tensor) - \(l\) in the updating formula with float32 data type. The shape is \((1, )\).

  • beta1 (Tensor) - The exponential decay rate for the 1st moment estimations with float32 data type. The shape is \((1, )\).

  • beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations with float32 data type. The shape is \((1, )\).

  • epsilon (Tensor) - Term added to the denominator to improve numerical stability with float32 data type. The shape is \((1, )\).

  • gradient (Tensor) - Gradient value with float32 data type, and gradient.shape[1:] equals var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - Gradient indices with int32 data type and indices.shape[0] = gradient.shape[0].

Outputs:

Tuple of 3 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • m (Tensor) - A Tensor with shape \((1, )\).

  • v (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If neither use_locking nor use_nesterov is a bool.

  • TypeError – If dtype of var, m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not float32.

  • TypeError – If dtype of indices is not int32.

  • RuntimeError – If the data type of all inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_lazyadam = ops.FusedSparseLazyAdam()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, indices):
...         out = self.sparse_apply_lazyadam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1,
...                                          beta2, epsilon, grad, indices)
...         return out
...
>>> net = Net()
>>> beta1_power = Tensor(0.9, mindspore.float32)
>>> beta2_power = Tensor(0.999, mindspore.float32)
>>> lr = Tensor(0.001, mindspore.float32)
>>> beta1 = Tensor(0.9, mindspore.float32)
>>> beta2 = Tensor(0.999, mindspore.float32)
>>> epsilon = Tensor(1e-8, mindspore.float32)
>>> gradient = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]), mindspore.float32)
>>> indices = Tensor([0, 1], mindspore.int32)
>>> output = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient, indices)
>>> print(net.var.asnumpy())
[[[0.9997121  0.9997121 ]]
 [[0.9997121  0.9997121 ]]
 [[1.         1.        ]]]
class tinyms.primitives.FusedSparseProximalAdagrad(use_locking=False)[source]

Merges the duplicate value of the gradient and then updates relevant entries according to the proximal adagrad algorithm.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

All of the inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.
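The displayed formulas translate directly into NumPy. The following sketch (hypothetical helper, dense rather than sparse) reproduces the updated rows of the example output at the end of this entry:

>>> import numpy as np
>>> def proximal_adagrad_step(var, accum, grad, lr, l1, l2):
...     accum = accum + grad * grad
...     prox_v = var - lr * grad / np.sqrt(accum)
...     var = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0)
...     return var, accum
>>> v, a = proximal_adagrad_step(np.ones(2), np.ones(2), np.full(2, 0.1),
...                              lr=0.01, l1=0.0, l2=0.0)
>>> print(np.round(v, 8))
[0.99900496 0.99900496]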

Parameters:

use_locking (bool) – If true, the variable and accumulation tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Variable tensor to be updated, has the same shape and data type as var.

  • lr (Tensor) - The learning rate value. The data type must be float32. The shape is \((1, )\).

  • l1 (Tensor) - l1 regularization strength. The data type must be float32. The shape is \((1, )\).

  • l2 (Tensor) - l2 regularization strength. The data type must be float32. The shape is \((1, )\).

  • grad (Tensor) - A tensor of the same data type as var, and grad.shape[1:] equals var.shape[1:] if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 2 Tensors, this operator will update the input parameters directly, the outputs are useless.

  • var (Tensor) - A Tensor with shape \((N, *)\).

  • accum (Tensor) - A Tensor with shape \((1, )\).

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, lr, l1, l2 or grad is not float32.

  • TypeError – If dtype of indices is not int32.

  • RuntimeError – If the data type of all inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.FusedSparseProximalAdagrad()
...         self.var = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones([3, 1, 2]).astype(np.float32)), name="accum")
...         self.lr = Tensor(0.01, mindspore.float32)
...         self.l1 = Tensor(0.0, mindspore.float32)
...         self.l2 = Tensor(0.0, mindspore.float32)
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[[0.1, 0.1]], [[0.1, 0.1]]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1]).astype(np.int32))
>>> output = net(grad, indices)
>>> print(net.var.asnumpy())
[[[0.99900496 0.99900496]]
 [[0.99900496 0.99900496]]
 [[1.         1.        ]]]
class tinyms.primitives.FusedWeightScaleApplyMomentum[source]

Optimizer that implements the Momentum algorithm with weight decay and loss scale.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Refer to mindspore.nn.Momentum for more details about the formula and usage.

Inputs of variable, accumulation and gradient comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. Data type conversion of Parameter is not supported; a RuntimeError exception will be thrown.

Inputs:
  • weight_decay (Tensor) - The weight decay value, must be a scalar tensor with float data type. Default: 0.0.

  • loss_scale (Tensor) - The loss scale value, must be a scalar tensor with float data type. Default: 1.0.

  • variable (Parameter) - Weights to be updated. data type must be float.

  • accumulation (Parameter) - Accumulated gradient value by moment weight. Has the same data type as variable.

  • learning_rate (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float data type.

  • gradient (Tensor) - Gradient, has the same data type as variable.

  • momentum (Union[Number, Tensor]) - Momentum, must be a float number or a scalar tensor with float data type.

Outputs:

Tensor, parameters to be updated.

Supported Platforms:

GPU

Examples

Please refer to the usage in mindspore.nn.Momentum, and add weight_decay and loss_scale as inputs.

infer_dtype(d_dtype, s_dtype, v_dtype, a_dtype, l_dtype, g_dtype, m_dtype)[source]

infer dtype

class tinyms.primitives.GLU(axis=-1)[source]

Computes GLU (Gated Linear Unit activation function) of input tensors.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.glu() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> from mindspore import ops, Tensor
>>> from mindspore import dtype as mstype
>>> import numpy as np
>>> axis = 0
>>> x = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([2, 2, 5]), mstype.float32)
>>> glu = ops.GLU(axis=axis)
>>> y = glu(x)
>>> print(y)
[[[0.20028052 0.6916126  0.57412136 0.06512236 0.26307625]
  [0.3682598  0.3093122  0.17306386 0.10212085 0.63814086]]]
class tinyms.primitives.Gamma(seed=0, seed2=0)[source]

Produces random positive floating-point values x, distributed according to the probability density function:

\[\text{P}(x|α,β) = \frac{\exp(-x/β)}{{β^α}\cdot{\Gamma(α)}}\cdot{x^{α-1}}\]

Note

  • Random seed: a set of regular random numbers is generated by a deterministic algorithm, and the random seed is the initial value of that algorithm. With the same random seed, the generated random numbers will not change.

  • Neither the global random seed nor the operator-level random seed is set: use the default value as the random seed.

  • The global random seed is set, but the operator-level random seed is not: the global random seed is concatenated with a randomly generated seed.

  • The global random seed is not set, but the operator-level random seed is set: the default global random seed is concatenated with the operator-level random seed.

  • Both the global random seed and the operator-level random seed are set: the global random seed is concatenated with the operator-level random seed.

Parameters:
  • seed (int) – The operator-level random seed, used to generate random numbers, must be non-negative. Default: 0.

  • seed2 (int) – The global random seed, which combines with the operator-level random seed to determine the final generated random number. Must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • alpha (Tensor) - α is the shape parameter of Gamma distribution, which mainly determines the shape of the curve. It must be greater than 0. The data type is float32.

  • beta (Tensor) - β is the inverse scale parameter of the Gamma distribution, which mainly determines how steep the curve is. It must be greater than 0. The data type is float32.

Outputs:

Tensor. The shape is the broadcasted shape of the input shape and the shapes of alpha and beta. The dtype is float32.

Raises:
  • TypeError – If data type of seed or seed2 is not int.

  • TypeError – If alpha or beta is not a Tensor.

  • TypeError – If data type of alpha or beta is not float32.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend

Examples

>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> beta = Tensor(np.array([1.0]), mstype.float32)
>>> gamma = ops.Gamma(seed=3)
>>> output = gamma(shape, alpha, beta)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
class tinyms.primitives.Gather(batch_dims=0)[source]

Returns the slice of the input tensor corresponding to the elements of input_indices on the specified axis.

The following figure shows the typical calculation process of Gather:

[Figure: Gather calculation process (tinyms/Gather.png)]

where params represents the input input_params, and indices represents input_indices, the indices used for slicing.

Refer to mindspore.ops.gather() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: input_indices is a Tensor with shape (5, ).
>>> input_params = Tensor(np.array([1, 2, 3, 4, 5, 6, 7]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 4, 2, 6]), mindspore.int32)
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[1. 3. 5. 3. 7.]
>>> # case2: input_indices is a Tensor with shape (2, 2). When the input_params has one dimension,
>>> # the output shape is equal to the input_indices shape.
>>> input_indices = Tensor(np.array([[0, 2], [2, 6]]), mindspore.int32)
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[ 1. 3.]
 [ 3. 7.]]
>>> # case3: input_indices is a Tensor with shape (2, ). input_params is a Tensor with shape (3, 4) and axis is 0.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> axis = 0
>>> output = ops.Gather()(input_params, input_indices, axis)
>>> print(output)
[[1.  2.  3.  4.]
 [9. 10. 11. 12.]]
>>> # case4: input_indices is a Tensor with shape (3, ).
>>> # input_params is a Tensor with shape (3, 4) and axis is 1, batch_dims is 1.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 1]), mindspore.int32)
>>> axis = 1
>>> batch_dims = 1
>>> output = ops.Gather(batch_dims)(input_params, input_indices, axis)
>>> print(output)
[ 1.  7. 10.]
class tinyms.primitives.GatherD[source]

Gathers elements along an axis specified by dim.

Refer to mindspore.ops.gather_elements() for more details.
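For a NumPy intuition, GatherD behaves like np.take_along_axis, as a quick cross-check against the example below shows:

>>> import numpy as np
>>> x = np.array([[1, 2], [3, 4]])
>>> index = np.array([[0, 0], [1, 0]])
>>> print(np.take_along_axis(x, index, axis=1))
[[1 1]
 [4 3]]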

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = ops.GatherD()(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
class tinyms.primitives.GatherNd[source]

Gathers slices from a tensor by indices.

Refer to mindspore.ops.gather_nd() for more details.
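In NumPy terms, each row of indices addresses one element (or slice) of the input, i.e. output[i] = input_x[tuple(indices[i])]. A quick cross-check of the example below:

>>> import numpy as np
>>> input_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> indices = np.array([[0, 0], [1, 1]])
>>> print(input_x[tuple(indices.T)])
[-0.1  0.5]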

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.GatherNd()
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> output = op(input_x, indices)
>>> print(output)
[-0.1  0.5]
class tinyms.primitives.GatherV2[source]

Same as operator Gather. GatherV2 will be deprecated in the future. Please use Gather instead.

class tinyms.primitives.Gcd[source]

Computes the greatest common divisor of the input tensors element-wise. The shapes of the two inputs should be broadcastable, and their data type should be one of: int32, int64.

Warning

This is an experimental API that is subject to change or deletion.
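NumPy's np.gcd computes the same element-wise result with broadcasting, which makes a handy cross-check for the example below:

>>> import numpy as np
>>> print(np.gcd(np.array([7, 8, 9]), np.array([14, 6, 12])))
[7 2 3]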

Inputs:
  • x1 (Tensor) - The first input tensor.

  • x2 (Tensor) - The second input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with the higher precision of the two inputs.

Raises:
  • TypeError – If the data type of x1 or x2 is not int32 or int64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([7, 8, 9]))
>>> x2 = Tensor(np.array([14, 6, 12]))
>>> gcd_ = ops.Gcd()
>>> y = gcd_(x1, x2)
>>> print(y)
[7 2 3]
class tinyms.primitives.GeLU[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). And also please refer to BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

GeLU is defined as follows:

\[GELU(x_i) = x_i*P(X < x_i)\]

where \(P\) is the cumulative distribution function of the standard Gaussian distribution, \(x_i\) is the input element.
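Numerically, the example output below matches the widely used tanh approximation of GELU; the following NumPy sketch is provided only as a cross-check (the kernel may compute either the exact erf form or this approximation, depending on backend):

>>> import numpy as np
>>> def gelu_tanh(x):
...     # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
...     return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))
>>> print(np.round(gelu_tanh(np.array([1.0, 2.0, 3.0])), 6))
[0.841192 1.954598 2.996363]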

Inputs:
  • x (Tensor) - The input of the activation function GeLU, the data type is float16, float32 or float64.

Outputs:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> gelu = ops.GeLU()
>>> result = gelu(x)
>>> print(result)
[0.841192  1.9545976  2.9963627]
class tinyms.primitives.GeSwitch[source]

Adds control switch to data.

Switch data flows into the false or true branch depending on the condition. If the condition is true, the true branch will be activated, and vice versa.

Inputs:
  • data (Union[Tensor, Number]) - The data to be used for switch control.

  • pred (Tensor) - It must be a scalar whose type is bool and shape is (), It is used as condition for switch control.

Outputs:

tuple. The output is tuple(false_output, true_output). The elements in the tuple have the same shape as the input data. The false_output connects with the false branch and the true_output connects with the true branch.

Raises:
  • TypeError – If data is neither a Tensor nor a Number.

  • TypeError – If pred is not a Tensor.

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.square = ops.Square()
...         self.add = ops.Add()
...         self.value = Tensor(np.full((1), 3), mindspore.float32)
...         self.switch = ops.GeSwitch()
...         self.merge = ops.Merge()
...         self.less = ops.Less()
...
...     def construct(self, x, y):
...         cond = self.less(x, y)
...         st1, sf1 = self.switch(x, cond)
...         st2, sf2 = self.switch(y, cond)
...         add_ret = self.add(st1, st2)
...         st3, sf3 = self.switch(self.value, cond)
...         sq_ret = self.square(sf3)
...         ret = self.merge((add_ret, sq_ret))
...         return ret[0]
...
>>> x = Tensor(10.0, dtype=mindspore.float32)
>>> y = Tensor(5.0, dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
>>> print(output)
class tinyms.primitives.Gelu[source]

Same as operator GeLU. Gelu will be deprecated in the future. Please use GeLU instead.

class tinyms.primitives.Geqrf[source]

Decomposes a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R. The process is called QR decomposition: \(A = QR\).

Both Q and R matrices are stored in the same output tensor y. The elements of R are stored on and above the diagonal, whereas elementary reflectors (or Householder vectors) implicitly defining matrix Q are stored below the diagonal.

This function returns two tensors (y, tau).

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - Tensor of shape \((*, m, n)\), input must be a matrix greater than or equal to 2D, with dtype of float32, float64, complex64, complex128.

Outputs:
  • y (Tensor) - Tensor of shape \((*, m, n)\), has the same dtype as the x.

  • tau (Tensor) - Tensor of shape \((*, p)\) and \(p = min(m, n)\), has the same dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If the dtype of x is not one of float32, float64, complex64, complex128.

  • ValueError – If the dimension of x is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-2.0, -1.0], [1.0, 2.0]]).astype(np.float32))
>>> geqrf = ops.Geqrf()
>>> y, tau = geqrf(input_x)
>>> print(y)
[[ 2.236068   1.7888544]
 [-0.236068   1.3416407]]
>>> print(tau)
[1.8944271 0.       ]
class tinyms.primitives.Ger[source]

Ger product of x1 and x2. Calculates the outer product of two arrays. If x1 is a 1D Tensor of shape \((m,)\) and x2 is a 1D Tensor of shape \((n,)\), then output must be a 2D Tensor of shape \((m, n)\).

Refer to mindspore.ops.ger() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([1., 2., 3., 4.], mindspore.float32)
>>> x2 = Tensor([1., 2., 3.], mindspore.float32)
>>> ger = ops.Ger()
>>> output = ger(x1, x2)
>>> print(output)
[[ 1.  2.  3.]
 [ 2.  4.  6.]
 [ 3.  6.  9.]
 [ 4.  8. 12.]]
class tinyms.primitives.GetNext(types, shapes, output_num, shared_name)[source]

Returns the next element in the dataset queue.

Note

The GetNext operation needs to be associated with a network and depends on the dataset interface, for example mindspore.dataset.MnistDataset. It cannot be used directly as a standalone operation. For details, please refer to the mindspore.connect_network_with_dataset source code.

Parameters:
  • types (list[mindspore.dtype]) – The type of the outputs.

  • shapes (list[tuple[int]]) – The dimensionality of the outputs.

  • output_num (int) – The output number, length of types and shapes.

  • shared_name (str) – Queue name to fetch the data.

Inputs:

No inputs.

Outputs:

tuple[Tensor], the output of dataset. The shape is described in shapes and the type is described in types.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> from mindspore import dataset as ds
>>> from mindspore.common import dtype as mstype
>>> data_path = "/path/to/MNIST_Data/train/"
>>> train_dataset = ds.MnistDataset(data_path, num_samples=10)
>>> dataset_helper = mindspore.DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> dataset = dataset_helper.iter.dataset
>>> dataset_types, dataset_shapes = dataset_helper.types_shapes()
>>> queue_name = dataset.__transfer_dataset__.queue_name
>>> get_next = ops.GetNext(dataset_types, dataset_shapes, len(dataset_types), queue_name)
>>> data, label = get_next()
>>> relu = ops.ReLU()
>>> result = relu(data.astype(mstype.float32))
>>> print(result.shape)
(28, 28, 1)
class tinyms.primitives.Greater[source]

Compares the values of the input parameters \(x, y\) element-wise; the output result is a bool value.

Refer to mindspore.ops.gt() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater = ops.Greater()
>>> output = greater(x, y)
>>> print(output)
[False True False]
infer_value(x, y)[source]

Infer value for Greater.

class tinyms.primitives.GreaterEqual[source]

Computes the boolean value of \(x >= y\) element-wise.

Refer to mindspore.ops.ge() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater_equal = ops.GreaterEqual()
>>> output = greater_equal(x, y)
>>> print(output)
[True True False]
class tinyms.primitives.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=False)[source]

This operation samples the 2D input_x by using interpolation based on the flow field grid, which is usually generated by mindspore.ops.affine_grid().

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • interpolation_mode (str, optional) – An optional string specifying the interpolation method. The optional values are “bilinear” or “nearest”. Default: “bilinear”.

  • padding_mode (str, optional) –

    An optional string specifying the pad method. The optional values are “zeros”, “border” or “reflection”. Default: “zeros”. When the sampling grid is outside input’s bounds, effects of various padding modes are as follows:

    • ”zeros”: Pads the input tensor with zeros.

    • ”border”: Pads the input tensor with the values of the pixels on the border of the tensor.

    • ”reflection”: Pads the input tensor by reflecting the values of the pixels at the boundary of the tensor.

  • align_corners (bool, optional) – An optional bool. When set to True, the centers of the corner pixels of the input and output tensors are aligned. When set to False, it is not aligned. Defaults to False.

Inputs:
  • input_x (Tensor) - A 4-D tensor with dtype of float16 or float32 and shape of \((N, C, H_{in}, W_{in})\).

  • grid (Tensor) - A 4-D tensor whose dtype is the same as input_x and whose shape is \((N, H_{out}, W_{out}, 2)\). Used to specify the sampling pixel locations normalized by the input spatial dimensions.

Outputs:

A 4-D Tensor whose dtype is the same as input_x and whose shape is \((N, C, H_{out}, W_{out})\).

Raises:
  • TypeError – If input_x or grid is not a Tensor.

  • TypeError – If the dtypes of input_x and grid are inconsistent.

  • TypeError – If the dtype of input_x or grid is not a valid type.

  • TypeError – If align_corners is not a boolean value.

  • ValueError – If the rank of input_x or grid is not equal to 4.

  • ValueError – If the first dimension of input_x is not equal to that of grid.

  • ValueError – If the fourth dimension of grid is not equal to 2.

  • ValueError – If interpolation_mode is not “bilinear” or “nearest”, or is not a string.

  • ValueError – If padding_mode is not “zeros”, “border” or “reflection”, or is not a string.

Supported Platforms:

Ascend GPU CPU

Examples

>>> gridsampler = ops.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=True)
>>> input_x = Tensor(np.arange(16).reshape((2, 2, 2, 2)).astype(np.float32))
>>> grid = Tensor(np.arange(-9, 9, 0.5).reshape((2, 3, 3, 2)).astype(np.float32))
>>> output = gridsampler(input_x, grid)
>>> print(output)
[[[[ 0.     0.     0.   ]
   [ 0.     0.     0.   ]
   [ 0.     0.     0.5  ]]
  [[ 0.     0.     0.   ]
   [ 0.     0.     0.   ]
   [ 0.     1.5    4.5  ]]]
 [[[10.     8.25   1.375]
   [ 0.     0.     0.   ]
   [ 0.     0.     0.   ]]
  [[14.    11.25   1.875]
   [ 0.     0.     0.   ]
   [ 0.     0.     0.   ]]]]
class tinyms.primitives.GridSampler3D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=False)[source]

Given an input and a grid, the output is calculated using the input values and pixel positions in the grid. Only volume (5-D) input is supported.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.grid_sample() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> gridsampler = ops.GridSampler3D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=True)
>>> input_x = Tensor(np.arange(32).reshape((2, 2, 2, 2, 2)).astype(np.float32))
>>> grid = Tensor(np.arange(-0.2, 1, 0.1).reshape((2, 2, 1, 1, 3)).astype(np.float32))
>>> output = gridsampler(input_x, grid)
>>> print(output)
[[[[[ 3.3     ]]
   [[ 4.35    ]]]
  [[[11.300001]]
   [[12.349999]]]]
 [[[[21.4     ]]
   [[22.449999]]]
  [[[29.4     ]]
   [[30.449999]]]]]
class tinyms.primitives.HSVToRGB[source]

Transforms a single image or a batch of images from HSV to RGB color space. Each pixel’s HSV value is converted to its corresponding RGB value. Note that the function is only well-defined for input pixel values in the range [0, 1]. The image format should be “NHWC”.

Inputs:
  • x (Tensor) - The input image must be a 4-D tensor of shape \((batch, image\_height, image\_width, channel)\). Number of channel must be 3. Types allowed: float16, float32, float64.

Outputs:

A 4-D tensor of shape \((batch, image\_height, image\_width, channel)\) with same type of input.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If the dtype of x is not float16, float32, float64.

  • ValueError – If rank of the x is not equal to 4.

  • ValueError – If the last dimension of x is not equal to 3.

Supported Platforms:

GPU CPU

Examples

>>> image = np.array([0.5, 0.5, 0.5]).astype(np.float32).reshape([1, 1, 1, 3])
>>> hsv_to_rgb = ops.HSVToRGB()
>>> output = hsv_to_rgb(Tensor(image))
>>> print(output)
[[[[0.25 0.5  0.5 ]]]]
class tinyms.primitives.HShrink(lambd=0.5)[source]

Hard Shrink activation function.

Refer to mindspore.ops.hardshrink() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> input_x = Tensor(np.array([[0.5,  1,  2.0], [0.0533, 0.0776, -2.1233]]), ms.float32)
>>> hshrink = ops.HShrink()
>>> output = hshrink(input_x)
>>> print(output)
[[ 0.      1.      2.    ]
[ 0.      0.     -2.1233]]
class tinyms.primitives.HSigmoid[source]

Hard sigmoid activation function.

Refer to mindspore.ops.hardsigmoid() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> hsigmoid = ops.HSigmoid()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hsigmoid(input_x)
>>> print(result)
[0.3333 0.1666 0.5    0.8335 0.6665]
class tinyms.primitives.HSwish[source]

Hard swish activation function.

Refer to mindspore.ops.hardswish() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> hswish = ops.HSwish()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hswish(input_x)
>>> print(result)
[-0.3333  -0.3333  0  1.666  0.6665]
class tinyms.primitives.HammingWindow(periodic=True, alpha=0.54, beta=0.46, dtype=mindspore.float32)[source]

Computes the Hamming window function with the input window length.

\[w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right),\]

where \(N\) is the full window size.
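The formula can be reproduced directly in NumPy. For the periodic case with length 6 (so \(N = 7\)), this matches case 1 in the examples below:

>>> import numpy as np
>>> length, alpha, beta = 6, 0.54, 0.46
>>> N = length + 1                     # periodic=True
>>> n = np.arange(length)
>>> print(np.round(alpha - beta * np.cos(2 * np.pi * n / (N - 1)), 2))
[0.08 0.31 0.77 1.   0.77 0.31]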

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • periodic (bool, optional) –

    a flag that determines whether the returned window trims off the last duplicate value from the symmetric window. Default: True.

    • If True, returns a window to be used as a periodic function; in the above formula, \(N = \text{length} + 1\).

    • If False, returns a symmetric window, \(N = \text{length}\).

  • alpha (float, optional) – The coefficient \(\alpha\) in the equation above. Default: 0.54.

  • beta (float, optional) – The coefficient \(\beta\) in the equation above. Default: 0.46.

  • dtype (mindspore.dtype, optional) – An optional data type of mstype.float16, mstype.float32 and mstype.float64. Default: mstype.float32.

Inputs:
  • length (Tensor) - a positive integer tensor controlling the returned window size, must be 1D.

Outputs:

Tensor, a 1-D tensor containing the window, whose shape is \((\text{length},)\).

Raises:
  • TypeError – If length is not a Tensor.

  • TypeError – If dtype of length is not integer data type.

  • TypeError – If periodic is not a bool.

  • TypeError – If alpha is not a float.

  • TypeError – If beta is not a float.

  • TypeError – If dtype is not mindspore.float16, mindspore.float32 or mindspore.float64.

  • ValueError – If dimension of length is not 1.

  • ValueError – If data of length is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: periodic=True.
>>> length = Tensor(np.array([6]).astype(np.int32))
>>> hamming_window = ops.HammingWindow(periodic=True)
>>> y = hamming_window(length)
>>> print(y)
[0.08000001 0.31       0.77000004 1.         0.77000004 0.31      ]
>>> # case 2: periodic=False.
>>> length = Tensor(np.array([7]).astype(np.int32))
>>> hamming_window = ops.HammingWindow(periodic=False)
>>> y = hamming_window(length)
>>> print(y)
[0.08000001 0.31       0.77000004 1.         0.77000004 0.31       0.08000001]
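
As an illustrative check against the formula above (for the periodic case, \(N = \text{length} + 1 = 7\), so the denominator \(N - 1\) is 6), the window can be reproduced with plain NumPy:

>>> n = np.arange(6)
>>> w = 0.54 - 0.46 * np.cos(2 * np.pi * n / 6)
>>> print(np.allclose(w, [0.08, 0.31, 0.77, 1.0, 0.77, 0.31], atol=1e-6))
True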
class tinyms.primitives.Heaviside[source]

Applies the Heaviside step function for input x element-wise.

\[\begin{split}\text { heaviside }(\text { x, values })=\left\{\begin{array}{ll} 0, & \text { if x }<0 \\ \text { values, } & \text { if x }==0 \\ 1, & \text { if x }>0 \end{array}\right.\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - The input tensor. With real number data type.

  • values (Tensor) - The values to use where x is zero. It should be broadcastable with x and have the same dtype as x.

Outputs:

Tensor, has the same type as x and values.

Raises:
  • TypeError – If x or values is not Tensor.

  • TypeError – If the data types of x and values are different.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1.5, 0., 2.]))
>>> values = Tensor(np.array([0.5]))
>>> heaviside = ops.Heaviside()
>>> y = heaviside(x, values)
>>> print(y)
[0.  0.5 1. ]
class tinyms.primitives.Histogram(bins=100, min=0.0, max=0.0)[source]

Computes the histogram of Tensor element distribution.

The elements are sorted into equal-width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

Elements lower than min and higher than max are ignored.

Parameters:
  • bins (int, optional) – Number of histogram bins; if specified, must be positive. Default: 100.

  • min (float, optional) – An optional float of the lower end of the range (inclusive). Default value is 0.0.

  • max (float, optional) – An optional float of the upper end of the range (inclusive). Default value is 0.0.

Inputs:
  • x (Tensor) - the input tensor, type support list: [float16, float32, int32].

Outputs:

Tensor, 1-D Tensor with type int32.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([1., 2, 1])
>>> op = ops.Histogram(bins=4, min=0.0, max=3.0)
>>> y = op(x)
>>> print(y)
[0 2 1 0]
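
The counts above can be cross-checked with numpy.histogram over the same range (illustrative only):

>>> hist, _ = np.histogram(np.array([1., 2., 1.]), bins=4, range=(0.0, 3.0))
>>> print(hist)
[0 2 1 0]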
class tinyms.primitives.HistogramFixedWidth(nbins, dtype='int32')[source]

Returns a rank 1 histogram counting the number of entries in values that fall into every bin. The bins are of equal width, determined by the input range and the argument nbins.

Parameters:
  • nbins (int) – The number of histogram bins, the type is a positive integer.

  • dtype (str, optional) – An optional attribute. The dtype must be str. Default: “int32”.

Inputs:
  • x (Tensor) - Numeric Tensor. Must be one of the following types: int32, float32, float16.

  • range (Tensor) - Must have the same data type as x, and the shape is \((2,)\). x <= range[0] will be mapped to histogram[0], x >= range[1] will be mapped to histogram[-1].

Outputs:

1-D Tensor of dtype int32, whose length is nbins.

Raises:
  • TypeError – If dtype is not a str or nbins is not an int.

  • ValueError – If nbins is less than 1.

  • ValueError – If dtype is not ‘int32’.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor([-1.0, 0.0, 1.5, 2.0, 5.0, 15], mindspore.float16)
>>> range_op = Tensor([0.0, 5.0], mindspore.float16)
>>> hist = ops.HistogramFixedWidth(5)
>>> output = hist(x, range_op)
>>> print(output)
[2 1 1 0 2]
class tinyms.primitives.HistogramSummary[source]

This operator will calculate the histogram of a tensor and put it to a summary file with protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight, see MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_context
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.HistogramSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
>>> set_context(mode=mindspore.GRAPH_MODE)
>>> summary = SummaryDemo()(Tensor([1, 2]), Tensor([3, 4]))
>>> print(summary)
[4 6]
class tinyms.primitives.HookBackward(hook_fn, cell_id='')[source]

This operation is used as a tag to hook gradient in intermediate variables. Note that this function is only supported in pynative mode.

Note

The hook function must be defined like hook_fn(grad) -> new gradient or None, where ‘grad’ is the gradient passed to the primitive. The ‘grad’ may be modified by returning a new gradient, which is then passed to the next primitive. The difference between a hook function and the callback of InsertGradientOf is that the hook function is executed in the Python environment, while the callback will be parsed and added to the graph.

Parameters:
  • hook_fn (Function) – A Python function used as the hook function.

  • cell_id (str, optional) – Used to identify whether the function registered by the hook is actually registered on the specified cell object. For example, ‘nn.Conv2d’ is a cell object. The default value of cell_id is an empty string (“”), in which case the system automatically registers a value for cell_id. Custom values of cell_id are currently not supported.

Inputs:
  • input (Tensor) - The variable to hook.

Outputs:
  • output (Tensor) - Returns input directly. HookBackward does not affect the forward result.

Raises:
  • TypeError – If input is not a tensor.

  • TypeError – If hook_fn is not a Python function.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> from mindspore import Tensor
>>> from mindspore.ops import GradOperation
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> def hook_fn(grad):
...     print(grad)
...
>>> hook = ops.HookBackward(hook_fn)
>>> def hook_test(x, y):
...     z = x * y
...     z = hook(z)
...     z = z * y
...     return z
...
>>> grad_all = GradOperation(get_all=True)
>>> def backward(x, y):
...     return grad_all(hook_test)(x, y)
...
>>> output = backward(Tensor(1, ms.float32), Tensor(2, ms.float32))
(Tensor(shape=[], dtype=Float32, value= 2),)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4), Tensor(shape=[], dtype=Float32, value= 4))
class tinyms.primitives.Hypot[source]

Computes the hypotenuse of a right triangle element-wise, treating the two input tensors as its legs. The shapes of the two inputs should be broadcastable, and their data types should be one of: float32, float64.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The first input tensor.

  • x2 (Tensor) - The second input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is one with higher precision in the two inputs.

Raises:
  • TypeError – If the data type of x1 or x2 is neither float32 nor float64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([3., 5., 7.]))
>>> x2 = Tensor(np.array([4., 12., 24.]))
>>> hypot_ = ops.Hypot()
>>> y = hypot_(x1, x2)
>>> print(y)
[ 5. 13. 25.]
class tinyms.primitives.IOU(mode='iou')[source]

Calculates intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

Refer to mindspore.ops.iou() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> iou = ops.IOU(mode='iou')
>>> anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> output = iou(anchor_boxes, gt_boxes)
>>> print(output.shape)
(3, 3)
class tinyms.primitives.Identity[source]

Returns a Tensor with the same shape and contents as input.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Outputs:

Tensor, the shape of tensor and the data type are the same as input_x, \((x_1, x_2, ..., x_R)\).

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> output = ops.Identity()(x)
>>> print(output)
[1 2 3 4]
class tinyms.primitives.IdentityN[source]

Returns a tuple of tensors with the same shapes and contents as the input.

This op can be used to override the gradient for complicated functions. For example, suppose \(y = f(x)\) and we wish to apply a custom function g for backprop such that \(dx=g(dy)\).

Inputs:
  • x (Union[tuple[Tensor], list[Tensor]]) - Input, the data type is RealNumber.

Outputs:

Tensors - tuple(Tensor), with the same shapes and data types as the input x.

Raises:
  • TypeError – If x is not tuple(Tensor) or List(Tensor).

  • TypeError – If input x type is not RealNumber.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = [Tensor(np.array([1, 2, 3, 4]), mstype.int64), Tensor(np.array([4, 3, 1, 1]), mstype.int64)]
>>> output = ops.IdentityN()(x)
>>> print(np.allclose(output[0].asnumpy(), x[0].asnumpy()))
True
>>> print(np.allclose(output[1].asnumpy(), x[1].asnumpy()))
True
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 2, 3, 4]), Tensor(shape=[4], dtype=Int64, value= [4, 3, 1, 1]))
class tinyms.primitives.Igamma[source]

Calculates lower regularized incomplete Gamma function.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.igamma() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> igamma = ops.Igamma()
>>> output = igamma(a, x)
>>> print (output)
[0.593994  0.35276785  0.21486944  0.13337152]
class tinyms.primitives.Igammac[source]

Compute the upper regularized incomplete Gamma function Q(a, x).

Refer to mindspore.ops.igammac() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> igammac = ops.Igammac()
>>> output = igammac(a, x)
>>> print (output)
[0.40600586 0.6472318  0.7851304  0.8666283 ]
class tinyms.primitives.Im2Col(ksizes, strides=1, dilations=1, pads=0)[source]

Extracts sliding local blocks from a batched input tensor.

Consider a batched input tensor of shape \((N, C, *)\), where \(N\) is the batch dimension, \(C\) is the channel dimension, and \(*\) represents arbitrary spatial dimensions. This operation flattens each sliding ksizes-sized block within the spatial dimensions of input x into a column (i.e., last dimension) of a 4-D output tensor of shape \((N, C, \prod(\text{kernel_size}), L)\), where \(C \times \prod(\text{kernel_size})\) is the total number of values within each block (a block has \(\prod(\text{kernel_size})\) spatial locations each containing a C-channeled vector), and \(L\) is the total number of such blocks:

\[L = \prod_d \left\lfloor\frac{\text{spatial_size}[d] + 2 \times \text{pads}[d] - \text{dilations}[d] \times (\text{kernel_size}[d] - 1) - 1}{\text{strides}[d]} + 1\right\rfloor,\]

where \(\text{spatial_size}\) is formed by the spatial dimensions of input x (\(*\) above), and \(d\) is over all spatial dimensions.

Therefore, indexing output at the last dimension (column dimension) gives all values within a certain block.

The pads, strides and dilations arguments specify how the sliding blocks are retrieved.

Note

Currently, only 4-D input tensors (batched image-like tensors) are supported.

Parameters:
  • ksizes (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two ints for height and width. If given a single int, the height equals the width. Must be specified.

  • strides (Union[int, tuple[int], list[int]], optional) – The stride of the window, should be two ints for height and width. If given a single int, the height equals the width. Default: 1.

  • dilations (Union[int, tuple[int], list[int]], optional) – The dilation of the window, should be two ints for height and width. If given a single int, the height equals the width. Default: 1.

  • pads (Union[int, tuple[int], list[int]], optional) –

    The pad of the window, which must be an int, or a tuple/list of one, two or four ints for height and width. Default: 0.

    • If one int, \(pad\_height = pad\_width\).

    • If two int, \(pad\_height = pads[0]\), \(pad\_width = pads[1]\).

    • If four int, \(pads = [pad\_height\_top, pad\_height\_bottom, pad\_width\_left, pad\_width\_right]\).

Inputs:
  • x (Tensor) - The input tensor; only 4-D tensors (batched image-like tensors) are supported. All real number data types are supported.

Outputs:

Tensor, a 4-D Tensor with same type of input x.

Raises:
  • TypeError – If ksizes data type is not in Union[int, tuple[int], list[int]].

  • TypeError – If strides data type is not in Union[int, tuple[int], list[int]].

  • TypeError – If dilations data type is not in Union[int, tuple[int], list[int]].

  • TypeError – If pads data type is not in Union[int, tuple[int], list[int]].

  • ValueError – If any value of ksizes is not greater than zero, or ksizes has more than 2 elements.

  • ValueError – If any value of strides is not greater than zero, or strides has more than 2 elements.

  • ValueError – If any value of dilations is not greater than zero, or dilations has more than 2 elements.

  • ValueError – If any value of pads is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(input_data=np.random.rand(4, 4, 32, 32), dtype=mstype.float64)
>>> im2col = ops.Im2Col(ksizes=3, strides=1, dilations=1)
>>> y = im2col(x)
>>> print(y.shape)
(4, 4, 9, 900)
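
The output shape above follows directly from the formula for \(L\): with a \(32 \times 32\) input, ksizes=3, strides=1, dilations=1 and pads=0, each spatial dimension yields \(\lfloor(32 + 0 - 1 \times (3 - 1) - 1)/1\rfloor + 1 = 30\) block positions, so \(L = 30 \times 30 = 900\), and each block holds \(3 \times 3 = 9\) values per channel. An illustrative check:

>>> blocks_per_dim = (32 + 2 * 0 - 1 * (3 - 1) - 1) // 1 + 1
>>> print(blocks_per_dim, blocks_per_dim * blocks_per_dim, 3 * 3)
30 900 9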
class tinyms.primitives.Imag[source]

Returns a new tensor containing the imaginary part of the input. If the input is real, a tensor filled with zeros is returned.

Inputs:
  • input (Tensor) - The input tensor.

Outputs:

Tensor, the shape is the same as the input.

Raises:

TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(complex(1.3, 0.4)), mindspore.complex64)
>>> imag = ops.Imag()
>>> output = imag(x)
>>> print(output)
0.4
class tinyms.primitives.ImageSummary[source]

This operator will put an image tensor to a summary file with protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight, see MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of image, the rank of tensor must be 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>>
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.summary = ops.ImageSummary()
...
...     def construct(self, x):
...         name = "image"
...         self.summary(name, x)
...         return x
...
class tinyms.primitives.InTopK(k)[source]

Determines whether the targets are in the top k predictions.

Refer to mindspore.ops.intopk() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
>>> x2 = Tensor(np.array([1, 3]), mindspore.int32)
>>> in_top_k = ops.InTopK(3)
>>> output = in_top_k(x1, x2)
>>> print(output)
[ True  False]
class tinyms.primitives.IndexAdd(axis, use_lock=True, check_index_bound=True)[source]

Adds tensor y to the specified axis and indices of tensor x. The axis should be in the range [-rank(x), rank(x) - 1], and indices should be in the range [0, size of the axis dimension of x - 1].

Parameters:
  • axis (int) – The dimension along which to index.

  • use_lock (bool) – Whether to enable a lock to protect the updating process of variable tensors. If true, when updating the value of x, this process will be protected by a lock by using atomic operation. If false, the result may be unpredictable. Default: True.

  • check_index_bound (bool) – If true, check index boundary. If false, don’t check index boundary. Default: True.

Inputs:
  • x (Parameter) - The input Parameter to add to.

  • indices (Tensor) - The indices along axis at which y is added to x, with data type int32. The indices must be 1-D with the same size as the size of y in the axis dimension. The values of indices should be in [0, b), where b is the size of x in the axis dimension.

  • y (Tensor) - The input tensor with the value to add. Must have the same data type as x. The shape must be the same as x except for the axis-th dimension.

Outputs:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a Parameter.

  • TypeError – If indices or y is not a Tensor.

  • ValueError – If axis is out of x rank’s range.

  • ValueError – If x rank is not the same as y rank.

  • ValueError – If shape of indices is not 1D or size of indices is not equal to dimension of y[axis].

  • ValueError – If the shape of y is not the same as that of x except for the axis-th dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.index_add = ops.IndexAdd(axis=1)
...         self.x = Parameter(Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32),
...                 name="name_x")
...         self.indices = Tensor(np.array([0, 2]), mindspore.int32)
...
...     def construct(self, y):
...         return self.index_add(self.x, self.indices, y)
...
>>> y = Tensor(np.array([[0.5, 1.0], [1.0, 1.5], [2.0, 2.5]]), mindspore.float32)
>>> net = Net()
>>> output = net(y)
>>> print(output)
[[ 1.5  2.   4. ]
 [ 5.   5.   7.5]
 [ 9.   8.  11.5]]
class tinyms.primitives.IndexFill[source]

Fills the elements under the dim dimension of the input Tensor x with the input value by selecting the indices in the order given in index.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.index_fill() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> index_fill = ops.IndexFill()
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32))
>>> index = Tensor([0, 2], mindspore.int32)
>>> value = Tensor(-2.0, mindspore.float32)
>>> y = index_fill(x, 1, index, value)
>>> print(y)
[[-2. 2. -2.]
 [-2. 5. -2.]
 [-2. 8. -2.]]
class tinyms.primitives.InplaceAdd(indices)[source]

Adds v to specified rows of x. Computes y = x; y[i,] += v.

Refer to mindspore.ops.inplace_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceAdd = ops.InplaceAdd(indices)
>>> output = inplaceAdd(x, input_v)
>>> print(output)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
class tinyms.primitives.InplaceIndexAdd(axis)[source]

Adds Tensor updates to specified axis and indices of Tensor var element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.inplace_index_add() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> var = Parameter(Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32))
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceIndexAdd = ops.InplaceIndexAdd(axis=0)
>>> var = inplaceIndexAdd(var, indices, updates)
>>> print(var)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
class tinyms.primitives.InplaceSub(indices)[source]

Subtracts v from specified rows of x. Computes \(y = x\); \(y[i,] -= input\_v\).

Refer to mindspore.ops.inplace_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplaceSub = ops.InplaceSub(indices)
>>> output = inplaceSub(x, input_v)
>>> print(output)
[[0.5 1. ]
 [2.  2.5]
 [5.  6. ]]
class tinyms.primitives.InplaceUpdate(indices)[source]

The InplaceUpdate interface is deprecated. Please use mindspore.ops.InplaceUpdateV2 instead.

Supported Platforms:

Deprecated

class tinyms.primitives.InplaceUpdateV2[source]

Updates specified values in x to v according to indices.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.inplace_update() for more details.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> inplace_update_v2 = ops.InplaceUpdateV2()
>>> output = inplace_update_v2(x, indices, v)
>>> print(output)
[[0.5 1. ]
 [1.  1.5]
 [5.  6. ]]
class tinyms.primitives.InsertGradientOf(f)[source]

Attaches a callback to the graph node, which will be invoked on the node’s gradient.

Parameters:

f (Function) – A MindSpore Function used as the callback.

Inputs:
  • input_x (Any) - The graph node to attach to.

Outputs:

Tensor, returns input_x directly. InsertGradientOf does not affect the forward result.

Raises:

TypeError – If f is not a function of MindSpore.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops, jit
>>> a = Tensor(np.array([1.0]).astype(np.float32))
>>> b = Tensor(np.array([0.2]).astype(np.float32))
>>> def clip_gradient(dx):
...     ret = dx
...     if ret > a:
...         ret = a
...
...     if ret < b:
...         ret = b
...
...     return ret
...
>>> clip = ops.InsertGradientOf(clip_gradient)
>>> grad_all = ops.GradOperation(get_all=True)
>>> def InsertGradientOfClipDemo():
...     def clip_test(x, y):
...         x = clip(x)
...         y = clip(y)
...         c = x * y
...         return c
...
...     @jit
...     def f(x, y):
...         return clip_test(x, y)
...
...     def fd(x, y):
...         return grad_all(clip_test)(x, y)
...
...     print("forward: ", f(Tensor(np.array([1.1]).astype(np.float32)),
...         Tensor(np.array([0.1]).astype(np.float32))))
...     print("clip_gradient:", fd(Tensor(np.array([1.1]).astype(np.float32)),
...         Tensor(np.array([0.1]).astype(np.float32))))
>>> InsertGradientOfClipDemo()
forward: [0.11000001]
clip_gradient: (Tensor(shape=[1], dtype=Float32, value= [ 2.00000003e-01]),
                Tensor(shape=[1], dtype=Float32, value= [ 1.00000000e+00]))
class tinyms.primitives.Inv[source]

Computes the reciprocal of the input tensor element-wise.

Refer to mindspore.ops.inv() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inv = ops.Inv()
>>> x = Tensor(np.array([0.25, 0.4, 0.31, 0.52]), mindspore.float32)
>>> output = inv(x)
>>> print(output)
[4.        2.5       3.2258065 1.923077 ]
class tinyms.primitives.Invert[source]

Flips all bits of input tensor element-wise.

Refer to mindspore.ops.invert() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> invert = ops.Invert()
>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> output = invert(x)
>>> print(output)
[-26 -5 -14 -10]
class tinyms.primitives.InvertPermutation[source]

Computes the inverse of an index permutation.

This operator is mainly used to calculate the inverse of an index permutation. It requires a 1-dimensional integer tensor x, which represents the indices of a zero-based array, and exchanges each value with its index position. In other words, for output tensor y and input tensor x, this operation calculates the following values:

\(y[x[i]] = i, \quad i \in [0, 1, \ldots, \text{len}(x)-1]\).

Note

These values must include 0, must contain no duplicates, and cannot be negative.

Inputs:
  • input_x (Union(tuple[int], list[int])) - The input is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\) representing the indices. The values must include 0. There can be no duplicate values or negative values. Only constant values are allowed. The maximum value must be equal to the length of input_x minus 1.

Outputs:

tuple[int]. It has the same length as the input.

Raises:
  • TypeError – If input_x is neither tuple nor list.

  • TypeError – If element of input_x is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> invert = ops.InvertPermutation()
>>> input_data = (3, 4, 0, 2, 1)
>>> output = invert(input_data)
>>> print(output)
(2, 4, 3, 0, 1)
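
The defining relation \(y[x[i]] = i\) can be reproduced in a few lines of plain Python (illustrative only):

>>> x = (3, 4, 0, 2, 1)
>>> inv = [0] * len(x)
>>> for i, v in enumerate(x):
...     inv[v] = i
...
>>> print(tuple(inv))
(2, 4, 3, 0, 1)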
class tinyms.primitives.IsClose(rtol=1e-05, atol=1e-08, equal_nan=True)[source]

Returns a tensor of Boolean values indicating whether two input tensors are element-wise equal within a given tolerance.

Refer to mindspore.ops.isclose() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import IsClose
>>> input = Tensor(np.array([1.3, 2.1, 3.2, 4.1, 5.1]), mindspore.float16)
>>> other = Tensor(np.array([1.3, 3.3, 2.3, 3.1, 5.1]), mindspore.float16)
>>> isclose = IsClose()
>>> output = isclose(input, other)
>>> print(output)
[ True False False False  True]
class tinyms.primitives.IsFinite[source]

Determines which elements are finite for each position.

Refer to mindspore.ops.isfinite() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_finite = ops.IsFinite()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_finite(x)
>>> print(output)
[False  True False]
class tinyms.primitives.IsInf[source]

Determines which elements are inf or -inf for each position.

Refer to mindspore.ops.isinf() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_inf = ops.IsInf()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_inf(x)
>>> print(output)
[False False  True]
class tinyms.primitives.IsNan[source]

Determines which elements are NaN for each position.

Refer to mindspore.ops.isnan() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> is_nan = ops.IsNan()
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = is_nan(x)
>>> print(output)
[ True False False]
class tinyms.primitives.KLDivLoss(reduction='mean')[source]

Computes the Kullback-Leibler divergence between the logits and the labels.

For tensors of the same shape \(x\) and \(target\), the updating formulas of KLDivLoss algorithm are as follows,

\[L(x, target) = target \cdot (\log target - x)\]

Then,

\[\begin{split}\ell(x, target) = \begin{cases} L(x, target), & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L(x, target)), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L(x, target)) / x.\operatorname{shape}[0], & \text{if reduction} = \text{'batchmean';}\\ \operatorname{sum}(L(x, target)), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(x\) represents logits, \(target\) represents labels, and \(\ell(x, target)\) represents output.

Note

  • On Ascend, float64 dtype is not currently supported.

  • The output aligns with the mathematical definition of Kullback-Leibler divergence only when reduction is set to ‘batchmean’.

Parameters:

reduction (str) –

Specifies the reduction to be applied to the output. Default: ‘mean’.

  • On Ascend, the value of reduction must be one of ‘batchmean’, ‘none’ or ‘sum’.

  • On GPU, the value of reduction must be one of ‘mean’, ‘none’ or ‘sum’.

  • On CPU, the value of reduction must be one of ‘mean’, ‘batchmean’, ‘none’ or ‘sum’.

Inputs:
  • logits (Tensor) - The input Tensor. The data type must be float16, float32 or float64.

  • labels (Tensor) - The label Tensor which has the same shape and data type as logits.

Outputs:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as logits. Otherwise it is a scalar.

Raises:
  • TypeError – If reduction is not a str.

  • TypeError – If logits or labels is not a Tensor.

  • TypeError – If dtype of logits or labels is not currently supported.

  • ValueError – If shape of logits is not the same as labels.

  • RuntimeError – If logits or labels is a scalar when reduction is ‘batchmean’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.kldiv_loss = ops.KLDivLoss(reduction='sum')
...     def construct(self, logits, labels):
...         result = self.kldiv_loss(logits, labels)
...         return result
...
>>> net = Net()
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> output = net(logits, labels)
>>> print(output)
-0.7
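
For the ‘sum’ reduction used above, the result can be verified by evaluating \(L(x, target) = target \cdot (\log target - x)\) directly in NumPy, treating the \(0 \cdot \log 0\) terms as 0 (illustrative check):

>>> logits_np = np.array([0.2, 0.7, 0.1])
>>> labels_np = np.array([0., 1., 0.])
>>> safe_labels = np.where(labels_np > 0, labels_np, 1.0)  # avoid log(0)
>>> terms = np.where(labels_np > 0, labels_np * (np.log(safe_labels) - logits_np), 0.0)
>>> print(terms.sum())
-0.7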
class tinyms.primitives.L2Loss[source]

Calculates half of the L2 norm without taking the square root of the result.

Set input as x and output as loss.

\[loss = \frac{\sum x ^ 2}{2}\]
Inputs:
  • input_x (Tensor) - Tensor for computing the L2 norm. Data type must be float16, float32 or float64.

Outputs:

Tensor, a scalar Tensor with the same data type as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float16)
>>> l2_loss = ops.L2Loss()
>>> output = l2_loss(input_x)
>>> print(output)
7.0
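
The value follows directly from the formula: \((1^2 + 2^2 + 3^2) / 2 = 7\). An equivalent NumPy check (illustrative only):

>>> print(np.sum(np.array([1., 2., 3.]) ** 2) / 2)
7.0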
class tinyms.primitives.L2Normalize(axis=0, epsilon=0.0001)[source]

L2 Normalization Operator.

This operator will normalize the input using the given axis. The function is shown as follows:

\[\displaylines{{\text{output} = \frac{x}{\sqrt{\text{max}( \sum_{i}^{}\left | x_i \right | ^2, \epsilon)}}}}\]

where \(\epsilon\) is epsilon and \(\sum_{i}^{}\left | x_i \right | ^2\) calculates the sum of squares of the input x along the dimension axis.

Note

On Ascend, input data type of float64 is currently not supported.

Parameters:
  • axis (Union[list(int), tuple(int), int]) – Specify the axis for calculating the L2 norm. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-4.

Inputs:
  • x (Tensor) - Input to compute the normalization. Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions. Data type must be float16, float32 or float64.

Outputs:

Tensor, with the same type and shape as the x.

Raises:
  • TypeError – If axis is not one of the following: list, tuple or int.

  • TypeError – If epsilon is not a float.

  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • ValueError – If dimension of x is not greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> l2_normalize = ops.L2Normalize()
>>> x = Tensor(np.random.randint(-256, 256, (2, 3, 4)), mindspore.float32)
>>> output = l2_normalize(x)
>>> print(output.shape)
(2, 3, 4)
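
Because the example above uses random input, only the shape is shown. With a deterministic input the effect is easier to see; the following sketch (expected values derived from the formula, with the default axis=0) normalizes the column \([3, 4]\), whose L2 norm is 5, to \([0.6, 0.8]\):

>>> x = Tensor(np.array([[3.0], [4.0]]), mindspore.float32)
>>> output = ops.L2Normalize()(x)
>>> print(output)
[[0.6]
 [0.8]]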
class tinyms.primitives.LARSUpdate(epsilon=1e-05, hyperpara=0.001, use_clip=False)[source]

Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient.

For more details, please refer to mindspore.nn.LARS.

Parameters:
  • epsilon (float) – Term added to the denominator to improve numerical stability. Default: 1e-05.

  • hyperpara (float) – Trust coefficient for calculating the local learning rate. Default: 0.001.

  • use_clip (bool) – Whether to use clip operation for calculating the local learning rate. Default: False.

Inputs:
  • weight (Tensor) - A tensor, representing the weight. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • gradient (Tensor) - The gradient of weight, which has the same shape and dtype with weight.

  • norm_weight (Tensor) - A scalar tensor, representing the sum of squares of weight.

  • norm_gradient (Tensor) - A scalar tensor, representing the sum of squares of gradient.

  • weight_decay (Union[Number, Tensor]) - Weight decay. It must be a scalar tensor or number.

  • learning_rate (Union[Number, Tensor]) - Learning rate. It must be a scalar tensor or number.

Outputs:

Tensor, represents the new gradient.

Raises:
  • TypeError – If epsilon or hyperpara is not a float.

  • TypeError – If use_clip is not a bool.

  • TypeError – If weight, gradient, norm_weight or norm_gradient is not a Tensor.

  • TypeError – If weight_decay or learning_rate is neither a Number nor a Tensor.

  • TypeError – If shape of gradient is not the same as weight.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.lars = ops.LARSUpdate()
...         self.reduce = ops.ReduceSum()
...         self.square = ops.Square()
...     def construct(self, weight, gradient):
...         w_square_sum = self.reduce(self.square(weight))
...         grad_square_sum = self.reduce(self.square(gradient))
...         grad_t = self.lars(weight, gradient, w_square_sum, grad_square_sum, 0.0, 1.0)
...         return grad_t
...
>>> weight = Tensor(np.array([[0.5, 0.8, 0.2], [0.6, 0.4, 0.2]]).astype(np.float32))
>>> gradient = Tensor(np.array([[0.4, 0.4, 0.5], [0.2, 0.4, 0.3]]).astype(np.float32))
>>> net = Net()
>>> output = net(Tensor(weight), Tensor(gradient))
>>> print(output)
[[0.0005265  0.0005265 0.00065813]
 [0.00026325 0.0005265 0.00039488]]
class tinyms.primitives.LRN(depth_radius=5, bias=1.0, alpha=1.0, beta=0.5, norm_region='ACROSS_CHANNELS')[source]

Local Response Normalization.

\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]

where \(a_{c}\) indicates the specific value of the pixel corresponding to \(c\) in the feature map; \(n/2\) indicates the depth_radius; \(k\) indicates the bias; \(\alpha\) indicates the alpha; and \(\beta\) indicates the beta.

Parameters:
  • depth_radius (int) – Half-width of the 1-D normalization window with the shape of 0-D. Default: 5.

  • bias (float) – An offset (usually positive to avoid dividing by 0). Default: 1.0.

  • alpha (float) – A scale factor, usually positive. Default: 1.0.

  • beta (float) – An exponent. Default: 0.5.

  • norm_region (str) – Specifies normalization region. Options: “ACROSS_CHANNELS”. Default: “ACROSS_CHANNELS”.

Inputs:
  • x (Tensor) - A 4-D Tensor with float16 or float32 data type.

Outputs:

Tensor, with the same shape and data type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> lrn = ops.LRN()
>>> output = lrn(x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
class tinyms.primitives.LSTM(input_size, hidden_size, num_layers, has_bias, bidirectional, dropout)[source]

Performs the Long Short-Term Memory (LSTM) on the input.

For detailed information, please refer to mindspore.nn.LSTM.

Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • num_layers (int) – Number of layers of stacked LSTM.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh.

  • bidirectional (bool) – Specifies whether it is a bidirectional LSTM.

  • dropout (float) – If not 0, appends a Dropout layer to the outputs of each LSTM layer except the last layer. The range of dropout is [0.0, 1.0].

Inputs:
  • input (Tensor) - Tensor of shape \((seq\_len, batch\_size, input\_size)\) or \((batch\_size, seq\_len, input\_size)\).

  • h (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • c (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • w (Tensor) - A weight Tensor.

Outputs:

Tuple, a tuple containing (output, h_n, c_n, reserve, state).

  • output (Tensor) - Tensor of shape \((seq\_len, batch\_size, num\_directions * hidden\_size)\).

  • h_n (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • c_n (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\).

  • reserve (Tensor) - Tensor of shape \((r, 1)\).

  • state (Tensor) - Random number generator state and its shape is \((s, 1)\).

Raises:
  • TypeError – If input_size, hidden_size or num_layers is not an int.

  • TypeError – If has_bias or bidirectional is not a bool.

  • TypeError – If dropout is not a float.

  • ValueError – If dropout is not in range [0.0, 1.0].

Supported Platforms:

GPU CPU

Examples

>>> input_size = 10
>>> hidden_size = 2
>>> num_layers = 1
>>> seq_len = 5
>>> batch_size = 2
>>>
>>> net = ops.LSTM(input_size, hidden_size, num_layers, True, False, 0.0)
>>> input_tensor = Tensor(np.ones([seq_len, batch_size, input_size]).astype(np.float32))
>>> h0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> c0 = Tensor(np.ones([num_layers, batch_size, hidden_size]).astype(np.float32))
>>> w = Tensor(np.ones([112, 1, 1]).astype(np.float32))
>>> output, hn, cn, _, _ = net(input_tensor, h0, c0, w)
>>> print(output)
[[[0.9640267  0.9640267 ]
  [0.9640267  0.9640267 ]]
 [[0.9950539  0.9950539 ]
  [0.9950539  0.9950539 ]]
 [[0.99932843 0.99932843]
  [0.99932843 0.99932843]]
 [[0.9999084  0.9999084 ]
  [0.9999084  0.9999084 ]]
 [[0.9999869  0.9999869 ]
  [0.9999869  0.9999869 ]]]
class tinyms.primitives.LayerNorm(begin_norm_axis=1, begin_params_axis=1, epsilon=1e-07)[source]

Applies the Layer Normalization to the input tensor.

This operator will normalize the input tensor on given axis. LayerNorm is described in the paper Layer Normalization.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters:
  • begin_norm_axis (int) – The begin axis of the input_x to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • begin_params_axis (int) – The begin axis of the parameter input (gamma, beta) to apply LayerNorm, the value must be in [-1, rank(input)). Default: 1.

  • epsilon (float) – A value added to the denominator for numerical stability. Default: 1e-7.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\). The input of LayerNorm.

  • gamma (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter \(\gamma\) as the scale on norm.

  • beta (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter \(\beta\) as the scale on norm.

Outputs:

tuple[Tensor], tuple of 3 tensors, the normalized input and the updated parameters.

  • output_x (Tensor) - The normalized input, has the same type and shape as the input_x. The shape is \((N, C)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Raises:
  • TypeError – If begin_norm_axis or begin_params_axis is not an int.

  • TypeError – If epsilon is not a float.

  • TypeError – If input_x, gamma or beta is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> gamma = Tensor(np.ones([3]), mindspore.float32)
>>> beta = Tensor(np.ones([3]), mindspore.float32)
>>> layer_norm = ops.LayerNorm()
>>> output, mean, variance = layer_norm(input_x, gamma, beta)
>>> print(output)
[[-0.2247448  1.         2.2247448]
 [-0.2247448  1.         2.2247448]]
>>> print(mean)
[[2.]
 [2.]]
>>> print(variance)
[[0.6666667]
 [0.6666667]]
class tinyms.primitives.Lcm[source]

Computes the least common multiple of the input tensors element-wise. The shapes of the two inputs should be broadcastable, and their data types should be one of: int32, int64.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The first input tensor.

  • x2 (Tensor) - The second input tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision of the two inputs.

Raises:
  • TypeError – If the data type of x1 or x2 is neither int32 nor int64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([7, 8, 9]))
>>> x2 = Tensor(np.array([14, 6, 12]))
>>> lcm_ = ops.Lcm()
>>> y = lcm_(x1, x2)
>>> print(y)
[14 24 36]
class tinyms.primitives.LeftShift[source]

Shifts the value at each position of the tensor to the left by several bits. The inputs are two tensors whose dtypes must be consistent and whose shapes can be broadcast. The output does not support implicit type conversion.

\[\begin{aligned} &out_{i} =x_{i} << y_{i} \end{aligned}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The target tensor, whose dtype supports int8, int16, int32, int64, uint8, uint16, uint32 and uint64; it will be shifted to the left by x2 element-wise.

  • x2 (Tensor) - The tensor must have the same dtype as x1. And the tensor must have the same shape as x1 or could be broadcast with x1.

Outputs:
  • output (Tensor) - The output tensor, has the same dtype as x1. And the shape of the output tensor is the same shape as x1, or the same shape as x1 and x2 after broadcasting.

Supported Platforms:

Ascend GPU CPU

Examples

>>> left_shift = ops.LeftShift()
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.int8))
>>> x2 = Tensor(np.array([0, 1, -1]).astype(np.int8))
>>> output = left_shift(x1, x2)
>>> print(output)
[1 4 3]
class tinyms.primitives.Lerp[source]

Does a linear interpolation of two tensors start and end based on a float or tensor weight.

Refer to mindspore.ops.lerp() for more details.

Inputs:
  • start (Tensor) - The tensor with the starting points. Data type must be float16 or float32.

  • end (Tensor) - The tensor with the ending points. Data type must be the same as start.

  • weight (Union[float, Tensor]) - The weight for the interpolation formula. Must be a float or a scalar tensor with float16 or float32 data type.

Outputs:

Tensor, has the same type and shape as input start.

Supported Platforms:

Ascend GPU CPU

Examples

>>> start = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> end = Tensor(np.array([10., 10., 10., 10.]), mindspore.float32)
>>> lerp = ops.Lerp()
>>> output = lerp(start, end, 0.5)
>>> print(output)
[5.5 6. 6.5 7. ]
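
The weight may also be passed as a scalar tensor instead of a Python float; per the input description above, it must then have float16 or float32 data type (a minimal sketch):

>>> weight = Tensor(0.5, mindspore.float32)
>>> output = lerp(start, end, weight)  # same result as passing the Python float 0.5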
class tinyms.primitives.Less[source]

Computes the boolean value of \(x < y\) element-wise.

Refer to mindspore.ops.less() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less = ops.Less()
>>> output = less(x, y)
>>> print(output)
[False False True]
class tinyms.primitives.LessEqual[source]

Computes the boolean value of \(x <= y\) element-wise.

Refer to mindspore.ops.le() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less_equal = ops.LessEqual()
>>> output = less_equal(x, y)
>>> print(output)
[ True False  True]
class tinyms.primitives.Lgamma[source]

Computes the natural logarithm of the absolute value of the gamma function on input.

Refer to mindspore.ops.lgamma() for more details.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 3.2, 8.5]), mindspore.float32)
>>> lgamma = ops.Lgamma()
>>> output = lgamma(x)
>>> print(output)
[0.5723649 0.8854049 9.549267 ]
class tinyms.primitives.LinSpace[source]

Returns a Tensor of num evenly spaced values in the interval [start, stop] (including both start and stop); the length of the output Tensor is num.

Refer to mindspore.ops.linspace() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> linspace = ops.LinSpace()
>>> start = Tensor(1, mindspore.float32)
>>> stop = Tensor(10, mindspore.float32)
>>> num = 5
>>> output = linspace(start, stop, num)
>>> print(output)
[ 1.    3.25  5.5   7.75 10.  ]
class tinyms.primitives.ListDiff(out_idx=mindspore.int32)[source]

This function computes the difference between two numeric lists.

It generates a list of all elements that are present in list x but not in list y. The output list out retains the same order as the original x including duplicate elements.

Additionally, this class outputs a list idx that identifies the position of each element in out within the original x. That is to say: out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1].

Parameters:

out_idx (mindspore.dtype, optional) – The dtype of idx, an optional data type of mstype.int32 and mstype.int64. Default: mstype.int32.

Inputs:
  • x - Values to keep. A 1-D Tensor.

  • y - Values to remove. A 1-D Tensor. Must have the same type as x. 1-D.

Outputs:
  • out - The kept values. A 1-D Tensor. Has the same type as x.

  • idx - The original index of kept values. A 1-D Tensor of type out_idx.

Raises:
  • ValueError – If x or y shape is not 1D.

  • TypeError – If x or y is not a Tensor.

  • TypeError – If the data type of x or y is not int or uint.

  • TypeError – If x and y have different data types.

  • TypeError – If attr out_idx not in [mstype.int32, mstype.int64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1, 7, 1), dtype=mindspore.dtype.int32) # [1, 2, 3, 4, 5, 6]
>>> y = Tensor([1, 3, 5], dtype=mindspore.dtype.int32)
>>> op = ops.ListDiff() # out_idx default is mindspore.dtype.int32
>>> out, idx = op(x, y)
>>> print(out)
[2 4 6]
>>> print(idx)
[1 3 5]
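
The semantics can be reproduced in plain Python: keep each element of x that does not appear in y, remembering its original position (illustrative only):

>>> x_list = [1, 2, 3, 4, 5, 6]
>>> y_set = {1, 3, 5}
>>> pairs = [(v, i) for i, v in enumerate(x_list) if v not in y_set]
>>> print([v for v, _ in pairs], [i for _, i in pairs])
[2, 4, 6] [1, 3, 5]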
class tinyms.primitives.Log[source]

Returns the natural logarithm of a tensor element-wise.

Refer to mindspore.ops.log() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log = ops.Log()
>>> output = log(x)
>>> print(output)
[0.        0.6931472 1.3862944]
class tinyms.primitives.Log1p[source]

Returns the natural logarithm of one plus the input tensor element-wise.

Refer to mindspore.ops.log1p() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log1p = ops.Log1p()
>>> output = log1p(x)
>>> print(output)
[0.6931472 1.0986123 1.609438 ]
class tinyms.primitives.LogMatrixDeterminant[source]

Calculates the sign and logarithm of the determinant of one or more square matrices.

Refer to mindspore.ops.slogdet() for more details.

Supported Platforms:

Examples

>>> input_x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> op = ops.LogMatrixDeterminant()
>>> sign, output = op(input_x)
>>> print(sign)
[-1.   1.]
>>> print(output)
[2.80336046e+00    3.04452229e+00]
class tinyms.primitives.LogNormalReverse(mean=1.0, std=2.0)[source]

Fills the elements of the input tensor with log normal values initialized by given mean and std:

\[f(x; \mu, \delta) = \frac{1}{x\delta\sqrt{2\pi}}\,e^{-\frac{(\ln x-\mu)^2}{2\delta^2}},\]

where \(\mu\) and \(\delta\) are the mean and standard deviation of the lognormal distribution, respectively.

Parameters:
  • mean (float, optional) – the mean of normal distribution. With float data type. Default: 1.0.

  • std (float, optional) – the std of normal distribution. With float data type. Default: 2.0.

Inputs:
  • input (Tensor) - The tensor to be generated with log-normal distribution. Must be one of the following types: float16, float32, float64.

Outputs:

Tensor. A Tensor with the same type and shape as input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3,4),mstype.float64)
>>> mean = 2.0
>>> std = 1.0
>>> lognormalreverse = ops.LogNormalReverse(mean, std)
>>> output = lognormalreverse(x)
>>> result = output.shape
>>> print(result)
(3, 4)
class tinyms.primitives.LogSoftmax(axis=-1)[source]

Log Softmax activation function.

Refer to mindspore.ops.log_softmax() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> log_softmax = ops.LogSoftmax()
>>> output = log_softmax(logits)
>>> print(output)
[-4.4519143 -3.4519143 -2.4519143 -1.4519144 -0.4519144]
class tinyms.primitives.LogSpace(steps=10, base=10, dtype=mindspore.float32)[source]

Generates a 1-D Tensor with a length of steps. The tensor’s values are uniformly distributed on a logarithmic scale, ranging from \(base^{start}\) to \(base^{end}\), including both endpoints. The logarithmic scale is based on the specified base.

\[\begin{split}\begin{aligned} &step = (end - start)/(steps - 1)\\ &output = [base^{start}, base^{start + 1 * step}, ... , base^{start + (steps-2) * step}, base^{end}] \end{aligned}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • steps (int, optional) – The steps must be a non-negative integer. Default: 10.

  • base (int, optional) – The base must be a non-negative integer. Default: 10.

  • dtype (mindspore.dtype, optional) – The dtype of output, include mindspore.float16, mindspore.float32 or mindspore.float64. Default: mindspore.float32.

Inputs:
  • start (Tensor) - Start value of interval, with shape of 0-D, dtype is float16, float32 or float64.

  • end (Tensor) - End value of interval, with shape of 0-D, dtype is float16, float32 or float64.

Outputs:

Tensor, with shape \((steps,)\). Its data type is set by the attribute dtype.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If steps is not an int.

  • TypeError – If base is not an int.

  • TypeError – If dtype is not mindspore.float16, mindspore.float32 or mindspore.float64.

  • ValueError – If steps is not a non-negative integer.

  • ValueError – If base is not a non-negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logspace = ops.LogSpace(steps = 10, base = 10, dtype=mindspore.float32)
>>> start = Tensor(1, mindspore.float32)
>>> end = Tensor(10, mindspore.float32)
>>> output = logspace(start, end)
>>> print(output)
[1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10]
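
This matches numpy.logspace with the same endpoints (illustrative check; asnumpy() converts the output Tensor for comparison):

>>> print(np.allclose(output.asnumpy(), np.logspace(1, 10, num=10)))
True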
class tinyms.primitives.LogUniformCandidateSampler(num_true=1, num_sampled=5, unique=True, range_max=5, seed=0)[source]

Generates random labels with a log-uniform distribution for sampled_candidates.

Randomly samples a tensor of sampled classes from the range of integers [0, range_max).

Refer to mindspore.ops.log_uniform_candidate_sampler() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> sampler = ops.LogUniformCandidateSampler(2, 5, True, 5)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1, 7], [0, 4], [3, 3]])))
>>> print(output1, output2, output3)
[3 2 0 4 1]
[[0.92312991 0.49336370]
 [0.99248987 0.65806371]
 [0.73553443 0.73553443]]
[0.73553443 0.82625800 0.99248987 0.65806371 0.92312991]
class tinyms.primitives.LogicalAnd[source]

Computes the “logical AND” of two tensors element-wise.

Refer to mindspore.ops.logical_and() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_and = ops.LogicalAnd()
>>> output = logical_and(x, y)
>>> print(output)
[ True False False]
class tinyms.primitives.LogicalNot[source]

Computes the “logical NOT” of a tensor element-wise.

Refer to mindspore.ops.logical_not() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> logical_not = ops.LogicalNot()
>>> output = logical_not(x)
>>> print(output)
[False  True False]
class tinyms.primitives.LogicalOr[source]

Computes the “logical OR” of two tensors element-wise.

Refer to mindspore.ops.logical_or() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_or = ops.LogicalOr()
>>> output = logical_or(x, y)
>>> print(output)
[ True  True  True]
class tinyms.primitives.LogicalXor[source]

Computes the “logical XOR” of two tensors element-wise.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.logical_xor() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_xor = ops.LogicalXor()
>>> output = logical_xor(x, y)
>>> print(output)
[False  True  True]
class tinyms.primitives.Logit(eps=-1.0)[source]

Calculates the logit of a tensor element-wise. Elements in x are clamped to [eps, 1-eps].

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.logit() for more details.

Parameters:

eps (float, optional) – The epsilon. The input clamp bound is defined as [eps, 1-eps]. Default: -1.0.

Inputs:
  • x (Tensor) - The input tensor.

Outputs:

Tensor, with the same shape and dtype as the x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.1, 0.2, 0.3]).astype(np.float32))
>>> op = ops.Logit(eps=1e-5)
>>> output = op(x)
>>> print(output)
[-2.1972246 -1.3862944 -0.8472978]
class tinyms.primitives.LowerBound(out_type=mindspore.int32)[source]

Finds the index of the lower bound of values in the sorted sequence sorted_x element-wise.

Parameters:

out_type (mindspore.dtype, optional) – An optional data type of mindspore.dtype.int32 and mindspore.dtype.int64. Default: mindspore.dtype.int32.

Inputs:
  • sorted_x (Tensor) - The input tensor, whose dtype is a real number type and whose data in each row must be sorted in ascending order. The rank must be 2.

  • values (Tensor) - The input tensor, whose dtype is the same as sorted_x and whose first dimension must be equal to that of sorted_x. The rank must be 2.

Outputs:

Tensor, whose dtype is determined by out_type and whose shape is the same as that of values.

Raises:
  • TypeError – If sorted_x is not a Tensor.

  • TypeError – If values is not a Tensor.

  • TypeError – If out_type is invalid.

  • TypeError – If the type of sorted_x is not the same as that of values.

  • ValueError – If rank of the sorted_x is not equal to 2.

  • ValueError – If rank of the values is not equal to 2.

  • ValueError – If the first dimension of the shape of sorted_x is not equal to that of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> lowerbound = ops.LowerBound(out_type = mindspore.int32)
>>> sorted_x = Tensor(np.arange(12).reshape(3, 4).astype(np.int8))
>>> values = Tensor(np.array([[3], [4], [8]]).astype(np.int8))
>>> output = lowerbound(sorted_x, values)
>>> print(output)
[[3]
 [0]
 [0]]
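
Row by row, this matches numpy.searchsorted with side='left' (illustrative check):

>>> sorted_np = np.arange(12).reshape(3, 4)
>>> vals = [3, 4, 8]
>>> print([int(np.searchsorted(sorted_np[i], vals[i], side='left')) for i in range(3)])
[3, 0, 0]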
class tinyms.primitives.LpNorm(axis, p=2, keep_dims=False, epsilon=1e-12)[source]

Returns the matrix norm or vector norm of a given tensor.

\[output = \left(\sum_{i} |input_i|^{p}\right)^{1/p}\]
Parameters:
  • axis (int,list,tuple) – Specifies which dimension or dimensions of input to calculate the norm across.

  • p (int, optional) – The order of norm. Default: 2.

  • keep_dims (bool, optional) – Whether the output tensors have dim retained or not. Default: False.

  • epsilon (float, optional) – A value added to the denominator for numerical stability. Default: 1e-12.

Inputs:
  • input (Tensor) - Input tensor.

Outputs:

Tensor, has the same dtype as input, its shape depends on axis. For example, if the shape of input is \((2, 3, 4)\), axis is \([0, 1]\), output shape will be \((4,)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of: float16, float32.

  • TypeError – If p is not an int.

  • TypeError – If axis is not an int, a tuple or a list.

  • TypeError – If axis is a tuple or a list, but the element of axis is not an int.

  • TypeError – If keep_dims is not a bool.

  • ValueError – If the element of axis is out of the range \([-r, r)\), where \(r\) is the rank of input.

  • ValueError – If the length of shape of axis is bigger than the length of shape of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))
>>> op = ops.LpNorm(axis=[0, 1], p=2, keep_dims=False)
>>> output = op(input_x)
>>> print(output)
[ 9.165152 10.954452]
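Worked through the formula for this example: the first output element reduces over axes 0 and 1 at the last-axis index 0, giving \(\sqrt{1^2 + 3^2 + 5^2 + 7^2} = \sqrt{84} \approx 9.1652\).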
class tinyms.primitives.Lstsq(fast=True, l2_regularizer=0.0)[source]

Computes the solutions of the least squares and minimum norm problems of full-rank matrix x of size \((m \times n)\) and matrix a of size \((m \times k)\).

If \(m \geq n\), Lstsq solves the least-squares problem:

\[\begin{array}{ll} \min_y & \|xy-a\|_2 \end{array}\]

If \(m < n\), Lstsq solves the least-norm problem:

\[\begin{array}{llll} \min_y & \|y\|_2 & \text{subject to} & xy = a \end{array}\]
Parameters:
  • fast (bool, optional) –

    Solving algorithm. Default: True.

    • If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition.

    • If fast is False, an algorithm based on the numerically robust complete orthogonal decomposition is used.

  • l2_regularizer (float, optional) – L2 regularization coefficient. Default: 0.0.

Inputs:
  • x (Tensor) - \((m \times n)\) matrix x. The input tensor whose data type is float16, float32 or float64.

  • a (Tensor) - \((m \times k)\) matrix a. The input tensor whose data type is float16, float32 or float64.

Outputs:

Tensor, the least squares or minimum norm problems solution, which has shape \((n \times k)\). The data type is the same with x.

Raises:
  • TypeError – If the input x or a is not a Tensor.

  • TypeError – If dtype of x or a is not one of: float16, float32, float64.

  • TypeError – If the dtypes of x and a are not the same.

  • ValueError – If the dimension of x is not equal to 2.

  • ValueError – If the dimension of a is not equal to 2 or 1.

  • ValueError – If the length of x_dims[0] is not equal to the length of a_dims[0].

Supported Platforms:

CPU

Examples

>>> x = Tensor(np.array([[2,1,5],[3,5,1],[1,1,1]]),mindspore.float32)
>>> a = Tensor(np.array([[10,5],[15,8],[7,4]]),mindspore.float32)
>>> op = ops.Lstsq()
>>> output = op(x, a)
>>> print(output)
[[17.000002  11.000002 ]
 [-6.5000005 -4.500001 ]
 [-3.500002  -2.5000017]]
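Since x here is square and full-rank, the least-squares solution satisfies the system exactly; a quick NumPy check reusing output from the example above:

>>> x_np = np.array([[2, 1, 5], [3, 5, 1], [1, 1, 1]], dtype=np.float32)
>>> a_np = np.array([[10, 5], [15, 8], [7, 4]], dtype=np.float32)
>>> print(np.allclose(x_np @ output.asnumpy(), a_np, atol=1e-4))
True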
class tinyms.primitives.LuSolve[source]

Computes the solution y to the system of linear equations \(Ay = b\) , given LU decomposition A and column vector b.

LU decomposition of a matrix can be generated from mindspore.scipy.linalg.lu() .

Note

The batch dimensions of lu_pivots must match the batch dimensions of lu_data: both the number of dimensions and the size of each dimension must be the same. For example, if lu_data is \((3, 3, 2, 2)\) and lu_pivots is \((3, 3, 2)\), the batch dimensions of both are \((3, 3)\).

The batch dimensions of lu_data must match the batch dimensions of x. The batch dimensions may have different ranks, but from right to left the corresponding dimensions must be equal. For example, if lu_data is \((3, 3, 2, 2)\) and x is \((2, 3, 3, 2, 1)\), lu_data's batch dimensions are \((3, 3)\) and x's batch dimensions are \((2, 3, 3)\).

Inputs:
  • x (Tensor) - Column vector b in the above equation. It has shape \((*, m, k)\), where \(*\) is batch dimensions, with data type float32, float16.

  • lu_data (Tensor) - LU decomposition. It has shape \((*, m, m)\), where * is batch dimensions, that can be decomposed into an upper triangular matrix U and a lower triangular matrix L, with data type float32, float16.

  • lu_pivots (Tensor) - Permutation matrix P of LU decomposition. It has shape \((*, m)\), where \(*\) is batch dimensions, that can be converted to a permutation matrix P, with data type int32.

Outputs:

Tensor, the same data type as the x and lu_data.

Raises:
  • TypeError – If dtype of x or lu_data is not one of: float32, float16.

  • TypeError – If dtype of lu_pivots is not: int32.

  • TypeError – If x, lu_data or lu_pivots is not Tensor.

  • TypeError – If dtype of x is not same as dtype of lu_data.

  • ValueError – If the batch dimensions of lu_pivots does not match the batch dimensions of lu_data.

  • ValueError – If x dimension less than 2, lu_data dimension less than 2 or lu_pivots dimension less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1], [3], [3]]), mindspore.float32)
>>> lu_data = Tensor(np.array([[2, 1, 1], [0.5, 1, 1.5], [0.5, 0, 2.5]]), mindspore.float32)
>>> lu_pivots = Tensor(np.array([2, 2, 3]), mindspore.int32)
>>> net = ops.LuSolve()
>>> y = net(x, lu_data, lu_pivots)
>>> print(y)
[[ 1.9000002]
 [-1.4000001]
 [ 0.6      ]]
class tinyms.primitives.LuUnpack(unpack_data=True, unpack_pivots=True)[source]

Converts LU_data and LU_pivots back into P, L and U matrices, where P is a permutation matrix, L is a lower triangular matrix, and U is an upper triangular matrix. Typically, LU_data and LU_pivots are generated from the LU decomposition of a matrix.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.lu_unpack() for more details.

Supported Platforms:

GPU CPU

Examples

>>> LU_data = Tensor(np.array([[[-0.3806, -0.4872,  0.5536],
...                             [-0.1287,  0.6508, -0.2396],
...                             [ 0.2583,  0.5239,  0.6902]],
...                             [[ 0.6706, -1.1782,  0.4574],
...                             [-0.6401, -0.4779,  0.6701],
...                             [ 0.1015, -0.5363,  0.6165]]]), mstype.float32)
>>> LU_pivots = Tensor(np.array([[1, 3, 3],
...                              [2, 3, 3]]), mstype.int32)
>>> lu_unpack = ops.LuUnpack()
>>> pivots, L, U = lu_unpack(LU_data, LU_pivots)
>>> print(pivots)
[[[1. 0. 0.]
  [0. 0. 1.]
  [0. 1. 0.]]

 [[0. 0. 1.]
  [1. 0. 0.]
  [0. 1. 0.]]]
>>> print(L)
[[[ 1.      0.      0.    ]
  [-0.1287  1.      0.    ]
  [ 0.2583  0.5239  1.    ]]

 [[ 1.      0.      0.    ]
  [-0.6401  1.      0.    ]
  [ 0.1015 -0.5363  1.    ]]]
>>> print(U)
[[[-0.3806 -0.4872  0.5536]
  [ 0.      0.6508 -0.2396]
  [ 0.      0.      0.6902]]

 [[ 0.6706 -1.1782  0.4574]
  [ 0.     -0.4779  0.6701]
  [ 0.      0.      0.6165]]]
class tinyms.primitives.MapCacheIdx[source]

MapCacheIdx merges SearchCacheIdx, CacheSwapHashmap and UpdateCache into a single operator. Given an indices tensor as input, it outputs the cache indices found by searching the hashmap.

class tinyms.primitives.MapUniform[source]

Maps a tensor by using the formula: value = key % group_num * per_group_size + key // group_num.

Inputs:
  • input (Tensor) - Input Tensor.

  • per_group_size (int) - The size of each group.

  • group_num (int) - The number of groups.

Outputs:

Tensor, has the same dtype and shape as the input.

Supported Platforms:

CPU

Examples

>>> input_x = Tensor(np.array([0, 1, 2, 3, 4, 5, 6, 7]))
>>> per_group_size = 4
>>> group_num = 2
>>> map_uniform = ops.MapUniform()
>>> output = map_uniform(input_x, per_group_size, group_num)
>>> print(output)
[0 4 1 5 2 6 3 7]
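The output can be reproduced by applying the formula directly with per_group_size=4 and group_num=2:

>>> print([k % 2 * 4 + k // 2 for k in range(8)])
[0, 4, 1, 5, 2, 6, 3, 7]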
class tinyms.primitives.MaskedFill[source]

Fills elements with value where mask is True.

Note

If value is a Python float, it will be converted to float32 by default. In this case, if input_x is a float16 Tensor, it will be converted to float32 for calculation, and the result will be converted back to float16 on the CPU and Ascend platforms, which may cause a performance penalty. A TypeError may be raised on the GPU platform. Therefore, it is recommended that value be a Tensor with the same dtype as input_x.

Refer to mindspore.ops.masked_fill() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> mask = Tensor(np.array([True, True, False, True]), mindspore.bool_)
>>> output = ops.MaskedFill()(input, mask, 0.5)
>>> print(output)
[0.5 0.5 3.  0.5]
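Conceptually the operation matches NumPy's where(mask, value, input); the example output can be reproduced as:

>>> print(np.where(np.array([True, True, False, True]), 0.5,
...                np.array([1., 2., 3., 4.], dtype=np.float32)))
[0.5 0.5 3.  0.5]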
class tinyms.primitives.MaskedSelect[source]

Returns a new 1-D Tensor which indexes the x tensor according to the boolean mask. The shapes of the mask tensor and the x tensor don’t need to match, but they must be broadcastable.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • mask (Tensor[bool]) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

A 1-D Tensor, with the same type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int32)
>>> mask = Tensor(np.array([1, 0, 1, 0]), mindspore.bool_)
>>> output = ops.MaskedSelect()(x, mask)
>>> print(output)
[1 3]
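This mirrors NumPy boolean-mask indexing:

>>> print(np.array([1, 2, 3, 4])[np.array([True, False, True, False])])
[1 3]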
class tinyms.primitives.MatMul(transpose_a=False, transpose_b=False)[source]

Multiplies matrix a and matrix b.

\[(Output)_{i j}=\sum_{k=1}^{p} a_{i k} b_{k j}=a_{i 1} b_{1 j}+a_{i 2} b_{2 j}+\cdots+a_{i p} b_{p j}, p\in N\]

where the \(i,j\) indicates the output of the i-th row and j-th column element.

Note

If \(N * M\) cannot be divided by 16, the performance will be poor in the Ascend environment.

Parameters:
  • transpose_a (bool) – If true, a is transposed before multiplication. Default: False.

  • transpose_b (bool) – If true, b is transposed before multiplication. Default: False.

Inputs:
  • a (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((N, C)\). If transpose_a is True, its shape must be \((C, N)\) after transpose.

  • b (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((C, M)\). If transpose_b is True, its shape must be \((M, C)\) after transpose.

Outputs:

Tensor, the shape of the output tensor is \((N, M)\).

Raises:
  • TypeError – If transpose_a or transpose_b is not a bool.

  • ValueError – If the column of matrix dimensions of a is not equal to the row of matrix dimensions of b.

  • ValueError – If length of shape of a or b is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
>>> b = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> matmul = ops.MatMul()
>>> output = matmul(a, b)
>>> print(output)
[[3. 3. 3. 3.]]
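For reference, a minimal sketch of the transpose flags (not part of the original example): with transpose_a=True, an a of shape \((3, 1)\) is treated as \((1, 3)\) before the product.

>>> a = Tensor(np.ones(shape=[3, 1]), mindspore.float32)
>>> b = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> matmul_t = ops.MatMul(transpose_a=True)
>>> print(matmul_t(a, b))
[[3. 3. 3. 3.]]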
class tinyms.primitives.MatrixBandPart[source]

Extracts the central diagonal band of each matrix in a tensor, with all values outside the central band set to zero.

Refer to mindspore.ops.matrix_band_part() for more details.

Supported Platforms:

Examples

>>> matrix_band_part = ops.MatrixBandPart()
>>> x = np.ones([2, 4, 4]).astype(np.float32)
>>> output = matrix_band_part(Tensor(x), 2, 1)
>>> print(output)
[[[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]
 [[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]]
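In this example the band arguments (2, 1) keep the elements with \(i - j \le 2\) and \(j - i \le 1\) in each matrix and zero out the rest.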
class tinyms.primitives.MatrixDeterminant[source]

Calculates the value of the determinant for one or more square matrices.

Refer to mindspore.ops.det() for more details.

Supported Platforms:

Examples

>>> input_x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> op = ops.MatrixDeterminant()
>>> output = op(input_x)
>>> print(output)
[-16.5 21. ]
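Worked by hand for the first matrix: \((-4.5) \times 6.0 - (-1.5) \times 7.0 = -27.0 + 10.5 = -16.5\).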
class tinyms.primitives.MatrixDiagPartV3(align='RIGHT_LEFT')[source]

Returns the diagonal part of a tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.matrix_diag_part() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3, 4],
...                      [5, 6, 7, 8],
...                      [9, 8, 7, 6]]), mindspore.float32)
>>> k = Tensor(np.array([1, 3]), mindspore.int32)
>>> padding_value = Tensor(np.array(9), mindspore.float32)
>>> matrix_diag_part_v3 = ops.MatrixDiagPartV3(align='RIGHT_LEFT')
>>> output = matrix_diag_part_v3(x, k, padding_value)
>>> print(output)
[[9. 9. 4.]
 [9. 3. 8.]
 [2. 7. 6.]]
>>> print(output.shape)
(3, 3)
class tinyms.primitives.MatrixDiagV3(align='RIGHT_LEFT')[source]

Constructs a diagonal matrix or a batch of diagonal matrices from a given input Tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.matrix_diag() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 9, 0],
...                      [1, 2, 3],
...                      [0, 4, 5]]), mindspore.float32)
>>> k = Tensor(np.array([-1, 1]), mindspore.int32)
>>> num_rows = Tensor(np.array(3), mindspore.int32)
>>> num_cols = Tensor(np.array(3), mindspore.int32)
>>> padding_value = Tensor(np.array(11), mindspore.float32)
>>> matrix_diag_v3 = ops.MatrixDiagV3(align='LEFT_RIGHT')
>>> output = matrix_diag_v3(x, k, num_rows, num_cols, padding_value)
>>> print(output)
[[ 1.  8. 11.]
 [ 4.  2.  9.]
 [11.  5.  3.]]
>>> print(output.shape)
(3, 3)
class tinyms.primitives.MatrixExp[source]

Computes the matrix exponential of a square matrix. Supports batched inputs.

Refer to mindspore.ops.matrix_exp() for more details.

Supported Platforms:

Examples

>>> matrix_exp = ops.MatrixExp()
>>> x = Tensor(np.array([[1, 2], [0, 1]]), mindspore.float32)
>>> output = matrix_exp(x)
>>> print(output)
[[2.7182817 5.436563 ]
 [0.        2.7182817]]
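This triangular input has an exact closed form: writing \(x = I + N\) with the nilpotent part \(N\) (so \(N^2 = 0\)), \(\exp(x) = e(I + N)\), which gives \(e \approx 2.7182817\) on the diagonal and \(2e \approx 5.436563\) in the corner.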
class tinyms.primitives.MatrixInverse(adjoint=False)[source]

Returns the inverse of the input matrix. If the matrix is not invertible, an error may be reported or an unknown result may be returned.

Note

The parameter ‘adjoint’ currently only supports False, because complex numbers are not supported at present.

Parameters:

adjoint (bool) – An optional bool. Default: False.

Inputs:
  • x (Tensor) - A matrix to be calculated. The matrix must be at least two dimensions, and the last two dimensions must be the same size.

Outputs:

Tensor, has the same type and shape as input x.

Raises:
  • TypeError – If adjoint is not a bool.

  • TypeError – If x is not a Tensor.

  • ValueError – If the last two dimensions of x are not the same size.

  • ValueError – If the dimension of x is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[-0.710504  , -1.1207525],
...                       [-1.7651395 , -1.7576632]],
...                      [[ 0.52412605,  1.9070215],
...                       [ 1.3384849 ,  1.4274558]]]), mindspore.float32)
>>> matrix_inverse = ops.MatrixInverse(adjoint=False)
>>> output = matrix_inverse(x)
>>> print(output)
[[[ 2.4095478  -1.5364188 ]
  [-2.419797    0.9740167 ]]
 [[-0.79111797  1.0569006 ]
  [ 0.74180895 -0.2904787 ]]]
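A quick sanity check that each matrix multiplied by its inverse recovers the identity, reusing x and output from the example:

>>> print(np.allclose(np.matmul(x.asnumpy(), output.asnumpy()), np.eye(2), atol=1e-4))
True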
class tinyms.primitives.MatrixLogarithm[source]

Returns the matrix logarithm of one or more square matrices.

Inputs:
  • x (Tensor) - x is a tensor of shape \([..., M, M]\) with rank between 2 and 7. Must be one of the following types: complex64, complex128.

Outputs:
  • y (Tensor) - has the same shape and type as input.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not one of: complex64, complex128.

  • ValueError – If the dimension of x is less than 2.

  • ValueError – If the sizes of the last two dimensions are not equal.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([[1 + 2j, 2 + 1j], [4 + 1j, 5 + 2j]])
>>> matrix_logarithm = ops.MatrixLogarithm()
>>> y = matrix_logarithm(x)
>>> print(y)
[[0.69155775+1.71618359j 0.64665196-0.34928196j]
 [1.02426074-0.88736831j 1.44677531+0.6400109j ]]
class tinyms.primitives.MatrixPower(n)[source]

Calculates the n-th power of a batch of square matrices. When n equals 0, it returns a group of identity matrices. If n is negative, it computes the inverse of each matrix (if possible) raised to the power of abs(n).

Parameters:

n (int) – The exponent, a required int.

Inputs:
  • x (Tensor) - A 3-D Tensor. Supported data types are float16 and float32. The shape is \((b, m, m)\), representing b square matrices of size \(m \times m\).

Outputs:
  • y (Tensor) - A 3-D Tensor. Data type and shape are the same as x’s.

Raises:
  • TypeError – If the data type of n is not int.

  • TypeError – If the data type of x is neither float32 nor float16.

  • TypeError – If x is not a Tensor.

  • ValueError – If x is not a 3-D tensor.

  • ValueError – If shape[1] and shape[2] of x are not the same.

  • ValueError – If n is negative and the input x contains singular matrices.

Supported Platforms:

Examples

>>> x = Tensor([[[0, 1], [-1, 0]], [[1, 0], [0, -1]]], dtype=ms.float32)
>>> matrix_power = ops.MatrixPower(n=2)
>>> y = matrix_power(x)
>>> print(y)
[[[-1.  0.]
  [-0. -1.]]
 [[ 1.  0.]
  [ 0.  1.]]]
class tinyms.primitives.MatrixSetDiagV3(align='RIGHT_LEFT')[source]

Updates the diagonal part of a batched tensor. It takes a Tensor x and diagonal as input and returns a Tensor in which the specified diagonal values in the innermost matrices are replaced by the values in diagonal.

Diagonals shorter than max_diag_len need to be padded, where max_diag_len is the length of the longest diagonal. The dimension \(shape[-2]\) of diagonal must be equal to num_diags, calculated as \(num\_diags = k[1] - k[0] + 1\). The dimension \(shape[-1]\) of diagonal must be equal to the longest diagonal length max_diag_len, calculated as \(max\_diag\_len = min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\).

Assume x is an n-D Tensor with shape \((d_1, d_2, ..., d_{n-2}, d_{n-1}, d_n)\). If k is an integer or \(k[0] == k[1]\), diagonal is an (n-1)-D Tensor with shape \((d_1, d_2, ..., d_{n-2}, max\_diag\_len)\) Otherwise, it has the same rank as x with shape \((d_1, d_2, ..., d_{n-2}, num\_diags, max\_diag\_len)\).

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

align (str, optional) –

specifies how superdiagonals and subdiagonals should be aligned. Supported values: “RIGHT_LEFT”, “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Default: “RIGHT_LEFT”.

  • When set to “RIGHT_LEFT”, superdiagonals are aligned towards the right side (padding the row on the left), while subdiagonals are aligned towards the left side (padding the row on the right).

  • When set to “LEFT_RIGHT”, superdiagonals are aligned towards the left side (padding the row on the right), while subdiagonals are aligned towards the right side (padding the row on the left).

  • When set to “LEFT_LEFT”, both superdiagonals and subdiagonals are aligned towards the left side (padding the row on the right).

  • When set to “RIGHT_RIGHT”, both superdiagonals and subdiagonals are aligned towards the right side (padding the row on the left).

Inputs:
  • x (Tensor) - A n-D Tensor, where \(n >= 2\).

  • diagonal (Tensor) - A Tensor with the same dtype as x. Its rank depends on k. If k is an integer or \(k[0] == k[1]\), its dimension is \(n-1\). Otherwise, it has dimension \(n\).

  • k (Tensor) - Diagonal offset(s), a Tensor of type int32. k can either be a single integer, which represents a single diagonal, or a pair of integers that specify the low and high ends of a matrix band, in which case k[0] must not be greater than k[1]. The value of k is restricted to the range \((-x.shape[-2], x.shape[-1])\). Input k must be a const Tensor when taking Graph mode.

    • k > 0 refers to a superdiagonal.

    • k = 0 refers to the main diagonal.

    • k < 0 refers to subdiagonals.

Outputs:

Tensor. The same type and shape as x.

Raises:
  • TypeError – If any input is not Tensor.

  • TypeError – If input x and diagonal are not the same dtype.

  • TypeError – If k is not int32 dtype.

  • ValueError – If align is not a string or not in the valid range.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If rank of x is not greater equal to 2.

  • ValueError – If size of k is not equal to 1 or 2.

  • ValueError – If k[1] is not greater equal to k[0] in case the size of k is 2.

  • ValueError – If the rank of diagonal does not match the rank of input x.

  • ValueError – If the shape of diagonal does not match the shape of input x.

  • ValueError – If the diagonal \(shape[-2]\) is not equal to num_diags calculated by \(k[1] - k[0] + 1\) .

  • ValueError – If the value of k is not in \((-x.shape[-2], x.shape[-1])\).

  • ValueError – If the diagonal \(shape[-1]\) is not equal to the max_diag_len calculated by \(min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\) .

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[7, 7, 7, 7],
...                      [7, 7, 7, 7],
...                      [7, 7, 7, 7]]), mindspore.float32)
>>> diagonal = Tensor(np.array([[0, 9, 1],
...                             [6, 5, 8],
...                             [1, 2, 3],
...                             [4, 5, 0]]), mindspore.float32)
>>> k = Tensor(np.array([-1, 2]), mindspore.int32)
>>> matrix_set_diag_v3 = ops.MatrixSetDiagV3(align='RIGHT_LEFT')
>>> output = matrix_set_diag_v3(x, diagonal, k)
>>> print(output)
[[1. 6. 9. 7.]
 [4. 2. 5. 1.]
 [7. 5. 3. 8.]]
>>> print(output.shape)
(3, 4)
class tinyms.primitives.MatrixSolve(adjoint=False)[source]

Solves systems of linear equations.

Parameters:

adjoint (bool, optional) – Indicates whether the adjoint of the matrix is used during the computation. If False (default), the matrix is used as-is; if True, its transpose (adjoint) is used instead.

Inputs:
  • matrix (Tensor) - A tensor of shape \((..., M, M)\), is a matrix of coefficients for a system of linear equations.

  • rhs (Tensor) - A tensor of shape \((..., M, K)\), is a matrix of the resulting values of a system of linear equations. rhs must have the same type as matrix.

Outputs:

Tensor, a matrix composed of solutions to a system of linear equations, which has the same type and shape as rhs.

Raises:
  • TypeError – If adjoint is not the type of bool.

  • TypeError – If the type of matrix is not one of the following dtype: mstype.float16, mstype.float32, mstype.float64, mstype.complex64, mstype.complex128.

  • TypeError – If the type of matrix is not the same as that of rhs.

  • ValueError – If the rank of matrix less than 2.

  • ValueError – If the dimension of matrix is not the same as rhs .

  • ValueError – If the inner-most 2 dimension of matrix is not the same.

  • ValueError – If the inner-most 2 dimension of rhs does not match matrix .

Supported Platforms:

Ascend CPU

Examples

>>> matrix = Tensor(np.array([[1.0  , 4.0],
...                       [2.0 , 7.0]]), mindspore.float32)
>>> rhs = Tensor(np.array([[1.0]  , [3.0]]), mindspore.float32)
>>> matrix_solve = ops.MatrixSolve(adjoint = False)
>>> output = matrix_solve(matrix, rhs)
>>> print(output)
[[ 5.]
 [-1.]]
class tinyms.primitives.MatrixSolveLs(fast=True)[source]

Solves one or more linear least-squares problems.

If fast is True, the solution is computed by solving the normal equations using Cholesky decomposition. If fast is False, an algorithm based on the numerically robust complete orthogonal decomposition is used; this path is typically 6-7 times slower than the fast path. If fast is False, l2_regularizer is ignored.

Parameters:

fast (bool) – An optional bool. Defaults to True.

Inputs:
  • matrix (Tensor) - A Tensor. Must be one of the following data types: float64, float32, complex64, complex128. Shape is \((*, M, N)\).

  • rhs (Tensor) - A Tensor. Must have the same data type as matrix. Shape is \((*, M, K)\). matrix and rhs should have the same dimensions except the last one.

  • l2_regularizer (Tensor) - A Tensor of type float64. Scalar tensor.

Outputs:

Tensor of shape \((*, N, K)\) with the same data type as matrix.

Raises:
  • TypeError – If matrix, rhs or l2_regularizer is not tensor.

  • TypeError – If either of matrix and rhs is not float32, float64, complex64 or complex128.

  • TypeError – If l2_regularizer is not float64.

  • TypeError – If fast is not bool.

  • ValueError – If dimensions of matrix or rhs is less than 2.

  • ValueError – If the shape of matrix does not match the shape of rhs.

Supported Platforms:

CPU

Examples

>>> matrix_solve_ls = ops.MatrixSolveLs(fast=True)
>>> matrix = Tensor([[3, 0, 0, 0], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]], mstype.float32)
>>> rhs = Tensor(np.array([[4], [2], [4], [2]]), mstype.float32)
>>> l2 = Tensor(0.0, mstype.float64)
>>> output = matrix_solve_ls(matrix, rhs, l2)
>>> print(output)
[[ 1.3333334]
 [-0.6666667]
 [ 2.6666665]
 [-1.3333333]]
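Because matrix here is square and invertible, the least-squares solution satisfies the system exactly; a quick check reusing the tensors from the example:

>>> print(np.allclose(matrix.asnumpy() @ output.asnumpy(), rhs.asnumpy(), atol=1e-5))
True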
class tinyms.primitives.MatrixTriangularSolve(lower=True, adjoint=False)[source]

Returns a new tensor with the solution of a linear equation system with an upper or lower triangular matrix.

Note

Only GPU platforms now support the broadcast mechanism.

Parameters:
  • lower (bool, optional) – If True, the innermost matrices in matrix are lower triangular. Default: True.

  • adjoint (bool, optional) – Indicates whether the adjoint of the matrix is used during the computation. If False (default), the matrix is used as-is; if True, its transpose (adjoint) is used instead.

Inputs:
  • matrix (Tensor) - Tensor of shape \((*, M, M)\), with float32, float64, complex64 and complex128 data type.

  • rhs (Tensor) - Tensor of shape \((*, M, N)\), with float32, float64, complex64 and complex128 data type.

Outputs:

Tensor, has the shape of \((*, M, N)\) and the same data type as matrix.

Raises:
  • TypeError – If matrix or rhs is not a Tensor.

  • TypeError – If lower or adjoint is not bool.

  • ValueError – For GPU platform, if the batch sizes of matrix and rhs do not satisfy broadcasting rules. For other platforms, if the batch sizes of matrix and rhs are not equal.

  • ValueError – If the inner-most 2 dimensions of matrix are not equal.

  • ValueError – If the second-last dimensions of matrix and rhs are not equal.

Supported Platforms:

Ascend GPU CPU

Examples

>>> matrix_triangular_solve = ops.MatrixTriangularSolve(lower=True, adjoint=False)
>>> matrix = np.array([[3, 0, 0, 0], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
>>> rhs = np.array([[1, 0],[2, 2],[1, 5],[0, 3]])
>>> output = matrix_triangular_solve(Tensor(matrix, mindspore.float32), Tensor(rhs, mindspore.float32))
>>> print(output)
[[ 0.33333334  0.        ]
 [ 1.3333333   2.        ]
 [ 0.6666666   5.        ]
 [-2.3333333  -4.        ]]
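Verifying the forward-substitution result against the original system, reusing the arrays from the example:

>>> print(np.allclose(matrix @ output.asnumpy(), rhs, atol=1e-5))
True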
class tinyms.primitives.MaxPool(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

Max pooling operation.

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents not only the height of movement but also the width of movement, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value of pad mode is “same” or “valid”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top, bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If kernel_size or strides is neither int nor tuple.

  • ValueError – If pad_mode is neither ‘valid’ nor ‘same’ (case insensitive).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If length of shape of input is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_op = ops.MaxPool(pad_mode="VALID", kernel_size=2, strides=1)
>>> output = maxpool_op(x)
>>> print(output)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
class tinyms.primitives.MaxPool3D(kernel_size=1, strides=1, pad_mode='VALID', pad_list=0, ceil_mode=None, data_format='NCDHW')[source]

Applies a 3D max pooling over an input Tensor which can be regarded as a composition of 3D planes.

Typically the input is of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows:

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    The optional value of pad mode is “same”, “valid” or “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be the same as the input. The total number of padding will be calculated in horizontal and vertical directions and evenly distributed to top, bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

    • pad: Implicit padding on both sides of the input in depth, height and width. The amount specified by pad_list is padded onto the input Tensor borders. pad_list must be greater than or equal to 0.

  • pad_list (Union(int, tuple[int])) – The pad value to be filled. Default: 0. If pad_list is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to pad_list. If pad_list is a tuple of six integers, the padding of head, tail, top, bottom, left and right equals pad_list[0], pad_list[1], pad_list[2], pad_list[3], pad_list[4] and pad_list[5] correspondingly.

  • ceil_mode (Union[bool, None]) – Whether to use ceil instead of floor to calculate output shape. Only effective in “pad” mode. When “pad_mode” is “pad” and “ceil_mode” is “None”, “ceil_mode” will be set as “False”. Default: None.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Data type must be float16, float32 or float64.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the data type of x.

Raises:
  • TypeError – If kernel_size or strides is neither an int nor a tuple.

  • TypeError – If pad_mode or data_format is not a string.

  • ValueError – If numbers in kernel_size or strides are not positive.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad_mode is ‘same’ or ‘valid’ while ceil_mode is not None.

  • ValueError – If kernel_size or strides is a tuple whose length is not equal to 3.

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float32)
>>> max_pool3d = ops.MaxPool3D(kernel_size=2, strides=1, pad_mode="valid")
>>> output = max_pool3d(x)
>>> print(output)
[[[[[10. 11.]]]
  [[[22. 23.]]]]]
class tinyms.primitives.MaxPool3DWithArgmax(ksize, strides, pads, dilation=(1, 1, 1), ceil_mode=False, data_format='NCDHW', argmax_type=mindspore.int64)[source]

Performs a 3D max pooling on the input Tensor and returns both max values and indices.

Typically the input is a Tensor with shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given ksize \(ks = (d_{ker}, h_{ker}, w_{ker})\) and strides \(s = (s_0, s_1, s_2)\), the operation is as follows.

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]

The output is a Tensor with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\) and its depth, height and width are:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \frac{D_{in} + 2 \times \text{pads}[0] - \text{dilation}[0] \times (\text{ksize}[0] - 1) - 1} {\text{stride}[0]} + 1 \\ H_{out} = \frac{H_{in} + 2 \times \text{pads}[1] - \text{dilation}[1] \times (\text{ksize}[1] - 1) - 1} {\text{stride}[1]} + 1 \\ W_{out} = \frac{W_{in} + 2 \times \text{pads}[2] - \text{dilation}[2] \times (\text{ksize}[2] - 1) - 1} {\text{stride}[2]} + 1 \\ \end{array}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively.

  • pads (Union[int, tuple[int]]) – The padding to be added to depth, height and width of the input: an int number that applies the same padding to all three dimensions, or a tuple of three int numbers that represent the depth, height and width padding respectively.

  • dilation (Union[int, tuple[int]]) – The spacing between kernel elements. Default: (1, 1, 1).

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • data_format (str) – The optional value for data format. Currently only support ‘NCDHW’. Default: ‘NCDHW’.

  • argmax_type (mindspore.dtype) – The dtype for argmax. Default: mstype.int64.

Inputs:
  • x (Tensor) - Tensor of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

Outputs:

Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int32 or int64.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 5.

  • TypeError – If ksize , strides , pads or dilation is not int or tuple.

  • ValueError – If ksize or strides is less than 1.

  • ValueError – If pads is less than 0.

  • ValueError – If data_format is not ‘NCDHW’.

  • ValueError – If argmax_type is not mindspore.int64 or mindspore.int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(2 * 1 * 2 * 2 * 2).reshape((2, 1, 2, 2, 2)), mindspore.float32)
>>> max_pool3d_with_arg_op = ops.MaxPool3DWithArgmax(ksize=2, strides=1, pads=1)
>>> output_tensor, argmax = max_pool3d_with_arg_op(x)
>>> print(output_tensor.shape)
(2, 1, 3, 3, 3)
>>> print(argmax.shape)
(2, 1, 3, 3, 3)
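The printed shape agrees with the formula above: with \(D_{in} = H_{in} = W_{in} = 2\), ksize 2, strides 1, pads 1 and dilation 1, each spatial dimension is \((2 + 2 \times 1 - 1 \times (2 - 1) - 1) / 1 + 1 = 3\).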
class tinyms.primitives.MaxPoolWithArgmax(kernel_size=1, strides=1, pad_mode='valid', data_format='NCHW')[source]

ops.MaxPoolWithArgmax is deprecated from version 2.0 and will be removed in a future version; use ops.MaxPoolWithArgmaxV2 instead.

Supported Platforms:

Deprecated

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_arg_op = ops.MaxPoolWithArgmax(pad_mode="VALID", kernel_size=2, strides=1)
>>> output_tensor, argmax = maxpool_arg_op(x)
>>> print(output_tensor)
[[[[ 5.  6.  7.]
   [ 9. 10. 11.]]
  [[17. 18. 19.]
   [21. 22. 23.]]
  [[29. 30. 31.]
   [33. 34. 35.]]]]
class tinyms.primitives.MaxPoolWithArgmaxV2(kernel_size, strides=None, pads=0, dilation=(1, 1), ceil_mode=False, argmax_type=mindspore.int64)[source]

Performs max pooling on the input Tensor and returns both max values and indices.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and argmax value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents not only the height of movement but also the width of movement, or a tuple of two int numbers that represent height and width of movement respectively. Default: None, meaning that strides = kernel_size.

  • pads (Union[int, tuple[int]]) – The padding to be added to height and width of the input: an int number that applies the same padding to both dimensions, or a tuple of two int numbers that represent the height and width padding respectively. Default: 0.

  • dilation (Union[int, tuple[int]]) – The spacing between kernel elements. Default: (1, 1).

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • argmax_type (mindspore.dtype) – The dtype for argmax. Default: mstype.int64.

Inputs:
  • x (Tensor) - Tensor of shape \((N_{in}, C_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

Outputs:

Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int32 or int64.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 4.

  • TypeError – If kernel_size , strides , pads or dilation is not int or tuple.

  • ValueError – If kernel_size, strides or dilation is less than 1.

  • ValueError – If pads is less than 0.

  • ValueError – If argmax_type is not mindspore.int64 or mindspore.int32.

  • TypeError – If ceil_mode is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(20 * 16 * 50 * 32).reshape((20, 16, 50, 32)), mindspore.float32)
>>> maxpool_arg_v2_op = ops.MaxPoolWithArgmaxV2(kernel_size=(3, 2), strides=(2, 1))
>>> output_tensor, argmax = maxpool_arg_v2_op(x)
>>> print(output_tensor.shape)
(20, 16, 24, 31)
>>> print(argmax.shape)
(20, 16, 24, 31)
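The output shape follows the usual pooling arithmetic: \(H_{out} = \lfloor (50 - (3 - 1) - 1) / 2 \rfloor + 1 = 24\) and \(W_{out} = (32 - (2 - 1) - 1) / 1 + 1 = 31\), with pads 0 and dilation (1, 1).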
class tinyms.primitives.MaxUnpool2D(ksize, strides=0, pads=0, output_shape=(), data_format='NCHW')[source]

Calculates the partial inverse of MaxPool2D operation.

Since MaxPool2D loses non-maximal values, it is not fully invertible. Therefore, MaxUnpool2D takes the output of MaxPool2D, including the indices of the maximal values, and computes a partial inverse where all non-maximal values are set to zero. Typically the input is of shape \((N, C, H_{in}, W_{in})\) , the output is of shape \((N, C, H_{out}, W_{out})\) , the operation is as follows:

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H_{in} - 1) \times strides[0] - 2 \times pads[0] + ksize[0] \\ W_{out} = (W_{in} - 1) \times strides[1] - 2 \times pads[1] + ksize[1] \\ \end{array}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • strides (Union[int, tuple[int]], optional) –

The strides of kernel moving. If strides is 0 or (0, 0), then strides is equal to ksize. Default: 0.

    • An int number that represents the height and width of movement are both strides .

    • A tuple of two int numbers that represent height and width of movement respectively.

  • pads (Union[int, tuple[int]], optional) –

    The pad value to be filled. Default: 0.

    • If pads is an integer, the paddings of height and width are the same, equal to pads.

    • If pads is a tuple of two integers, the padding of height and width equal to pads[0] and pads[1] correspondingly.

  • output_shape (tuple[int], optional) –

    The target output size is an optional input. Default: ().

    • If \(output\_shape == ()\), the output shape is computed from ksize, strides and pads.

    • If \(output\_shape != ()\) , then output_shape must be \((N, C, H, W)\) or \((N, H, W, C)\) and output_shape must belong to \([(N, C, H_{out} - strides[0], W_{out} - strides[1]), (N, C, H_{out} + strides[0], W_{out} + strides[1])]\).

  • data_format (str, optional) – The optional value for data format. Currently support ‘NCHW’ and ‘NHWC’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, H_{in}, W_{in})\) or \((N, H_{in}, W_{in}, C)\).

  • argmax (Tensor) - Max values’ index represented by the argmax, a Tensor whose shape must be the same as the input x. Values of argmax must be in the range \([0, H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

Outputs:

Tensor, with shape \((N, C, H_{out}, W_{out})\) or \((N, H_{out}, W_{out}, C)\). Has the same data type with x.

Raises:
  • TypeError – If data type of x or argmax is not supported.

  • TypeError – If ksize, strides or pads is neither int nor tuple.

  • ValueError – If numbers in strides or ksize are not positive (strides additionally accepts 0 and (0, 0)).

  • ValueError – If numbers in pads are negative.

  • ValueError – If ksize, strides or pads is a tuple whose length is not equal to 2.

  • ValueError – If data_format is not a str or is neither NCHW nor NHWC.

  • ValueError – If the length of output_shape is neither 0 nor 4.

  • ValueError – If output_shape is not close to output size computed by attr ksize, strides and pads.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[0, 1], [8, 9]]]]).astype(np.float32))
>>> argmax = Tensor(np.array([[[[0, 1], [2, 3]]]]).astype(np.int64))
>>> maxunpool2d = ops.MaxUnpool2D(ksize=1, strides=1, pads=0)
>>> output = maxunpool2d(x, argmax)
>>> print(output.asnumpy())
[[[[0. 1.]
   [8. 9.]]]]
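Here the shape formula gives \(H_{out} = (2 - 1) \times 1 - 2 \times 0 + 1 = 2\) and likewise \(W_{out} = 2\), so the unpooled output keeps the \(2 \times 2\) spatial shape.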
class tinyms.primitives.MaxUnpool3D(ksize, strides=0, pads=0, output_shape=(), data_format='NCDHW')[source]

Computes the inverse of mindspore.ops.MaxPool3D.

MaxUnpool3D keeps the maximal value and set all position of non-maximal values to zero. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), the output is of shape \((N, C, D_{out}, H_{out}, W_{out})\), the operation is as follows.

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D_{in} - 1) \times strides[0] - 2 \times pads[0] + ksize[0] \\ H_{out} = (H_{in} - 1) \times strides[1] - 2 \times pads[1] + ksize[1] \\ W_{out} = (W_{in} - 1) \times strides[2] - 2 \times pads[2] + ksize[2] \\ \end{array}\end{split}\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • strides (Union[int, tuple[int]], optional) –

    The distance of kernel moving. Default: 0.

    • If it is an int number, the depth, height and width of movement are all equal to strides.

    • If it is a tuple of three int numbers, they represent depth, height and width of movement respectively.

    • If strides is 0 or (0, 0, 0), then strides equal to ksize.

  • pads (Union[int, tuple[int]], optional) –

    The pad value to be filled. Default: 0.

    • If pads is an integer, the paddings of depth, height and width are the same, equal to pads.

    • If pads is a tuple of three integers, the padding of depth, height and width equal to pads[0], pads[1] and pads[2] correspondingly.

  • output_shape (tuple[int], optional) – The target output size. Default: (). If \(output\_shape == ()\), the output shape is computed from ksize, strides and pads as shown above. If \(output\_shape != ()\), the output_shape format must be \((N, C, D, H, W)\) or \((N, D, H, W, C)\) and output_shape must be in range \([(N, C, D_{out} - strides[0], H_{out} - strides[1], W_{out} - strides[2]), (N, C, D_{out} + strides[0], H_{out} + strides[1], W_{out} + strides[2])]\).

  • data_format (str, optional) – The optional value for data format. Currently support ‘NCDHW’ and ‘NDHWC’. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((N, D_{in}, H_{in}, W_{in}, C)\).

  • argmax (Tensor) - Max values’ index. Tensor that has the same shape as x. Values of argmax must be in range \([0, D_{in} \times H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((N, D_{out}, H_{out}, W_{out}, C)\). Has the same data type with x.

Raises:
  • TypeError – If data type of x or argmax is Number.

  • TypeError – If ksize, strides or pads is neither int nor tuple.

  • ValueError – If numbers in strides or ksize are negative.

  • ValueError – If numbers in pads are negative.

  • ValueError – If ksize, strides or pads is a tuple whose length is not equal to 3.

  • ValueError – If data_format is not a str or is neither NCDHW nor NDHWC.

  • ValueError – If the length of output_shape is neither 0 nor 5.

  • ValueError – If output_shape is not close to output size range computed by attr ksize, strides, pads.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[[0, 1], [8, 9]]]]]).astype(np.float32))
>>> argmax = Tensor(np.array([[[[[0, 1], [2, 3]]]]]).astype(np.int64))
>>> maxunpool3d = ops.MaxUnpool3D(ksize=1, strides=1, pads=0)
>>> output = maxunpool3d(x, argmax)
>>> print(output.asnumpy())
[[[[[0. 1.]
    [8. 9.]]]]]
class tinyms.primitives.Maximum[source]

Computes the maximum of input tensors element-wise.

Refer to mindspore.ops.maximum() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> maximum = ops.Maximum()
>>> output = maximum(x, y)
>>> print(output)
[4. 5. 6.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = maximum(x, y)
>>> print(output.dtype)
Float32
class tinyms.primitives.Merge[source]

Merges all input data to one.

One and only one of the inputs must be selected as the output.

Inputs:
  • inputs (Union(Tuple, List)) - The data to be merged. All tuple elements must have the same data type.

Outputs:

tuple. Output is tuple(data, output_index). The data has the same shape as the elements of inputs.

Raises:

TypeError – If inputs is neither a Tuple nor a list.

Examples

>>> merge = ops.Merge()
>>> input_x = Tensor(np.linspace(0, 8, 8).reshape(2, 4), mindspore.float32)
>>> input_y = Tensor(np.random.randint(-4, 4, (2, 4)), mindspore.float32)
>>> result = merge((input_x, input_y))
class tinyms.primitives.Meshgrid(indexing='xy')[source]

Generates coordinate matrices from given coordinate tensors.

Refer to mindspore.ops.meshgrid() for more details.

Parameters:

indexing (str, optional) – Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. Valid options: ‘xy’ or ‘ij’. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for ‘xy’ indexing and (M, N, P) for ‘ij’ indexing.

Inputs:
  • input (Union[tuple]) - A Tuple of N 1-D Tensor objects. The length of input should be greater than 1. The data type is Number.

Outputs:

Tensors, A Tuple of N N-D Tensor objects. The data type is the same with the Inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
>>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
>>> inputs = (x, y, z)
>>> meshgrid = ops.Meshgrid(indexing='xy')
>>> output = meshgrid(inputs)
>>> print(output)
(Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5]],
  [[6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6]],
  [[7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]]]))
class tinyms.primitives.Minimum[source]

Computes the minimum of input tensors element-wise.

Refer to mindspore.ops.minimum() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> minimum = ops.Minimum()
>>> output = minimum(x, y)
>>> print(output)
[1. 2. 3.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = minimum(x, y)
>>> print(output.dtype)
Float32
class tinyms.primitives.MirrorPad(mode='REFLECT')[source]

Pads the input tensor according to the paddings and mode.

Parameters:

mode (str) – Specifies the padding mode. The optional values are “REFLECT” and “SYMMETRIC”. Default: “REFLECT”.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • paddings (Tensor) - Paddings requires a constant tensor. The value of paddings is a matrix (list), and its shape is \((N, 2)\), where N is the rank of the input data. All elements of paddings are int type. For the input in the D-th dimension, paddings[D, 0] indicates how much padding to add before the input tensor in that dimension, and paddings[D, 1] indicates how much padding to add after it. Both paddings[D, 0] and paddings[D, 1] must be no greater than input_x.dim_size(D) if mode is SYMMETRIC, or input_x.dim_size(D) - 1 if mode is REFLECT.

Outputs:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it uses a way of symmetrical copying through the axis of symmetry to fill in. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]]. For a more intuitive understanding, please see the example below.

  • If mode is “SYMMETRIC”, the filling method is similar to the “REFLECT”. It is also copied according to the symmetry axis, except that it includes the symmetry axis. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]]. For a more intuitive understanding, please see the example below.

Raises:
  • TypeError – If input_x or paddings is not a Tensor.

  • TypeError – If mode is not a str.

  • ValueError – If paddings.size is not equal to 2 * rank of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, nn, ops
>>> # case1: mode="REFLECT"
>>> class Net(nn.Cell):
...    def __init__(self, mode):
...        super(Net, self).__init__()
...        self.pad = ops.MirrorPad(mode=mode)
...        self.paddings = Tensor([[1, 1], [2, 2]])
...    def construct(self, input_x):
...        return self.pad(input_x, self.paddings)
...
>>> input_x = Tensor([[1,2,3], [4,5,6], [7,8,9]])
>>> pad = Net("REFLECT")
>>> output = pad(input_x)
>>> print(output)
[[6 5 4 5 6 5 4]
 [3 2 1 2 3 2 1]
 [6 5 4 5 6 5 4]
 [9 8 7 8 9 8 7]
 [6 5 4 5 6 5 4]]
>>> # case2: mode="SYMMETRIC"
>>> pad = Net("SYMMETRIC")
>>> output = pad(input_x)
>>> print(output)
[[2 1 1 2 3 3 2]
 [2 1 1 2 3 3 2]
 [5 4 4 5 6 6 5]
 [8 7 7 8 9 9 8]
 [8 7 7 8 9 9 8]]
class tinyms.primitives.Mish[source]

Computes MISH (A Self-Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.

The function is shown as follows:

\[\text{output} = x * \tanh(\log(1 + \exp(\text{x})))\]

See more details in A Self Regularized Non-Monotonic Neural Activation Function.

Inputs:
  • x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> mish = ops.Mish()
>>> output = mish(x)
>>> print(output.shape)
(2, 3)
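The values can be cross-checked against the formula above, reusing x and output from the example:

>>> expected = x.asnumpy() * np.tanh(np.log1p(np.exp(x.asnumpy())))
>>> print(np.allclose(output.asnumpy(), expected, atol=1e-3))
True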
class tinyms.primitives.Mod[source]

Computes the remainder of dividing the first input tensor by the second input tensor element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} \text{ % } y_{i}\]

Warning

  • The input data does not support 0.

  • When the elements of the input exceed 2048, the accuracy of the operator cannot guarantee the double-thousandths precision requirement.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If shape is expressed as \((D_1, D_2, ..., D_n)\), then \(D_1 \times D_2 \times ... \times D_n \le 1000000\) and \(n \le 8\).

Inputs:
  • x (Union[Tensor, numbers.Number, bool]) - The first input is a number, a bool or a tensor whose data type is number.

  • y (Union[Tensor, numbers.Number, bool]) - When the first input is a tensor, The second input could be a number, a bool or a tensor whose data type is number. When the first input is a number or a bool the second input must be a tensor whose data type is number.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If neither x nor y is one of the following: Tensor, number, bool.

  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If the shapes of x and y cannot be broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> mod = ops.Mod()
>>> output = mod(x, y)
>>> print(output)
[-1.  1.  0.]
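Note that, like C’s fmod, the result takes the sign of the dividend: -4 % 3 evaluates to -1 here, whereas Python’s % would give 2. As an illustrative cross-check (assuming numpy is available), np.fmod implements the same truncated-division semantics:

>>> print(np.fmod(np.array([-4.0, 5.0, 6.0]), np.array([3.0, 2.0, 3.0])))
[-1.  1.  0.]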
class tinyms.primitives.Mul[source]

Multiplies two tensors element-wise.

Refer to mindspore.ops.mul() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> mul = ops.Mul()
>>> output = mul(x, y)
>>> print(output)
[ 4. 10. 18.]
class tinyms.primitives.MulNoNan[source]

Computes x * y element-wise. If y is zero, no matter what x is, it will return 0.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes can be broadcast. When the inputs are one tensor and one scalar, the scalar can only be a constant.

\[\begin{split}output_{ij} = \begin{cases} 0, & y_{ij} = 0;\\ x_{ij} * y_{ij}, & otherwise. \end{cases}\end{split}\]

Note

The shapes of x and y should be the same or can be broadcasted. This is noncommutative: if y is NaN or infinite and x is 0, the result will be NaN.

Inputs:
  • x (Union[Tensor]) - The first input is a tensor or scalar, whose data type is currently one of int32, int64, float16, float32, float64, complex64, complex128.

  • y (Union[Tensor]) - The second input is a tensor or scalar, whose data type is currently one of int32, int64, float16, float32, float64, complex64, complex128.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the one with higher precision among the two inputs.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : same data type and shape of two inputs, there are some 0 in y.
>>> x = Tensor(np.array([[-1.0, 6.0, np.inf], [np.nan, -7.0, 4.0]]), mindspore.float32)
>>> y = Tensor(np.array([[-1.0, 4.0, 0], [0, -3.0, 1.0]]), mindspore.float32)
>>> mul_no_nan = ops.MulNoNan()
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 1. 24.  0.]
 [ 0. 21.  4.]]
>>> # case 2 : the shape of two inputs is same, there are some 0 in x, y.
>>> x = Tensor(np.array([[-1.0, 6.0, 0], [0, np.nan, 4.0]]), mindspore.float32)
>>> y = Tensor(np.array([[-1.0, 4.0, np.inf], [np.nan, 0, 1.0]]), mindspore.float32)
>>> output = mul_no_nan(x, y)
>>> print(output)
[[ 1. 24. nan]
 [nan  0. 4.]]
>>> print(output.dtype)
Float32
>>> # case 3 : the y is a scalar.
>>> x = Tensor(np.array([[-1.0, 6.0, 0], [0, np.nan, 4.0]]), mindspore.float32)
>>> y = Tensor(0, mindspore.float32)
>>> output = mul_no_nan(x, y)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]]
class tinyms.primitives.MultiMarginLoss(p=1, margin=1.0, reduction='mean')[source]

Creates a loss function that minimizes the hinge loss for multi-class classification tasks. The loss is computed between the input x and the target class indices.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.multi_margin_loss() for more details.

Parameters:
  • p (int, optional) – The norm degree for pairwise distance. Should be 1 or 2. Default: 1.

  • margin (int, optional) – A parameter to change pairwise distance. Default: 1.0.

  • reduction (str, optional) –

    Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

    • ’none’: no reduction will be applied.

    • ’mean’: the sum of the output will be divided by the number of elements in the output.

    • ’sum’: the output will be summed.

Inputs:
  • inputs (Tensor) - Input , with shape \((N, C)\). Data type only support float32, float16 or float64.

  • target (Tensor) - Ground truth labels, with shape \((N,)\). Data type only support int64. The value of target should be non-negative, less than C.

  • weight (Tensor) - The rescaling weight to each class with shape \((C,)\). Data type only support float16, float32 or float64.

Outputs:

Tensor. When reduction is ‘none’, the shape is \((N,)\). Otherwise, it is a scalar. Has the same data type as inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones(shape=[3, 3]), mindspore.float32)
>>> target = Tensor(np.array([1, 2, 1]), mindspore.int64)
>>> weight = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> loss = ops.MultiMarginLoss()
>>> output = loss(x, target, weight)
>>> print(output)
0.6666667
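As a sanity check, the value above can be reproduced by hand: with p=1 and margin=1.0, each sample contributes \(\sum_{j \ne y} \max(0, margin - x[y] + x[j]) / C\). A minimal numpy sketch (illustrative only, assuming numpy is imported as np):

>>> x_np = np.ones((3, 3), dtype=np.float32)
>>> target_np = np.array([1, 2, 1])
>>> per_sample = [sum(max(0.0, 1.0 - x_np[i, t] + x_np[i, j])
...                   for j in range(3) if j != t) / 3
...               for i, t in enumerate(target_np)]
>>> loss_ref = sum(per_sample) / 3   # ≈ 0.6666667, matching the output above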
class tinyms.primitives.MultilabelMarginLoss(reduction='mean')[source]

Creates a loss criterion that minimizes the hinge loss for multi-class classification tasks. It takes a 2D mini-batch Tensor \(x\) as input and a 2D Tensor \(y\) containing target class indices as the target.

Refer to mindspore.ops.multilabel_margin_loss() for more details.

Supported Platforms:

Ascend GPU

Examples

>>> loss = ops.MultilabelMarginLoss()
>>> x = Tensor(np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[1, 2, 0, 3], [2, 3, -1, 1]]), mindspore.int32)
>>> output = loss(x, target)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 0.325), Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1], [0, 0, 1, 1]]))
class tinyms.primitives.Multinomial(seed=0, seed2=0, dtype=mindspore.int32)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of tensor input.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

  • dtype (dtype) – The type of output, must be int32 or int64. Default: int32.

Inputs:
  • x (Tensor) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions.

  • num_samples (int) - number of samples to draw, must be a nonnegative number.

Outputs:

Tensor with the same rows as x, each row has num_samples sampled indices.

Raises:
  • TypeError – If neither seed nor seed2 is an int.

  • TypeError – If dtype of num_samples is not int.

  • TypeError – If dtype is not int32 or int64.

  • ValueError – If seed or seed2 is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[0., 9., 4., 0.]], mstype.float32)
>>> multinomial = ops.Multinomial(seed=10)
>>> output = multinomial(x, 2)
>>> print(output)
[[1 1]]
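The note above says the row values act as weights; conceptually the operator normalizes them before sampling. A rough numpy analogue (illustrative only; the random streams differ, so the sampled indices will not match MindSpore’s):

>>> weights = np.array([0., 9., 4., 0.])
>>> rng = np.random.default_rng(10)
>>> samples = rng.choice(4, size=2, p=weights / weights.sum())
>>> # only indices 1 and 2 can be drawn, since the other weights are zero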
class tinyms.primitives.MultinomialWithReplacement(numsamples, replacement=False)[source]

Returns a tensor where each row contains numsamples indices sampled from the multinomial distribution with replacement. It differs from Multinomial in that it allows the same outcome to be chosen multiple times.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.multinomial_with_replacement() for more details.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • numsamples (int) – number of samples to draw, must be a nonnegative number.

  • replacement (bool, optional) – Whether to draw with replacement or not. Default: False.

Inputs:
  • x (Tensor) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions.

  • seed (Tensor) - If seed is set to -1, and offset is set to 0, the random number generator is seeded by a random seed. Otherwise, it is seeded by the given seed. Supported dtype: int64.

  • offset (Tensor) - Offset used to avoid seed collision. Supported dtype: int64.

Outputs:

Tensor with the same rows as x, each row has numsamples sampled indices.

Supported Platforms:

CPU

Examples

>>> x = Tensor([[0., 9., 4., 0.]], mstype.float32)
>>> seed = Tensor(2, mstype.int64)
>>> offset = Tensor(5, mstype.int64)
>>> multinomialwithreplacement = ops.MultinomialWithReplacement(numsamples=2,replacement=True)
>>> output = multinomialwithreplacement(x, seed, offset)
>>> print(output)
[[1 1]]
class tinyms.primitives.Mvlgamma(p)[source]

Calculates the multivariate log-gamma function element-wise for a given dimension p.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.mvlgamma() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)
>>> op = ops.Mvlgamma(p=3)
>>> y = op(x)
>>> print(y)
[[ 2.694925   5.402975   9.140645 ]
 [ 5.402975   1.5963125 13.640454 ]]
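For reference, the multivariate log-gamma function is \(\ln \Gamma_p(a) = \frac{p(p-1)}{4} \ln \pi + \sum_{i=1}^{p} \ln \Gamma(a - \frac{i-1}{2})\). SciPy offers the same function as multigammaln, which can serve as an illustrative cross-check (assuming SciPy is installed):

>>> from scipy.special import multigammaln
>>> ref = multigammaln(3.0, 3)   # ≈ 2.694925, matching y[0][0] above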
class tinyms.primitives.NLLLoss(reduction='mean')[source]

Gets the negative log likelihood loss between logits and labels.

The nll loss with reduction=none can be described as:

\[\ell(x, t)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=-w_{t_{n}} x_{n, t_{n}}, \quad w_{c}=\text { weight }[c] \cdot 1\]

where \(x\) is the logits, \(t\) is the labels, \(w\) is the weight, N is the batch size, \(c\) belonging to [0, C-1] is class index, where \(C\) is the number of classes.

If reduction is not ‘none’ (default ‘mean’), then

\[\begin{split}\ell(x, t)=\left\{\begin{array}{ll} \sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{t n}} l_{n}, & \text { if reduction }=\text { 'mean'; } \\ \sum_{n=1}^{N} l_{n}, & \text { if reduction }=\text { 'sum' } \end{array}\right.\end{split}\]
Parameters:

reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type only supports float32 or float16.

  • labels (Tensor) - Ground truth labels, with shape \((N,)\), where each value belong to \([0, C-1]\). Data type only supports int32 or int64.

  • weight (Tensor) - The rescaling weight to each class, with shape \((C,)\) and data type only supports float32 or float16.

Outputs:

Tuple of 2 tensors composed of loss and total_weight.

  • loss (Tensor) - When reduction is ‘none’ and logits is a 2D tensor, the loss shape is \((N,)\). Otherwise, the loss is a scalar. The data type is the same as that of logits.

  • total_weight (Tensor) - The total_weight is a scalar. The data type is the same as that of weight.

Raises:
  • TypeError – If dtype of logits or weight is neither float16 nor float32.

  • TypeError – If dtype of labels is neither int32 nor int64.

  • ValueError – If logits is not a one or two dimension tensor, or labels and weight are not one dimension tensors. When logits is a two dimension tensor, the first dimension of logits must equal the size of labels, and the second dimension of logits must equal the size of weight. When logits is a one dimension tensor, the sizes of logits, labels and weight must be equal to each other.

  • ValueError – If the value of labels exceed \([0, C-1]\), where \(C\) is the number of classes.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[0.5488135, 0.71518934],
...                           [0.60276335, 0.5448832],
...                           [0.4236548, 0.6458941]]).astype(np.float32))
>>> labels = Tensor(np.array([0, 0, 0]).astype(np.int32))
>>> weight = Tensor(np.array([0.3834415, 0.79172504]).astype(np.float32))
>>> nll_loss = ops.NLLLoss(reduction="mean")
>>> loss, weight = nll_loss(logits, labels, weight)
>>> print(loss)
-0.52507716
>>> print(weight)
1.1503246
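The ‘mean’ reduction above divides by the sum of the per-sample weights rather than by N, which is why the weight of class 0 appears three times in the denominator. An illustrative numpy hand-check (assuming numpy is imported as np):

>>> logits_np = logits.asnumpy()
>>> w = np.array([0.3834415, 0.79172504])[np.array([0, 0, 0])]
>>> per_sample = -w * logits_np[np.arange(3), [0, 0, 0]]
>>> loss_ref = per_sample.sum() / w.sum()    # ≈ -0.5250772
>>> total_weight_ref = w.sum()               # ≈ 1.1503245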
class tinyms.primitives.NMSWithMask(iou_threshold=0.5)[source]

Non-maximum Suppression. When an object detection problem is solved in the computer vision field, the detection algorithm generates multiple bounding boxes. Starting from the box with the highest score, the overlap between the other boxes and the current box is calculated, and boxes are deleted based on a certain threshold (IOU). On the Ascend platform, the input box score is ignored and boxes are selected only based on the IOU between them; this means that if you want to remove boxes with lower scores, you need to sort the input boxes by score in descending order in advance. The IOU is as follows:

\[\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\]

Warning

Only supports up to 2864 input boxes at one time.

Parameters:

iou_threshold (float) – Specifies the threshold of overlap boxes with respect to IOU. Default: 0.5.

Inputs:
  • bboxes (Tensor) - The shape of tensor is \((N, 5)\). Input bounding boxes. N is the number of input bounding boxes. Every bounding box contains 5 values, the first 4 values are the coordinates(x0, y0, x1, y1) of bounding box which represents the point of top-left and bottom-right, and the last value is the score of this bounding box. The data type must be float16 or float32.

Outputs:

tuple[Tensor], tuple of three tensors, they are output_boxes, output_idx and selected_mask.

  • output_boxes (Tensor) - The shape of tensor is \((N, 5)\). On the GPU and CPU platforms, it is a sorted list of bounding boxes obtained by sorting the input bboxes in descending order of score. On the Ascend platform, it is the same as the input bboxes.

  • output_idx (Tensor) - The shape of tensor is \((N,)\). The indexes list of output_boxes.

  • selected_mask (Tensor) - The shape of tensor is \((N,)\). A mask list of valid output bounding boxes. Apply this mask on output_boxes to get the list of bounding boxes after non-max suppression calculation, or apply this mask on output_idx to get the indexes list of bounding boxes after non-max suppression calculation.

Raises:
  • ValueError – If the iou_threshold is not a float number.

  • ValueError – if the first dimension of input Tensor is less than or equal to 0.

  • TypeError – if the dtype of the bboxes is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bbox = np.array([[100.0, 100.0, 50.0, 68.0, 0.63], [150.0, 75.0, 165.0, 115.0, 0.55],
...                  [12.0, 190.0, 288.0, 200.0, 0.9], [28.0, 130.0, 106.0, 172.0, 0.3]])
>>> bbox[:, 2] += bbox[:, 0]
>>> bbox[:, 3] += bbox[:, 1]
>>> inputs = Tensor(bbox, mindspore.float32)
>>> nms = ops.NMSWithMask(0.1)
>>> output_boxes, indices, mask = nms(inputs)
>>> indices_np = indices.asnumpy()
>>> print(indices_np[mask.asnumpy()])
[0 1 2]
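For reference, a minimal Python sketch of the IOU used above for two corner-format boxes (an illustrative helper, not part of the operator’s API):

>>> def iou(a, b):
...     # a, b: [x0, y0, x1, y1], as in the bboxes above (scores dropped)
...     ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
...     ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
...     inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
...     area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
...     return inter / (area(a) + area(b) - inter)
...
>>> # boxes whose IOU with a higher-scored box exceeds iou_threshold (0.1
>>> # here) are suppressed, i.e. masked out in selected_mask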
class tinyms.primitives.NPUAllocFloatStatus[source]

Allocates a flag to store the overflow status.

The flag is a tensor whose shape is \((8,)\) and data type is mindspore.dtype.float32.

Note

Please refer to the Examples of mindspore.ops.NPUGetFloatStatus.

Outputs:

Tensor, has the shape of \((8,)\).

Supported Platforms:

Ascend

Examples

>>> alloc_status = ops.NPUAllocFloatStatus()
>>> output = alloc_status()
>>> print(output)
[0. 0. 0. 0. 0. 0. 0. 0.]
class tinyms.primitives.NPUClearFloatStatus[source]

Clears the flag which stores the overflow status.

Note

The flag is in a register on the Ascend device. It will be reset and cannot be reused after NPUClearFloatStatus is called. In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator, you need to ensure that NPUClearFloatStatus and your computation have been executed. We use mindspore.ops.Depend to ensure the execution order.

Please refer to the Examples of mindspore.ops.NPUGetFloatStatus.

Inputs:
  • x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32.

Outputs:

Tensor, has the same shape as x. All the elements in the tensor will be zero.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import ops
>>> from mindspore.common import dtype as mstype
>>> from mindspore.common.tensor import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.alloc_status = ops.NPUAllocFloatStatus()
...         self.get_status = ops.NPUGetFloatStatus()
...         self.clear_status = ops.NPUClearFloatStatus()
...         self.sub = ops.Sub()
...         self.neg = ops.Neg()
...
...     def construct(self, x):
...         init = self.alloc_status()
...         clear_status = self.clear_status(init)
...         x = ops.depend(x, clear_status)
...         res = self.sub(x, self.neg(x))
...         init = ops.depend(init, res)
...         get_status = self.get_status(init)
...         res = ops.depend(res, get_status)
...         return res
>>>
>>> value = 5
>>> data = np.full((2, 3), value, dtype=np.float16)
>>> x = Tensor(data, dtype=mstype.float16)
>>> net = Net()
>>> res = net(x)
>>> print(res)
[[10. 10. 10.]
 [10. 10. 10.]]
class tinyms.primitives.NPUGetFloatStatus[source]

mindspore.ops.NPUGetFloatStatus updates the flag which is the output tensor of mindspore.ops.NPUAllocFloatStatus with the latest overflow status.

Note

The flag is a tensor whose shape is \((8,)\) and data type is mindspore.dtype.float32. If the sum of the flag equals 0, no overflow has happened. If the sum of the flag is bigger than 0, overflow has happened. In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator, you need to ensure that NPUClearFloatStatus and your computation have been executed. We use mindspore.ops.Depend to ensure the execution order.

Inputs:
  • x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

Outputs:

Tensor, has the same shape as x. All the elements in the tensor will be zero.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import ops
>>> from mindspore.common import dtype as mstype
>>> from mindspore.common.tensor import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.alloc_status = ops.NPUAllocFloatStatus()
...         self.get_status = ops.NPUGetFloatStatus()
...         self.clear_status = ops.NPUClearFloatStatus()
...         self.sub = ops.Sub()
...         self.neg = ops.Neg()
...
...     def construct(self, x):
...         init = self.alloc_status()
...         clear_status = self.clear_status(init)
...         x = ops.depend(x, clear_status)
...         res = self.sub(x, self.neg(x))
...         init = ops.depend(init, res)
...         get_status = self.get_status(init)
...         res = ops.depend(res, get_status)
...         return res
>>>
>>> value = 5
>>> data = np.full((2, 3), value, dtype=np.float16)
>>> x = Tensor(data, dtype=mstype.float16)
>>> net = Net()
>>> res = net(x)
>>> print(res)
[[10. 10. 10.]
 [10. 10. 10.]]
class tinyms.primitives.NanToNum(nan=0.0, posinf=None, neginf=None)[source]

Replaces NaN, positive infinity and negative infinity values in the input Tensor with the values specified by nan, posinf and neginf respectively.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.nan_to_num() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> nan_to_num = ops.NanToNum()
>>> x = Tensor(np.array([float('nan'), float('inf'), -float('inf'), 3.14]), mindspore.float32)
>>> output = nan_to_num(x)
>>> print(output)
[ 0.0000000e+00  3.4028235e+38 -3.4028235e+38  3.1400001e+00]
class tinyms.primitives.Neg[source]

Returns a tensor with negative values of the input tensor element-wise.

Refer to mindspore.ops.neg() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> neg = ops.Neg()
>>> x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = neg(x)
>>> print(output)
[-1.  -2.   1.  -2.   0.   3.5]
class tinyms.primitives.NeighborExchange(send_rank_ids, recv_rank_ids, recv_shapes, send_shapes, recv_type, group='hccl_world_group')[source]

NeighborExchange is a collective operation.

NeighborExchange sends data from the local rank to ranks in the send_rank_ids, while receiving data from recv_rank_ids.

Note

The user needs to preset communication environment variables before running the following example, please check the details on the official website of MindSpore.

This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are in the same subnet, please check the details.

Parameters:
  • send_rank_ids (list(int)) – Ranks which the data is sent to.

  • recv_rank_ids (list(int)) – Ranks which the data is received from.

  • recv_shapes (tuple(list(int))) – Data shape which received from recv_rank_ids.

  • send_shapes (tuple(list(int))) – Data shape which send to the send_rank_ids.

  • recv_type (type) – Data type which received from recv_rank_ids

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (tuple[Tensor]) - Shapes are same as args of send_shapes.

Outputs:

Tuple tensor, shapes are same as args of recv_shapes.

Supported Platforms:

Ascend

Examples

>>> # This example should be run with 2 devices. Refer to the tutorial > Distributed Training on mindspore.cn
>>> import os
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.neighborexchange = ops.NeighborExchange(send_rank_ids=[1], recv_rank_ids=[1],
...                                                      recv_shapes=([2, 2],), send_shapes=([3, 3],),
...                                                      recv_type=ms.float32)
...
...
...     def construct(self, x):
...         out = self.neighborexchange((x,))
...         return out
...
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend')
>>> init()
>>> net = Net()
>>> input_x = Tensor(np.ones([3, 3]), dtype = ms.float32)
>>> output = net(input_x)
>>> print(output)
[[2. 2.], [2. 2.]]
class tinyms.primitives.NeighborExchangeV2(send_rank_ids, send_lens, recv_rank_ids, recv_lens, data_format, group='hccl_world_group')[source]

NeighborExchangeV2 is a collective communication operation.

NeighborExchangeV2 sends data from the local rank to ranks in the send_rank_ids, while receiving data from recv_rank_ids. Please refer to Distributed Set Communication Primitives - NeighborExchangeV2 to learn about how the data is exchanged between neighborhood devices.

Note

This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are in the same subnet, please check the details.

Parameters:
  • send_rank_ids (list(int)) – Ranks which the data is sent to. The 8 rank_ids represent 8 directions; if data is not sent in one direction, set the corresponding value to -1.

  • recv_rank_ids (list(int)) – Ranks which the data is received from. The 8 rank_ids represent 8 directions; if data is not received from one direction, set the corresponding value to -1.

  • send_lens (list(int)) – Data lens which are sent to the send_rank_ids; 4 numbers represent the lens of [send_top, send_bottom, send_left, send_right].

  • recv_lens (list(int)) – Data lens which are received from recv_rank_ids; 4 numbers represent the lens of [recv_top, recv_bottom, recv_left, recv_right].

  • data_format (str) – Data format, only support NCHW now.

  • group (str, optional) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”, which means “hccl_world_group” in Ascend, and “nccl_world_group” in GPU.

Inputs:
  • input_x (Tensor) - The Tensor before being exchanged. It has a shape of \((N, C, H, W)\).

Outputs:

The Tensor after being exchanged. If input shape is \((N, C, H, W)\), output shape is \((N, C, H+recv\_top+recv\_bottom, W+recv\_left+recv\_right)\).

Raises:
  • TypeError – If group is not a string or any one of send_rank_ids, recv_rank_ids, send_lens, recv_lens is not a list.

  • ValueError – If send_rank_ids or recv_rank_ids has value less than -1 or has repeated values.

  • ValueError – If send_lens, recv_lens has value less than 0.

  • ValueError – If data_format is not “NCHW”.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 2 devices.

>>> import os
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.neighborexchangev2 = ops.NeighborExchangeV2(send_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                                          send_lens=[0, 1, 0, 0],
...                                                          recv_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                                          recv_lens=[0, 1, 0, 0],
...                                                          data_format="NCHW")
...
...     def construct(self, x):
...         out = self.neighborexchangev2(x)
...         return out
...
>>> ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend')
>>> init()
>>> input_x = Tensor(np.ones([1, 1, 2, 2]), dtype = ms.float32)
>>> net = Net()
>>> output = net(input_x)
>>> print(output)
[[[[1. 1.], [1. 1.], [2. 2.]]]]
class tinyms.primitives.NextAfter[source]

Returns the next representable floating-point value after x1 towards x2 element-wise.

Say there are two float32 numbers \(a, b\), and let \(eps\) be the smallest representable increment of the float32 data type. If \(a < b\), then the next representable value of \(a\) towards \(b\) is \(a+eps\), and the next representable value of \(b\) towards \(a\) is \(b-eps\).

\[out_{i} = nextafter({x1_{i}, x2_{i}})\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x1 (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

  • x2 (Tensor) - The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

Outputs:

Tensor, has the same shape and data type as x1.

Raises:
  • TypeError – If neither x1 nor x2 is a Tensor.

  • TypeError – If the dtype of x1 and x2 is not one of: float32, float64.

  • TypeError – If the dtypes of x1 and x2 are not same.

  • ValueError – If the shape of x1 is not the same as that of x2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> nextafter = ops.NextAfter()
>>> x1 = Tensor(np.asarray([0.0]), mindspore.float32)
>>> x2 = Tensor(np.asarray([0.1]), mindspore.float32)
>>> output = nextafter(x1, x2)
>>> print(output)
[1.e-45]
class tinyms.primitives.NoRepeatNGram(ngram_size=1)[source]

Updates the probability of occurrence of words with its corresponding n-grams.

During beam search, if ngram_size consecutive words exist in the generated word sequence, those ngram_size consecutive words will be avoided during subsequent prediction. For example, when ngram_size is 3 and the generated word sequence is [1, 2, 3, 2, 3], the next predicted word will not be 2 and its value in log_probs will be replaced with -FLOAT_MAX, because otherwise the 3 consecutive words [2, 3, 2] would appear twice in the word sequence.

Parameters:

ngram_size (int) – Size of n-grams, must be greater than 0. Default: 1.

Inputs:
  • state_seq (Tensor) - n-gram word series, a 3-D tensor with shape: \((batch\_size, beam\_width, m)\).

  • log_probs (Tensor) - Probability of occurrence of n-gram word series, a 3-D tensor with shape: \((batch\_size, beam\_width, vocab\_size)\). The value of log_probs will be replaced with -FLOAT_MAX when n-grams repeated.

Outputs:
  • log_probs (Tensor) - The output Tensor with same shape and type as original log_probs.

Raises:
  • TypeError – If ngram_size is not an int.

  • TypeError – If neither state_seq nor log_probs is a Tensor.

  • TypeError – If the dtype of state_seq is not int.

  • TypeError – If the dtype of log_probs is not float.

  • ValueError – If ngram_size is less than zero.

  • ValueError – If ngram_size is greater than m.

  • ValueError – If either state_seq or log_probs is not a 3-D Tensor.

  • ValueError – If the batch_size of state_seq and log_probs are not equal.

  • ValueError – If the beam_width of state_seq and log_probs are not equal.

Supported Platforms:

Ascend GPU CPU

Examples

>>> no_repeat_ngram = ops.NoRepeatNGram(ngram_size=3)
>>> state_seq = Tensor([[[1, 2, 1, 2, 5, 1, 2],
...                      [9, 3, 9, 5, 4, 1, 5]],
...                     [[4, 8, 6, 4, 5, 6, 4],
...                      [4, 8, 8, 4, 3, 4, 8]]], dtype=mindspore.int32)
>>> log_probs = Tensor([[[0.7, 0.8, 0.6, 0.9, 0.2, 0.8, 0.4, 0.6, 0.2, 0.7],
...                      [0.4, 0.5, 0.6, 0.7, 0.8, 0.1, 0.9, 0.8, 0.7, 0.1]],
...                     [[0.9, 0.7, 0.6, 0.3, 0.5, 0.3, 0.5, 0.4, 0.8, 0.6],
...                      [0.5, 0.8, 0.8, 0.7, 0.7, 0.8, 0.2, 0.7, 0.9, 0.7]]], dtype=mindspore.float32)
>>> output = no_repeat_ngram(state_seq, log_probs)
>>> print(output)
[[[ 6.9999999e-01 -3.4028235e+38  6.0000002e-01  8.9999998e-01
    2.0000000e-01 -3.4028235e+38  4.0000001e-01  6.0000002e-01
    2.0000000e-01  6.9999999e-01]
  [ 4.0000001e-01  5.0000000e-01  6.0000002e-01  6.9999999e-01
    8.0000001e-01  1.0000000e-01  8.9999998e-01  8.0000001e-01
    6.9999999e-01  1.0000000e-01]]
 [[ 8.9999998e-01  6.9999999e-01  6.0000002e-01  3.0000001e-01
    5.0000000e-01 -3.4028235e+38  5.0000000e-01  4.0000001e-01
    8.0000001e-01  6.0000002e-01]
  [ 5.0000000e-01  8.0000001e-01  8.0000001e-01  6.9999999e-01
    6.9999999e-01  8.0000001e-01  2.0000000e-01  6.9999999e-01
   -3.4028235e+38  6.9999999e-01]]]
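A plain-Python sketch of the banning rule for a single beam (illustrative; banned_tokens is a hypothetical helper, not part of the API): with ngram_size=3, a token t is banned if the last two generated words followed by t already occur in the sequence.

>>> def banned_tokens(seq, ngram_size):
...     prefix = tuple(seq[-(ngram_size - 1):])
...     return {seq[i + ngram_size - 1]
...             for i in range(len(seq) - ngram_size + 1)
...             if tuple(seq[i:i + ngram_size - 1]) == prefix}
...
>>> print(sorted(banned_tokens([1, 2, 1, 2, 5, 1, 2], 3)))
[1, 5]

These are exactly the vocabulary positions replaced with -FLOAT_MAX in the first row of the output above.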
class tinyms.primitives.NonDeterministicInts(dtype=mindspore.int64)[source]

Generates random integers of the given type.

Returns a tensor of the given shape, with its random values drawn from the full range that the given data type can represent.

Warning

The value of shape must be greater than zero. The number of elements of output can not exceed 1000000.

Parameters:

dtype (mindspore.dtype, optional) – The data type of output. The supported values are: mstype.int32 and mstype.int64. Default: mstype.int64.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. The supported values are: int32 and int64.

Outputs:

Tensor. Its shape is specified by the input shape. Its type is specified by dtype.

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If dtype is not mstype.int32 or mstype.int64.

  • ValueError – If shape has negative elements.

  • ValueError – If shape has less than 2 elements.

  • ValueError – If shape is not a 1-D tensor.

  • ValueError – If the number of elements of output is more than 1000000.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = Tensor((3,4), mstype.int32)
>>> ndints = ops.NonDeterministicInts(dtype=mstype.int32)
>>> output = ndints(shape)
>>> print(output.shape)
(3, 4)
class tinyms.primitives.NonMaxSuppressionV3[source]

Selects a subset of bounding boxes in a greedy manner, based on their descending score. It removes boxes that have high intersection-over-union (IOU) overlap with previously selected boxes, and eliminates boxes with scores lower than a given threshold.

Warning

When input max_output_size is negative, it will be treated as 0.

Note

  • This algorithm does not depend on the location of the origin in the coordinate system.

  • This algorithm remains unaffected by orthogonal transformations and translations of the coordinate system, which means that translating or reflecting the coordinate system will result in the same boxes being chosen by the algorithm.

Inputs:
  • boxes (Tensor) - A 2-D Tensor of shape \((num\_boxes, 4)\).

  • scores (Tensor) - A 1-D Tensor of shape \((num\_boxes)\) where each element represents a single score associated with each box (i.e., each row of the boxes Tensor). It is required that the number of scores in scores must be equal to the number of boxes in boxes. The supported data type is float32.

  • max_output_size (Union[Tensor, Number.Int]) - A scalar integer Tensor representing the maximum number of boxes to be selected by non max suppression. The supported data type is int32.

  • iou_threshold (Union[Tensor, Number.Float]) - A scalar float Tensor represents the threshold used for determining if the intersection over union (IOU) between boxes is too high. Data type of iou_threshold is float32 and must be in range [0, 1].

  • score_threshold (Union[Tensor, Number.Float]) - A scalar float Tensor represents the threshold for determining when to remove boxes based on score. The supported data type is float32.

Outputs:

A 1-D integer Tensor of shape \((M)\) representing the selected indices from the boxes tensor, where M <= max_output_size.

Raises:
  • TypeError – If the dtype of boxes and scores are different.

  • TypeError – If the dtype of iou_threshold and score_threshold are different.

  • TypeError – If boxes is not tensor or its dtype is not float16 or float32.

  • TypeError – If scores is not tensor or its dtype is not float16 or float32.

  • TypeError – If max_output_size is not a tensor or scalar, or its data type is not int32 or int64.

  • TypeError – If iou_threshold is not a tensor or scalar, or its type is neither float16 nor float32.

  • TypeError – If score_threshold is not a tensor or scalar, or its type is neither float16 nor float32.

  • ValueError – If the size of shape of boxes is not 2 or the second value of its shape is not 4.

  • ValueError – If the size of shape of scores is not 1.

  • ValueError – If any of the size of shape of max_output_size, iou_threshold, score_threshold is not 0.

Supported Platforms:

Ascend GPU

Examples

>>> boxes = Tensor(np.array([[1, 2, 3, 4], [1, 3, 3, 4], [1, 3, 4, 4],
...                          [1, 1, 4, 4], [1, 1, 3, 4]]), mstype.float32)
>>> scores = Tensor(np.array([0.4, 0.5, 0.72, 0.9, 0.45]), mstype.float32)
>>> max_output_size = Tensor(5, mstype.int32)
>>> iou_threshold = Tensor(0.5, mstype.float32)
>>> score_threshold = Tensor(0, mstype.float32)
>>> nonmaxsuppression = ops.NonMaxSuppressionV3()
>>> output = nonmaxsuppression(boxes, scores, max_output_size, iou_threshold, score_threshold)
>>> print(output)
[3 2 0]
class tinyms.primitives.NonMaxSuppressionWithOverlaps[source]

Selects a subset of bounding boxes in a greedy manner by prioritizing those with higher scores and removing those with high overlaps with previously selected boxes. Boxes with scores lower than the score threshold are also removed. The overlap values between boxes are represented as an N-by-N square matrix, which can be customized to define different overlap criteria such as intersection over union or intersection over area.

Note

  • This algorithm does not depend on the location of the origin in the coordinate system.

  • This algorithm remains unaffected by orthogonal transformations and translations of the coordinate system, which means that translating or reflecting the coordinate system will result in the same boxes being chosen by the algorithm.

Inputs:
  • overlaps (Tensor) - A 2-D Tensor of shape \((num\_boxes, num\_boxes)\), representing the n-by-n box overlap values. Types allowed:float16, float32 and float64.

  • scores (Tensor) - A 1-D Tensor of shape \((num\_boxes)\) where each element represents a single score associated with each box (i.e., each row of the boxes Tensor). It is required that the number of scores in scores must be equal to the number of boxes in boxes. The supported data type is float32.

  • max_output_size (Union[Tensor, Number.Int]) - A scalar integer Tensor representing the maximum number of boxes to be selected by non max suppression, and max_output_size must be equal to or greater than 0. Types allowed:int32.

  • overlap_threshold (Union[Tensor, Number.Float]) - A scalar value, represented by a 0-D float Tensor, which is used as a threshold to determine if two boxes overlap too much. Types allowed:float16, float32 and float64.

  • score_threshold (Union[Tensor, Number.Float]) - A 0-D float Tensor representing the threshold for deciding when to remove boxes based on score. It has the same dtype as overlap_threshold.

Outputs:

A 1-D integer Tensor of shape \((M)\) representing the selected indices from the boxes Tensor, where M <= max_output_size. Its data type is int32.

Raises:
  • TypeError – If the dtype of overlaps, scores, overlap_threshold and score_threshold is not float16, float32 or float64.

  • TypeError – If overlaps or scores is not a Tensor.

  • TypeError – If max_output_size is not a Tensor or scalar, or its data type is not int32.

  • TypeError – If overlap_threshold is not a Tensor or scalar, or its type is not float16, float32 or float64.

  • TypeError – If score_threshold is not a Tensor or scalar, or its type is not float16, float32 or float64.

  • ValueError – If the size of shape of overlaps is not 2 or the second value of its shape is not equal to the first value of its shape.

  • ValueError – If the size of shape of scores is not 1.

  • ValueError – If any of the size of shape of max_output_size, overlap_threshold, score_threshold is not 0.

  • ValueError – If max_output_size is negative.

  • ValueError – If the shape of scores is not equal to the shape of the dim0 or dim1 of overlaps.

Supported Platforms:

Ascend GPU CPU

Examples

>>> overlaps = Tensor(np.array([[0.6964692, 0.28613934, 0.22685145, 0.5513148],
...                     [0.71946895, 0.42310646, 0.9807642, 0.6848297],
...                     [0.4809319, 0.39211753, 0.343178, 0.7290497],
...                     [0.43857226, 0.059677895, 0.39804426, 0.7379954]
...                     ]), mstype.float32)
>>> scores = Tensor(np.array([0.18249173, 0.17545176, 0.53155136, 0.53182757]), mstype.float32)
>>> max_output_size = Tensor(4, mstype.int32)
>>> overlap_threshold = Tensor(0.1, mstype.float32)
>>> score_threshold = Tensor(0.2, mstype.float32)
>>> nonmaxsuppression = ops.NonMaxSuppressionWithOverlaps()
>>> output = nonmaxsuppression(overlaps, scores, max_output_size, overlap_threshold, score_threshold)
>>> print(output)
[3]
class tinyms.primitives.NonZero[source]

Return a tensor of the positions of all non-zero values.

Refer to mindspore.ops.nonzero() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import NonZero
>>> x = Tensor(np.array([[[1,  0], [-5, 0]]]), mindspore.int32)
>>> nonzero = NonZero()
>>> output = nonzero(x)
>>> print(output)
[[0 0 0]
 [0 1 0]]
>>> x = Tensor(np.array([1, 0, 2, 0, 3]), mindspore.int32)
>>> nonzero = NonZero()
>>> output = nonzero(x)
>>> print(output)
[[0]
 [2]
 [4]]
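For comparison, numpy’s equivalent idiom is np.transpose(np.nonzero(x)), which likewise yields one row of coordinates per non-zero element (illustrative, assuming numpy is imported as np):

>>> print(np.transpose(np.nonzero(np.array([1, 0, 2, 0, 3]))))
[[0]
 [2]
 [4]]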
class tinyms.primitives.NotEqual[source]

Computes the non-equivalence of two tensors element-wise.

Refer to mindspore.ops.ne() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(x, 2.0)
>>> print(output)
[ True False  True]
>>>
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> not_equal = ops.NotEqual()
>>> output = not_equal(x, y)
>>> print(output)
[False False  True]
class tinyms.primitives.NthElement(reverse=False)[source]

Computes the n-th smallest values for the last dimension of the input Tensor.

  • When input is a 1-D Tensor (i.e. Vector), it finds the nth-smallest value in the vector and outputs its value as a scalar Tensor.

  • When input is matrices or has higher rank, it finds the nth-smallest value in each row (or vector along the last dimension) and outputs these values in a Tensor with shape of values.shape = input.shape[:-1].

Parameters:

reverse (bool, optional) – An optional bool. If set to True, it finds the \(n\)-th largest value in the vector instead of the \(n\)-th smallest. Default: False.

Inputs:
  • input (Tensor) - Input Tensor with 1-D or higher dimension.

  • n (Union[int, Tensor]) - If the n is a Tensor, it should be a 0-D Tensor, dtype is int32. Valid range of n is \([0, input.shape[-1])\) where \(input.shape[-1]\) is last dimension size of input.

Outputs:
  • values (Tensor) - Its shape satisfies: values.shape = input.shape[:-1]. The dtype is the same as input.

Raises:
  • TypeError – If the type of input is out of the valid list.

  • TypeError – If n is not int32 or not a Tensor.

  • ValueError – If n is out of \([0, input.shape[-1])\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[1,2,3],[4,5,6]]) , mstype.int8)
>>> n = 1
>>> net = ops.NthElement()
>>> out = net(input, n)
>>> print(out)
[2 5]
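With reverse=True the same call selects the n-th largest value in each row instead; an illustrative sketch reusing the input above:

>>> net_rev = ops.NthElement(reverse=True)
>>> out_rev = net_rev(input, 0)   # 0-th largest, i.e. the row maximum
>>> print(out_rev)
[3 6]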
class tinyms.primitives.NuclearNorm(dim=None, keepdim=False)[source]

Returns the matrix nuclear norm of a given Tensor.

Attr dim specifies which two dimensions of the input x to calculate the nuclear norm across. If dim is None, the nuclear norm will be calculated across all dimensions of the input; because the nuclear norm is the sum of the singular values of a matrix, the input must then be 2-dimensional. That is, if the input is 2-dimensional, we compute the nuclear norm of the input matrix, and dim should be None. If you set dim, it also needs to be in the proper range, otherwise it won’t work. If the input is 3-dimensional or above, the attribute dim is required; it specifies which two dimensions of the input to calculate the nuclear norm across.

According to the dim list, the input Tensor is reordered by dim. The two dimensions pointed to by the attribute dim are placed at the end, and the order of the other dimensions is relatively unchanged. Perform the SVD of each slice of the adjusted Tensor to obtain the singular value. Sum all of the singular value of each slice/matrix to obtain the nuclear norm.

Parameters:
  • dim (Union[list(int), tuple(int)], optional) – Specifies which two dimensions of x to calculate the matrix nuclear norm across. If dim is None, the nuclear norm will be calculated across all dimensions of x. The length of dim should be 2. The values in dim should be in this range: [-x_rank, x_rank), where x_rank is the dimension of Tensor x. dim[0] and dim[1] cannot point to the same dimension. Default: None.

  • keepdim (bool, optional) – Whether the output Tensor have dim retained or not. Default: False.

Inputs:
  • x (Tensor) - Input to compute the matrix nuclear norm. The dimension of x should be greater than or equal to 2. Data type must be float32 or float64.

Outputs:

Tensor, output Tensor with dimensions in dim reduced to 1 will be returned if keepdim is True; otherwise a Tensor with dimensions in dim removed is returned. The data type is the same as that of x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float32 nor float64.

  • TypeError – If dtype of dim is neither list(int) nor tuple(int).

  • TypeError – If dtype of keepdim is not bool.

  • ValueError – If dimension of Tensor x is less than 2.

  • ValueError – If the length of dim is not 2 when dim is set.

  • ValueError – If the dimension of Tensor x is not 2 when dim is not set.

  • ValueError – If dim[0] and dim[1] point to the same dimension.

  • ValueError – If dim[0] or dim[1] is not in this range:[-x_rank, x_rank). x_rank is the dimension of Tensor x.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
...                            [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]), ms.float32)
>>> dim = [0, 2]
>>> keepdim = True
>>> nuclearnorm = ops.NuclearNorm(dim=dim, keepdim=keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[[[15.407588]
  [21.711605]]]
>>> keepdim = False
>>> nuclearnorm = ops.NuclearNorm(dim=dim, keepdim=keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[15.407588 21.711605]
>>> dim = [0, 1]
>>> keepdim = True
>>> nuclearnorm = ops.NuclearNorm(dim=dim, keepdim=keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[[[14.212674 15.81139  17.492853]]]
>>> keepdim = False
>>> nuclearnorm = ops.NuclearNorm(dim=dim, keepdim=keepdim)
>>> output = nuclearnorm(input_x)
>>> print(output)
[14.212674 15.81139  17.492853]
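Since the nuclear norm is the sum of singular values, the results above can be cross-checked with numpy’s SVD (an illustrative sketch for dim = [0, 2], where the slices are taken along dimension 1):

>>> a = input_x.asnumpy()
>>> for i in range(a.shape[1]):
...     s = np.linalg.svd(a[:, i, :], compute_uv=False)
...     ref = float(s.sum())   # ≈ 15.4076 and 21.7116, as above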
class tinyms.primitives.OneHot(axis=-1)[source]

Computes a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters:

axis (int) – Position to insert the value. e.g. If shape of indices is \((N, C)\), and axis is -1, the output shape will be \((N, C, D)\), If axis is 0, the output shape will be \((D, N, C)\). Default: -1.

Inputs:
  • indices (Tensor) - A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\). Data type must be uint8, int32 or int64.

  • depth (int) - A scalar defining the depth of the one-hot dimension.

  • on_value (Tensor) - A value to fill in output when indices[j] = i. Support uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, bool, complex64, complex128.

  • off_value (Tensor) - A value to fill in output when indices[j] != i. Has the same data type as on_value.

Outputs:

Tensor, one-hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)\).

Raises:
  • TypeError – If axis or depth is not an int.

  • TypeError – If dtype of indices is not uint8, int32 or int64.

  • TypeError – If indices, on_value or off_value is not a Tensor.

  • ValueError – If axis is not in range [-1, len(indices_shape)].

  • ValueError – If depth is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> onehot = ops.OneHot()
>>> output = onehot(indices, depth, on_value, off_value)
>>> print(output)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
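The axis parameter only changes where the new depth axis is inserted; an illustrative shape check (idx here is a hypothetical example input):

>>> idx = Tensor(np.array([1, 0]), mindspore.int32)
>>> print(ops.OneHot(axis=-1)(idx, 3, on_value, off_value).shape)
(2, 3)
>>> print(ops.OneHot(axis=0)(idx, 3, on_value, off_value).shape)
(3, 2)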
class tinyms.primitives.Ones[source]

Creates a tensor filled with value ones.

Refer to mindspore.ops.ones() for more details.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor.

  • type (mindspore.dtype) - The specified type of output tensor.

Outputs:

Tensor, whose shape is specified by shape and data type is specified by type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> ones = ops.Ones()
>>> output = ones((2, 2), mindspore.float32)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ones((3, 3), mindspore.float32)
>>> print(output)
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
class tinyms.primitives.OnesLike[source]

Returns a Tensor filled with ones, whose shape and data type are the same as the input’s.

Refer to mindspore.ops.ones_like() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> oneslike = ops.OnesLike()
>>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = oneslike(input_x)
>>> print(output)
[[1 1]
 [1 1]]
class tinyms.primitives.Orgqr[source]

Calculates the explicit representation of the orthogonal matrix \(Q\) returned by mindspore.ops.Geqrf.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.orgqr() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-114.6, 10.9, 1.1], [-0.304, 38.07, 69.38], [-0.45, -0.17, 62.]]), mindspore.float32)
>>> tau = Tensor(np.array([1.55, 1.94, 0.0]), mindspore.float32)
>>> net = ops.Orgqr()
>>> y = net(x, tau)
>>> print(y)
[[-0.54999995 -0.2128925   0.8137956 ]
 [ 0.47119996 -0.8752807   0.08240613]
 [ 0.69749993  0.42560163  0.57772595]]
class tinyms.primitives.PReLU[source]

Parametric Rectified Linear Unit activation function.

Refer to mindspore.ops.prelu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.prelu = ops.PReLU()
...     def construct(self, x, weight):
...         result = self.prelu(x, weight)
...         return result
...
>>> x = Tensor(np.arange(-6, 6).reshape((2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> net = Net()
>>> output = net(x, weight)
>>> print(output)
[[[-0.60 -0.50]
  [-2.40 -1.80]
  [ 0.60  0.30]]
 [[ 0.00  1.00]
  [ 2.00  3.00]
  [ 4.00  5.00]]]
class tinyms.primitives.Pack(axis=0)[source]

Same as operator Stack. Pack will be deprecated in the future. Please use Stack instead.

class tinyms.primitives.Pad(paddings)[source]

Pads the input tensor according to the paddings.

Refer to mindspore.ops.pad() for more details. Use mindspore.ops.pad() instead if paddings has negative values.

Parameters:

paddings (tuple) – The shape of parameter paddings is (N, 2). N is the rank of input data. All elements of paddings are int type. For the input in the D-th dimension, paddings[D, 0] indicates how much padding to add before the input tensor in the D-th dimension, and paddings[D, 1] indicates how much padding to add after it.

Inputs:
  • input_x (Tensor) - Tensor to be padded. It has shape \((N, *)\), where \(*\) means any number of additional dimensions.

Outputs:

Tensor, the tensor after padding.

Raises:
  • TypeError – If paddings is not a tuple.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If shape of paddings is not \((N, 2)\).

  • ValueError – If paddings.size is not equal to 2 * rank of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> pad_op = ops.Pad(((1, 2), (2, 1)))
>>> output = pad_op(input_x)
>>> print(output)
[[ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.  -0.1  0.3  3.6  0. ]
 [ 0.   0.   0.4  0.5 -3.2  0. ]
 [ 0.   0.   0.   0.   0.   0. ]
 [ 0.   0.   0.   0.   0.   0. ]]
class tinyms.primitives.PadV3(mode='constant', paddings_contiguous=True)[source]

Pads the input Tensor according to the paddings, mode and paddings_contiguous.

Parameters:
  • mode (str, optional) –

    An optional string indicating the padding mode; supports “constant”, “reflect”, “edge” and “circular”. Default: “constant”. The effects of the various padding modes are as follows:

    • ”constant”: Pads the input Tensor with value specified by constant_value.

    • ”reflect”: Pads the input Tensor by reflecting the values of the pixels at the boundary of the Tensor.

    • ”edge”: Pads the input Tensor with the values of the pixels on the border of the Tensor.

    • ”circular”: Circular padding mode. In this mode, the pixels from one edge of the image are wrapped around to the opposite edge, such that the pixel on the right edge of the image is replaced with the pixel on the left edge, and the pixel on the bottom edge is replaced with the pixel on the top edge.

  • paddings_contiguous (bool, optional) – An optional bool value indicating whether the padding is contiguous. If true, paddings is arranged as [begin0, end0, begin1, end1, …]; if false, paddings is arranged as [begin0, begin1, …, end0, end1, …]. Default: True.

Inputs:
  • x (Tensor) - Tensor to be padded. It has shape \((N, *)\), where \(*\) means any number of additional dimensions.

  • paddings (Tensor) - Specifies the amount of padding to apply before and after each dimension of the input Tensor x. It’s a 1D Tensor of type int32 or int64.

  • constant_value (Tensor, optional) - Padding value to use in ‘constant’ mode, if not specified, 0 is used instead. It has the same type as x.

Outputs:

Tensor, the tensor after padding.

Raises:
  • TypeError – If x or paddings is not a Tensor.

  • TypeError – If padding_contiguous is not a bool.

  • ValueError – If mode is not a str or not one of the supported modes.

  • ValueError – If mode is “constant” and the number of elements of paddings is not even.

  • ValueError – If mode is “constant” and the number of elements of paddings is larger than the input dimension times 2.

  • ValueError – If mode is “edge”, “reflect” or “circular” and the number of elements of paddings is not 2, 4 or 6.

  • ValueError – If mode is “edge”, “reflect” or “circular”, x has 3 dimensions, and the number of elements of paddings is not 2.

  • ValueError – If mode is “edge”, “reflect” or “circular”, x has 4 dimensions, and the number of elements of paddings is not 4.

  • ValueError – If mode is “circular”, x has 5 dimensions, and the number of elements of paddings is not 6.

  • ValueError – If mode is “edge”, “reflect” or “circular” and x has fewer than 3 dimensions.

  • ValueError – If mode is “edge” or “circular” and x has more than 5 dimensions.

  • ValueError – If mode is “reflect” and x has more than 4 dimensions.

  • ValueError – If mode is “reflect” and a padding size is larger than the corresponding dimension of x.

  • ValueError – If, after padding, any dimension of the output’s shape is not greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: mode="reflect", paddings_contiguous=True
>>> class Net(nn.Cell):
...    def __init__(self, mode, paddings_contiguous):
...        super(Net, self).__init__()
...        self.pad = ops.PadV3(mode=mode, paddings_contiguous=paddings_contiguous)
...        self.paddings = Tensor([1, 1])
...    def construct(self, x):
...        return self.pad(x, self.paddings)
...
>>> x = Tensor([[[0., 1.]]])
>>> pad = Net(mode="reflect", paddings_contiguous=True)
>>> output = pad(x)
>>> print(output)
[[[1., 0., 1., 0.]]]
>>> # case2: mode="constant", paddings_contiguous=False
>>> class Net(nn.Cell):
...    def __init__(self, mode, paddings_contiguous):
...        super(Net, self).__init__()
...        self.pad = ops.PadV3(mode=mode, paddings_contiguous=paddings_contiguous)
...        self.paddings = Tensor([1, 0, 1, 0])
...        self.value = Tensor(1.5)
...    def construct(self, x):
...        return self.pad(x, self.paddings, self.value)
...
>>> x = Tensor([[0., 1., 2.]])
>>> pad = Net(mode="constant", paddings_contiguous=False)
>>> output = pad(x)
>>> print(output)
[[1.5, 0., 1., 2., 1.5]]
class tinyms.primitives.Padding(pad_dim_size=8)[source]

Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0.

Refer to mindspore.ops.padding() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
>>> pad_dim_size = 4
>>> output = ops.Padding(pad_dim_size)(x)
>>> print(output)
[[ 8.  0.  0.  0.]
 [10.  0.  0.  0.]]
class tinyms.primitives.ParallelConcat[source]

Concats input tensors along the first dimension.

The difference between Concat and ParallelConcat is that Concat requires all of the inputs to be computed before the operation begins, but doesn’t require that the input shapes be known during graph construction. ParallelConcat will copy pieces of the input into the output as they become available; in some situations this can provide a performance benefit.

Note

The input tensors are all required to have size 1 in the first dimension.

Inputs:
  • values (tuple, list) - A tuple or a list of input tensors. The data type and shape of these tensors must be the same and their rank should not be less than 1. The supported data type is Number on CPU; the same holds for Ascend, except [float64, complex64, complex128].

Outputs:

Tensor, data type is the same as values.

Raises:
  • TypeError – If any type of the inputs is not a Tensor.

  • TypeError – If the data type of these tensors are not the same.

  • ValueError – If any tensor.shape[0] is not 1.

  • ValueError – If rank of any Tensor in values is less than 1.

  • ValueError – If the shape of these tensors are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
>>> data2 = Tensor(np.array([[2, 1]]).astype(np.int32))
>>> op = ops.ParallelConcat()
>>> output = op((data1, data2))
>>> print(output)
[[0 1]
 [2 1]]
class tinyms.primitives.ParameterizedTruncatedNormal(seed=0, seed2=0)[source]

Returns a tensor of the specified shape filled with truncated normal values. When shape is \((batch\_size, *)\), the shape of mean, stdevs, min and max should be \(()\) or \((batch\_size, )\).

Note

  • The value in tensor min must be strictly less than max at any position after broadcasting.

  • When seed or seed2 is assigned a non-zero value, that value will be used as the seed. Otherwise, a random seed will be used instead.

Parameters:
  • seed (int, optional) – Random number seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. It has shape \((batch\_size, *)\) where \(*\) is an additional dimension with a length of no less than 1. Its type must be one of the following types: int32 and int64.

  • mean (Tensor) - The parameter defines the mean of the truncated normal distribution. It has shape \(()\) or \((batch\_size, )\). Its type must be one of the following types: float16, float32, float64.

  • stdevs (Tensor) - The parameter defines the standard deviation for truncation of the normal distribution. It must be greater than 0 and have the same shape and type as mean.

  • min (Tensor) - The parameter defines the minimum of the truncated normal distribution. It must have the same shape and type as mean.

  • max (Tensor) - The parameter defines the maximum of the truncated normal distribution. It must have the same shape and type as mean.

Outputs:

Tensor. Its shape is specified by the input shape and it must have the same type as mean.

Raises:
  • TypeError – If data type of shape, mean, stdevs, min and max are not allowed.

  • TypeError – If mean, stdevs, min, max don’t have the same type.

  • TypeError – If any of shape, mean, stdevs, min and max is not Tensor.

  • ValueError – When shape is \((batch\_size, *)\), if the shape of mean, stdevs, min or max is not \(()\) or \((batch\_size, )\).

  • ValueError – If shape elements are not positive.

  • ValueError – If stdevs elements are not positive.

  • ValueError – If shape has less than 2 elements.

  • ValueError – If shape is not a 1-D tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = Tensor(np.array([2, 3]), mstype.int32)
>>> mean = Tensor(np.array([0]), mstype.float32)
>>> stdevs = Tensor(np.array([1]), mstype.float32)
>>> min = Tensor(np.array([-100]), mstype.float32)
>>> max = Tensor(np.array([100]),  mstype.float32)
>>> seed = 1
>>> seed2 = 2
>>> parameterized_truncated_normal = ops.ParameterizedTruncatedNormal(seed=seed, seed2=seed2)
>>> output = parameterized_truncated_normal(shape, mean, stdevs, min, max)
>>> print(output)
[[-0.54974616 -1.4028727   1.5827523 ]
 [ 0.25759354 -1.9593946  -1.5078077 ]]
class tinyms.primitives.Partial[source]

Makes a partial function instance. A partial function can be used to derive specialized functions from general functions by fixing the values of a certain number of arguments.

Inputs:
  • args (Union[FunctionType, Tensor]) - The function and bind arguments.

Outputs:

FunctionType, partial function bound with arguments.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> def show_input(x, y, z):
...     return x, y, z
>>> partial = ops.Partial()
>>> partial_show_input = partial(show_input, Tensor(1))
>>> output1 = partial_show_input(Tensor(2), Tensor(3))
>>> print(output1)
(Tensor(shape=[], dtype=Int64, value= 1), Tensor(shape=[], dtype=Int64, value= 2), Tensor(shape=[], dtype=Int64,
 value= 3))
>>> output2 = partial_show_input(Tensor(3), Tensor(4))
>>> print(output2)
(Tensor(shape=[], dtype=Int64, value= 1), Tensor(shape=[], dtype=Int64, value= 3), Tensor(shape=[], dtype=Int64,
 value= 4))
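For intuition, ops.Partial mirrors Python's built-in functools.partial; a plain-Python analogue of the example above:

>>> from functools import partial
>>> def show_input(x, y, z):
...     return x, y, z
>>> partial_show_input = partial(show_input, 1)
>>> print(partial_show_input(2, 3))
(1, 2, 3)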
class tinyms.primitives.Pdist(p=2.0)[source]

Computes the p-norm distance between each pair of row vectors in the input.

Refer to mindspore.ops.pdist() for more details.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]).astype(np.float32))
>>> op = ops.Pdist(p=2.0)
>>> y = op(x)
>>> print(y)
[1.4142135 2.828427  1.4142135]
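As a sanity check (not part of the operator API), the same condensed distance vector can be computed with plain NumPy; pairs are ordered (0, 1), (0, 2), (1, 2), matching SciPy's pdist convention:

>>> import numpy as np
>>> a = np.array([[1., 1.], [2., 2.], [3., 3.]])
>>> d = [np.linalg.norm(a[i] - a[j]) for i in range(3) for j in range(i + 1, 3)]
>>> print(np.round(d, 6))
[1.414214 2.828427 1.414214]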
class tinyms.primitives.Poisson(seed=0, seed2=0)[source]

Produces random non-negative integer values i, distributed according to the discrete probability function:

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

  • mean (Tensor) - The μ parameter of the distribution, which defines the mean number of occurrences of the event. It must be greater than 0 and have float32 data type.

Outputs:

Tensor. Its shape must be the broadcasted shape of shape and the shape of mean. The dtype is int32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor or its dtype is not float32.

Supported Platforms:

deprecated

Examples

>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mstype.float32)
>>> poisson = ops.Poisson(seed=5)
>>> output = poisson(shape, mean)
>>> result = output.shape
>>> print(result)
(4, 2)
class tinyms.primitives.Polar[source]

Converts polar coordinates to Cartesian coordinates.

Refer to mindspore.ops.polar() for more details.

Supported Platforms:

GPU CPU

Examples

>>> polar = ops.Polar()
>>> x1 = Tensor(np.array([1, 2]), mindspore.float64)
>>> x2 = Tensor(np.array([3, 4]), mindspore.float64)
>>> output = polar(x1, x2)
>>> print(output)
[-0.9899925 +0.14112001j -1.30728724-1.51360499j]
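Equivalently, the conversion computes \(abs \cdot e^{j \cdot angle}\); a NumPy cross-check of the example above:

>>> import numpy as np
>>> print(np.array([1., 2.]) * np.exp(1j * np.array([3., 4.])))
[-0.9899925 +0.14112001j -1.30728724-1.51360499j]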
class tinyms.primitives.Polygamma[source]

Computes the \(a\)-th derivative of the polygamma function on \(x\).

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.polygamma() for more details.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1.0, -0.5]), mindspore.float32)
>>> a = Tensor(np.array(1), mindspore.int64)
>>> polygamma = ops.Polygamma()
>>> output = polygamma(a, x)
>>> print(output)
[1.644934 8.934802]
>>> a = Tensor(np.array(2), mindspore.int64)
>>> output = polygamma(a, x)
>>> print(output)
[-2.404114  -0.8287967]
>>> a = Tensor(np.array(3), mindspore.int64)
>>> output = polygamma(a, x)
>>> print(output)
[  6.4939404 193.40909  ]
>>> a = Tensor(np.array(4), mindspore.int64)
>>> output = polygamma(a, x)
>>> print(output)
[-24.886265   -3.4742498]
class tinyms.primitives.PopulationCount[source]

Computes the element-wise population count (a.k.a. bitsum, bitcount).

Refer to mindspore.ops.population_count() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([0, 1, 3], mindspore.int16)
>>> output = ops.PopulationCount()(input_x)
>>> print(output)
[0 1 2]
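The result can be cross-checked with plain Python, since the population count of an integer is just its number of set bits:

>>> print([bin(v).count("1") for v in (0, 1, 3)])
[0, 1, 2]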
class tinyms.primitives.Pow[source]

Calculates the y power of each element in x.

Refer to mindspore.ops.pow() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = 3.0
>>> pow = ops.Pow()
>>> output = pow(x, y)
>>> print(output)
[ 1.  8. 64.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> pow = ops.Pow()
>>> output = pow(x, y)
>>> print(output)
[ 1. 16. 64.]
class tinyms.primitives.Print[source]

Print the inputs to stdout.

Refer to mindspore.ops.print_() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class PrintDemo(nn.Cell):
...     def __init__(self):
...         super(PrintDemo, self).__init__()
...         self.print = ops.Print()
...
...     def construct(self, x, y):
...         self.print('Print Tensor x and Tensor y:', x, y)
...         return x
...
>>> x = Tensor(np.ones([2, 1]).astype(np.int32))
>>> y = Tensor(np.ones([2, 2]).astype(np.int32))
>>> net = PrintDemo()
>>> result = net(x, y)
Print Tensor x and Tensor y:
Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [1]])
Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [1, 1]])
class tinyms.primitives.Pull[source]

Pulls weight from parameter server.

Inputs:
  • key (Tensor) - The key of the weight.

  • weight (Tensor) - The weight to be updated.

Outputs:

None.

class tinyms.primitives.Push(optim_type='ApplyMomentum', only_shape_indices=None)[source]

Pushes the inputs of the corresponding optimizer to parameter server.

Parameters:
  • optim_type (string) – The optimizer type. Default: ‘ApplyMomentum’.

  • only_shape_indices (list) – The indices of input of which only shape will be pushed to parameter server. Default: None.

Inputs:
  • optim_inputs (tuple) - The inputs for this kind of optimizer.

  • optim_input_shapes (tuple) - The shapes of the inputs.

Outputs:

Tensor, the key of the weight which needs to be updated.
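Push and Pull are used together in parameter-server training. A minimal construction sketch follows; it is illustrative only, since actually executing these operators requires MindSpore's parameter-server mode to be configured:

>>> import mindspore.ops as ops
>>> # Hypothetical pairing for illustration: Push sends optimizer inputs to the
>>> # parameter server and returns the key of the weight; Pull fetches the
>>> # updated weight for that key.
>>> push = ops.Push(optim_type='ApplyMomentum')
>>> pull = ops.Pull()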

class tinyms.primitives.PyExecute[source]

Execute Python expression.

class tinyms.primitives.PyFunc(fn, in_types, in_shapes, out_types, out_shapes, stateful=True)[source]

Execute Python function.

PyFunc encapsulates a Python function as an operator that can be compiled into the computation graph. Unlike normal operators, it cannot be exported to MindIR because it is executed in the current Python context. Since only the weights of the network are stored in the checkpoint, a network that includes PyFunc can save a checkpoint and load it into the network again, but any Python function state will be lost.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • fn (function) – A Python function whose inputs and outputs should be Python built-in scalars or numpy ndarrays.

  • in_types (list[mindspore.dtype]) – The type of the inputs.

  • in_shapes (list[tuple[int]]) – The dimensionality of the inputs. An empty list represents a scalar; otherwise it represents a numpy array.

  • out_types (list[mindspore.dtype]) – The type of the outputs.

  • out_shapes (list[tuple[int]]) – The dimensionality of the outputs. An empty list represents a scalar; otherwise it represents a numpy array.

  • stateful (bool) – Whether the function is stateful or not. If True, the execution order is the same as in the model definition.

Inputs:
  • input_x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors.

Outputs:

tuple[Tensor], the execution results of the Python function.

Raises:
  • TypeError – The Python function execution failed.

  • TypeError – The attributes (in_types/in_shapes/out_types/out_shapes) are inconsistent with the Python function specifications.

Supported Platforms:

CPU

Examples

>>> from mindspore.ops import operations as P
>>> def func(x1, x2):
...     return x1 + x2
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> x2 = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> op = P.PyFunc(func, [x1.dtype, x2.dtype], [x1.shape, x2.shape], [x1.dtype], [x1.shape])
>>> output = op((x1, x2))
>>> print(output[0].asnumpy())
[2. 4. 6.]
class tinyms.primitives.Qr(full_matrices=False)[source]

Returns the QR decomposition of one or more matrices. If full_matrices is True, compute full-sized q and r. If False (the default), compute only the leading P columns of q, where P is the minimum of the 2 innermost dimensions of x.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

full_matrices (bool, optional) – Whether compute full-sized QR decomposition. Default: False.

Inputs:
  • x (Tensor) - A matrix to be calculated. The matrix must have at least two dimensions. Supported types: float16, float32, float64, complex64, complex128. Define the shape of x as \((..., m, n)\), with p as the minimum of m and n.

Outputs:
  • q (Tensor) - The orthonormal matrices of x. If full_matrices is true, the shape is \((m, m)\), else the shape is \((m, p)\). The dtype of q is same as x.

  • r (Tensor) - The upper triangular matrices of x. If full_matrices is true, the shape is \((m, n)\), else the shape is \((p, n)\). The dtype of r is same as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> qr_op = ops.Qr(full_matrices=False)
>>> x = Tensor([[20., -31, 7], [4, 270, -90], [-8, 17, -32]], mstype.float32)
>>> q, r = qr_op(x)
>>> print(q)
[[-0.912871    0.16366126  0.37400758]
 [-0.18257418 -0.9830709  -0.01544376]
 [ 0.36514837 -0.08238228  0.92729706]]
>>> print(r)
[[ -21.908903  -14.788506  -1.6431675]
 [   0.       -271.9031    92.25824  ]
 [   0.          0.       -25.665514 ]]
class tinyms.primitives.Quantile(dim=None, keep_dims=False, ignore_nan=False)[source]

Computes the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points.

Refer to mindspore.ops.quantile() and mindspore.ops.nanquantile() for more details.

Supported Platforms:

Examples

>>> quantile = ops.Quantile()
>>> input = Tensor(np.array([0.0700, -0.5446,  0.9214]), mindspore.float32)
>>> q = Tensor(np.array([0, 0.5, 1]), mindspore.float32)
>>> output = quantile(input, q)
>>> print(output)
[-0.5446  0.07  0.9214]
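NumPy's np.quantile uses the same linear interpolation by default, so it can serve as a reference:

>>> import numpy as np
>>> print(np.quantile(np.array([0.0700, -0.5446, 0.9214]), [0, 0.5, 1]))
[-0.5446  0.07    0.9214]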
class tinyms.primitives.RGBToHSV[source]

Transforms a single image or a batch of images from the RGB to the HSV color space. Each pixel’s RGB value is converted to its corresponding HSV value. Note that the function is only well-defined for input pixel values in the range [0, 1].

Note

Last dimension of input images must be size 3.

Inputs:
  • images (Tensor) - 1-D or higher rank RGB data Tensor to convert, last dimension must be size 3. Must be one of the following types: float16, float32, float64.

Outputs:

A Tensor, has the same type and shape as input images.

Raises:
  • TypeError – If images is not a Tensor or its dtype is not float.

  • ValueError – If the rank of images is less than 1.

  • ValueError – If the last value of shape of images is not 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> images =  np.array([0.25, 0.5, 0.5]).astype(np.float32).reshape([1, 1, 1, 3])
>>> rgb_to_hsv = ops.RGBToHSV()
>>> output = rgb_to_hsv(Tensor(images))
>>> print(output)
[[[[0.5, 0.5, 0.5]]]]
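For a single pixel, the conversion can be verified with Python's standard colorsys module:

>>> import colorsys
>>> print(colorsys.rgb_to_hsv(0.25, 0.5, 0.5))
(0.5, 0.5, 0.5)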
class tinyms.primitives.RNNTLoss(blank_label=0)[source]

Computes the RNNTLoss and its gradient with respect to the softmax outputs.

Parameters:

blank_label (int) – blank label. Default: 0.

Inputs:
  • acts (Tensor) - Tensor of shape \((B, T, U, V)\). Data type must be float16 or float32.

  • labels (Tensor) - Tensor of shape \((B, U-1)\). Data type is int32.

  • input_lengths (Tensor) - Tensor of shape \((B,)\). Data type is int32.

  • label_lengths (Tensor) - Tensor of shape \((B,)\). Data type is int32.

Outputs:
  • costs (Tensor) - Tensor of shape \((B,)\). Data type is int32.

  • grads (Tensor) - Has the same shape and dtype as acts.

Raises:
  • TypeError – If acts, labels, input_lengths or label_lengths is not a Tensor.

  • TypeError – If dtype of acts is neither float16 nor float32.

  • TypeError – If dtype of labels, input_lengths or label_lengths is not int32.

Supported Platforms:

Ascend

Examples

>>> B, T, U, V = 1, 2, 3, 5
>>> blank = 0
>>> acts = np.random.random((B, T, U, V)).astype(np.float32)
>>> labels = np.array([[1, 2]]).astype(np.int32)
>>> input_length = np.array([T] * B).astype(np.int32)
>>> label_length = np.array([len(l) for l in labels]).astype(np.int32)
>>> rnnt_loss = ops.RNNTLoss(blank_label=0)
>>> costs, grads = rnnt_loss(Tensor(acts), Tensor(labels), Tensor(input_length), Tensor(label_length))
>>> print(costs.shape)
(1,)
>>> print(grads.shape)
(1, 2, 3, 5)
class tinyms.primitives.ROIAlign(pooled_height, pooled_width, spatial_scale, sample_num=2, roi_end_mode=1)[source]

Computes the Region of Interest (RoI) Align operator.

The operator computes the value of each sampling point by bilinear interpolation from the nearby grid points on the feature map. No quantization is performed on any coordinates involved in the RoI, its bins, or the sampling points. The details of (RoI) Align operator are described in Mask R-CNN.

Parameters:
  • pooled_height (int) – The output features height.

  • pooled_width (int) – The output features width.

  • spatial_scale (float) – A scaling factor that maps the raw image coordinates to the input feature map coordinates. Suppose the height of a RoI is ori_h in the raw image and fea_h in the input feature map, then the spatial_scale must be fea_h / ori_h (e.g. 0.5 when the feature map is downsampled by a factor of 2).

  • sample_num (int) – Number of sampling points. Default: 2.

  • roi_end_mode (int) – Number must be 0 or 1. If roi_end_mode=0, use the legacy implementation. If roi_end_mode=1, end pixel of the roi_box will be shifted by +1*spatial_scale. Default: 1.

Inputs:
  • features (Tensor) - The input features, whose shape must be \((N, C, H, W)\).

  • rois (Tensor) - The shape is \((rois\_n, 5)\), with data type of float16 or float32. rois_n represents the number of RoIs. The size of the second dimension must be 5 and the 5 columns are \((image\_index, top\_left\_x, top\_left\_y, bottom\_right\_x, bottom\_right\_y)\). image_index represents the index of the image. top_left_x and top_left_y represent the x, y coordinates of the top left corner of the corresponding RoI, respectively. bottom_right_x and bottom_right_y represent the x, y coordinates of the bottom right corner of the corresponding RoI, respectively.

Outputs:

Tensor, the shape is \((rois\_n, C, pooled\_height, pooled\_width)\).

Raises:
  • TypeError – If pooled_height, pooled_width, sample_num or roi_end_mode is not an int.

  • TypeError – If spatial_scale is not a float.

  • TypeError – If features or rois is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> features = Tensor(np.array([[[[1., 2.], [3., 4.]]]]), mindspore.float32)
>>> rois = Tensor(np.array([[0, 0.2, 0.3, 0.2, 0.3]]), mindspore.float32)
>>> roi_align = ops.ROIAlign(2, 2, 0.5, 2)
>>> output = roi_align(features, rois)
>>> print(output)
[[[[1.775 2.025]
   [2.275 2.525]]]]
class tinyms.primitives.RaggedRange(Tsplits)[source]

Returns a RaggedTensor containing the specified sequences of numbers.

Parameters:

Tsplits (mindspore.dtype) – A mindspore.dtype, one of: mindspore.int32, mindspore.int64.

Inputs:
  • starts (Tensor) - The starts of each range, whose type is int32, int64, float32 or float64, and shape is 0D or 1D.

  • limits (Tensor) - The limits of each range, whose type and shape should be same as input starts.

  • deltas (Tensor) - The deltas of each range, whose type and shape should be same as input starts, and each element in the tensor should not be equal to 0.

Outputs:
  • rt_nested_splits (Tensor) - The nested splits of the returned RaggedTensor, whose type is Tsplits. Its shape is equal to the shape of input starts plus 1.

  • rt_dense_values (Tensor) - The dense values of the returned RaggedTensor, whose type is the same as that of input starts. Let the size of input starts, input limits and input deltas be i,

    • if the type of the inputs starts, limits and deltas is int32 or int64, the number of elements of the output rt_dense_values is equal to \(sum(floor((abs(limits[i] - starts[i]) + abs(deltas[i]) - 1) / abs(deltas[i])))\),

    • if the type of the inputs starts, limits and deltas is float32 or float64, the number of elements of the output rt_dense_values is equal to \(sum(ceil(abs((limits[i] - starts[i]) / deltas[i])))\).

Raises:
  • TypeError – If any input is not a Tensor.

  • TypeError – If the type of starts is not one of the following dtypes: int32, int64, float32, float64.

  • TypeError – If the types of starts, limits and deltas are not the same.

  • TypeError – If the type of Tsplits is not one of the following dtypes: mstype.int32, mstype.int64.

  • ValueError – If the inputs starts, limits, and deltas are not 0-D or 1-D.

  • ValueError – If any element of the input deltas is equal to 0.

  • ValueError – If the shapes of starts, limits and deltas are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> raggedrange = ops.RaggedRange(Tsplits=mstype.int64)
>>> starts = Tensor(np.array([2, 5, 8]).astype(np.int32))
>>> limits = Tensor(np.array([3, 5, 12]).astype(np.int32))
>>> deltas = Tensor(np.array([1, 1, 1]).astype(np.int32))
>>> (rt_nested_splits, rt_dense_values) = raggedrange(starts, limits, deltas)
>>> print(rt_nested_splits)
[0 1 1 5]
>>> print(rt_dense_values)
[ 2  8  9 10 11]
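The same ragged result can be reproduced with plain Python ranges: the three ranges are [2, 3), [5, 5) and [8, 12), with sizes 1, 0 and 4, which yields the nested splits [0, 1, 1, 5]:

>>> rows = [list(range(s, l, d)) for s, l, d in [(2, 3, 1), (5, 5, 1), (8, 12, 1)]]
>>> print(rows)
[[2], [], [8, 9, 10, 11]]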
class tinyms.primitives.RandomCategorical(dtype=mindspore.int64)[source]

Generates random samples from a given categorical distribution tensor.

Parameters:

dtype (mindspore.dtype) – The type of output. Its value must be one of mindspore.int16, mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Inputs:
  • logits (Tensor) - The input tensor. 2-D Tensor with shape \((batch\_size, num\_classes)\).

  • num_sample (int) - Number of samples to be drawn. Only constant values are allowed.

  • seed (int) - Random seed. Only constant values are allowed. Default: 0.

Outputs:
  • output (Tensor) - The output Tensor with shape \((batch\_size, num\_samples)\).

Raises:
  • TypeError – If dtype is not one of the following: mindspore.int16, mindspore.int32, mindspore.int64.

  • TypeError – If logits is not a Tensor.

  • TypeError – If num_sample or seed is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...   def __init__(self, num_sample):
...     super(Net, self).__init__()
...     self.random_categorical = ops.RandomCategorical(mindspore.int64)
...     self.num_sample = num_sample
...   def construct(self, logits, seed=0):
...     return self.random_categorical(logits, self.num_sample, seed)
...
>>> x = np.random.random((10, 5)).astype(np.float32)
>>> net = Net(8)
>>> output = net(Tensor(x))
>>> result = output.shape
>>> print(result)
(10, 8)
class tinyms.primitives.RandomChoiceWithMask(count=256, seed=0, seed2=0)[source]

Generates a random sample as index tensor with a mask tensor from a given tensor.

Refer to mindspore.ops.choice_with_mask() for more details.

Parameters:
  • count (int, optional) – Number of items expected to get and the number must be greater than 0. Default: 256.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: 0.

  • seed2 (int, optional) – Second seed to avoid collision. Default: 0.

Inputs:
  • input_x (Tensor[bool]) - The input tensor. The input tensor rank must be greater than or equal to 1 and less than or equal to 5.

Outputs:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output shape is 2-D.

  • mask (Tensor) - The output shape is 1-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> rnd_choice_mask = ops.RandomChoiceWithMask()
>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool_))
>>> output_y, output_mask = rnd_choice_mask(input_x)
>>> result = output_y.shape
>>> print(result)
(256, 2)
>>> result = output_mask.shape
>>> print(result)
(256,)
class tinyms.primitives.RandomGamma(seed=0, seed2=0)[source]

Produces random positive floating-point values x, distributed according to the probability density function:

\[\text{P}(x|α) = \frac{x^{α-1}\exp(-x)}{\Gamma(α)}\]

Note

  • Random seed: A set of regular random numbers can be obtained through some complex mathematical algorithms, and the random seed is the initial value of this random number. If the random seed is the same, the random number obtained will not change.

  • Global random seed and operator-level random seed are not set: Use the default value as the random seed.

  • Global random seed is set, but operator-level random seed is not set: A global random seed will splice with a randomly generated seed.

  • Global random seed is not set, operator-level random seed is set: The default global random seed is used, and splices with the operator-level random seed.

  • Both Global random and operator-level random seed are set: The global random seed will splice with the operator-level random seed.

Parameters:
  • seed (int, optional) – The operator-level random seed, used to generate random numbers, must be non-negative. Default: 0.

  • seed2 (int, optional) – The global random seed, which combines with the operator-level random seed to determine the final generated random number, must be non-negative. Default: 0.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. It must be constant value.

  • alpha (Tensor) - α is the shape parameter of the RandomGamma distribution; it mainly determines the shape of the density curve. It must be greater than 0 and have data type float32.

Outputs:

Tensor. The shape should be equal to the concat shape between the input shape and alpha. The dtype is the same type as alpha.

Raises:
  • TypeError – If data type of seed or seed2 is not int.

  • TypeError – If shape or alpha is not a Tensor.

  • TypeError – If data type of alpha is not float32.

  • ValueError – If shape is not a constant value.

Supported Platforms:

CPU

Examples

>>> shape = Tensor(np.array([3, 1, 2]), mstype.int32)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mstype.float32)
>>> gamma = ops.RandomGamma(seed=3)
>>> output = gamma(shape, alpha)
>>> result = output.shape
>>> print(result)
(3, 1, 2, 2, 2)
class tinyms.primitives.RandomPoisson(seed=0, seed2=0, dtype=mindspore.int64)[source]

Produces random non-negative values i, distributed according to the discrete probability function:

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • seed (int, optional) – Random number seed. If either seed or seed2 is set to a non-zero value, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. Default: 0.

  • seed2 (int, optional) – A second seed to avoid seed collision. Default: 0.

  • dtype (mindspore.dtype, optional) – The type of output. Default: mstype.int64.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated, 1-D Tensor, whose dtype must be in [int32, int64].

  • rate (Tensor) - The μ parameter of the distribution, which defines the mean number of occurrences of the event. Its type must be in [float16, float32, float64, int32, int64].

Outputs:

Tensor. Its shape is \((*shape, *rate.shape)\). Its type is specified by dtype.

Raises:
  • TypeError – If shape is not a Tensor or its dtype is not int32 or int64.

  • TypeError – If dtype is not int32 or int64.

  • ValueError – If shape is not a 1-D tensor.

  • ValueError – If shape elements are negative.

Supported Platforms:

GPU CPU

Examples

>>> shape = Tensor(np.array([2, 3]), mstype.int32)
>>> rate = Tensor(np.array([2, 2]), mstype.int32)
>>> seed = 0
>>> seed2 = 0
>>> random_poisson = ops.RandomPoisson(seed=seed, seed2=seed2)
>>> output = random_poisson(shape,rate)
>>> print(output.shape)
(2, 3, 2)
class tinyms.primitives.RandomShuffle(seed=0, seed2=0)[source]

Randomly shuffles a Tensor along its first dimension.

Parameters:
  • seed (int, optional) – Random seed. If seed or seed2 is set to non-zero, the random number generator will be seeded by the given seed. Otherwise, it will be seeded randomly. The seed must be non-negative. Default: 0.

  • seed2 (int, optional) – A second seed to avoid seed collision. If seed is 0, the seed2 will be used as the seed of the random generator. It must be non-negative. Default: 0.

Inputs:
  • x (Tensor) - The Tensor to be shuffled.

Outputs:

Tensor. The shape and type are the same as the input x.

Raises:

TypeError – If data type of seed or seed2 is not int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mstype.float32)
>>> shuffle = ops.RandomShuffle(seed=1, seed2=1)
>>> output = shuffle(x)
>>> print(output.shape)
(4,)
class tinyms.primitives.Randperm(max_length=1, pad=-1, dtype=mindspore.int32)[source]

Generates n random samples from 0 to n-1 without repeating. If max_length > n, the last max_length-n elements will be filled with pad.

Parameters:
  • max_length (int) – Number of items expected to get and the number must be greater than 0. Default: 1.

  • pad (int) – The pad value to be filled. Default: -1.

  • dtype (mindspore.dtype) – The type of output. Default: mindspore.int32.

Inputs:
  • n (Tensor) - The input tensor with shape \((1,)\) and dtype int32 or int64. n must be in the range [0, max_length].

Outputs:
  • output (Tensor) - The output Tensor with shape \((max\_length,)\) and type dtype.

Supported Platforms:

Ascend GPU

Examples

>>> # The result of every execution is different because this operator will generate n random samples.
>>> randperm = ops.Randperm(max_length=30, pad=-1)
>>> n = Tensor([20], dtype=mindspore.int32)
>>> output = randperm(n)
>>> print(output)
[15 6 11 19 14 16 9 5 13 18 4 10 8 0 17 2 1 12 3 7
 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
class tinyms.primitives.Range(maxlen=1000000)[source]

Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.

Refer to mindspore.ops.range() for more details.

Parameters:

maxlen (int, optional) – Memory that can hold maxlen elements will be allocated for the output. Must be positive; defaults to 1000000. If the output has more than maxlen elements, a runtime error occurs.

Inputs:
  • start (Tensor) - A scalar Tensor. The first number in the sequence. Must have type: int32, int64, float32 or float64.

  • limit (Tensor) - A scalar Tensor. Upper limit of the sequence, exclusive. Must have type: int32, int64, float32 or float64.

  • delta (Tensor) - A scalar Tensor. Number that increments start. Must have type: int32, int64, float32 or float64.

Outputs:

A 1-D Tensor, with the same type as the inputs.

Supported Platforms:

GPU CPU

Examples

>>> start = Tensor(0, mstype.int32)
>>> limit = Tensor(10, mstype.int32)
>>> delta = Tensor(4, mstype.int32)
>>> output = ops.Range()(start, limit, delta)
>>> print(output)
[0 4 8]
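For the same arguments, the result matches NumPy's np.arange:

>>> import numpy as np
>>> print(np.arange(0, 10, 4))
[0 4 8]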
infer_value(start_value, limit_value, delta_value)[source]

Infer the value of input for Range.

class tinyms.primitives.Rank[source]

Returns the rank of a tensor.

Refer to mindspore.ops.rank() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> rank = ops.Rank()
>>> output = rank(input_tensor)
>>> print(output)
2
>>> print(type(output))
<class 'int'>
class tinyms.primitives.ReLU[source]

Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise.

Refer to mindspore.ops.relu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu = ops.ReLU()
>>> output = relu(input_x)
>>> print(output)
[[0. 4. 0.]
 [2. 0. 9.]]
class tinyms.primitives.ReLU6[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.

Refer to mindspore.ops.relu6() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu6 = ops.ReLU6()
>>> result = relu6(input_x)
>>> print(result)
[[0. 4. 0.]
 [2. 0. 6.]]
class tinyms.primitives.ReLUV2[source]

The ReLUV2 interface is deprecated, please use mindspore.ops.ReLU instead.

Rectified Linear Unit activation function.

It returns \(\max(0, x)\) element-wise; in other words, neurons with negative outputs are suppressed while active neurons are kept unchanged.

\[\text{ReLU}(x) = (x)^+ = \max(0, x)\]
Inputs:
  • input_x (Tensor) - The input tensor must be a 4-D tensor.

Outputs:
  • output (Tensor) - Has the same type and shape as the input_x.

  • mask (Tensor) - A tensor, but it is meaningless.

Supported Platforms:

deprecated

Examples

>>> input_x = Tensor(np.array([[[[1, -2], [-3, 4]], [[-5, 6], [7, -8]]]]), mindspore.float32)
>>> relu_v2 = ops.ReLUV2()
>>> output, _= relu_v2(input_x)
>>> print(output)
[[[[1. 0.]
   [0. 4.]]
  [[0. 6.]
   [7. 0.]]]]
class tinyms.primitives.Real[source]

Returns a Tensor that is the real part of the input. If input is real, it is returned unchanged.

Inputs:
  • input (Tensor) - The input tensor.

Outputs:

Tensor, the shape is the same as the input.

Raises:

TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(complex(1.3, 0.4)), mindspore.complex64)
>>> real = ops.Real()
>>> output = real(x)
>>> print(output)
1.3
class tinyms.primitives.RealDiv[source]

Divides the first input tensor by the second input tensor in floating-point type element-wise.

Refer to mindspore.ops.div() for more details.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> realdiv = ops.RealDiv()
>>> output = realdiv(x, y)
>>> print(output)
[0.25 0.4  0.5 ]
class tinyms.primitives.Reciprocal[source]

Returns reciprocal of a tensor element-wise.

\[out_{i} = \frac{1}{x_{i}}\]
Inputs:
  • x (Tensor) - The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> reciprocal = ops.Reciprocal()
>>> output = reciprocal(x)
>>> print(output)
[1.   0.5  0.25]
class tinyms.primitives.ReduceAll(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by the “logical AND” of all elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[bool]) - The input tensor. The dtype of the tensor to be reduced is bool. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical and” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAll(keep_dims=True)
>>> # case 1: Reduces a dimension by the "logicalAND" of all elements in the dimension.
>>> output = op(x)
>>> print(output)
[[False]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[ True False]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[False]
 [ True]]
class tinyms.primitives.ReduceAny(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by the “logical OR” of all elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[bool]) - The input tensor. The dtype of the tensor to be reduced is bool. \((N,*)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the “logical or” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> op = ops.ReduceAny(keep_dims=True)
>>> # case 1: Reduces a dimension by the "logical OR" of all elements in the dimension.
>>> output = op(x)
>>> print(output)
[[ True]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[ True True]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[ True]
 [ True]]
class tinyms.primitives.ReduceMax(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by the maximum value of the elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMax(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the maximum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[9.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[7. 7. 7. 7. 7. 7.]
  [8. 8. 8. 8. 8. 8.]
  [9. 9. 9. 9. 9. 9.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[3. 3. 3. 3. 3. 3.]]
 [[6. 6. 6. 6. 6. 6.]]
 [[9. 9. 9. 9. 9. 9.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
class tinyms.primitives.ReduceMean(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by averaging the elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMean(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by averaging all elements in the dimension.
>>> x = Tensor(np.array([[[2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2]],
... [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
... [[6, 6, 6, 6, 6, 6], [8, 8, 8, 8, 8, 8], [10, 10, 10, 10, 10, 10]]]),
... mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[5.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along the axis 0
>>> output = op(x, 0)
>>> print(output)
[[[4. 4. 4. 4. 4. 4.]
  [5. 5. 5. 5. 5. 5.]
  [6. 6. 6. 6. 6. 6.]]]
>>> # case 3: Reduces a dimension along the axis 1
>>> output = op(x, 1)
>>> print(output)
[[[2. 2. 2. 2. 2. 2.]]
 [[5. 5. 5. 5. 5. 5.]]
 [[8. 8. 8. 8. 8. 8.]]]
>>> # case 4: Reduces a dimension along the axis 2
>>> output = op(x, 2)
>>> print(output)
[[[ 2.]
  [ 2.]
  [ 2.]]
 [[ 4.]
  [ 5.]
  [ 6.]]
 [[ 6.]
  [ 8.]
  [10.]]]
class tinyms.primitives.ReduceMin(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by the minimum value of the elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceMin(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the minimum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[1.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]
  [2. 2. 2. 2. 2. 2.]
  [3. 3. 3. 3. 3. 3.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]]
 [[4. 4. 4. 4. 4. 4.]]
 [[7. 7. 7. 7. 7. 7.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
class tinyms.primitives.ReduceOp[source]

Operation options for reducing tensors. This is an enumerated type, not an operator.

The main calling methods are as follows:

  • SUM: ReduceOp.SUM.

  • MAX: ReduceOp.MAX.

  • MIN: ReduceOp.MIN.

  • PROD: ReduceOp.PROD.

There are four kinds of operation options, “SUM”, “MAX”, “MIN”, and “PROD”.

  • SUM: Take the sum.

  • MAX: Take the maximum.

  • MIN: Take the minimum.

  • PROD: Take the product.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with multiple devices.

>>> import numpy as np
>>> import mindspore
>>> from mindspore.communication import init
>>> from mindspore import Tensor, ops, nn
>>> from mindspore.ops import ReduceOp
>>>
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.allreduce_sum = ops.AllReduce(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.allreduce_sum(x)
...
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.ReduceProd(keep_dims=False)[source]

By default, reduces all dimensions of a tensor by multiplying the elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:

keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Must be in the range [-r, r).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceProd(keep_dims=True)
>>> output = op(x, 1)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by multiplying all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[2.2833798e+33]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[ 28.  28.  28.  28.  28.  28.]
  [ 80.  80.  80.  80.  80.  80.]
  [162. 162. 162. 162. 162. 162.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[  6.   6.   6.   6.   6.   6.]]
 [[120. 120. 120. 120. 120. 120.]]
 [[504. 504. 504. 504. 504. 504.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[1.00000e+00]
  [6.40000e+01]
  [7.29000e+02]]
 [[4.09600e+03]
  [1.56250e+04]
  [4.66560e+04]]
 [[1.17649e+05]
  [2.62144e+05]
  [5.31441e+05]]]
class tinyms.primitives.ReduceScatter(op='sum', group='hccl_world_group')[source]

Reduces and scatters tensors from the specified communication group. For more details about it, please refer to Distributed Set Communication Primitives - ReduceScatter .

Note

The tensors must have the same shape and format in all processes of the collection.

Parameters:
  • op (str) – Specifies an operation used for element-wise reductions, like SUM and MAX. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “GlobalComm.WORLD_COMM_GROUP”.

Inputs:
  • input_x (Tensor) - Input Tensor, suppose it has a shape \((N, *)\), where * means any number of additional dimensions. N must be divisible by rank_size. rank_size refers to the number of cards in the communication group.

Outputs:

Tensor, it has the same dtype as input_x with a shape of \((N/rank\_size, *)\).

Raises:
  • TypeError – If op or group is not a string.

  • ValueError – If the first dimension of the input cannot be divided by the rank_size.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi, please see the GPU tutorial .

This example should be run with 2 devices.

>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> from mindspore.ops import ReduceOp
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.reducescatter = ops.ReduceScatter(ReduceOp.SUM)
...
...     def construct(self, x):
...         return self.reducescatter(x)
...
>>> input_ = Tensor(np.ones([8, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
>>> print(output)
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]
class tinyms.primitives.ReduceStd(axis=(), unbiased=True, keep_dims=False)[source]

Returns the standard-deviation and mean of the input Tensor along dimension(s) specified by axis.

Parameters:
  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed. Let r be the rank of input_x; axis should be in the range \([-r, r)\).

  • unbiased (bool, optional) – Whether to use Bessel’s correction. If True, the unbiased estimate with Bessel’s correction is used. If False, the biased estimate is used to calculate the standard deviation. Default: True.

  • keep_dims (bool, optional) – Whether the output Tensor has dims retained or not. If True, keep these reduced dimensions specified by axis and the length is 1. If False, don’t keep these dimensions. Default: False.

Inputs:
  • input_x (Tensor[Number]) - The input Tensor, it has dtype Number with shape \((N, *)\) where \(*\) means any number of additional dimensions.

Outputs:

Tuple(output_std, output_mean) containing the standard deviation and mean.

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> op = ops.ReduceStd(axis=1, unbiased=True, keep_dims=False)
>>> output = op(input_x)
>>> output_std, output_mean = output[0], output[1]
>>> print(output_std)
[1.        2.5166113]
>>> print(output_mean)
[2.        1.3333334]
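For reference, the same values follow from NumPy with Bessel's correction (ddof=1):

>>> import numpy as np
>>> a = np.array([[1, 2, 3], [-1, 1, 4]], dtype=np.float32)
>>> print(np.std(a, axis=1, ddof=1))
[1.        2.5166113]
>>> print(np.mean(a, axis=1))
[2.        1.3333334]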
class tinyms.primitives.ReduceSum(keep_dims=False, skip_mode=False)[source]

By default, reduces all dimensions of a tensor by summing the elements along them. It can also reduce a dimension of x along a given axis. keep_dims controls whether the output keeps the reduced dimensions with length 1.

Parameters:
  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

  • skip_mode (bool) – If true and axis is an empty tuple or empty list, the ReduceSum operation is skipped. If true and axis has any other value, the ReduceSum calculation is performed normally. If false, reduction is always performed. Default: False.

Inputs:
  • x (Tensor[Number]) - The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions, its rank should be less than 8.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions when skip_mode is false. Only constant value is allowed. Must be in the range [-rank(x), rank(x)).

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), keep_dims is False, and skip_mode is False, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is (), and skip_mode is True, the ReduceSum operation is not performed, output tensor is equal to the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int) or list(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = ops.ReduceSum(keep_dims=True)
>>> output = op(x, 1)
>>> output.shape
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by summing all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = op(x)
>>> print(output)
[[[270.]]]
>>> print(output.shape)
(1, 1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = op(x, 0)
>>> print(output)
[[[12. 12. 12. 12. 12. 12.]
  [15. 15. 15. 15. 15. 15.]
  [18. 18. 18. 18. 18. 18.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = op(x, 1)
>>> print(output)
[[[ 6.  6.  6.  6.  6.  6.]]
 [[15. 15. 15. 15. 15. 15.]]
 [[24. 24. 24. 24. 24. 24.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = op(x, 2)
>>> print(output)
[[[ 6.]
  [12.]
  [18.]]
 [[24.]
  [30.]
  [36.]]
 [[42.]
  [48.]
  [54.]]]
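
As described in the Outputs above, setting skip_mode=True with an empty axis skips the reduction entirely. A minimal sketch continuing the example (the output is the unchanged input):

>>> op_skip = ops.ReduceSum(keep_dims=False, skip_mode=True)
>>> output = op_skip(x, ())
>>> print(output.shape)   # unchanged: (3, 3, 6)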
infer_value(input_x, axis)[source]

Infer and return the value of the reduce operation at compile time.

class tinyms.primitives.Renorm(p, dim, maxnorm)[source]

Renormalizes the sub-tensors along dimension dim so that each sub-tensor’s p-norm does not exceed maxnorm. A sub-tensor is left unchanged if its p-norm is less than maxnorm; otherwise each of its values is divided by the sub-tensor’s p-norm and then multiplied by maxnorm.

Refer to mindspore.ops.renorm() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), mindspore.float32)
>>> y = ops.Renorm(p=1, dim=0, maxnorm=5.)(x)
>>> print(y)
[[1.        1.        1.       ]
 [1.6666666 1.6666666 1.6666666]
 [1.6666667 1.6666667 1.6666667]]
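
The result can be verified by hand: the L1 norms of the three rows are 3, 6 and 9, and only rows whose norm exceeds maxnorm=5 are rescaled by maxnorm / norm. A NumPy sketch of the same computation (assuming np is NumPy):

>>> x_np = np.array([[1., 1., 1.], [2., 2., 2.], [3., 3., 3.]])
>>> norms = np.abs(x_np).sum(axis=1, keepdims=True)   # L1 norms: [[3.], [6.], [9.]]
>>> scale = np.where(norms > 5., 5. / norms, 1.)      # shrink only rows over maxnorm
>>> x_np * scale    # rows 2 and 3 become approximately 1.6667, as printed above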
class tinyms.primitives.Reshape[source]

Rearranges the input Tensor based on the given shape.

Refer to mindspore.ops.reshape() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> reshape = ops.Reshape()
>>> output = reshape(input_x, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
infer_value(x, shape)[source]

Infer and return the reshaped value at compile time.

class tinyms.primitives.ResizeArea(align_corners=False)[source]

Resize images to a certain size using area interpolation.

The resizing process only changes the two dimensions of images, which represent the width and height of images.

Warning

The values of size must be greater than zero.

Parameters:

align_corners (bool, optional) – A boolean flag that specifies whether to align the centers of the four corner pixels of the input and output tensors. When this flag is set to True, the corner pixels of the output tensor are aligned with the corner pixels of the input tensor, which preserves the values at the corner pixels. Default: False.

Inputs:
  • images (Tensor) - Input images must be a 4-D tensor with shape \((batch, height, width, channels)\). The format must be “NHWC”. Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • size (Tensor) - Input size must be a 1-D tensor of 2 elements: new_height, new_width. The new size of output image. Types allowed: int32.

Outputs:

A 4-D tensor of shape \((batch, new\_height, new\_width, channels)\) with type float32.

Raises:
  • TypeError – If dtype of images is not supported.

  • TypeError – If dtype of size is not int32.

  • TypeError – If dtype of align_corners is not bool.

  • ValueError – If the num of inputs is not 2.

  • ValueError – If the dimension of images is not 4.

  • ValueError – If the dimension of size is not 1.

  • ValueError – If the element num of size is not 2.

  • ValueError – If any value of size is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> images = Tensor([[[[2], [4], [6], [8]], [[10], [12], [14], [16]]]], mindspore.float16)
>>> size = Tensor([1, 2], mindspore.int32)
>>> resizearea = ops.ResizeArea()
>>> output = resizearea(images, size)
>>> print(output.asnumpy())
[[[[ 7.]
   [11.]]]]
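
For this input the area method reduces to averaging each 2×2 block of the 2×4 image, which is easy to verify directly (a sketch, assuming np is NumPy):

>>> np.array([[2., 4.], [10., 12.]]).mean()     # 7.0, the first output pixel
>>> np.array([[6., 8.], [14., 16.]]).mean()     # 11.0, the second output pixel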
class tinyms.primitives.ResizeBicubic(align_corners=False, half_pixel_centers=False)[source]

Resizes images to the given size using bicubic interpolation.

Parameters:
  • align_corners (bool, optional) – If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Default: False.

  • half_pixel_centers (bool, optional) – Whether to use half-pixel center alignment. If set to True, align_corners should be False. Default: False.

Inputs:
  • images (Tensor) - The input image must be a 4-D tensor of shape \((batch, channels, height, width)\). The format must be NCHW. Types allowed: int8, int16, int32, int64, float16, float32, float64, uint8, uint16.

  • size (Tensor) - A 1-D tensor of shape [2], with 2 elements: new_height, new_width. Types allowed: int32.

Outputs:

A 4-D tensor of shape \((batch, channels, new\_height, new\_width)\) with type float32.

Raises:
  • TypeError – If images type is not allowed.

  • TypeError – If size type is not int32.

  • TypeError – If align_corners type is not bool.

  • TypeError – If half_pixel_centers type is not bool.

  • ValueError – If images dim is not 4.

  • ValueError – If size dim is not 1.

  • ValueError – If size size is not 2.

  • ValueError – If any size value is not positive.

  • ValueError – If align_corners and half_pixel_centers value are both True.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class NetResizeBicubic(nn.Cell):
...     def __init__(self):
...         super(NetResizeBicubic, self).__init__()
...         align_corners = False
...         half_pixel_centers = False
...         self.resize = ops.ResizeBicubic(align_corners, half_pixel_centers)
...
...     def construct(self, images, size):
...         return self.resize(images, size)
...
>>> images = Tensor(np.array([1, 2, 3, 4]).reshape(1, 2, 2, 1).astype(np.float32))
>>> size = Tensor([1, 4], mindspore.int32)
>>> resizebicubic = NetResizeBicubic()
>>> output = resizebicubic(images, size)
>>> print(output)
[[[[1.     ]
   [1.5    ]
   [2.     ]
   [2.09375]]]]
class tinyms.primitives.ResizeBilinear(size, align_corners=False, half_pixel_centers=False)[source]

This API is deprecated, please use the mindspore.ops.ResizeBilinearV2 instead. For general resizing with other interpolation methods, refer to mindspore.ops.interpolate() for more details.

Note

Dynamic shape feature is not supported for now.

Supported Platforms:

Ascend GPU CPU

class tinyms.primitives.ResizeBilinearV2(align_corners=False, half_pixel_centers=False)[source]

Resizes an image to a certain size using the bilinear interpolation.

The resizing only affects the lower two dimensions which represent the height and width.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • align_corners (bool, optional) – If true, rescale input by \((new\_height - 1) / (height - 1)\), which exactly aligns the 4 corners of images and resized images. If false, rescale by \(new\_height / height\). Default: False.

  • half_pixel_centers (bool, optional) – Whether half pixel center. If set to True, align_corners should be False. Default: False.

Inputs:
  • x (Tensor): Image to be resized. Input images must be a 4-D tensor with shape \((batch, channels, height, width)\), with data type of float32 or float16.

  • size (Union[tuple[int], list[int], Tensor]): The new size of the images. A tuple or list or Tensor of 2 int elements \((new\_height, new\_width)\).

Outputs:

Tensor, resized image. 4-D with shape \((batch, channels, new\_height, new\_width)\), with the same data type as input x.

Raises:
  • TypeError – If align_corners is not a bool.

  • TypeError – If half_pixel_centers is not a bool.

  • TypeError – If align_corners and half_pixel_centers are both True.

  • ValueError – If half_pixel_centers is True and device_target is CPU.

  • ValueError – If dim of x is not 4.

  • ValueError – If size is Tensor and its dim is not 1.

  • ValueError – If size contains other than 2 elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.float32)
>>> output = ops.ResizeBilinearV2()(x, (5, 5))
>>> print(output)
[[[[1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]
   [1. 2. 3. 4. 5.]]]]
class tinyms.primitives.ResizeLinear1D(coordinate_transformation_mode='align_corners')[source]

Resizes the input tensor x using linear interpolation.

For general resize, refer to mindspore.ops.interpolate() for more details.

Warning

  • This is an experimental API that is subject to change.

  • Currently, the Ascend platform only supports scenarios where the input size is Tuple or List.

Parameters:

coordinate_transformation_mode (str) – Describes how a coordinate in the resized tensor is transformed to a coordinate in the original tensor. The other valid option is ‘half_pixel’. Default: ‘align_corners’.

Inputs:
  • x (Tensor) - A 3-D tensor which to resize, with shape [batch, channel, width]. Must be one of the following types: uint8, int8, int16, int32, int64, float16, float32, double.

  • size (Union[Tuple[int], List[int], Tensor[int]]): Describes the new width of x. A tuple, list or 1-D tensor with only one int element \((new\_width)\).

Outputs:

A 3-D tensor which shape is [batch, channel, new_width] with the same type as x.

Raises:
  • TypeError – If dtype of x is not in the support list.

  • TypeError – If size is not in Union[Tuple[int], List[int], Tensor[int]].

  • TypeError – If coordinate_transformation_mode is not a string.

  • TypeError – If coordinate_transformation_mode is not in the support list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[1, 2, 3], [4, 5, 6]]], mindspore.float32)
>>> size = (6,)
>>> resize_linear_1d = ops.ResizeLinear1D(coordinate_transformation_mode="align_corners")
>>> output = resize_linear_1d(x, size)
>>> print(output)
[[[1. 1.4 1.8 2.2 2.6 3.]
  [4. 4.4 4.8 5.2 5.6 6.]]]
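
With coordinate_transformation_mode="align_corners", output position i samples the input at i * (in_width - 1) / (out_width - 1), so the first row [1, 2, 3] is read at positions [0, 0.4, 0.8, 1.2, 1.6, 2.0]. The same numbers fall out of np.interp (a cross-check sketch):

>>> pos = np.arange(6) * (3 - 1) / (6 - 1)     # [0., 0.4, 0.8, 1.2, 1.6, 2.]
>>> np.interp(pos, [0, 1, 2], [1, 2, 3])       # [1., 1.4, 1.8, 2.2, 2.6, 3.]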
class tinyms.primitives.ResizeNearestNeighbor(size, align_corners=False)[source]

Resizes the input tensor to a given size by using the nearest neighbor algorithm. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters:
  • size (Union[tuple, list]) – The target size. The dimension of size must be 2.

  • align_corners (bool) – Whether the centers of the 4 corner pixels of the input and output tensors are aligned. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor. The shape of the tensor is \((N, C, H, W)\).

Outputs:

Tensor, the shape of the output tensor is \((N, C, NEW\_H, NEW\_W)\). The data type is the same as the input_x.

Raises:
  • TypeError – If size is neither tuple nor list.

  • TypeError – If align_corners is not a bool.

  • ValueError – If length of size is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_tensor = Tensor(np.array([[[[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]]]), mindspore.float32)
>>> size = (2, 2)
>>> output = ops.ResizeNearestNeighbor(size=size)(input_tensor)
>>> print(output)
[[[[-0.1  0.3]
   [ 0.4  0.5]]]]
class tinyms.primitives.ResizeNearestNeighborV2(align_corners=False, half_pixel_centers=False, data_format='NHWC')[source]

Resizes the input tensor to a specific size by using the nearest neighbor algorithm.

The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters:
  • align_corners (bool, optional) – If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Default: False.

  • half_pixel_centers (bool, optional) – Whether half pixel center. If set to True, align_corners should be False. Default: False.

  • data_format (str, optional) – An optional string that describes the format of the input x. Default: NHWC.

Inputs:
  • x (Tensor) - 4-D with shape \((batch, height, width, channels)\) or \((batch, channels, height, width)\) depending on the attr ‘data_format’. Supported types: [int8, uint8, int16, uint16, int32, int64, float16, float32, float64].

  • size (Tensor) - The new size for the images. A 1-D int32 Tensor of 2 elements: [new_height, new_width].

Outputs:
  • y (Tensor) - The resized images. A 4-D with shape \((batch, new\_height, new\_width, channels)\) or \((batch, channels, new\_height, new\_width)\) depending on the attr data_format. It has the same dtype as x.

Raises:
  • TypeError – If x or size is not a Tensor.

  • TypeError – If the data type of x is not in supported list.

  • TypeError – If the data type of size is not int32.

  • TypeError – If align_corners or half_pixel_centers is not bool.

  • TypeError – If data_format is not string.

  • ValueError – If data_format not in [NHWC, NCHW].

  • ValueError – If any value of size is not positive.

  • ValueError – If the dimension of x is not 4.

  • ValueError – If the dimension of size is not 1.

  • ValueError – If the elements number of size is not 2.

  • ValueError – If attr half_pixel_centers and align_corners are True at the same time.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.ones((1, 4, 4, 1)), mstype.float32)
>>> size = Tensor([2, 2], mstype.int32)
>>> resize = ops.ResizeNearestNeighborV2()
>>> output = resize(input_tensor, size)
>>> print(output)
[[[[1.]
   [1.]]
  [[1.]
   [1.]]]]
>>> print(output.shape)
(1, 2, 2, 1)
class tinyms.primitives.ReverseSequence(seq_dim, batch_dim=0)[source]

Reverses variable length slices.

Parameters:
  • seq_dim (int) – The dimension where reversal is performed. Required.

  • batch_dim (int) – The input is sliced in this dimension. Default: 0.

Inputs:
  • x (Tensor) - The input to reverse, supporting all number types including bool.

  • seq_lengths (Tensor) - Must be a 1-D vector with int32 or int64 types.

Outputs:

Tensor, with the same shape and data type as x.

Raises:
  • TypeError – If seq_dim or batch_dim is not an int.

  • ValueError – If value of batch_dim is equal to or greater than length of shape of x .

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 2. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=0, batch_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[1. 5. 9.]
 [4. 2. 6.]
 [7. 8. 3.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([2, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[2. 1. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([3, 2, 3]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[3. 2. 1.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([4, 4]))
>>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
>>> output = reverse_sequence(x, seq_lengths)
>>> print(output)
[[4. 3. 2. 1.]
 [8. 7. 6. 5.]]
class tinyms.primitives.ReverseV2(axis)[source]

Reverses specific dimensions of a tensor.

Warning

The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input_x”.

Parameters:

axis (Union[tuple(int), list(int)]) – The indices of the dimensions to reverse.

Inputs:
  • input_x (Tensor) - The target tensor. The data type is Number except float64. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If axis is neither list nor tuple.

  • TypeError – If element of axis is not an int.

  • ValueError – If there are multiple identical axes in axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> op = ops.ReverseV2(axis=[1])
>>> output = op(input_x)
>>> print(output)
[[4 3 2 1]
 [8 7 6 5]]
>>> op = ops.ReverseV2(axis=[1, 0])
>>> output = op(input_x)
>>> print(output)
[[8 7 6 5]
 [4 3 2 1]]
class tinyms.primitives.RightShift[source]

Shifts the value at each position of the Tensor input_x to the right by the corresponding number of bits in the Tensor input_y. The two inputs must have consistent dtypes, and their shapes must be broadcastable.

\[\begin{aligned} &out_{i} =x_{i} >> y_{i} \end{aligned}\]

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • input_x (Tensor) - The target tensor, will be shifted to the right by input_y bits element-wise.

  • input_y (Tensor) - Number of bits shifted, the tensor must have the same type as input_x.

Outputs:
  • output (Tensor) - The output tensor, has the same type as input_x.

Raises:
  • TypeError – If input_x or input_y is not tensor.

  • TypeError – If input_x and input_y could not be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> rightshift = ops.RightShift()
>>> input_x = Tensor(np.array([1, 2, 3]).astype(np.uint8))
>>> input_y = Tensor(np.array([1, 1, 1]).astype(np.uint8))
>>> output = rightshift(input_x, input_y)
>>> print(output)
[0 1 1]
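
The same result follows from NumPy’s elementwise shift, which makes a convenient cross-check (a sketch, assuming np is NumPy):

>>> np.right_shift(np.array([1, 2, 3], dtype=np.uint8), 1)   # [0, 1, 1]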
class tinyms.primitives.Rint[source]

Returns an integer that is closest to input_x element-wise.

Inputs:
  • input_x (Tensor) - The target tensor, which must be one of the following types: float16, float32, float64. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor, has the same shape and type as input_x.

Raises:

TypeError – If dtype of input_x is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([-1.6, -0.1, 1.5, 2.0]), mindspore.float32)
>>> op = ops.Rint()
>>> output = op(input_x)
>>> print(output)
[-2.  0.  2.  2.]
>>> input_x = Tensor(np.array([[-2.0, -1.9, -1.8, -1.7, -1.6],
...                            [-2.0, -1.9, -1.8, -1.7, -1.6]]), mindspore.float32)
>>> output = op(input_x)
>>> print(output)
[[-2. -2. -2. -2. -2.]
 [-2. -2. -2. -2. -2.]]
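
Halfway cases are rounded to the nearest even integer, the same convention as np.rint, which is why 1.5 above maps to 2. A quick cross-check sketch:

>>> np.rint(np.array([1.5, 2.5, -1.6, 2.0], dtype=np.float32))   # [2., 2., -2., 2.]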
class tinyms.primitives.Roll(shift, axis)[source]

Rolls the elements of a tensor along an axis.

Refer to mindspore.ops.roll() for more details.

Parameters:
  • shift (Union[list(int), tuple(int), int]) – Specifies the number of places by which elements are shifted positively (towards larger indices) along the specified dimension. Negative shifts will roll the elements in the opposite direction.

  • axis (Union[list(int), tuple(int), int]) – Specifies the dimension indexes of shape to be rolled.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([0, 1, 2, 3, 4]).astype(np.float32))
>>> op = ops.Roll(shift=2, axis=0)
>>> output = op(input_x)
>>> print(output)
[3. 4. 0. 1. 2.]
>>> input_x = Tensor(np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]).astype(np.float32))
>>> op = ops.Roll(shift=-1, axis=0)
>>> output = op(input_x)
>>> print(output)
[[5. 6. 7. 8. 9.]
 [0. 1. 2. 3. 4.]]
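
The behaviour matches np.roll along the given axis, so the first example can be cross-checked directly (a sketch):

>>> np.roll(np.array([0., 1., 2., 3., 4.]), shift=2, axis=0)   # [3., 4., 0., 1., 2.]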
class tinyms.primitives.Round[source]

Returns half to even of a tensor element-wise.

Refer to mindspore.ops.round() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> round = ops.Round()
>>> output = round(x)
>>> print(output)
[ 1.  2.  2.  2. -4.]
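
Halves are rounded to the nearest even value, which is why both 1.5 and 2.5 map to 2 while -4.5 maps to -4. np.round follows the same convention and serves as a cross-check (a sketch):

>>> np.round(np.array([0.8, 1.5, 2.3, 2.5, -4.5]))   # [ 1.  2.  2.  2. -4.]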
class tinyms.primitives.Rsqrt[source]

Computes reciprocal of square root of input tensor element-wise.

\[out_{i} = \frac{1}{\sqrt{x_{i}}}\]
Inputs:
  • x (Tensor) - The input of Rsqrt. Its rank must be in [0, 7] inclusive and each element must be a non-negative number.

Outputs:

Tensor, has the same type and shape as x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor([[4, 4], [9, 9]], mindspore.float32)
>>> rsqrt = ops.Rsqrt()
>>> output = rsqrt(input_tensor)
>>> print(output)
[[0.5        0.5       ]
 [0.33333334 0.33333334]]
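
Numerically this is just 1 / sqrt(x) applied elementwise, so the example can be verified with NumPy (a sketch):

>>> 1.0 / np.sqrt(np.array([[4., 4.], [9., 9.]], dtype=np.float32))   # [[0.5, 0.5], [0.33333334, 0.33333334]]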
class tinyms.primitives.SGD(dampening=0.0, weight_decay=0.0, nesterov=False)[source]

Computes the stochastic gradient descent. Momentum is optional.

Nesterov momentum is based on the formula from paper On the importance of initialization and momentum in deep learning.

Note

If parameters are not grouped, the weight_decay in optimizer will be applied on the network parameters without ‘beta’ or ‘gamma’ in their names. Users can group parameters to change the strategy of decaying weight. When parameters are grouped, each group can set weight_decay. If not, the weight_decay in optimizer will be applied. For more details, please refer to mindspore.nn.SGD.

Parameters:
  • dampening (float) – The dampening for momentum. Default: 0.0.

  • weight_decay (float) – Weight decay (L2 penalty). Default: 0.0.

  • nesterov (bool) – Enable Nesterov momentum. Default: False.

Inputs:
  • parameters (Tensor) - Parameters to be updated. With float16 or float32 data type.

  • gradient (Tensor) - Gradient, with float16 or float32 data type.

  • learning_rate (Tensor) - Learning rate, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32)

  • accum (Tensor) - Accum(velocity) to be updated. With float16 or float32 data type.

  • momentum (Tensor) - Momentum, a scalar tensor with float16 or float32 data type. e.g. Tensor(0.1, mindspore.float32).

  • stat (Tensor) - States to be updated with the same shape as gradient, with float16 or float32 data type.

Outputs:

Tensor, parameters to be updated.

Raises:
  • TypeError – If dampening or weight_decay is not a float.

  • TypeError – If nesterov is not a bool.

  • TypeError – If parameters, gradient, learning_rate, accum, momentum or stat is not a Tensor.

  • TypeError – If dtype of parameters, gradient, learning_rate, accum, momentum or stat is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sgd = ops.SGD()
>>> parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
>>> gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
>>> learning_rate = Tensor(0.01, mindspore.float32)
>>> accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
>>> momentum = Tensor(0.1, mindspore.float32)
>>> stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
>>> output = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
>>> print(output.asnumpy())
[1.99 -0.4903 1.695 3.9801]
class tinyms.primitives.STFT(n_fft, hop_length, win_length, normalized, onesided, return_complex)[source]

Applies Short-time Fourier transform (STFT) on input signal.

STFT segments the signal into narrow time intervals and takes the Fourier transform of each segment to quantify the change of a nonstationary signal’s frequency and phase content over time.

Refer to mindspore.ops.stft() for more details.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore as ms
>>> from mindspore.ops import STFT
>>> import numpy as np
>>> x = ms.Tensor(np.random.rand(2,7192), ms.float32)
>>> window = ms.Tensor(np.random.rand(64), ms.float32)
>>> stft = STFT(64, 16, 64, False, True, True)
>>> output = stft(x, window)
>>> print(output.shape)
(2, 33, 446)
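
The printed shape can be derived from the arguments: with onesided=True the frequency axis keeps n_fft // 2 + 1 = 33 bins, and, assuming no centre padding at this level (which is consistent with the shape above), the number of frames is (7192 - n_fft) // hop_length + 1 = 446. A quick arithmetic sketch:

>>> n_fft, hop_length, length = 64, 16, 7192
>>> n_fft // 2 + 1                        # 33 one-sided frequency bins
>>> (length - n_fft) // hop_length + 1    # 446 frames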
class tinyms.primitives.SampleDistortedBoundingBoxV2(seed=0, seed2=0, aspect_ratio_range=(0.75, 1.33), area_range=(0.05, 1.0), max_attempts=100, use_image_if_no_bounding_boxes=False)[source]

Creates a single bounding box that is randomly distorted for an image.

It is often used for object localization and image recognition tasks. In such tasks, bounding box annotations are supplied in addition to ground-truth labels, and data augmentation techniques are often used to randomly distort an image while preserving its content.

This function takes the image_size, bounding_boxes, and a series of constraints as input, and outputs a randomly distorted localization of an object (i.e., bounding box) based on these inputs.

The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into mindspore.ops.Slice to crop the image. The latter is the generated distorted bounding box.

Parameters:
  • seed (int, optional) – Random number seed. If either seed or seed2 is set to a non-zero value, the seed is set to the given value. Otherwise, a random seed is used. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

  • aspect_ratio_range (Union[list(float), tuple(float)], optional) – Specifies the valid range of aspect ratios of the cropped area, where aspect ratio = area_width / area_height. The values of this attribute should be positive. Default: (0.75, 1.33).

  • area_range (Union[list(float), tuple(float)], optional) – The cropped area of the image must contain a fraction of the supplied image within this range. The value of this attribute should be in range (0.0, 1.0]. Default: (0.05, 1.0).

  • max_attempts (int, optional) – A positive integer specifying the number of attempts that will be made to generate a cropped region of the image based on the given constraints. If the maximum number of attempts is exceeded without success, the function will return the entire original image. Default: 100.

  • use_image_if_no_bounding_boxes (bool, optional) – Controls the behavior when no bounding boxes are supplied (bounding_boxes has shape \((0, N, 4)\) or \((batch, 0, 4)\)). If this attribute is True, an implicit bounding box covering the whole input is assumed; if it is False, an error is raised. Default: False.

Inputs:
  • image_size (Tensor) - 1-D Tensor, containing [height, width, channels]. The value of this input tensor should be positive.

  • bounding_boxes (Tensor) - 3-D Tensor with shape \((batch, N, 4)\) describing the N bounding boxes associated with the image. The value of this input tensor should be in range [0.0, 1.0]. The data type is float32.

  • min_object_covered (Tensor) - The minimum fraction of any supplied bounding box that the cropped area must cover. This parameter’s value should be between 0.0 and 1.0, inclusive. If the value is 0, the cropped area does not need to overlap with any of the supplied bounding boxes. The data type is float32.

Outputs:
  • begin (Tensor) - A 1-D Tensor, containing [offset_height, offset_width, 0]. The data type is same as image_size.

  • size (Tensor) - A 1-D Tensor, containing [target_height, target_width, -1]. The data type is same as image_size. When the data type of image_size is uint8, the last value of size, which is originally -1, will be forced to 255.

  • bboxes (Tensor) - A 3-D Tensor with shape \((1, 1, 4)\), containing the distorted bounding box. The data type is float32.

Raises:
  • TypeError – If image_size is not a Tensor.

  • TypeError – If bounding_boxes is not a Tensor.

  • TypeError – If min_object_covered is not a Tensor.

  • TypeError – If seed or seed2 is not an int.

  • TypeError – If aspect_ratio_range is not a list or a tuple with type float.

  • TypeError – If area_range is not a list or a tuple with type float.

  • TypeError – If use_image_if_no_bounding_boxes is not a bool.

  • ValueError – If the dimension of image_size is not 1.

  • ValueError – If the elements of image_size is not 3.

  • ValueError – If the dimension of bounding_boxes is not 3.

  • ValueError – If the elements of each bounding box in bounding_boxes is not 4.

  • ValueError – If the elements of min_object_covered is not 1.

  • ValueError – If the elements of aspect_ratio_range list or tuple is not 2.

  • ValueError – If the values of aspect_ratio_range is not positive.

  • ValueError – If the second value of aspect_ratio_range is less than or equal to the first one.

  • ValueError – If the elements of area_range list or tuple is not 2.

  • ValueError – If the values of area_range is out of range (0.0, 1.0].

  • ValueError – If the second value of area_range is less than or equal to the first one.

  • ValueError – If the value of max_attempts is not positive int.

  • ValueError – If use_image_if_no_bounding_boxes is False and no bounding boxes supplied.

  • RuntimeError – If the values of image_size is not positive.

  • RuntimeError – If the values of bounding_boxes is out of range [0.0, 1.0].

  • RuntimeError – If the bounding_boxes cannot make up bounding box.

  • RuntimeError – If the value of min_object_covered is out of range [0.0, 1.0].

Supported Platforms:

Ascend CPU

Examples

>>> image_size = Tensor([640, 480, 3], mindspore.int32)
>>> bounding_boxes = Tensor([[[0.38, 0.17, 0.95, 0.40]]], mindspore.float32)
>>> min_object_covered = Tensor([0.8], mindspore.float32)
>>> sample_distorted_bounding_box_v2 = \
...   ops.SampleDistortedBoundingBoxV2(seed=1, seed2=1, aspect_ratio_range=(0.9, 1.1),
...                                    area_range=(0.1,1.0), max_attempts=100,
...                                    use_image_if_no_bounding_boxes=False)
>>> output = sample_distorted_bounding_box_v2(image_size, bounding_boxes, min_object_covered)
>>> begin, size, bboxes = output[0], output[1], output[2]
>>> print(begin)
[133   1   0]
>>> print(size)
[502 457  -1]
>>> print(bboxes)
[[[0.2078125  0.00208333 0.9921875  0.95416665]]]
class tinyms.primitives.ScalarCast[source]

Casts the input scalar to another type.

Refer to mindspore.ops.scalar_cast() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> scalar_cast = ops.ScalarCast()
>>> output = scalar_cast(255.0, mindspore.int32)
>>> print(output)
255
class tinyms.primitives.ScalarSummary[source]

This operator writes a scalar to a summary file in protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight; see the MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable, it must not be an empty string.

  • value (Tensor) - The value of scalar, and the dim of value must be 0 or 1.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_context
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.ScalarSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         name = "x"
...         self.summary(name, x)
...         x = self.add(x, y)
...         return x
>>> set_context(mode=mindspore.GRAPH_MODE)
>>> summary = SummaryDemo()(Tensor(3), Tensor(4))
>>> print(summary)
7
class tinyms.primitives.ScalarToArray[source]

The ScalarToArray primitive is deprecated. Please use the mindspore.ops.ScalarToTensor instead.

class tinyms.primitives.ScalarToTensor[source]

Converts a scalar to a Tensor, and converts the data type to the specified type.

Refer to mindspore.ops.scalar_to_tensor() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScalarToTensor()
>>> data = 1
>>> output = op(data, mindspore.float32)
>>> print(output)
1.0
class tinyms.primitives.ScaleAndTranslate(kernel_type='lanczos3', antialias=True)[source]

Scales and translates the input image tensor.

Note

  • Input images must be a 4-D tensor.

  • Input size, scale and translation must be a 1-D tensor with two elements.

Parameters:
  • kernel_type (str, optional) – Decides which image filtering algorithm to use. Valid options: [“lanczos1”, “lanczos3”, “lanczos5”, “gaussian”, “box”, “triangle”, “keyscubic”, “mitchellcubic”]. Default: “lanczos3”.

  • antialias (bool, optional) – Decides whether to use antialiasing. Default: True.

Inputs:
  • images (Tensor) - A 4-D tensor of shape \((batch, image\_height, image\_width, channel)\).

  • size (Tensor) - The size of the output image after scale and translate operations. A 1-D tensor with two positive elements whose dtype is int32 and shape must be \((2,)\).

  • scale (Tensor) - Indicates the zoom factor. A 1-D tensor with two positive elements whose dtype is float32 and shape must be \((2,)\).

  • translation (Tensor) - The translation applied to the image, in pixels. A 1-D tensor with two elements whose dtype is float32 and shape must be \((2,)\).

Outputs:

A 4-D tensor with type: float32 and shape \((batch, size[0], size[1], channel)\).

Raises:
  • TypeError – If kernel_type is not str.

  • TypeError – If antialias is not bool.

  • TypeError – If images is not tensor with valid dtype.

  • TypeError – If size is not a tensor of int32.

  • TypeError – If scale is not a tensor of float32.

  • TypeError – If translation is not a tensor of float32.

  • ValueError – If kernel_type is not in [“lanczos1”, “lanczos3”, “lanczos5”, “gaussian”, “box”, “triangle”, “keyscubic”, “mitchellcubic”].

  • ValueError – If the rank of images is not 4.

  • ValueError – If the shape of size is not \((2,)\).

  • ValueError – If the shape of scale is not \((2,)\).

  • ValueError – If the shape of translation is not \((2,)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScaleAndTranslate()
>>> image = Tensor(np.array([[[[9.0], [5.0], [2.0], [1.0]],
...                           [[6.0], [1.0], [9.0], [7.0]]]]), mindspore.float32)
>>> size = Tensor(np.array([2, 2]).astype(np.int32))
>>> scale = Tensor(np.array([1, 1]).astype(np.float32))
>>> translation = Tensor(np.array([1, 1]).astype(np.float32))
>>> output = op(image, size, scale, translation)
>>> print(output)
[[[[0.]
   [0.]]
  [[0.]
   [9.]]]]
class tinyms.primitives.ScatterAdd(use_locking=False)[source]

Updates the value of the input tensor through the addition operation.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{+}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Note

This is an in-place update operator. Therefore, the input_x will be updated after the operation is completed.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. If true, input_x will be protected by the lock. Otherwise, the calculation result is undefined. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do the add operation, whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor doing the add operation with input_x; the data type is the same as input_x and the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[1. 1. 1.]
 [3. 3. 3.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [3.0, 3.0, 3.0] + [7.0, 7.0, 7.0] = [10.0, 10.0, 10.0]
>>> # input_x[1] = [10.0, 10.0, 10.0] + [9.0, 9.0, 9.0] = [19.0, 19.0, 19.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 1.  1.  1.]
 [19. 19. 19.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
>>> # input_x[1] = [8.0, 8.0, 8.0] + [9.0, 9.0, 9.0] = [17.0, 17.0, 17.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 3.  3.  3.]
 [17. 17. 17.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
>>> # input_x[1] = [3.0, 3.0, 3.0] + [9.0, 9.0, 9.0] = [12.0, 12.0, 12.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_add = ops.ScatterAdd()
>>> output = scatter_add(input_x, indices, updates)
>>> print(output)
[[ 8.  8.  8.]
 [12. 12. 12.]]
class tinyms.primitives.ScatterAddWithAxis(axis=0)[source]

‘ops.ScatterAddWithAxis’ is deprecated from version 2.0 and will be removed in a future version; use ‘ops.TensorScatterElements’ instead.

Supported Platforms:

Deprecated

Examples

>>> op = ops.ScatterAddWithAxis(0)
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> indices = Tensor(np.array([[1, 0, 2], [0, 2, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[1, 1, 1], [1, 1, 1]]), mindspore.float32)
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 2.  3.  3.]
 [ 5.  5.  7.]
 [ 7.  9. 10.]]
>>> op = ops.ScatterAddWithAxis(1)
>>> input_x = Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.int32)
>>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
>>> updates = Tensor(np.array([[8, 8]]), mindspore.int32)
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 1  2 11  4 13]]
class tinyms.primitives.ScatterDiv(use_locking=False)[source]

Updates the value of the input tensor through the divide operation.

Using given values to update tensor value through the div operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do divide operation whose data type must be mstype.int32 or mstype.int64.

  • updates (Tensor) - The tensor doing the divide operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, the input data dimension of input_x , indices and updates is greater than 8 dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mstype.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
 [  3.   3.   3.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mstype.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0 63.0 63.0]
>>> # input_x[1] = [63.0 63.0 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
 [ 9.  9.  9.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [7.0, 7.0, 7.0] = [15.0, 15.0, 15.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
>>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[21. 21. 21.]
 [15. 15. 15.]]
class tinyms.primitives.ScatterMax(use_locking=False)[source]

Updates the value of the input tensor through the maximum operation.

Using given values to update tensor value through the max operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = max(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do max operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor that performs the maximum operation with input_x, the data type is the same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, the input data dimension of input_x , indices and updates is greater than 8 dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32),
...                     name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> scatter_max = ops.ScatterMax()
>>> output = scatter_max(input_x, indices, updates)
>>> print(output)
[[88. 88. 88.]
 [88. 88. 88.]]
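
Since every element of updates is 88, each addressed row ends up at max(original, 88) = 88. The same result follows from NumPy’s unbuffered np.maximum.at with the indices flattened (a cross-check sketch):

>>> x_np = np.array([[1., 2., 3.], [4., 5., 6.]], dtype=np.float32)
>>> upd = np.full((4, 3), 88., dtype=np.float32)      # updates flattened to (4, 3)
>>> np.maximum.at(x_np, np.array([0, 0, 1, 1]), upd)  # elementwise max at each index
>>> x_np    # [[88. 88. 88.] [88. 88. 88.]]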
class tinyms.primitives.ScatterMin(use_locking=False)[source]

Updates the value of the input tensor through the minimum operation.

Using given values to update tensor value through the min operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = min(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do min operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor doing the min operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, the input data dimension of input_x , indices and updates is greater than 8 dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 0.0]]), mindspore.float32),
...                     name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> scatter_min = ops.ScatterMin()
>>> output = scatter_min(input_x, indices, update)
>>> print(output)
[[0. 1. 1.]
 [0. 0. 0.]]
class tinyms.primitives.ScatterMul(use_locking=False)[source]

Updates the value of the input tensor through the multiply operation.

Using given values to update tensor value through the mul operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{*}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do multiply operation whose data type must be mstype.int32 or mstype.int64.

  • updates (Tensor) - The tensor doing the multiply operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If indices is not an int32 or an int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mstype.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
 [4. 4. 4.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [7.0, 7.0, 7.0] = [42.0, 42.0, 42.0]
>>> # input_x[1] = [42.0, 42.0, 42.0] * [9.0, 9.0, 9.0] = [378.0, 378.0, 378.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[  1.   1.   1.]
 [378. 378. 378.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [1.0, 1.0, 1.0] = [2.0, 2.0, 2.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [7.0, 7.0, 7.0] = [14.0, 14.0, 14.0]
>>> # input_x[1] = [14.0, 14.0, 14.0] * [9.0, 9.0, 9.0] = [126.0, 126.0, 126.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[  3.   3.   3.]
 [126. 126. 126.]]
>>> # for input_x will be updated after the operation is completed. input_x need to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [7.0, 7.0, 7.0] = [7.0, 7.0, 7.0]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [9.0, 9.0, 9.0] = [54.0, 54.0, 54.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
>>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 7.  7.  7.]
 [54. 54. 54.]]
class tinyms.primitives.ScatterNd[source]

Scatters a tensor into a new tensor depending on the specified indices.

A figure omitted here illustrates the calculation process of inserting two slices in the first dimension of a rank-3 tensor with two matrices of new values.

Refer to mindspore.ops.scatter_nd() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.ScatterNd()
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]]]), mindspore.float32)
>>> shape = (4, 4, 4)
>>> output = op(indices, updates, shape)
>>> print(output)
[[[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]
 [[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = op(indices, updates, shape)
>>> # To facilitate understanding, here is the operator's pseudo-process step by step:
>>> # Step 1: Generate an empty tensor of the specified shape.
>>> # [
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> # ]
>>> # Step 2: Write the updates at the locations given by indices.
>>> # Row 0 of indices is [0, 1] and row 0 of updates is 3.2,
>>> # so the element at row 0, column 1 is set to 3.2:
>>> # [
>>> #     [0.  3.2 0.]
>>> #     [0.  0.  0.]
>>> #     [0.  0.  0.]
>>> # ]
>>> # Row 1 of indices is [1, 1] and row 1 of updates is 1.1,
>>> # so the element at row 1, column 1 is set to 1.1:
>>> # [
>>> #     [0.  3.2 0.]
>>> #     [0.  1.1 0.]
>>> #     [0.  0.  0.]
>>> # ]
>>> # The final result is as follows:
>>> print(output)
[[0. 3.2 0.]
 [0. 1.1 0.]
 [0. 0.  0.]]
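
The pseudo-process above can also be reproduced with a short NumPy sketch (an illustrative equivalence, not the operator's implementation): start from a zero tensor of the requested shape and write each update at its index tuple.

>>> import numpy as np
>>> indices = np.array([[0, 1], [1, 1]])
>>> updates = np.array([3.2, 1.1], dtype=np.float32)
>>> out = np.zeros((3, 3), dtype=np.float32)   # step 1: empty tensor of the given shape
>>> for idx, val in zip(indices, updates):     # step 2: write each update at its index tuple
...     out[tuple(idx)] += val                 # duplicate indices would accumulate
...
>>> print(out)
[[0.  3.2 0. ]
 [0.  1.1 0. ]
 [0.  0.  0. ]]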
class tinyms.primitives.ScatterNdAdd(use_locking=False)[source]

Applies sparse addition to individual values or slices in a tensor.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_add = ops.ScatterNdAdd(use_locking)
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> use_locking = False
>>> scatter_nd_add = ops.ScatterNdAdd(use_locking)
>>> output = scatter_nd_add(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]]
class tinyms.primitives.ScatterNdDiv(use_locking=False)[source]

Applies sparse division to individual values or slices in a tensor.

Using given values to update tensor value through the division operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.scatter_nd_div() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_div = ops.ScatterNdDiv(use_locking)
>>> output = scatter_nd_div(input_x, indices, updates)
>>> print(output)
[1.         0.25       0.5        4.         0.71428573 6.
 7.         0.8888889 ]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.float32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_div = ops.ScatterNdDiv(use_locking)
>>> output = scatter_nd_div(input_x, indices, updates)
>>> print(output)
[[[1.         1.         1.         1.        ]
  [0.5        0.5        0.5        0.5       ]
  [0.33333334 0.33333334 0.33333334 0.33333334]
  [0.25       0.25       0.25       0.25      ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]
 [[0.2        0.2        0.2        0.2       ]
  [0.16666667 0.16666667 0.16666667 0.16666667]
  [0.14285715 0.14285715 0.14285715 0.14285715]
  [0.125      0.125      0.125      0.125     ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]]
class tinyms.primitives.ScatterNdMax(use_locking=False)[source]

Applies sparse maximum to individual values or slices in a tensor.

Using given values to update parameter value through the maximum operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_max() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_max = ops.ScatterNdMax()
>>> output = scatter_nd_max(input_x, indices, updates)
>>> print(output)
[1. 8. 6. 4. 7. 6. 7. 9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> scatter_nd_max = ops.ScatterNdMax()
>>> output = scatter_nd_max(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
class tinyms.primitives.ScatterNdMin(use_locking=False)[source]

Applies sparse minimum to individual values or slices in a tensor.

Using given values to update tensor value through the minimum operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_min() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.ones(8) * 10, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_min = ops.ScatterNdMin(use_locking)
>>> output = scatter_nd_min(input_x, indices, updates)
>>> print(output)
[10.  8.  6. 10.  7. 10. 10.  9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)) * 10, mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> use_locking = False
>>> scatter_nd_min = ops.ScatterNdMin(use_locking)
>>> output = scatter_nd_min(input_x, indices, updates)
>>> print(output)
[[[ 1  1  1  1]
  [ 2  2  2  2]
  [ 3  3  3  3]
  [ 4  4  4  4]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]
 [[ 5  5  5  5]
  [ 6  6  6  6]
  [ 7  7  7  7]
  [ 8  8  8  8]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]]
class tinyms.primitives.ScatterNdMul(use_locking=False)[source]

Applies sparse multiplication to individual values or slices in a tensor.

Using given values to update parameter value through the multiplication operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.scatter_nd_mul() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_nd_mul = ops.ScatterNdMul()
>>> output = scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[ 1. 16. 18.  4. 35.  6.  7. 72.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> scatter_nd_mul = ops.ScatterNdMul()
>>> output = scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
class tinyms.primitives.ScatterNdSub(use_locking=False)[source]

Applies sparse subtraction to individual values or slices in a tensor.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Refer to mindspore.ops.scatter_nd_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> use_locking = False
>>> scatter_nd_sub = ops.ScatterNdSub(use_locking)
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[ 1. -6. -3.  4. -2.  6.  7. -1.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> use_locking = False
>>> scatter_nd_sub = ops.ScatterNdSub(use_locking)
>>> output = scatter_nd_sub(input_x, indices, updates)
>>> print(output)
[[[-1 -1 -1 -1]
  [-2 -2 -2 -2]
  [-3 -3 -3 -3]
  [-4 -4 -4 -4]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]
 [[-5 -5 -5 -5]
  [-6 -6 -6 -6]
  [-7 -7 -7 -7]
  [-8 -8 -8 -8]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]]
class tinyms.primitives.ScatterNdUpdate(use_locking=True)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the N-th dimension of input_x.

updates is a tensor of rank Q-1+P-N, and its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).
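
As a concrete reading of these shape rules (a worked example with illustrative shapes): if input_x has shape (4, 4, 4), so P = 3, and indices has shape (2, 1), so Q = 2 and N = 1, then updates must have shape indices.shape[:-1] + input_x.shape[N:], i.e. (2, 4, 4).

>>> x_shape, idx_shape = (4, 4, 4), (2, 1)   # hypothetical shapes, for illustration only
>>> n = idx_shape[-1]
>>> print(idx_shape[:-1] + x_shape[n:])
(2, 4, 4)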

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor, with int32 or int64 data type.

  • updates (Tensor) - The tensor to be written into the input tensor; it has the same type as input_x. The shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If the dtype of indices is neither int32 nor int64.

  • RuntimeError – If updating input_x with updates would require a data type conversion of the Parameter, and data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.ScatterNdUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[1.   0.3   3.6]
 [0.4  2.2  -3.2]]
class tinyms.primitives.ScatterNonAliasingAdd[source]

Applies sparse addition to the input using individual values or slices.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Inputs:
  • input_x (Parameter) - The target parameter. The data type must be float16, float32 or int32.

  • indices (Tensor) - The index to perform the addition operation whose data type must be mindspore.int32.

  • updates (Tensor) - The tensor that performs the addition operation with input_x, the data type is the same as input_x, the shape is indices.shape[:-1] + x.shape[indices.shape[-1]:].

Outputs:

Parameter, the updated input_x.

Raises:
  • TypeError – If dtype of indices is not int32.

  • TypeError – If dtype of input_x is not one of float16, float32, int32.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + x.shape[indices.shape[-1]:].

  • RuntimeError – If updating input_x with updates would require a data type conversion of the Parameter, and data type conversion of Parameter is not supported.

Supported Platforms:

Ascend

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> scatter_non_aliasing_add = ops.ScatterNonAliasingAdd()
>>> output = scatter_non_aliasing_add(input_x, indices, updates)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
class tinyms.primitives.ScatterSub(use_locking=False)[source]

Updates the value of the input tensor through the subtraction operation.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{-}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index to do the subtraction operation, whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) - The tensor that performs the subtraction operation with input_x, with the same data type as input_x; the shape is indices_shape + x_shape[1:].

Outputs:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices_shape + x_shape[1:].

  • RuntimeError – If updating input_x with updates would require a data type conversion of the Parameter, and data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[-1. -1. -1.]
 [-1. -1. -1.]]
>>> # input_x is updated in place, so it must be re-initialized before the next example.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [-3.0, -3.0, -3.0] - [7.0, 7.0, 7.0] = [-10.0, -10.0, -10.0]
>>> # input_x[1] = [-10.0, -10.0, -10.0] - [9.0, 9.0, 9.0] = [-19.0, -19.0, -19.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -1.  -1.  -1.]
 [-19. -19. -19.]]
>>> # input_x is updated in place, so it must be re-initialized before the next example.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
>>> # input_x[1] = [-8.0, -8.0, -8.0] - [9.0, 9.0, 9.0] = [-17.0, -17.0, -17.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -3.  -3.  -3.]
 [-17. -17. -17.]]
>>> # input_x is updated in place, so it must be re-initialized before the next example.
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
>>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
>>> # input_x[1] = [-3.0, -3.0, -3.0] - [9.0, 9.0, 9.0] = [-12.0, -12.0, -12.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> scatter_sub = ops.ScatterSub()
>>> output = scatter_sub(input_x, indices, updates)
>>> print(output)
[[ -8.  -8.  -8.]
 [-12. -12. -12.]]
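
As with the other scatter operators, duplicate indices accumulate. The last example above can be mirrored with NumPy's unbuffered np.subtract.at (a sketch of the semantics, not the implementation):

>>> import numpy as np
>>> x = np.zeros((2, 3), dtype=np.float32)
>>> idx = np.array([[0, 1], [0, 1]]).reshape(-1)    # flatten the index tensor
>>> upd = np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                 [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]], dtype=np.float32).reshape(-1, 3)
>>> np.subtract.at(x, idx, upd)                     # unbuffered: repeated rows subtract repeatedly
>>> print(x)
[[ -8.  -8.  -8.]
 [-12. -12. -12.]]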
class tinyms.primitives.ScatterUpdate(use_locking=True)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

for each i, …, j in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – Whether to protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) - The index of input tensor. With int32 data type. If there are duplicates in indices, the order for updating is undefined.

  • updates (Tensor) - The tensor to update the input tensor, has the same type as input, and updates.shape = indices.shape + input_x.shape[1:].

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If the dtype of indices is not int32.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If updating input_x with updates would require a data type conversion of the Parameter, and data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
>>> updates = Tensor(np_updates, mindspore.float32)
>>> op = ops.ScatterUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[2. 1.2  1.]
 [3. 1.2  1.]]
class tinyms.primitives.SeLU[source]

Activation function SeLU (Scaled exponential Linear Unit).

The activation function is defined as:

\[E_{i} = scale * \begin{cases} x_{i}, &\text{if } x_{i} \geq 0; \cr \text{alpha} * (\exp(x_i) - 1), &\text{otherwise.} \end{cases}\]

where \(alpha\) and \(scale\) are pre-defined constants (\(alpha=1.67326324\) and \(scale=1.05070098\)).

See more details in Self-Normalizing Neural Networks.

Inputs:
  • input_x (Tensor) - Tensor of any dimension. The data type is int8, int32, float16, float32 or float64 (float64 is supported on CPU and GPU only).

Outputs:

Tensor, with the same type and shape as the input_x.

Raises:

TypeError – If dtype of input_x is not int8, int32, float16, float32, float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> selu = ops.SeLU()
>>> output = selu(input_x)
>>> print(output)
[[-1.1113307  4.202804  -1.7575096]
 [ 2.101402  -1.7462534  9.456309 ]]
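
The formula can be checked directly against the example output with NumPy, using the constants given above (a verification sketch, not the operator itself):

>>> import numpy as np
>>> scale, alpha = 1.05070098, 1.67326324
>>> x = np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]], dtype=np.float32)
>>> ref = scale * np.where(x >= 0, x, alpha * (np.exp(x) - 1))   # the SeLU definition
>>> expected = np.array([[-1.1113307, 4.202804, -1.7575096],
...                      [2.101402, -1.7462534, 9.456309]], dtype=np.float32)
>>> print(np.allclose(ref, expected, atol=1e-5))
True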
class tinyms.primitives.SearchSorted(dtype=mindspore.int64, right=False)[source]

Returns the indices corresponding to the positions where the given numbers in values should be inserted into sorted_sequence so that the order of the sequence is maintained.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.searchsorted() for more details.

Parameters:
  • dtype (mindspore.dtype, optional) – Output data type. Must be mstype.int32 or mstype.int64. Default: mstype.int64.

  • right (bool, optional) – Search Strategy. If True, return the last suitable index found; if False, return the first such index. Default: False.

Inputs:
  • sorted_sequence (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_{R-1}, x_R)\) or \((x_1)\). It must contain a monotonically increasing sequence on the innermost dimension.

  • values (Tensor) - The values to be inserted. The shape of tensor is \((x_1, x_2, ..., x_{R-1}, x_S)\).

Outputs:

Tensor containing the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted, the order of sorted_sequence would be preserved. Its data type is specified by dtype, and its shape is the same as that of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sorted_sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
>>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
>>> output = ops.SearchSorted()(sorted_sequence, values)
>>> print(output)
[[2 4 5]
 [1 2 4]]
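
Row-wise, this matches NumPy's np.searchsorted, which can serve as a mental model (right=False corresponds to side='left'; a sketch, not the implementation):

>>> import numpy as np
>>> seq = np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]])
>>> vals = np.array([[3, 6, 9], [3, 6, 9]])
>>> ref = np.stack([np.searchsorted(s, v, side='left')   # one search per row
...                 for s, v in zip(seq, vals)])
>>> print(ref)
[[2 4 5]
 [1 2 4]]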
class tinyms.primitives.SegmentMax[source]

Computes the maximum along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i=max_j(input\_x_j)\), where the maximum is taken over all elements \(j\) satisfying \(segment\_ids[j] == i\). If segment \(i\) contains no elements, the corresponding output element is set to zero: \(output[i] = 0\).

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. Its size must equal the first dimension of the shape of input_x. Values must be sorted in ascending order and need not cover the full range of valid values, but they must be non-negative integers. Only constant values are allowed.

Outputs:

Tensor, whose dtype and the dimension of the shape is the same as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentMax()
>>> output = op(x, segment_ids)
>>> print(output)
[[4. 5. 6.]
 [0. 0. 0.]
 [7. 8. 9.]]
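
The segment semantics can be sketched in NumPy: rows whose segment_ids match are reduced together, and empty segments keep the fill value (an illustrative sketch, not the implementation):

>>> import numpy as np
>>> x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float64)
>>> seg = np.array([0, 0, 2])
>>> out = np.zeros((seg[-1] + 1,) + x.shape[1:])   # segment 1 is empty and stays 0
>>> for i in np.unique(seg):                       # reduce each non-empty segment with max
...     out[i] = x[seg == i].max(axis=0)
...
>>> print(out)
[[4. 5. 6.]
 [0. 0. 0.]
 [7. 8. 9.]]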
class tinyms.primitives.SegmentMean[source]

Computes the mean along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i=mean_j(input\_x_j)\), where the mean is taken over all elements \(j\) satisfying \(segment\_ids[j] == i\). If segment \(i\) contains no elements, the corresponding output element is set to zero: \(output[i] = 0\).

Warning

If the dtype of input_x is complex number, the gradient can not be calculated.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number or complex number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. Its size must equal the first dimension of the shape of input_x. Values must be sorted in ascending order and need not cover the full range of valid values, but they must be non-negative integers. Only constant values are allowed.

Outputs:

Tensor, whose dtype and the dimension of the shape is the same as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [1, 2, 3], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentMean()
>>> output = op(x, segment_ids)
>>> print(output)
[[1. 2. 3.]
 [0. 0. 0.]
 [7. 8. 9.]]
class tinyms.primitives.SegmentMin[source]

Computes the minimum along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i=min_j(input\_x_j)\), where the minimum is taken over all elements \(j\) satisfying \(segment\_ids[j] == i\). If segment \(i\) contains no elements, the corresponding output element is set to zero: \(output[i] = 0\).

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. Its size must equal the first dimension of the shape of input_x. Values must be sorted in ascending order and need not cover the full range of valid values, but they must be non-negative integers. Only constant values are allowed.

Outputs:

Tensor, whose dtype and the dimension of the shape is the same as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentMin()
>>> output = op(x, segment_ids)
>>> print(output)
[[1. 2. 3.]
 [0. 0. 0.]
 [7. 8. 9.]]
class tinyms.primitives.SegmentProd[source]

Computes the product along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i = \prod_j input\_x_j\), where the product is taken over all elements \(j\) satisfying \(segment\_ids[j] == i\). If segment \(i\) contains no elements, the corresponding output element is set to 1: \(output[i] = 1\).

Warning

If the dtype of input_x is complex number, the gradient can not be calculated.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number or complex number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. Its size must equal the first dimension of the shape of input_x. Values must be sorted in ascending order and need not cover the full range of valid values, but they must be non-negative integers. Only constant values are allowed.

Outputs:

Tensor, whose dtype and the dimension of the shape is the same as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentProd()
>>> output = op(x, segment_ids)
>>> print(output)
[[ 4. 10. 18.]
 [ 1.  1.  1.]
 [ 7.  8.  9.]]
class tinyms.primitives.SegmentSum[source]

Computes the sum along segments of a Tensor.

Specifically, it generates a new Tensor output such that \(output_i = \sum_j input\_x_j\), where the sum is taken over all elements \(j\) satisfying \(segment\_ids[j] == i\). If segment \(i\) contains no elements, the corresponding output element is set to 0: \(output[i] = 0\).

Warning

If the dtype of input_x is complex number, the gradient can not be calculated.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is real number or complex number and whose rank is not less than 1.

  • segment_ids (Tensor) - A 1-D tensor whose dtype is int32 or int64. Its size must equal the first dimension of the shape of input_x. Values must be sorted in ascending order and need not cover the full range of valid values, but they must be non-negative integers. Only constant values are allowed.

Outputs:

Tensor, whose dtype and the dimension of the shape is the same as input_x. The first dimension of the shape is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If segment_ids is not a Tensor.

  • TypeError – If the dtype of input_x is invalid.

  • TypeError – If the dtype of segment_ids is invalid.

  • ValueError – If the rank of input_x is less than 1.

  • ValueError – If the rank of segment_ids is not equal to 1.

  • ValueError – If the size of segment_ids is not equal to the first dimension of the shape of input_x.

  • ValueError – If the values of segment_ids are negative.

  • ValueError – If the values of segment_ids are not sorted in ascending order.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mstype.float64)
>>> segment_ids = Tensor([0, 0, 2], mstype.int64)
>>> op = ops.SegmentSum()
>>> output = op(x, segment_ids)
>>> print(output)
[[5. 7. 9.]
 [0. 0. 0.]
 [7. 8. 9.]]
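
For the sum reduction, NumPy's unbuffered np.add.at gives a compact equivalent sketch (illustrative only, not the implementation):

>>> import numpy as np
>>> x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float64)
>>> seg = np.array([0, 0, 2])
>>> out = np.zeros((seg[-1] + 1,) + x.shape[1:])
>>> np.add.at(out, seg, x)     # rows with the same segment id are summed
>>> print(out)
[[5. 7. 9.]
 [0. 0. 0.]
 [7. 8. 9.]]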
class tinyms.primitives.Select[source]

Selects elements from x or y depending on condition: for each position, the output element is taken from x if the corresponding element of condition is True, and from y otherwise.

It can be defined as:

\[\begin{split}out_i = \begin{cases} x_i, & \text{if } condition_i \\ y_i, & \text{otherwise} \end{cases}\end{split}\]
Inputs:
  • condition (Tensor[bool]) - The condition tensor, decides which element is chosen. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

  • x (Tensor) - The first tensor to be selected and the shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

  • y (Tensor) - The second tensor to be selected and the shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

Outputs:

Tensor, has the same shape as condition.

Raises:
  • TypeError – If x or y is not a Tensor.

  • ValueError – If shape of the three inputs are different.

Supported Platforms:

Ascend GPU CPU

Examples

>>> select = ops.Select()
>>> input_cond = Tensor([True, False])
>>> input_x = Tensor([2,3], mindspore.float32)
>>> input_y = Tensor([1,2], mindspore.float32)
>>> output = select(input_cond, input_x, input_y)
>>> print(output)
[2. 2.]
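
The behavior is that of an element-wise ternary choice, i.e. NumPy's np.where (a sketch of the semantics):

>>> import numpy as np
>>> cond = np.array([True, False])
>>> print(np.where(cond, np.array([2.0, 3.0]), np.array([1.0, 2.0])))
[2. 2.]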
class tinyms.primitives.Shape[source]

Returns the shape of the input tensor.

Refer to mindspore.ops.shape() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.Shape()
>>> output = shape(input_x)
>>> print(output)
(3, 2, 1)
class tinyms.primitives.Sigmoid[source]

Sigmoid activation function. Refer to mindspore.ops.sigmoid() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> sigmoid = ops.Sigmoid()
>>> output = sigmoid(input_x)
>>> print(output)
[0.7310586  0.880797   0.95257413 0.98201376 0.9933072 ]
class tinyms.primitives.SigmoidCrossEntropyWithLogits[source]

Uses the given logits to compute sigmoid cross entropy between the logits and the label.

Uses cross entropy loss to measure the distribution error in discrete classification tasks where the classes are independent and not mutually exclusive.

Sets input logits as \(X\), input label as \(Y\), output as \(loss\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ loss_{ij} = -[Y_{ij} * ln(p_{ij}) + (1 - Y_{ij})ln(1 - p_{ij})] \end{array}\end{split}\]
Inputs:
  • logits (Tensor) - Input logits. Tensor of shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • label (Tensor) - Ground truth label. With the same shape and type as logits.

Outputs:

Tensor, with the same shape and type as input logits.

Raises:

TypeError – If logits or label is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> labels = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> sigmoid = ops.SigmoidCrossEntropyWithLogits()
>>> output = sigmoid(logits, labels)
>>> print(output)
[[ 0.6111007   0.5032824   0.26318604]
 [ 0.58439666  0.5530153  -0.4368139 ]]
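
The formula can be verified against the example output with NumPy (a verification sketch, not the operator):

>>> import numpy as np
>>> logits = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], dtype=np.float32)
>>> labels = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]], dtype=np.float32)
>>> p = 1.0 / (1.0 + np.exp(-logits))                            # sigmoid
>>> ref = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))   # per-element cross entropy
>>> expected = np.array([[0.6111007, 0.5032824, 0.26318604],
...                      [0.58439666, 0.5530153, -0.4368139]], dtype=np.float32)
>>> print(np.allclose(ref, expected, atol=1e-5))
True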
class tinyms.primitives.Sign[source]

Performs sign on the tensor element-wise.

\[sign(x) = \begin{cases} -1, &if\ x < 0 \cr 0, &if\ x = 0 \cr 1, &if\ x > 0\end{cases}\]
Inputs:
  • x (Tensor) - The input tensor. The shape is \((N, *)\), where \(*\) means any number of additional dimensions.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[2.0, 0.0, -1.0]]), mindspore.float32)
>>> sign = ops.Sign()
>>> output = sign(x)
>>> print(output)
[[ 1.  0. -1.]]
class tinyms.primitives.Sin[source]

Computes sine of the input element-wise.

Refer to mindspore.ops.sin() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sin = ops.Sin()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sin(x)
>>> print(output)
[0.5810352 0.27635565 0.41687083 0.5810352]
class tinyms.primitives.Sinc[source]

Computes the normalized sinc of input.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.sinc() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.ops.operations.math_ops as ops
>>> from mindspore import Tensor, dtype
>>> sinc = ops.Sinc()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sinc(x)
>>> print(output)
[0.47735003 0.8759357  0.7224278  0.47735003]
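
Normalized sinc means \(sinc(x) = \sin(\pi x) / (\pi x)\), with \(sinc(0) = 1\). NumPy's np.sinc uses the same convention, so it can cross-check the example (a sketch, not the implementation):

>>> import numpy as np
>>> x = np.array([0.62, 0.28, 0.43, 0.62], dtype=np.float32)
>>> print(np.allclose(np.sinc(x), [0.47735003, 0.8759357, 0.7224278, 0.47735003]))
True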
class tinyms.primitives.Sinh[source]

Computes hyperbolic sine of the input element-wise.

Refer to mindspore.ops.sinh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sinh = ops.Sinh()
>>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sinh(x)
>>> print(output)
[0.6604918  0.28367308 0.44337422 0.6604918 ]
class tinyms.primitives.Size[source]

Returns a scalar of type int representing the size of the input Tensor, i.e. the total number of elements in the Tensor.

Refer to mindspore.ops.size() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> size = ops.Size()
>>> output = size(input_x)
>>> print(output)
4
class tinyms.primitives.Slice[source]

Slices a tensor in the specified shape.

Refer to mindspore.ops.slice() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
...                         [[3, 3, 3], [4, 4, 4]],
...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> slice_op = ops.Slice()
>>> output = slice_op(data, (1, 0, 0), (1, 1, 3))
>>> print(output)
[[[3 3 3]]]
>>> output = slice_op(data, (1, 0, 0), (1, 1, 2))
>>> print(output)
[[[3 3]]]
>>> output = slice_op(data, (1, 0, 0), (1, 1, 1))
>>> print(output)
[[[3]]]
>>> output = slice_op(data, (1, 1, 0), (1, 1, 3))
>>> print(output)
[[[4 4 4]]]
>>> output = slice_op(data, (1, 0, 1), (1, 1, 2))
>>> print(output)
[[[3 3]]]
class tinyms.primitives.SmoothL1Loss(beta=1.0, reduction='none')[source]

Calculates the smooth L1 loss; this loss function is robust to outliers.

Refer to mindspore.ops.smooth_l1_loss() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> loss = ops.SmoothL1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[0.  0.  0.5]
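
With the default beta = 1.0, the loss follows the standard piecewise definition: \(0.5 (x - y)^2 / beta\) where \(|x - y| < beta\), and \(|x - y| - 0.5 \cdot beta\) otherwise. A NumPy sketch reproduces the example (illustrative only):

>>> import numpy as np
>>> logits = np.array([1.0, 2.0, 3.0])
>>> labels = np.array([1.0, 2.0, 2.0])
>>> beta = 1.0
>>> d = np.abs(logits - labels)                      # element-wise absolute error
>>> print(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta))
[0.  0.  0.5]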
class tinyms.primitives.SoftMarginLoss(reduction='mean')[source]

SoftMarginLoss operation.

Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).

\[\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}\]

where \(x.nelement()\) is the number of elements of x.

Parameters:

reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’ or ‘sum’. Default: “mean”.

Inputs:
  • logits (Tensor) - Predict data. Data type must be float16 or float32.

  • labels (Tensor) - Ground truth data, with the same type and shape as logits.

Outputs:

Tensor or Scalar, if reduction is “none”, its shape is the same as logits. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If logits or labels is not a Tensor.

  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • ValueError – If shape of logits is not the same as labels.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

Supported Platforms:

Ascend GPU

Examples

>>> loss = ops.SoftMarginLoss()
>>> logits = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> labels = Tensor(np.array([[-1, 1], [1, -1]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.6764238
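
The default reduction='mean' averages the per-element losses, which a short NumPy check confirms for the example above (a sketch, not the operator):

>>> import numpy as np
>>> x = np.array([[0.3, 0.7], [0.5, 0.5]])
>>> y = np.array([[-1.0, 1.0], [1.0, -1.0]])
>>> print(np.allclose(np.log1p(np.exp(-y * x)).mean(), 0.6764238))
True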
class tinyms.primitives.SoftShrink(lambd=0.5)[source]

Applies the SoftShrink function element-wise.

Refer to mindspore.ops.softshrink() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[ 0.5297,  0.7871,  1.1754], [ 0.7836,  0.6218, -1.1542]]), mindspore.float16)
>>> softshrink = ops.SoftShrink()
>>> output = softshrink(input_x)
>>> print(output)
[[ 0.02979  0.287    0.676  ]
 [ 0.2837   0.1216  -0.6543 ]]
class tinyms.primitives.Softmax(axis=-1)[source]

Applies the Softmax operation to the input tensor on the specified axis.

Refer to mindspore.ops.softmax() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softmax = ops.Softmax()
>>> output = softmax(logits)
>>> print(output)
[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
class tinyms.primitives.SoftmaxCrossEntropyWithLogits[source]

Gets the softmax cross-entropy value between logits and labels with one-hot encoding.

The updating formulas of SoftmaxCrossEntropyWithLogits algorithm are as follows,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)} \\ loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})} \end{array}\end{split}\]

where \(X\) represents logits. \(Y\) represents label. \(loss\) represents output.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N, C)\), has the same data type with logits.

Outputs:

Tuple of 2 tensors(loss, dlogits), the loss shape is \((N,)\), and the dlogits with the same shape as logits.

Raises:
  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • TypeError – If logits or labels is not a Tensor.

  • ValueError – If shape of logits is not the same as labels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
>>> softmax_cross = ops.SoftmaxCrossEntropyWithLogits()
>>> loss, dlogits = softmax_cross(logits, labels)
>>> print(loss)
[0.5899297  0.52374405]
>>> print(dlogits)
[[ 0.02760027  0.20393994  0.01015357  0.20393994 -0.44563377]
 [ 0.08015892  0.02948882  0.08015892 -0.4077012   0.21789455]]
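
The pair (loss, dlogits) follows the standard softmax cross-entropy derivation: with \(p = softmax(X)\), the loss per sample is \(-\sum_j Y_j \ln p_j\) and the gradient with respect to the logits is \(p - Y\). A NumPy verification sketch (not the operator):

>>> import numpy as np
>>> logits = np.array([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], dtype=np.float32)
>>> labels = np.array([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], dtype=np.float32)
>>> e = np.exp(logits - logits.max(axis=1, keepdims=True))   # numerically stabilized softmax
>>> p = e / e.sum(axis=1, keepdims=True)
>>> ref_loss = -(labels * np.log(p)).sum(axis=1)
>>> ref_dlogits = p - labels                                 # gradient w.r.t. the logits
>>> print(np.allclose(ref_loss, [0.5899297, 0.52374405]))
True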
class tinyms.primitives.Softplus[source]

Softplus activation function.

Softplus is a smooth approximation to the ReLU function. It can be used to constrain a network's output to always be positive. The function is shown as follows:

\[\text{output} = \log(1 + \exp(\text{x}))\]
Inputs:
  • input_x (Tensor) - Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions, with float16 or float32 data type.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If the dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softplus = ops.Softplus()
>>> output = softplus(input_x)
>>> print(output)
[1.3132615 2.126928  3.0485873 4.01815   5.0067153]
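
Evaluated naively, \(\log(1 + \exp(x))\) overflows for large x; a common numerically stable NumPy formulation is np.logaddexp(0, x), which computes the same quantity (a sketch, not the implementation):

>>> import numpy as np
>>> x = np.array([1, 2, 3, 4, 5], dtype=np.float32)
>>> ref = np.logaddexp(0, x)    # log(exp(0) + exp(x)) == log(1 + exp(x))
>>> print(np.allclose(ref, [1.3132615, 2.126928, 3.0485873, 4.01815, 5.0067153]))
True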
class tinyms.primitives.Softsign[source]

Softsign activation function.

Refer to mindspore.ops.softsign() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> softsign = ops.Softsign()
>>> output = softsign(input_x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
class tinyms.primitives.Sort(axis=-1, descending=False)[source]

Sorts the elements of the input tensor along the given dimension in the specified order.

Warning

Currently, the data types of Float16 is well supported. Using Float32 might cause loss of accuracy.

Parameters:
  • axis (int) – The dimension to sort along. Default: -1.

  • descending (bool) – Controls the sort order. If descending is True then the elements are sorted in descending order by value. Default: False.

Inputs:
  • x (Tensor) - The input tensor of any dimension, with a type of float16 or float32.

Outputs:
  • y1 (Tensor) - A tensor whose values are the sorted values, with the same shape and data type as input.

  • y2 (Tensor) - The indices of the elements in the original input tensor. Data type is int32.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If descending is not a bool.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is not in range of [-len(x.shape), len(x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> sort = ops.Sort()
>>> output = sort(x)
>>> # The output below is based on the Ascend platform.
>>> print(output)
(Tensor(shape=[3, 3], dtype=Float16, value=
[[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
 [ 3.0000e+00,  5.0000e+00,  9.0000e+00],
 [ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
[[2, 1, 0],
 [2, 0, 1],
 [0, 1, 2]]))
class tinyms.primitives.SpaceToBatch(block_size, paddings)[source]

SpaceToBatch is deprecated. Please use mindspore.ops.SpaceToBatchND instead. Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide the spatial dimensions (H, W) into blocks of block_size; the output tensor’s H and W dimensions are the corresponding numbers of blocks after division. The output tensor’s batch dimension is the product of the original batch and the square of block_size. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters:
  • block_size (int) – The block size of dividing blocks with value greater than or equal to 2.

  • paddings (Union[tuple, list]) – The padding values for the H and W dimensions, containing 2 sublists. Each sublist contains 2 integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_size.

Inputs:
  • input_x (Tensor) - The input tensor. It must be a 4-D tensor. The data type is Number.

Outputs:

Tensor, the output tensor with the same data type as input. Assume input shape is \((n, c, h, w)\) with \(block\_size\) and \(paddings\). The shape of the output tensor will be \((n', c', h', w')\), where

\(n' = n*(block\_size*block\_size)\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_size\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_size\)

Raises:
Supported Platforms:

Deprecated

Examples

>>> block_size = 2
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch = ops.SpaceToBatch(block_size, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
class tinyms.primitives.SpaceToBatchND(block_shape, paddings)[source]

Divides spatial dimensions into blocks and combines the block size with the original batch.

This operation will divide the spatial dimensions into blocks according to block_shape; the output tensor’s spatial dimensions are the corresponding numbers of blocks after division. The output tensor’s batch dimension is the product of the original batch and all elements in block_shape. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters:
  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing block with all elements greater than or equal to 1. If block_shape is a list or tuple, its length is the number of spatial dimensions, called M later. If block_shape is an int, the block size of all M dimensions is the same, equal to block_shape. On Ascend, M must be 2.

  • paddings (Union[tuple, list]) – The padding values for the spatial dimensions, containing M sublists. Each sublist contains 2 integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i + offset, where offset = N-M and N is the number of input dimensions. For each i, input_shape[i + offset]+paddings[i][0]+paddings[i][1] must be divisible by block_shape[i].

Inputs:
  • input_x (Tensor) - The input tensor. The input tensor must be a 4-D tensor on Ascend.

Outputs:

Tensor, the output tensor with the same data type as the input. Assume the input shape is \((n, c_1, ... c_k, w_1, ..., w_M)\) with \(block\_shape\) and \(paddings\). The shape of the output tensor will be \((n', c_1, ... c_k, w'_1, ..., w'_M)\), where

\[\begin{split}\begin{array}{ll} \\ n' = n*(block\_shape[0]*...*block\_shape[M-1]) \\ w'_i = (w_i+paddings[i-1][0]+paddings[i-1][1])//block\_shape[i-1] \end{array}\end{split}\]
Raises:
  • TypeError – If block_shape is not one of list, tuple, int.

  • TypeError – If paddings is neither list nor tuple.

  • ValueError – If block_shape is not one dimensional when block_shape is a list or tuple.

  • ValueError – If the length of block_shape is not 2 on Ascend.

  • ValueError – If shape of paddings is not (M, 2), where M is the length of block_shape.

  • ValueError – If an element of block_shape is not an integer greater than or equal to 1.

  • ValueError – If an element of paddings is not an integer greater than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_shape = [2, 2]
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch_nd = ops.SpaceToBatchND(block_shape, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = space_to_batch_nd(input_x)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
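
The output shape formulas can be checked with a few lines of arithmetic for the example above (a worked shape example):

>>> n, c, h, w = 1, 1, 2, 2                       # input shape of the example
>>> block_shape, paddings = [2, 2], [[0, 0], [0, 0]]
>>> n_out = n * block_shape[0] * block_shape[1]
>>> h_out = (h + paddings[0][0] + paddings[0][1]) // block_shape[0]
>>> w_out = (w + paddings[1][0] + paddings[1][1]) // block_shape[1]
>>> print((n_out, c, h_out, w_out))
(4, 1, 1, 1)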
class tinyms.primitives.SpaceToDepth(block_size)[source]

Rearranges blocks of spatial data into depth.

The output tensor’s height dimension is \(height / block\_size\).

The output tensor’s width dimension is \(width / block\_size\).

The depth of output tensor is \(block\_size * block\_size * input\_depth\).

The input tensor’s height and width must be divisible by block_size. The data format is “NCHW”.

Parameters:

block_size (int) – The block size used to divide spatial data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor. The data type is Number. It must be a 4-D tensor.

Outputs:

Tensor, with the same data type as x. It is a 4-D tensor of shape \((N, C_{in} \times \text{block_size}^2, H_{in} / \text{block_size}, W_{in} / \text{block_size})\).

Raises:
  • TypeError – If block_size is not an int.

  • ValueError – If block_size is less than 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(1,3,2,2), mindspore.float32)
>>> block_size = 2
>>> space_to_depth = ops.SpaceToDepth(block_size)
>>> output = space_to_depth(x)
>>> print(output.shape)
(1, 12, 1, 1)
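
For intuition, the rearrangement can be sketched in NumPy as a reshape/transpose round trip; the exact channel ordering is an implementation detail, but the shape transformation matches (a sketch under that caveat, not necessarily the operator's element order):

>>> import numpy as np
>>> b = 2
>>> x = np.random.rand(1, 3, 2, 2).astype(np.float32)
>>> n, c, h, w = x.shape
>>> y = (x.reshape(n, c, h // b, b, w // b, b)     # split H and W into blocks
...       .transpose(0, 3, 5, 1, 2, 4)             # move the block axes forward
...       .reshape(n, c * b * b, h // b, w // b))  # fold the blocks into the channel axis
>>> print(y.shape)
(1, 12, 1, 1)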
class tinyms.primitives.SparseApplyAdadelta(epsilon, use_locking=False)[source]

Updates relevant entries according to the adadelta scheme.

\[\begin{split}\begin{array}{ll} \\ accum = \rho * accum + (1 - \rho) * grad^2 \\ \text{update} = \sqrt{\text{accum_update} + \epsilon} * \frac{grad}{\sqrt{accum + \epsilon}} \\ var = var - update * lr \\ \text{accum_update} = \rho * \text{accum_update} + (1 - \rho) * update^2 \\ \end{array}\end{split}\]

Inputs of ‘var’, ‘accum’, ‘accum_update’ and ‘grad’ comply with the implicit type conversion rules to make the data types consistent. Besides, inputs of ‘lr’ and ‘rho’ also support implicit type conversion. If they have different data types, the lower priority data type will be converted to relatively highest priority data type. RuntimeError exception will be thrown when the data type conversion of Parameter is required.

Note

If there are negative values or values greater than or equal to var.shape[0] in indices, the behavior is undefined. Besides, this operator doesn’t support duplicates in indices.

Parameters:
  • epsilon (float) – A small value added for numerical stability. Its value must be greater or equal to 0.

  • use_locking (bool) – If True, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Weights to be updated. With float32 or float16 data type.

  • accum (Parameter) - Accumulation to be updated. Must have the same shape and dtype as var, with float32 or float16 data type.

  • accum_update (Parameter) - Accum_update to be updated. Must have the same shape and dtype as var. With float32 or float16 data type.

  • lr (Union[float, Tensor]) - Learning rate, must be a scalar. With float32 or float16 data type.

  • rho (Union[float, Tensor]) - Decay rate, must be a scalar. With float32 or float16 data type.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. Must be one of the following types: int32, int64 and indices.shape[0] = grad.shape[0].

Outputs:

Tuple of 3 Tensor, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

  • accum_update (Tensor) - The same shape and data type as accum_update.

Raises:
  • TypeError – If epsilon is not a float.

  • TypeError – If use_locking is not a bool.

  • TypeError – If var, accum or accum_update is not a Parameter.

  • TypeError – If the dtype of accum, accum_update or grad is not the same as that of var.

  • TypeError – If dtype of var, accum, accum_update, lr, rho or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If epsilon is less than 0.

  • ValueError – If the shape of accum, accum_update or grad is not the same as var.

  • ValueError – If the rank of indices is not equal to 1.

  • ValueError – If the shape of indices is not the same as the shape of the first dimension of grad.

Supported Platforms:

Ascend

Examples

>>> class Net(nn.Cell):
...     def __init__(self,epsilon,use_locking = False):
...         super(Net, self).__init__()
...         self.sparse_apply_adadelta = P.SparseApplyAdadelta(epsilon,use_locking)
...         self.var = Parameter(Tensor(np.array([[1.0,2.0],[2.0,3.0]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[1.5,2.5],[3.5,4.5]]).astype(np.float32)), name="accum")
...         self.accum_update = Parameter(Tensor(np.array([[1.2,2.4],[1.8,0.6]]).astype(np.float32)),
...                name="accum_update")
...     def construct(self, lr, rho, grad, indices):
...         out = self.sparse_apply_adadelta(self.var, self.accum, self.accum_update, lr, rho, grad, indices)
...         return out
...
>>> epsilon = 1e-6
>>> net = Net(epsilon)
>>> lr = 0.01
>>> rho = 0.2
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> output = net(lr, rho, grad, Tensor(np.array([0,1],dtype=np.int32)))
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 9.94611859e-01,  1.98851788e+00],
 [ 1.99840558e+00,  2.99478507e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.72000009e-01,  8.91999960e-01],
 [ 7.08000004e-01,  1.41200006e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 4.72257614e-01,  1.53470778e+00],
 [ 3.80338937e-01,  3.37563992e-01]]))
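
As a sanity check, the adadelta update above can be replayed for a single element in plain NumPy. This minimal sketch, assuming the formula as written, reproduces element [0][0] of the example output:

>>> import numpy as np
>>> var, accum, accum_update = 1.0, 1.5, 1.2   # element [0][0] of the example
>>> lr, rho, eps, g = 0.01, 0.2, 1e-6, 0.3
>>> accum = rho * accum + (1 - rho) * g ** 2
>>> update = np.sqrt(accum_update + eps) * g / np.sqrt(accum + eps)
>>> var = var - update * lr
>>> accum_update = rho * accum_update + (1 - rho) * update ** 2
>>> print(round(var, 6), round(accum, 6), round(accum_update, 6))
0.994612 0.372 0.472258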
class tinyms.primitives.SparseApplyAdagrad(lr, update_slots=True, use_locking=False)[source]

Deprecated

class tinyms.primitives.SparseApplyAdagradV2(lr, epsilon, use_locking=False, update_slots=True)[source]

Updates relevant entries according to the adagrad scheme, with one additional epsilon attribute compared with SparseApplyAdagrad.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ var -= lr * grad * \frac{1}{\sqrt{accum} + \epsilon} \end{array}\end{split}\]

where \(\epsilon\) represents epsilon.

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • lr (float) – Learning rate.

  • epsilon (float) – A small value added for numerical stability.

  • use_locking (bool) – If True, the var and accum tensors will be protected from being updated. Default: False.

  • update_slots (bool) – If True, accum will be updated in addition to var; if False, accum is left unchanged. Default: True.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Accumulation to be updated. The shape and data type must be the same as var.

  • grad (Tensor) - Gradient, which has the same data type as var, and \(grad.shape[1:] = var.shape[1:]\) if the rank of var is greater than 1.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The type must be int32 and \(indices.shape[0] = grad.shape[0]\).

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If lr or epsilon is not a float.

  • TypeError – If update_slots or use_locking is not a bool.

  • TypeError – If dtype of var, accum or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is not int32.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_adagrad_v2 = ops.SparseApplyAdagradV2(lr=1e-8, epsilon=1e-6)
...         self.var = Parameter(Tensor(np.array([[0.2]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.1]]).astype(np.float32)), name="accum")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_adagrad_v2(self.var, self.accum, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[0.7]]).astype(np.float32))
>>> indices = Tensor(np.array([0]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[ 1.99999988e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[ 5.89999974e-01]]))
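
The same single-element check works here; a minimal sketch of the adagrad update above, matching the example output up to rounding:

>>> var, accum, g = 0.2, 0.1, 0.7   # the single element of the example
>>> lr, eps = 1e-8, 1e-6
>>> accum += g * g                  # 0.59, matching the example output
>>> var -= lr * g / (accum ** 0.5 + eps)
>>> print(round(accum, 2), round(var, 7))
0.59 0.2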
class tinyms.primitives.SparseApplyFtrl(lr, l1, l2, lr_power, use_locking=False)[source]

Updates relevant entries according to the FTRL-proximal scheme. For more details, please refer to mindspore.nn.FTRL.

All of inputs except indices comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • lr (float) – The learning rate value, must be positive.

  • l1 (float) – l1 regularization strength, must be greater than or equal to zero.

  • l2 (float) – l2 regularization strength, must be greater than or equal to zero.

  • lr_power (float) – Learning rate power controls how the learning rate decreases during training, must be less than or equal to zero. Use fixed learning rate if lr_power is zero.

  • use_locking (bool, optional) – Use locks for the updating operation if true. Default: False.

Inputs:
  • var (Parameter) - The variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - The accumulation to be updated, must be same data type and shape as var.

  • linear (Parameter) - The linear coefficient to be updated, must be the same data type and shape as var.

  • grad (Tensor) - A tensor of the same type as var, and \(grad.shape[1:] = var.shape[1:]\) if the rank of var is greater than 1.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. The type must be int32 or int64 and \(indices.shape[0] = grad.shape[0]\).

Outputs:
  • var (Tensor) - Tensor, has the same shape and data type as var.

  • accum (Tensor) - Tensor, has the same shape and data type as accum.

  • linear (Tensor) - Tensor, has the same shape and data type as linear.

Raises:
  • TypeError – If lr, l1, l2 or lr_power is not a float.

  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, linear or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • RuntimeError – If the data type of all of inputs except indices conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SparseApplyFtrlNet(nn.Cell):
...     def __init__(self):
...         super(SparseApplyFtrlNet, self).__init__()
...         self.sparse_apply_ftrl = ops.SparseApplyFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.array([[0.2]]).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0.1]]).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.array([[0.6]]).astype(np.float32)), name="linear")
...
...     def construct(self, grad, indices):
...         out = self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
...         return out
...
>>> net = SparseApplyFtrlNet()
>>> grad = Tensor(np.array([[0.7]]).astype(np.float32))
>>> indices = Tensor(np.ones([1]), mindspore.int32)
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[1, 1], dtype=Float32, value=
[[2.00000003e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[1.00000001e-01]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[6.00000024e-01]]))
class tinyms.primitives.SparseApplyFtrlV2(lr, l1, l2, l2_shrinkage, lr_power, use_locking=False)[source]

The SparseApplyFtrlV2 interface is deprecated. Please use mindspore.ops.SparseApplyFtrl instead.

Supported Platforms:

Deprecated

class tinyms.primitives.SparseApplyProximalAdagrad(use_locking=False)[source]

Updates relevant entries according to the proximal adagrad algorithm. Compared with mindspore.ops.ApplyProximalAdagrad, an additional index tensor is input.

\[\begin{split}\begin{array}{ll} \\ accum += grad * grad \\ \text{prox_v} = var - lr * grad * \frac{1}{\sqrt{accum}} \\ var = \frac{sign(\text{prox_v})}{1 + lr * l2} * \max(\left| \text{prox_v} \right| - lr * l1, 0) \end{array}\end{split}\]

Inputs of var, accum and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:

use_locking (bool) – If true, the var and accum tensors will be protected from being updated. Default: False.

Inputs:
  • var (Parameter) - Variable tensor to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • accum (Parameter) - Variable tensor to be updated, has the same shape and dtype as var.

  • lr (Union[Number, Tensor]) - The learning rate value, must be a float number or a scalar tensor with float16 or float32 data type. It must be positive.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type. It must be non-negative.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be a float number or a scalar tensor with float16 or float32 data type. It must be non-negative.

  • grad (Tensor) - A tensor of the same type as var, and \(grad.shape[1:] = var.shape[1:]\) if the rank of var is greater than 1.

  • indices (Tensor) - A tensor of indices in the first dimension of var and accum. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64 and \(indices.shape[0] = grad.shape[0]\).

Outputs:

Tuple of 2 tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • accum (Tensor) - The same shape and data type as accum.

Raises:
  • TypeError – If use_locking is not a bool.

  • TypeError – If dtype of var, accum, lr, l1, l2 or grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If lr <= 0 or l1 < 0 or l2 < 0.

  • RuntimeError – If the data type of var, accum and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU

Examples

>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_proximal_adagrad = ops.SparseApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.array([[4.1, 7.2], [1.1, 3.0]], np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.array([[0, 0], [0, 0]], np.float32)), name="accum")
...         self.lr = 1.0
...         self.l1 = 1.0
...         self.l2 = 0.0
...     def construct(self, grad, indices):
...         out = self.sparse_apply_proximal_adagrad(self.var, self.accum, self.lr, self.l1,
...                                                  self.l2, grad, indices)
...         return out
...
>>> net = Net()
>>> grad = Tensor(np.array([[1, 1], [1, 1]], np.float32))
>>> indices = Tensor(np.array([0, 1], np.int32))
>>> output = net(grad, indices)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.09999990e+00,  5.19999981e+00],
 [ 0.00000000e+00,  1.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
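
As a sanity check, the proximal adagrad update above can be replayed in plain NumPy (a minimal sketch, assuming the formula as written); it reproduces the var output of the example:

>>> import numpy as np
>>> var = np.array([[4.1, 7.2], [1.1, 3.0]], np.float32)
>>> accum = np.zeros_like(var)
>>> grad = np.ones_like(var)
>>> lr, l1, l2 = 1.0, 1.0, 0.0
>>> accum += grad * grad
>>> prox_v = var - lr * grad / np.sqrt(accum)
>>> var = np.sign(prox_v) / (1 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0)
>>> print(var)
[[2.1 5.2]
 [0.  1. ]]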
class tinyms.primitives.SparseApplyRMSProp(rho, momentum, epsilon, use_locking=False)[source]

Update relevant entries according to the rmsprop algorithm.

\[\begin{split}\begin{array}{ll} \\ ms = rho * ms_{t-1} + (1 - rho) * grad * grad \\ mom = momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon) \\ var = var - mom \end{array}\end{split}\]

Inputs of var, ms, mom and grad comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • rho (float) – Decay rate. The value should be between 0 and 1, otherwise the behavior is undefined.

  • momentum (float) – Momentum. The value should be greater than or equal to 0, otherwise the behavior is undefined.

  • epsilon (float) – A small value added for numerical stability. The value should be greater than 0, otherwise the behavior is undefined.

  • use_locking (bool) – If True, updating of the var, ms, and mom tensors are protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Default: False.

Inputs:
  • var (Parameter) - Variable to be updated. The data type must be float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • ms (Parameter) - The mean square tensor ms to be updated. Must have the same shape and dtype as var.

  • mom (Parameter) - The momentum tensor mom to be updated. Must have the same shape and dtype as var.

  • lr ([Number, Tensor]) - Learning rate. Must be a scalar. With float16 or float32 data type.

  • grad (Tensor) - A tensor for gradient. Must have the same shape and dtype as var.

  • indices (Tensor) - A tensor of indices in the first dimension of var, ms and mom. If there are duplicates in indices, the behavior is undefined. Must be one of the following types: int32, int64 and indices.shape[0] = var.shape[0].

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • ms (Tensor) - The same shape and data type as ms.

  • mom (Tensor) - The same shape and data type as mom.

Raises:
  • TypeError – If var, ms or mom is not a Parameter.

  • TypeError – If grad or indices is not a Tensor.

  • TypeError – If dtype of var, ms, mom, lr, grad is neither float16 nor float32.

  • TypeError – If dtype of indices is neither int32 nor int64.

  • TypeError – If lr is neither a Number nor a Tensor.

  • TypeError – If use_locking is not a bool.

  • TypeError – If epsilon, rho or momentum is not a float.

  • ValueError – If the shape of ms, mom or grad is not the same as var.

  • ValueError – If the shape size of lr is not 0.

  • ValueError – If the shape of indices is not the same as the shape of the first dimension of var.

  • ValueError – If epsilon is less than or equal to 0.

  • ValueError – If momentum is less than 0.

  • ValueError – If rho is less than 0 or greater than 1.

  • ValueError – If dimension of var is less than 1.

  • RuntimeError – If the data type of var, ms, mom and grad conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> class SparseApplyRMSPropNet(nn.Cell):
...     def __init__(self, rho, momentum, epsilon, use_locking=False):
...         super(SparseApplyRMSPropNet, self).__init__()
...         self.sparse_apply_r_m_s_prop = P.SparseApplyRMSProp(rho, momentum, epsilon, use_locking)
...         self.var = Parameter(Tensor(np.array([[0.6, 0.3], [0.1, 0.5]]).astype(np.float32)), name="var")
...         self.ms = Parameter(Tensor(np.array([[0.2, 0.4], [0.1, 0.3]]).astype(np.float32)), name="ms")
...         self.mom = Parameter(Tensor(np.array([[0.3, 0.1], [0.3, 0.6]]).astype(np.float32)), name="mom")
...     def construct(self, lr, grad, indices):
...         out = self.sparse_apply_r_m_s_prop(self.var, self.ms, self.mom, lr, grad, indices)
...         return out
...
>>> rho = 0.2
>>> momentum = 0.01
>>> epsilon = 1e-6
>>> net = SparseApplyRMSPropNet(rho, momentum, epsilon)
>>> lr = 0.01
>>> grad = Tensor(np.array([[0.3, 0.7], [0.1, 0.8]]).astype(np.float32))
>>> indices = Tensor(np.array([0, 1], dtype=np.int32))
>>> out = net(lr, grad, indices)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 5.88035822e-01,  2.88811117e-01],
 [ 9.10239667e-02,  4.83422279e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.12000003e-01,  4.72000003e-01],
 [ 2.80000009e-02,  5.72000027e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.19641740e-02,  1.11888833e-02],
 [ 8.97603668e-03,  1.65777095e-02]]))
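
The rmsprop update above can likewise be replayed in NumPy; this minimal sketch reproduces the var output of the example up to float32 rounding:

>>> import numpy as np
>>> var = np.array([[0.6, 0.3], [0.1, 0.5]], np.float32)
>>> ms = np.array([[0.2, 0.4], [0.1, 0.3]], np.float32)
>>> mom = np.array([[0.3, 0.1], [0.3, 0.6]], np.float32)
>>> grad = np.array([[0.3, 0.7], [0.1, 0.8]], np.float32)
>>> rho, momentum, eps, lr = 0.2, 0.01, 1e-6, 0.01
>>> ms = rho * ms + (1 - rho) * grad * grad
>>> mom = momentum * mom + lr * grad / np.sqrt(ms + eps)
>>> var = var - mom
>>> print(np.round(var, 6))
[[0.588036 0.288811]
 [0.091024 0.483422]]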
class tinyms.primitives.SparseGatherV2[source]

Returns a slice of input tensor based on the specified indices and axis.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor, must be in the range [0, input_params.shape[axis]).

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Raises:

TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> out = ops.SparseGatherV2()(input_params, input_indices, axis)
>>> print(out)
[[2. 7.]
 [4. 54.]
 [2. 55.]]
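
For reference, the same dense gather can be expressed with np.take (a NumPy sketch of the equivalent indexing, ignoring the sparse-gradient behavior of this operator):

>>> import numpy as np
>>> params = np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]], np.float32)
>>> print(np.take(params, [1, 2], axis=1))
[[ 2.  7.]
 [ 4. 54.]
 [ 2. 55.]]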
class tinyms.primitives.SparseSlice[source]

Slices a SparseTensor based on the start and size.

Inputs:
  • indices (Tensor) - A 2-D Tensor of shape \((N, R)\), the indices of the SparseTensor. Support int64; each element value should be a non-negative int number.

  • values (Tensor) - A 1D Tensor, represents the value corresponding to the position in the indices. The shape should be \((N,)\).

  • shape (Tensor) - A 1D Tensor of type int64 which specifies the shape of sparsetensor, represent sparse tensor shape. The shape should be \((R,)\).

  • start (Tensor) - A 1D Tensor of type int64, represents the start of the slice. The shape should be \((R,)\).

  • size (Tensor) - A 1D Tensor of type int64, represents the size of the slice. The shape should be \((R,)\).

Outputs:

A SparseTensor object resulting from slicing.

  • y_indices (Tensor) - A Tensor of type int64.

  • y_values (Tensor) - A Tensor. Has the same type as values.

  • y_shape (Tensor) - A Tensor of type int64. Has the same size as size.

Raises:
  • TypeError – If the dtype of indices, shape, start, size are not int64.

  • ValueError – If indices is not 2-D tensor.

  • ValueError – If values, start, shape or size is not a 1-D tensor.

  • ValueError – If the number of indices does not correspond to the number of values.

  • ValueError – If indices.shape[1] does not correspond to shape.

  • ValueError – If the shape of shape does not correspond to start.

  • ValueError – If the shape of shape does not correspond to size.

Supported Platforms:

Examples

>>> indices = Tensor(np.array([[0, 1], [1, 2], [1, 3], [2, 2]]).astype(np.int64))
>>> values = Tensor(np.array([1, 2, 3, 4]).astype(np.int64))
>>> shape = Tensor(np.array([3, 4]).astype(np.int64))
>>> start = Tensor(np.array([0, 1]).astype(np.int64))
>>> size = Tensor(np.array([2, 3]).astype(np.int64))
>>> sparseslice = ops.SparseSlice()
>>> output = sparseslice(indices, values, shape, start, size)
>>> print(output[0])
[[0 0]
 [1 1]
 [1 2]]
>>> print(output[1])
[1 2 3]
>>> print(output[2])
[2 3]
class tinyms.primitives.SparseSoftmaxCrossEntropyWithLogits(is_grad=False)[source]

Computes the softmax cross-entropy value between logits and sparse encoding labels.

Sets input logits as X, input label as Y, output as loss. Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(X_{ij})}{\sum_{k = 0}^{C-1}\exp(X_{ik})} \\ loss_{ij} = \begin{cases} -ln(p_{ij}), &j = y_i \cr 0, & j \neq y_i \end{cases} \\ loss = \frac{1}{N}\sum_{ij} loss_{ij} \end{array}\end{split}\]
Parameters:

is_grad (bool) – If true, this operation returns the computed gradient. Default: False.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.

  • labels (Tensor) - Ground truth labels, with shape \((N)\). Data type must be int32 or int64.

Outputs:

Tensor, if is_grad is False, the output tensor is the value of loss which is a scalar tensor; if is_grad is True, the output tensor is the gradient of input with the same shape as logits.

Raises:
  • TypeError – If is_grad is not a bool.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • TypeError – If dtype of labels is neither int32 nor int64.

  • ValueError – If \(logits.shape[0] != labels.shape[0]\).

Supported Platforms:

GPU CPU

Examples

>>> logits = Tensor([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([0, 1], mindspore.int32)
>>> sparse_softmax_cross = ops.SparseSoftmaxCrossEntropyWithLogits()
>>> loss = sparse_softmax_cross(logits, labels)
>>> print(loss)
3.4878292
>>> sparse_softmax_cross_grad = ops.SparseSoftmaxCrossEntropyWithLogits(is_grad=True)
>>> loss_grad = sparse_softmax_cross_grad(logits, labels)
>>> print(loss_grad)
[[-0.48415753  0.04306427  0.00582811  0.11706084  0.3182043 ]
 [ 0.04007946 -0.4852556   0.04007946  0.2961494   0.10894729]]
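
The example loss can be verified with a plain NumPy log-softmax; note that the returned loss is the mean over the N samples, consistent with the formula above (a minimal sketch):

>>> import numpy as np
>>> logits = np.array([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], np.float64)
>>> labels = np.array([0, 1])
>>> shifted = logits - logits.max(axis=1, keepdims=True)   # stabilize the softmax
>>> log_p = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
>>> loss = -log_p[np.arange(len(labels)), labels].mean()
>>> print(round(float(loss), 4))
3.4878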
class tinyms.primitives.SparseTensorDenseAdd[source]

Add a sparse tensor and a dense tensor to get a dense tensor.

Inputs:
  • x1_indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, ndim)\).

  • x1_values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. The shape should be \((n,)\).

  • x1_shape (Tensor) - A 1-D Tensor with positive int values, specifying the shape of the sparse tensor; it should have ndim elements, that is, shape \((ndim,)\).

  • x2 (Tensor) - A dense Tensor, the dtype is same as values.

Outputs:

Tensor, add result of sparse tensor and dense tensor. The dtype is same as values, and the shape is x1_shape.

Raises:
  • TypeError – If the dtype of x1_indices and x1_shape is neither int32 nor int64.

  • ValueError – If x1_shape, the shape of x1_indices, the shape of x1_values and the shape of x2 don't meet the parameter description.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> from mindspore.common import dtype as mstype
>>> x1_indices = Tensor([[0, 0], [0, 1]], dtype=mstype.int64)
>>> x1_values = Tensor([1, 1], dtype=mstype.float32)
>>> x1_shape = Tensor([3, 3], dtype=mstype.int64)
>>> x2= Tensor([[1, 1, 1], [1, 1, 1], [1, 1, 1]], dtype=mstype.float32)
>>> sparse_tensor_dense_add = ops.SparseTensorDenseAdd()
>>> out = sparse_tensor_dense_add(x1_indices, x1_values, x1_shape, x2)
>>> print(out)
[[2. 2. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
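
The same result can be sketched in NumPy by scattering the sparse values into a copy of the dense tensor; np.add.at is used here because it also accumulates correctly if x1_indices contains duplicates (a minimal sketch):

>>> import numpy as np
>>> x2 = np.ones((3, 3), np.float32)
>>> out = x2.copy()
>>> np.add.at(out, ([0, 0], [0, 1]), np.array([1, 1], np.float32))
>>> print(out)
[[2. 2. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]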
class tinyms.primitives.SparseTensorDenseMatmul(adjoint_st=False, adjoint_dt=False)[source]

Multiplies sparse matrix A by dense matrix B. The rank of sparse matrix and dense matrix must be equal to 2.

Parameters:
  • adjoint_st (bool) – If true, sparse tensor is transposed before multiplication. Default: False.

  • adjoint_dt (bool) – If true, dense tensor is transposed before multiplication. Default: False.

Inputs:
  • indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, 2)\).

  • values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. Support float16, float32, float64, int32, int64, complex64, complex128. The shape should be \((n,)\).

  • sparse_shape (tuple(int) or Tensor) - A positive int tuple or tensor specifying the shape of the sparse tensor; it should have 2 elements, representing the sparse tensor shape \((N, C)\). Only a constant value is allowed when sparse_shape is a tensor.

  • dense (Tensor) - A 2-D Tensor, the dtype is same as values. If adjoint_st is False and adjoint_dt is False, the shape must be \((C, M)\). If adjoint_st is False and adjoint_dt is True, the shape must be \((M, C)\). If adjoint_st is True and adjoint_dt is False, the shape must be \((N, M)\). If adjoint_st is True and adjoint_dt is True, the shape must be \((M, N)\).

Outputs:

Tensor, the dtype is the same as values. If adjoint_st is False, the shape is \((N, M)\). If adjoint_st is True, the shape is \((C, M)\).

Raises:
  • TypeError – If the type of adjoint_st or adjoint_dt is not bool, or the dtype of indices, dtype of values and dtype of dense don’t meet the parameter description.

  • ValueError – If sparse_shape, shape of indices, shape of values, and shape of dense don’t meet the parameter description.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as ops
>>> from mindspore.common import dtype as mstype
>>> indices = Tensor([[0, 1], [1, 2]], dtype=mindspore.int32)
>>> values = Tensor([1, 2], dtype=mindspore.float32)
>>> sparse_shape = (3, 4)
>>> dense = Tensor([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=mindspore.float32)
>>> sparse_dense_matmul = ops.SparseTensorDenseMatmul()
>>> out = sparse_dense_matmul(indices, values, sparse_shape, dense)
>>> print(out)
[[2. 2.]
 [6. 6.]
 [0. 0.]]
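
Conceptually, the operator densifies the sparse operand and multiplies; the following NumPy sketch (ignoring the adjoint options) reproduces the example output:

>>> import numpy as np
>>> indices = np.array([[0, 1], [1, 2]])
>>> values = np.array([1, 2], np.float32)
>>> sparse = np.zeros((3, 4), np.float32)
>>> sparse[indices[:, 0], indices[:, 1]] = values
>>> dense = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], np.float32)
>>> print(sparse @ dense)
[[2. 2.]
 [6. 6.]
 [0. 0.]]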
class tinyms.primitives.SparseToDense[source]

Converts a sparse representation into a dense tensor.

Inputs:
  • indices (Tensor) - A 2-D Tensor, represents the position of the element in the sparse tensor. Support int32, int64, each element value should be a non-negative int number. The shape is \((n, 2)\).

  • values (Tensor) - A 1-D Tensor, represents the value corresponding to the position in the indices. The shape should be \((n,)\).

  • sparse_shape (tuple(int)) - A positive int tuple which specifies the shape of sparse tensor, should have 2 elements, represent sparse tensor shape is \((N, C)\).

Outputs:

Tensor, converted from sparse tensor. The dtype is same as values, and the shape is sparse_shape.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If sparse_shape, shape of indices and shape of values don’t meet the parameter description.

Supported Platforms:

GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=mindspore.float32)
>>> sparse_shape = (3, 4)
>>> sparse_to_dense = ops.SparseToDense()
>>> out = sparse_to_dense(indices, values, sparse_shape)
>>> print(out)
[[0. 1. 0. 0.]
 [0. 0. 2. 0.]
 [0. 0. 0. 0.]]
class tinyms.primitives.Split(axis=0, output_num=1)[source]

Splits the input tensor into output_num tensors along the given axis.

Refer to mindspore.ops.split() for more details.

Parameters:
  • axis (int) – Index of the split position. Default: 0.

  • output_num (int) – The number of output tensors. Must be positive int. Default: 1.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], the shape of each output tensor is the same, which is \((y_1, y_2, ..., y_S)\). And the data type is the same with input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> split = ops.Split(1, 2)
>>> x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)
>>> print(x)
[[1 1 1 1]
 [2 2 2 2]]
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]), Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [2, 2]]))
>>> split = ops.Split(1, 4)
>>> output = split(x)
>>> print(output)
(Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [2]]))
class tinyms.primitives.SplitV(size_splits, split_dim, num_split)[source]

Splits the input tensor into num_split tensors along the given dimension.

The input_x tensor will be split into sub-tensors with individual shapes given by size_splits along the split dimension. This requires that input_x.shape[split_dim] is equal to the sum of size_splits.

The shape of input_x is \((x_1, x_2, ..., x_M, ..., x_R)\) whose rank is R. Set the given split_dim as M, and \(-R \le M < R\). Set the given num_split as N, the given size_splits as \((x_{m_1}, x_{m_2}, ..., x_{m_N})\), \(x_M=\sum_{i=1}^Nx_{m_i}\). The output is a list of tensor objects, for the \(i\)-th tensor, it has the shape of \((x_1, x_2, ..., x_{m_i}, ..., x_R)\). \(x_{m_i}\) is the \(M\)-th dimension of the \(i\)-th tensor. Then, the shape of the output tensor is

\[((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))\]
Parameters:
  • size_splits (Union[tuple, list]) – A tuple or list of sizes of each output tensor along the split dimension, and the sum of these sizes should equal to the dimension of the input tensor along split_dim. The list may also contain a single instance of the value -1, which indicates that the size of that dimension should be inferred.

  • split_dim (int) – An int indicates the dimension along which to split. Must be in the range [-len(input_x.shape), len(input_x.shape)).

  • num_split (int) – The number of output tensors. Must be positive int.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ...,x_M ..., x_R)\).

Outputs:

Tensor, a list of num_split Tensor objects with the shape \(((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))\), \(x_M=\sum_{i=1}^Nx_{m_i}\). The data type is the same with input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If size_splits is not a tuple or a list.

  • TypeError – If element of size_splits is not an int.

  • TypeError – If split_dim or num_split is not an int.

  • ValueError – If rank of the size_splits is not equal to num_split.

  • ValueError – If the sum of size_splits is not equal to the dimension of input_x along split_dim.

  • ValueError – If split_dim is out of the range [-len(input_x.shape), len(input_x.shape)).

  • ValueError – If the num_split is less than or equal to 0.

Supported Platforms:

Ascend

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
>>> op = ops.SplitV(size_splits=[1, -1], split_dim=1, num_split=2)
>>> output = op(input_x)
>>> print(output)
(Tensor(shape=[3, 1], dtype=Int32, value=
[[1],
 [4],
 [7]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[2, 3],
 [5, 6],
 [8, 9]]))
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
>>> op = ops.SplitV(size_splits=[2, 1], split_dim=0, num_split=2)
>>> output = op(input_x)
>>> print(output)
(Tensor(shape=[2, 3], dtype=Int32, value=
[[1, 2, 3],
 [4, 5, 6]]), Tensor(shape=[1, 3], dtype=Int32, value=
[[7, 8, 9]]))
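
For reference, the same partition can be expressed with np.split by converting size_splits into cumulative split points (a NumPy sketch of the second example; the -1 inference of the first example would need to be resolved beforehand):

>>> import numpy as np
>>> x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> size_splits = [2, 1]
>>> parts = np.split(x, np.cumsum(size_splits)[:-1], axis=0)
>>> for p in parts:
...     print(p)
[[1 2 3]
 [4 5 6]]
[[7 8 9]]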
class tinyms.primitives.Sqrt[source]

Returns square root of a tensor element-wise.

Note

If some elements of the input are negative, the corresponding outputs will be NaN.

\[out_{i} = \sqrt{x_{i}}\]
Inputs:
  • x (Tensor) - The input tensor with a dtype of Number, the shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, has the same shape and data type as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> sqrt = ops.Sqrt()
>>> output = sqrt(x)
>>> print(output)
[1. 2. 3.]
class tinyms.primitives.Square[source]

Returns square of a tensor element-wise.

\[out_{i} = (x_{i})^2\]
Inputs:
  • x (Tensor) - The input tensor with a dtype of Number, its rank must be in [0, 7] inclusive.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> square = ops.Square()
>>> output = square(x)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.SquareSumAll[source]

Returns the square sum of a tensor element-wise.

\[\begin{split}\begin{cases} out_{x} = \sum_{i=0}^{N-1} (x_{i})^2 \\ out_{y} = \sum_{i=0}^{N-1} (y_{i})^2 \end{cases}\end{split}\]

Note

SquareSumAll only supports float16 and float32 data type.

Inputs:
  • x (Tensor) - The input tensor. The data type must be float16 or float32. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) - The input tensor has the same type and shape as the x.

Outputs:
  • output_x (Tensor) - The same type as the x.

  • output_y (Tensor) - The same type as the x.

Raises:
  • TypeError – If x or y is not a Tensor.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0, 0, 2, 0]), mindspore.float32)
>>> y = Tensor(np.array([0, 0, 2, 4]), mindspore.float32)
>>> square_sum_all = ops.SquareSumAll()
>>> output = square_sum_all(x, y)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4),
 Tensor(shape=[], dtype=Float32, value= 20))
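
The example output is just the per-input sum of squares, as a quick NumPy check shows:

>>> import numpy as np
>>> x = np.array([0, 0, 2, 0], np.float32)
>>> y = np.array([0, 0, 2, 4], np.float32)
>>> print(np.sum(x ** 2), np.sum(y ** 2))
4.0 20.0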
class tinyms.primitives.SquaredDifference[source]

Subtracts the second input tensor from the first input tensor element-wise and returns square of it.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = (x_{i} - y_{i}) * (x_{i} - y_{i}) = (x_{i} - y_{i})^2\]
Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If x or y is not a Number, a bool or a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 6.0]), mindspore.float32)
>>> squared_difference = ops.SquaredDifference()
>>> output = squared_difference(x, y)
>>> print(output)
[1. 4. 9.]
class tinyms.primitives.Squeeze(axis=())[source]

Returns the tensor after removing dimensions of size 1 at the specified axis.

Refer to mindspore.ops.squeeze() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> squeeze = ops.Squeeze(2)
>>> output = squeeze(input_x)
>>> print(output)
[[1. 1.]
 [1. 1.]
 [1. 1.]]
class tinyms.primitives.Stack(axis=0)[source]

Stacks a list of tensors along the specified axis.

Refer to mindspore.ops.stack() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> stack = ops.Stack()
>>> output = stack([data1, data2])
>>> print(output)
[[0. 1.]
 [2. 3.]]
class tinyms.primitives.StandardLaplace(seed=0, seed2=0)[source]

Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). It is defined as:

\[\text{f}(x) = \frac{1}{2}\exp(-|x|)\]
Parameters:
  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • shape (Union[tuple, Tensor]) - The shape of random tensor to be generated. Only constant value is allowed when the input type is tuple. And the operator supports dynamic shape only when the input type is Tensor.

Outputs:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If seed or seed2 is not a non-negative int.

  • ValueError – If shape is a tuple containing non-positive items.

  • ValueError – If shape is a Tensor, and the rank of the Tensor is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (4, 16)
>>> stdlaplace = ops.StandardLaplace(seed=2)
>>> output = stdlaplace(shape)
>>> result = output.shape
>>> print(result)
(4, 16)
class tinyms.primitives.StandardNormal(seed=0, seed2=0)[source]

Generates random numbers according to the standard Normal (or Gaussian) random number distribution.

Refer to mindspore.ops.standard_normal() for more details.

Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. A second seed to avoid seed collision. Default: 0.

Inputs:
  • shape (tuple) - The shape of random tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape is the same as the input shape. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> shape = (3, 4)
>>> stdnormal = ops.StandardNormal(seed=2)
>>> output = stdnormal(shape)
>>> print(output)
[[-1.3031056   0.64198005 -0.65207404 -1.767485  ]
 [-0.91792876  0.6508565  -0.9098478  -0.14092612]
 [ 0.7806437   1.1585592   1.9676613  -0.00440959]]
class tinyms.primitives.StopGradient[source]

StopGradient is used for eliminating the effect of a value on the gradient, such as truncating the gradient propagation from an output of a function.

Refer to mindspore.ops.stop_gradient() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> def net(x, y):
...     out1 = ops.MatMul()(x, y)
...     out2 = ops.MatMul()(x, y)
...     out2 = ops.StopGradient()(out2)
...     return out1, out2
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> grad_fn = ops.grad(net)
>>> output = grad_fn(x, y)
>>> print(output)
[[1.4100001 1.6       6.5999994]
 [1.4100001 1.6       6.5999994]]
class tinyms.primitives.StridedSlice(begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0)[source]

Extracts a strided slice of a tensor.

Refer to mindspore.ops.strided_slice() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> #         [[[1. 1. 1.]
>>> #           [2. 2. 2.]]
>>> #
>>> #          [[3. 3. 3.]
>>> #           [4. 4. 4.]]
>>> #
>>> #          [[5. 5. 5.]
>>> #           [6. 6. 6.]]]
>>> # In order to visually view the multi-dimensional array, write the above as follows:
>>> #         [
>>> #             [
>>> #                 [1,1,1]
>>> #                 [2,2,2]
>>> #             ]
>>> #             [
>>> #                 [3,3,3]
>>> #                 [4,4,4]
>>> #             ]
>>> #             [
>>> #                 [5,5,5]
>>> #                 [6,6,6]
>>> #             ]
>>> #         ]
>>> strided_slice = ops.StridedSlice()
>>> output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
>>> # Take this " output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1)) " as an example,
>>> # start = [1, 0, 2] , end = [3, 1, 3], stride = [1, 1, 1], Find a segment of (start, end),
>>> # note that end is an open interval
>>> # To facilitate understanding, this operator can be divided into three steps:
>>> # Step 1: Calculation of the first dimension:
>>> # start = 1, end = 3, stride = 1, so rows 1 and 2 are taken, which gives the output at this step.
>>> # output_1th =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #         [4,4,4]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #         [6,6,6]
>>> #     ]
>>> # ]
>>> # Step 2: Calculation of the second dimension
>>> # In the 2nd dimension, start = 0, end = 1, stride = 1, so only row 0 is taken, giving the output at this step.
>>> # output_2nd =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #     ]
>>> # ]
>>> # Step 3: Calculation of the third dimension
>>> # In the 3rd dimension, start = 2, end = 3, stride = 1, so only column 2 is taken,
>>> # and you get the final output at this step.
>>> # output_3rd =
>>> # [
>>> #     [
>>> #         [3]
>>> #     ]
>>> #     [
>>> #         [5]
>>> #     ]
>>> # ]
>>> # The final output after finishing is:
>>> print(output)
[[[3.]]
 [[5.]]]
>>> # another example like :
>>> output = strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> print(output)
[[[3. 3. 3.]]]
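
The walkthrough above corresponds to ordinary NumPy basic slicing when all masks are 0 (a minimal sketch of both examples):

>>> import numpy as np
>>> a = np.array([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...               [[5, 5, 5], [6, 6, 6]]], np.float32)
>>> print(a[1:3, 0:1, 2:3].tolist())   # strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
[[[3.0]], [[5.0]]]
>>> print(a[1:2, 0:1, 0:3].tolist())   # strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
[[[3.0, 3.0, 3.0]]]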
class tinyms.primitives.Sub[source]

Subtracts the second input tensor from the first input tensor element-wise.

Refer to mindspore.ops.sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> sub = ops.Sub()
>>> output = sub(x, y)
>>> print(output)
[-3 -3 -3]
class tinyms.primitives.SubAndFilter[source]

Dynamic kernel, which subtracts an offset from input_x and returns the elements within the range [0, max_num).

Inputs:
  • input_x (Tensor) - Input tensor.

  • max_num (int) - The maximum allowed value of an element after the offset is subtracted.

  • offset (int) - Specifies the offset value of this input_x.

Outputs:

tuple(Tensor), tuple of 2 tensors, filter_res and filter_idx.

  • filter_res (Tensor) - The result of input_x minus offset, keeping only the elements within the range [0, max_num).

  • filter_idx (Tensor) - A tensor containing the indices of the elements in the input corresponding to the output tensor.

Supported Platforms:

CPU

Examples

>>> x = Tensor(np.array([1, 3, 5, 8, 9, 16]), mindspore.int32)
>>> max_num = 10
>>> offset = 5
>>> output = ops.SubAndFilter()(x, max_num, offset)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [0, 3, 4]),
 Tensor(shape=[3], dtype=Int32, value= [2, 3, 4]))
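
The kernel's behavior can be sketched in NumPy with a subtract-and-mask (a minimal sketch reproducing the example):

>>> import numpy as np
>>> x = np.array([1, 3, 5, 8, 9, 16], np.int32)
>>> max_num, offset = 10, 5
>>> y = x - offset
>>> mask = (y >= 0) & (y < max_num)
>>> print(y[mask], np.nonzero(mask)[0])
[0 3 4] [2 3 4]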
class tinyms.primitives.Svd(full_matrices=False, compute_uv=True)[source]

Computes the singular value decompositions of one or more matrices.

Refer to mindspore.ops.svd() for more details.

Parameters:
  • full_matrices (bool, optional) – If true, compute full-sized \(U\) and \(V\). If false, compute only the leading P singular vectors, where P is the minimum of M and N. Default: False.

  • compute_uv (bool, optional) – If true, compute the left and right singular vectors. If false, compute only the singular values. Default: True.

Inputs:
  • input (Tensor) - Tensor of the matrices to be decomposed. The shape should be \((*, M, N)\), the supported dtype are float32 and float64.

Outputs:
  • s (Tensor) - Singular values. The shape is \((*, P)\).

  • u (Tensor) - Left singular vectors. If compute_uv is False, u will be zero value. The shape is \((*, M, P)\). If full_matrices is True, the shape will be \((*, M, M)\).

  • v (Tensor) - Right singular vectors. If compute_uv is False, v will be zero value. The shape is \((*, N, P)\). If full_matrices is True, the shape will be \((*, N, N)\).

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, set_context
>>> from mindspore import ops
>>> set_context(device_target="CPU")
>>> svd = ops.Svd(full_matrices=True, compute_uv=True)
>>> a = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = svd(a)
>>> print(s)
[7.0652843 1.040081 ]
>>> print(u)
[[ 0.30821905 -0.48819482 0.81649697]
 [-0.90613353  0.11070572 0.40824813]
 [ 0.2896955   0.8656849  0.4082479 ]]
>>> print(v)
[[ 0.63863593 0.769509  ]
 [ 0.769509  -0.63863593]]
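
As a cross-check of the factorization, NumPy's own decomposition of the example matrix reconstructs it from the leading P singular vectors; note that np.linalg.svd returns \(V^T\) rather than \(V\), and signs of the singular vectors may differ from ops.Svd (a minimal sketch):

>>> import numpy as np
>>> a = np.array([[1, 2], [-4, -5], [2, 1]], np.float32)
>>> u, s, vh = np.linalg.svd(a, full_matrices=True)
>>> print(np.allclose(u[:, :2] @ np.diag(s) @ vh, a, atol=1e-5))
True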
class tinyms.primitives.Tan[source]

Computes tangent of x element-wise.

Refer to mindspore.ops.tan() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tan = ops.Tan()
>>> x = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)
>>> output = tan(x)
>>> print(output)
[-1.5574081 0. 1.5574081]
class tinyms.primitives.Tanh[source]

Computes hyperbolic tangent of input element-wise.

Refer to mindspore.ops.tanh() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> tanh = ops.Tanh()
>>> output = tanh(input_x)
>>> print(output)
[0.7615941 0.9640276 0.9950547 0.9993293 0.9999092]
class tinyms.primitives.TensorAdd[source]

Same as operator Add. TensorAdd will be deprecated in the future. Please use Add instead.

class tinyms.primitives.TensorScatterAdd[source]

Creates a new tensor by adding the values from the positions in input_x indicated by indices, with values from updates. When multiple values are given for the same index, the updated result will be the sum of all values. This operation is almost equivalent to using mindspore.ops.ScatterNdAdd, except that the updates are applied on output Tensor instead of input Parameter.

Refer to mindspore.ops.tensor_scatter_add() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterAdd()
>>> # 5, Perform the addition operation for the first time:
>>> #      first_input_x = input_x[0][0] + updates[0] = [[0.9, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the addition operation for the second time:
>>> #      second_input_x = first_input_x[0][0] + updates[1] = [[3.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 3.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterDiv[source]

Creates a new tensor by dividing the values at the positions in input_x indicated by indices by the values from updates. When multiple values are provided for the same index, the result of the update will be to divide by these values successively. Note that the updates are applied on the output Tensor instead of the input Parameter.

Refer to mindspore.ops.tensor_scatter_div() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.0]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterDiv()
>>> # 5, Perform the division operation for the first time:
>>> #      first_input_x = input_x[0][0] / updates[0] = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the division operation for the second time:
>>> #      second_input_x = first_input_x[0][0] / updates[1] = [[-0.05, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-0.05  0.3  3.6  ]
 [ 0.4   0.5  -3.2 ]]
class tinyms.primitives.TensorScatterElements(axis=0, reduction='none')[source]

Updates the value of the input Tensor through specified reduction operation.

Refer to mindspore.ops.tensor_scatter_elements() for more details.

Warning

If there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

Supported Platforms:

Ascend GPU CPU

Examples

>>> op = ops.TensorScatterElements(0, "none")
>>> data = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> indices = Tensor(np.array([[1, 0, 2], [0, 2, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[0, 0, 0], [0, 0, 0]]), mindspore.float32)
>>> output = op(data, indices, updates)
>>> print(output)
[[ 0.0  0.0  3.0]
 [ 0.0  5.0  0.0]
 [ 7.0  0.0  0.0]]
>>> op = ops.TensorScatterElements(1, "add")
>>> data = Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.float32)
>>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
>>> updates = Tensor(np.array([[8, 8]]), mindspore.float32)
>>> output = op(data, indices, updates)
>>> print(output)
[[ 1  2  11  4  13]]
class tinyms.primitives.TensorScatterMax[source]

Creates a new tensor by comparing the values at the positions in x indicated by indices with the values in updates; the value at each index ends up being the largest of the two.

Refer to mindspore.ops.tensor_scatter_max() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMax()
>>> # 5, Perform the max operation for the first time:
>>> #      first_input_x = Max(input_x[0][0], updates[0]) = [[1.0, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the max operation for the second time:
>>> #      second_input_x = Max(first_input_x[0][0], updates[1]) = [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 2.2  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterMin[source]

Creates a new tensor by comparing the values at the positions in input_x indicated by indices with the values in updates; the value at each index ends up being the smallest of the two.

Refer to mindspore.ops.tensor_scatter_min() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMin()
>>> # 5, Perform the min operation for the first time:
>>> #      first_input_x = Min(input_x[0][0], updates[0]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the min operation for the second time:
>>> #      second_input_x = Min(first_input_x[0][0], updates[1]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ -0.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
class tinyms.primitives.TensorScatterMul[source]

Creates a new tensor by multiplying the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the result of the update will be to multiply these values respectively. The updates are applied on output Tensor instead of input Parameter.

Refer to mindspore.ops.tensor_scatter_mul() for more details.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterMul()
>>> # 5, Perform the multiply operation for the first time:
>>> #      first_input_x = input_x[0][0] * updates[0] = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the multiply operation for the second time:
>>> #      second_input_x = first_input_x[0][0] * updates[1] = [[-0.22, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-0.22  0.3   3.6  ]
 [ 0.4   0.5   -3.2 ]]
class tinyms.primitives.TensorScatterSub[source]

Creates a new tensor by subtracting the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the result of the update will be to subtract these values respectively. This operation is almost equivalent to using mindspore.ops.ScatterNdSub, except that the updates are applied on the output Tensor instead of the input Parameter. Refer to mindspore.ops.tensor_scatter_sub() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> op = ops.TensorScatterSub()
>>> # 5, Perform the subtract operation for the first time:
>>> #      first_input_x = input_x[0][0] - updates[0] = [[-1.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the subtract operation for the second time:
>>> #      second_input_x = first_input_x[0][0] - updates[1] = [[-3.3, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = op(input_x, indices, updates)
>>> print(output)
[[-3.3000002  0.3        3.6      ]
 [ 0.4        0.5       -3.2      ]]
class tinyms.primitives.TensorScatterUpdate[source]

Creates a new tensor by updating the positions in input_x indicated by indices, with values from update. This operation is almost equivalent to using mindspore.ops.ScatterNdUpdate , except that the updates are applied on input_x instead of a zero tensor.

indices must have rank at least 2; the last axis is the depth of each index vector. For each index vector, there must be a corresponding value in update. If the depth of each index tensor matches the rank of input_x, then each index vector corresponds to a scalar in input_x and each update updates a scalar. If the depth of each index tensor is less than the rank of input_x, then each index vector corresponds to a slice in input_x, and each update updates a slice.

The order in which updates are applied is nondeterministic, meaning that if there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

Inputs:
  • input_x (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1]. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions. The data type is Number.

  • indices (Tensor) - The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • update (Tensor) - The tensor to update the input tensor, has the same type as input, and \(update.shape = indices.shape[:-1]+input_x.shape[indices.shape[-1]:]\)

Outputs:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • ValueError – If the shape of update does not match input_x and indices as described above.

  • RuntimeError – If a value in indices is out of range for input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.TensorScatterUpdate()
>>> output = op(input_x, indices, update)
>>> print(output)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]
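
Because the updates are applied to a copy rather than in place, the operator behaves like NumPy fancy-index assignment on a copied array when the index depth equals the rank of input_x (a minimal sketch reproducing the example):

>>> import numpy as np
>>> input_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], np.float32)
>>> indices = np.array([[0, 0], [1, 1]])
>>> update = np.array([1.0, 2.2], np.float32)
>>> out = input_x.copy()                 # updates land on a copy, not in place
>>> out[indices[:, 0], indices[:, 1]] = update
>>> print(out)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]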
class tinyms.primitives.TensorShape[source]

Returns the shape of the input tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = ops.TensorShape()
>>> output = shape(input_x)
>>> print(output)
[3 2 1]
class tinyms.primitives.TensorSummary[source]

This operator will put a tensor to a summary file with protocol buffer format. It must be used with SummaryRecord or SummaryCollector, which specify the directory of the summary file. The summary file can be loaded and shown by MindInsight, see MindInsight documents for details.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, set_context
>>>
>>>
>>> class SummaryDemo(nn.Cell):
...     def __init__(self,):
...         super(SummaryDemo, self).__init__()
...         self.summary = ops.TensorSummary()
...         self.add = ops.Add()
...
...     def construct(self, x, y):
...         x = self.add(x, y)
...         name = "x"
...         self.summary(name, x)
...         return x
>>> set_context(mode=mindspore.GRAPH_MODE)
>>> summary = SummaryDemo()(Tensor([[1]]), Tensor([[2]]))
>>> print(summary)
[[3]]
class tinyms.primitives.Tile[source]

Replicates an input tensor with given multiples times.

Refer to mindspore.ops.tile() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tile = ops.Tile()
>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> output = tile(input_x, multiples)
>>> print(output)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
>>> multiples = (2, 3, 2)
>>> output = tile(input_x, multiples)
>>> print(output)
[[[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]
 [[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]]
class tinyms.primitives.TopK(sorted=True)[source]

Finds values and indices of the k largest entries along the last dimension.

Warning

  • If sorted is set to False, the aicpu operator will be used, and performance may be reduced. In addition, due to different memory layout and traversal methods on different platforms, the display order of calculation results may be inconsistent when sorted is False.

If input_x is a one-dimensional Tensor, finds the k largest entries in the Tensor, and outputs their values and indices as Tensors. values[k] is the k-th largest item in input_x, and its index is indices[k].

For a multi-dimensional matrix, calculates the k largest entries in each row (the corresponding vector along the last dimension), therefore:

\[values.shape = indices.shape = input.shape[:-1] + [k].\]

If the two compared elements are the same, the one with the smaller index value is returned first.

Parameters:

sorted (bool, optional) – If True, the obtained elements will be sorted by the values in descending order. If False, the obtained elements will not be sorted. Default: True.

Inputs:
  • input_x (Tensor) - Input to be computed, data type must be float16, float32 or int32 on CPU, and float16 or float32 on GPU.

  • k (int) - The number of top elements to be computed along the last dimension, constant input is needed.

Outputs:

A tuple consisting of values and indices.

  • values (Tensor) - The k largest elements in each slice of the last dimension.

  • indices (Tensor) - The indices of values within the last dimension of input.

Raises:
  • TypeError – If sorted is not a bool.

  • TypeError – If input_x is not a Tensor.

  • TypeError – If k is not an int.

  • TypeError – If dtype of input_x is not one of the following: float16, float32 or int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import mindspore
>>> input_x = Tensor([1, 2, 3, 4, 5], mindspore.float16)
>>> k = 3
>>> values, indices = ops.TopK(sorted=True)(input_x, k)
>>> print((values, indices))
(Tensor(shape=[3], dtype=Float16, value= [ 5.0000e+00,  4.0000e+00,  3.0000e+00]), Tensor(shape=[3],
  dtype=Int32, value= [4, 3, 2]))
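
The shape rule above can be checked on a 2-D input, where TopK operates row-wise; a small sketch (values follow from sorting each row in descending order):

>>> input_x = Tensor(np.array([[5, 2, 8], [1, 9, 3]]), mindspore.float32)
>>> values, indices = ops.TopK(sorted=True)(input_x, 2)
>>> print(values.shape, indices.shape)
(2, 2) (2, 2)
>>> print(values)
[[8. 5.]
 [9. 3.]]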
class tinyms.primitives.Trace[source]

Returns a new tensor containing the trace of the input, i.e. the sum of the elements on its main diagonal.

Note

The input must be a matrix; complex numbers are not supported at present.

Warning

This is an experimental API that is subject to change or deletion.

Inputs:
  • x (Tensor) - A matrix to be calculated. The matrix must be two dimensional.

Outputs:

Tensor, a 0-D Tensor with one element, with the same data type as the input x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> trace = ops.Trace()
>>> output = trace(x)
>>> print(output)
15.0
>>> x = Tensor(np.arange(1, 13).reshape(3, 4), mindspore.float32)
>>> trace = ops.Trace()
>>> output = trace(x)
>>> print(output)
18.0
>>> x = Tensor(np.arange(12, 0, -1).reshape(4, 3), mindspore.float32)
>>> trace = ops.Trace()
>>> output = trace(x)
>>> print(output)
24.0
class tinyms.primitives.Transpose[source]

Permutes the dimensions of the input tensor according to input permutation.

Refer to mindspore.ops.transpose() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> transpose = ops.Transpose()
>>> output = transpose(input_x, input_perm)
>>> print(output)
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
class tinyms.primitives.TridiagonalMatMul[source]

Returns the result of multiplying two matrices, where the left one is a tridiagonal matrix.

Inputs:
  • superdiag (Tensor) - Superdiagonals of Tridiagonal Matrices to the left of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., 1, M)\). Last element is ignored.

  • maindiag (Tensor) - Maindiagonals of Tridiagonal Matrices to the left of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., 1, M)\).

  • subdiag (Tensor) - Subdiagonals of Tridiagonal Matrices to the left of multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., 1, M)\). First element is ignored.

  • rhs (Tensor) - MxN matrices to the right of the multiplication. Data types must be: float16, float32, double, complex64, complex128. The shape is \((..., M, N)\).

Outputs:

Tensor, with the same shape and data type as the rhs.

Raises:
  • TypeError – If dtypes of superdiag, maindiag, subdiag and rhs are not float16, float32, double, complex64, complex128.

  • ValueError – If the col of input superdiag, the col of input maindiag, the col of input subdiag and the row of input rhs are not equal.

  • ValueError – If the row of input superdiag, the row of input maindiag and the row of input subdiag are not 1.

  • ValueError – If the rank of input superdiag, the rank of input maindiag, the rank of input subdiag and the rank of input rhs are not equal to or greater than 2.

  • ValueError – If the shape of input superdiag, the shape of input maindiag and the shape of input subdiag are not same.

  • ValueError – If the shape of input superdiag ignoring the last two elements, the shape of input maindiag ignoring the last two elements, the shape of input subdiag ignoring the last two elements and the shape of input rhs ignoring the last two elements are not same.

Supported Platforms:

CPU

Examples

>>> tridiagonalmatmul = ops.TridiagonalMatMul()
>>> superdiag = Tensor(np.array([[1, 2, 3]]).astype(np.float32))
>>> maindiag = Tensor(np.array([[1, 2, 3]]).astype(np.float32))
>>> subdiag = Tensor(np.array([[1, 2, 3]]).astype(np.float32))
>>> rhs = Tensor(np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]).astype(np.float32))
>>> output = tridiagonalmatmul(superdiag,maindiag,subdiag,rhs)
>>> print(output)
[[ 2.  2.  2.]
 [ 6.  6.  6.]
 [ 6.  6.  6.]]
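
As a cross-check, the same product can be formed in NumPy by assembling the dense tridiagonal matrix from the three diagonals, remembering that the last superdiagonal element and the first subdiagonal element are ignored:

>>> import numpy as np
>>> # dense = [[1, 1, 0], [2, 2, 2], [0, 3, 3]]
>>> dense = np.diag([1., 2., 3.]) + np.diag([1., 2.], k=1) + np.diag([2., 3.], k=-1)
>>> print(dense @ np.ones((3, 3)))
[[2. 2. 2.]
 [6. 6. 6.]
 [6. 6. 6.]]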
class tinyms.primitives.Tril(diagonal=0)[source]

Returns the lower triangular portion of the 2-D matrix or the set of matrices in a batch. The remaining elements of the resulting Tensor are assigned a value of 0. The lower triangular section of the matrix comprises the elements on and below the main diagonal.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

diagonal (int, optional) – An optional attribute indicating the diagonal to consider. Default: 0, indicating the main diagonal.

Inputs:
  • x (Tensor) - A Tensor with shape \((x_1, x_2, ..., x_R)\). The rank must be at least 2. Supporting all number types including bool.

Outputs:

Tensor, the same shape and data type as the input x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If diagonal is not an int.

  • TypeError – If the type of x is neither number nor bool.

  • ValueError – If the rank of x is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = ops.Tril()
>>> result = tril(x)
>>> print(result)
[[ 1  0  0  0]
 [ 5  6  0  0]
 [10 11 12  0]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = ops.Tril(diagonal=1)
>>> result = tril(x)
>>> print(result)
[[ 1  2  0  0]
 [ 5  6  7  0]
 [10 11 12 13]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = ops.Tril(diagonal=-1)
>>> result = tril(x)
>>> print(result)
[[ 0  0  0  0]
 [ 5  0  0  0]
 [10 11  0  0]
 [14 15 16  0]]
class tinyms.primitives.TrilIndices(row, col, offset=0, dtype=mindspore.int32)[source]

Calculates the indices of the lower triangular elements in a row * col matrix and returns them as a 2-by-N Tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.tril_indices() for more details.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

  • dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mstype.int32 and mstype.int64. Default: mstype.int32.

Outputs:
  • y (Tensor) - indices of the elements in the lower triangular part of the matrix. The type is specified by dtype. The shape of the output is \((2, tril\_size)\), where \(tril\_size\) is the number of elements in the lower triangular matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = ops.TrilIndices(4, 3, -1, mstype.int64)
>>> output = net()
>>> print(output)
[[1 2 2 3 3 3]
 [0 0 1 0 1 2]]
>>> print(output.dtype)
Int64
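
The returned row and column indices can be used to gather the selected triangular elements of a matrix. A NumPy sketch of the same index computation (np.tril_indices produces identical rows and columns):

>>> import numpy as np
>>> m = np.arange(12).reshape(4, 3)
>>> rows, cols = np.tril_indices(4, -1, 3)
>>> print(rows, cols)
[1 2 2 3 3 3] [0 0 1 0 1 2]
>>> print(m[rows, cols])
[ 3  6  7  9 10 11]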
class tinyms.primitives.TripletMarginLoss(p=2, swap=False, eps=1e-06, reduction='mean')[source]

TripletMarginLoss operation.

Creates a criterion that measures the triplet loss given input tensors \(x1\), \(x2\), \(x3\) and a margin with a value greater than \(0\). It is used for measuring relative similarity between samples. A triplet is composed of a, p and n (i.e. anchor, positive example and negative example respectively). The shapes of all input tensors should be \((N, D)\).

The distance swap is described in detail in the paper Learning local feature descriptors with triplets and shallow convolutional neural networks by V. Balntas, E. Riba et al.

The loss function for each sample in the mini-batch is:

\[L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}\]

where

\[d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p\]
Parameters:
  • p (int, optional) – The norm degree for pairwise distance. Default: 2.

  • eps (float, optional) – A small constant for numerical stability. Default: 1e-06.

  • swap (bool, optional) – Whether to use the distance swap described in the paper above. Default: False.

  • reduction (str, optional) – Apply specific reduction method to the output: “none”, “mean”, “sum”. Default: “mean”.

Inputs:
  • x (Tensor) - A sample randomly selected from the training set. Data type must be BasicType.

  • positive (Tensor) - A sample belonging to the same category as x, with the same type and shape as x.

  • negative (Tensor) - A sample belonging to the different class from x, with the same type and shape as x.

  • margin (Tensor) - The margin enforced between the positive pair and the negative pair.

Outputs:

Union[Tensor, Scalar], if reduction is “none”, its shape is \((N)\). Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If x or positive or negative or margin is not a Tensor.

  • TypeError – If dtype of x or positive or negative is not BasicType.

  • TypeError – If dtype of x, positive and negative is not the same.

  • TypeError – If margin is not float32.

  • TypeError – If p is not an int.

  • TypeError – If eps is not a float.

  • TypeError – If swap is not a bool.

  • ValueError – If dimensions of input x, positive and negative are less than or equal to 1 at the same time.

  • ValueError – If the dimension of input x or positive or negative is bigger than or equal to 8.

  • ValueError – If length of shape of margin is not 0.

  • ValueError – If shape of x, positive and negative cannot broadcast.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

GPU

Examples

>>> loss = ops.TripletMarginLoss()
>>> x = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> positive = Tensor(np.array([[0.4, 0.6], [0.4, 0.6]]), mindspore.float32)
>>> negative = Tensor(np.array([[0.2, 0.9], [0.3, 0.7]]), mindspore.float32)
>>> margin = Tensor(1.0, mindspore.float32)
>>> output = loss(x, positive, negative, margin)
>>> print(output)
0.8881968
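
The result can be verified by hand with NumPy, following the loss formula above; the float64 value matches the float32 op output up to precision:

>>> import numpy as np
>>> a = np.array([[0.3, 0.7], [0.5, 0.5]])
>>> p = np.array([[0.4, 0.6], [0.4, 0.6]])
>>> n = np.array([[0.2, 0.9], [0.3, 0.7]])
>>> d_ap = np.linalg.norm(a - p, axis=1)
>>> d_an = np.linalg.norm(a - n, axis=1)
>>> print(round(float(np.maximum(d_ap - d_an + 1.0, 0.0).mean()), 7))
0.8881966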
class tinyms.primitives.Triu(diagonal=0)[source]

Returns the upper triangular portion of the 2-D matrix or the set of matrices in a batch. The remaining elements of the resulting Tensor are assigned a value of 0. The upper triangular section of the matrix comprises the elements on and above the main diagonal.

Parameters:

diagonal (int, optional) – The index of diagonal. Default: 0, indicating the main diagonal.

Inputs:
  • x (Tensor) - The input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions. The data type is Number.

Outputs:
  • y (Tensor) - A tensor has the same shape and data type as input.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = ops.Triu()
>>> result = triu(x)
>>> print(result)
[[ 1  2  3  4]
 [ 0  6  7  8]
 [ 0  0 12 13]
 [ 0  0  0 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = ops.Triu(diagonal=1)
>>> result = triu(x)
>>> print(result)
[[ 0  2  3  4]
 [ 0  0  7  8]
 [ 0  0  0 13]
 [ 0  0  0  0]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = ops.Triu(diagonal=-1)
>>> result = triu(x)
>>> print(result)
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 0 11 12 13]
 [ 0  0 16 17]]
class tinyms.primitives.TriuIndices(row, col, offset=0, dtype=mindspore.int32)[source]

Calculates the indices of the upper triangular elements in a row * col matrix and returns them as a 2-by-N Tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.triu_indices() for more details.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

  • dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mstype.int32 and mstype.int64. Default: mstype.int32.

Outputs:
  • y (Tensor) - indices of the elements in the upper triangular part of the matrix. The type is specified by dtype. The shape of the output is \((2, triu\_size)\), where \(triu\_size\) is the number of elements in the upper triangular matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = ops.TriuIndices(5, 4, 2, mstype.int64)
>>> output = net()
>>> print(output)
[[0 0 1]
 [2 3 3]]
>>> print(output.dtype)
Int64
class tinyms.primitives.Trunc[source]

Returns a new tensor with the truncated integer values of the elements of input.

Refer to mindspore.ops.trunc() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]), mindspore.float32)
>>> output = ops.Trunc()(x)
>>> print(output)
[ 3.  0. -0. -3.]
class tinyms.primitives.TruncateDiv[source]

Divides the first input tensor by the second input tensor element-wise and rounds the results of division towards zero. Equivalent to C-style integer division.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

Broadcasting is supported.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If x or y is not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_div = ops.TruncateDiv()
>>> output = truncate_div(x, y)
>>> print(output)
[0 1 0]
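
The difference from floor division shows up when the operands have opposite signs: truncation rounds toward zero, while floor rounds toward negative infinity. A small sketch:

>>> x = Tensor(np.array([-7, 7]), mindspore.int32)
>>> y = Tensor(np.array([2, -2]), mindspore.int32)
>>> print(ops.TruncateDiv()(x, y))
[-3 -3]
>>> print(np.floor_divide(np.array([-7, 7]), np.array([2, -2])))
[-4 -4]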
class tinyms.primitives.TruncateMod[source]

Returns the remainder of division element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • The input data does not support 0.

  • When the input contains more than 2048 elements, the accuracy of the operator cannot be guaranteed to within two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as \((D_1, D_2, ..., D_n)\), then \(D_1 \cdot D_2 \cdot ... \cdot D_n \le 1000000\) and \(n \le 8\).

Inputs:
  • x (Union[Tensor, numbers.Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, numbers.Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision among the two inputs.

Raises:
  • TypeError – If x or y is not one of the following: Tensor, number, bool.

  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If the shapes of x and y cannot be broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> truncate_mod = ops.TruncateMod()
>>> output = truncate_mod(x, y)
>>> print(output)
[ 2  1 -1]
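
Because the remainder is defined relative to truncating division, the result keeps the sign of x (like C's % operator), so that x == TruncateDiv(x, y) * y + TruncateMod(x, y) holds element-wise. A small sketch with mixed signs:

>>> x = Tensor(np.array([-7, 7]), mindspore.int32)
>>> y = Tensor(np.array([2, -2]), mindspore.int32)
>>> print(ops.TruncateMod()(x, y))
[-1  1]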
class tinyms.primitives.TruncatedNormal(dtype=mindspore.float32, seed=0, seed2=0)[source]

Returns a Tensor of the specified shape filled with truncated normal values.

The generated values conform to a Gaussian distribution.

Note

  • The values of shape must be greater than zero. The output length cannot exceed 1000000.

  • When seed or seed2 is assigned a non-zero value, that value will be used as the seed. Otherwise, a random seed will be used instead.

Parameters:
  • seed (int, optional) – Random number seed. Default: 0.

  • seed2 (int, optional) – The second seed to avoid seed collision. Default: 0.

  • dtype (mindspore.dtype, optional) – Specified output data type. Must be one of the following types: mindspore.float16, mindspore.float32 and mindspore.float64. Default: mindspore.float32.

Inputs:
  • shape (Tensor) - The shape of random tensor to be generated. Its type must be one of the following types: mindspore.int32 and mindspore.int64.

Outputs:

Tensor. Its shape is specified by the input shape. Its type is specified by dtype. Its values are in [-2,2].

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If data type of dtype and shape are not allowed.

  • TypeError – If seed is not an integer.

  • ValueError – If shape elements are not positive.

  • ValueError – If shape is not a 1-D tensor.

  • ValueError – If the number of elements of output is more than 1000000.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = Tensor(np.array([2, 2]), mstype.int32)
>>> seed = 0
>>> seed2 = 0
>>> truncated_normal = ops.TruncatedNormal(seed=seed, seed2=seed2)
>>> output = truncated_normal(shape)
>>> print(output)
[[ -1.303105  0.641905 ]
 [ -0.917926  0.650655 ]]
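
Since the distribution is truncated at two standard deviations, every generated value lies in [-2, 2]; a sketch checking this on a larger sample (converted to NumPy for the check):

>>> shape = Tensor(np.array([10000]), mstype.int32)
>>> sample = ops.TruncatedNormal(seed=1)(shape).asnumpy()
>>> print(((sample >= -2) & (sample <= 2)).all())
True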
class tinyms.primitives.TupleToArray[source]

Converts a tuple to a tensor.

Refer to mindspore.ops.tuple_to_array() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = (1,2,3)
>>> print(type(input_x))
<class 'tuple'>
>>> output = ops.TupleToArray()(input_x)
>>> print(type(output))
<class 'mindspore.common.tensor.Tensor'>
>>> print(output)
[1 2 3]
class tinyms.primitives.UniformCandidateSampler(num_true, num_sampled, unique, range_max, seed=0, remove_accidental_hits=False)[source]

Uniform candidate sampler.

This function samples a set of classes (sampled_candidates) from [0, range_max-1] based on a uniform distribution.

Refer to mindspore.ops.uniform_candidate_sampler() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sampler = ops.UniformCandidateSampler(1, 3, False, 4, 1)
>>> output1, output2, output3 = sampler(Tensor(np.array([[1], [3], [4], [6], [3]], dtype=np.int64)))
>>> print(output1.shape)
(3,)
>>> print(output2.shape)
(5, 1)
>>> print(output3.shape)
(3,)
class tinyms.primitives.UniformInt(seed=0, seed2=0)[source]

Produces random integer values i, uniformly distributed on the half-open interval [minval, maxval), that is, distributed according to the discrete probability function:

\[\text{P}(i|a,b) = \frac{1}{b-a},\]

where \(a\) indicates the min distribution parameter and \(b\) indicates the max distribution parameter.

Note

  • The number in tensor minval must be strictly less than maxval at any position after broadcasting.

  • If neither seed nor seed2 is assigned a non-zero value, a randomly generated seed is used instead.

Parameters:
  • seed (int) – Random seed, must be non-negative. Default: 0.

  • seed2 (int) – Random seed2, must be non-negative. A second seed to avoid seed collision. Default: 0.

Inputs:
  • shape (Union[tuple, Tensor]) - The shape of random tensor to be generated. Only constant value is allowed.

  • minval (Tensor) - The distribution parameter, \(a\). It defines the minimum possibly generated value, with int32 data type. Only one number is supported.

  • maxval (Tensor) - The distribution parameter, \(b\). It defines the maximum possibly generated value, with int32 data type. Only one number is supported.

Outputs:

Tensor. The shape is the same as the input ‘shape’, and the data type is int32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is neither a tuple nor a Tensor.

  • TypeError – If minval or maxval is not a Tensor.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 4)
>>> minval = Tensor(1, mstype.int32)
>>> maxval = Tensor(5, mstype.int32)
>>> uniform_int = ops.UniformInt(seed=10)
>>> output = uniform_int(shape, minval, maxval)
>>> result = output.shape
>>> print(result)
(2, 4)
class tinyms.primitives.UniformReal(seed=0, seed2=0)[source]

Produces random floating-point values, uniformly distributed on the interval [0, 1).

Parameters:
  • seed (int) – The operator-level random seed, used to generate random numbers, must be non-negative. Default: 0.

  • seed2 (int) – The global random seed, which combines with the operator-level random seed to determine the final generated random number. Must be non-negative. Default: 0.

Note

  • If neither the global random seed nor the operator-level random seed is set: a randomly generated seed is used.

  • If the global random seed is set but the operator-level random seed is not: the global random seed is combined with a randomly generated seed.

  • If the global random seed is not set but the operator-level random seed is: the default global random seed is combined with the operator-level random seed.

  • If both the global random seed and the operator-level random seed are set: they are combined to determine the final seed.

Inputs:
  • shape (Union[tuple, Tensor]) - The shape of tensor to be generated. Only constant value is allowed.

Outputs:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If seed or seed2 is not an int.

  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If shape is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 2)
>>> uniformreal = ops.UniformReal(seed=2)
>>> output = uniformreal(shape)
>>> result = output.shape
>>> print(result)
(2, 2)
class tinyms.primitives.Unique[source]

Returns the unique elements of the input tensor, and additionally returns a tensor containing, for each value of the input tensor, the index of the corresponding element in the output unique tensor.

The output contains Tensor y and Tensor idx, in the form (y, idx). The shapes of y and idx differ in most cases, because y is deduplicated while the shape of idx is consistent with the input.

To get the same shape between idx and y, please refer to mindspore.ops.UniqueWithPad.

Inputs:
  • input_x (Tensor) - The input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tuple, containing Tensor objects (y, idx). y is a tensor with the same type as input_x, containing the unique elements of input_x. idx is a tensor containing the indices of the elements of the input corresponding to the output tensor.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> output = ops.Unique()(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
>>> y = output[0]
>>> print(y)
[1 2 5]
>>> idx = output[1]
>>> print(idx)
[0 1 2 1]
>>> # As can be seen from the above, the shapes of y and idx differ.
>>> # note that for GPU, this operator must be wrapped inside a model, and executed in graph mode.
>>> class UniqueNet(nn.Cell):
...     def __init__(self):
...         super(UniqueNet, self).__init__()
...         self.unique_op = ops.Unique()
...
...     def construct(self, x):
...         output, indices = self.unique_op(x)
...         return output, indices
...
>>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> net = UniqueNet()
>>> output = net(input_x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
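
The idx output allows the original input to be reconstructed from the unique values, since input_x == y[idx] element-wise (shown with NumPy indexing for clarity):

>>> y, idx = ops.Unique()(Tensor(np.array([1, 2, 5, 2]), mindspore.int32))
>>> print(y.asnumpy()[idx.asnumpy()])
[1 2 5 2]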
class tinyms.primitives.UniqueConsecutive(return_idx=False, return_counts=False, axis=None)[source]

Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.unique_consecutive() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 1, 1, 2]), mstype.int32)
>>> unique_consecutive = ops.UniqueConsecutive(True, True, None)
>>> output, idx, counts = unique_consecutive(x)
>>> print(output)
[1 2 3 1 2]
>>> print(idx)
[0 0 1 1 2 3 3 4]
>>> print(counts)
[2 2 1 2 1]
class tinyms.primitives.UniqueWithPad[source]

Returns the unique elements and relative indices of a 1-D tensor, padded with a user-specified number.

The basic function is the same as the Unique operator, but UniqueWithPad adds a padding step. After the input Tensor x is processed by the unique operator, the returned tuple (y, idx) usually has mismatched shapes of y and idx. To resolve this, UniqueWithPad pads the y Tensor with the user-specified pad_num so that it has the same shape as idx.

Refer to mindspore.ops.unique_with_pad() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 3, 4, 5]), mindspore.int32)
>>> pad_num = 8
>>> output = ops.UniqueWithPad()(x, pad_num)
>>> print(output)
(Tensor(shape=[8], dtype=Int32, value= [1, 2, 3, 4, 5, 8, 8, 8]),
 Tensor(shape=[8], dtype=Int32, value= [0, 0, 1, 1, 2, 2, 3, 4]))
class tinyms.primitives.Unpack(axis=0)[source]

Same as operator Unstack. Unpack will be deprecated in the future. Please use Unstack instead.

class tinyms.primitives.UnravelIndex[source]

Transforms a tensor of flattened indices into a tensor of coordinates for an array of the given shape.

Inputs:
  • indices (Tensor) - The input Tensor, containing flattened indices that will be transformed into coordinates for an array with dimensions specified by dims. The dimension of indices must be 0-D or 1-D. Must be one of the following types: int32, int64.

  • dims (Tensor) - The shape of the array to use for unraveling indices. The dimension of dims must be 1-D. Must have the same type as indices.

Outputs:
  • y (Tensor) - Tensor, it should be 2-D, or 1-D if indices is 0-D, and has the same type as indices.

Raises:
  • TypeError – If the data type of indices and dims are different.

  • TypeError – If the data type of indices and dims is not int32 or int64.

  • ValueError – If the dimension of dims is not 1 or dimension of indices is not 1 or 0.

  • ValueError – If indices contains negative elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([2, 5]), mindspore.int32)
>>> dims = Tensor(np.array([3, 3]), mindspore.int32)
>>> output = ops.UnravelIndex()(indices, dims)
>>> print(output)
[[0 2]
 [1 2]]
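
This mirrors NumPy's np.unravel_index, except that NumPy returns a tuple of coordinate arrays, one per dimension, instead of a single stacked tensor:

>>> import numpy as np
>>> print(np.unravel_index([2, 5], (3, 3)))
(array([0, 1]), array([2, 2]))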
class tinyms.primitives.UnsortedSegmentMax[source]

Computes the maximum along segments of a tensor.

Refer to mindspore.ops.unsorted_segment_max() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: Only have two num_segments, where is 0 and 1, and segment_ids=[0, 1, 1]
>>> # num_segments = 2 indicates that there are two types of segment_id,
>>> # the first number '0' in [0, 1, 1] indicates input_x[0],
>>> # the second number '1' in [0, 1, 1] indicates input_x[1],
>>> # the third number '1' in [0, 1, 1] indicates input_x[2],
>>> # input_x[0], which is [1, 2, 3] will not be compared to other segment_id.
>>> # Only the same segment_id will be compared.
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 5. 6.]]
>>>
>>> # case 2: The segment_ids=[0, 0, 1, 1].
>>> # [1, 2, 3] will compare with [4, 2, 0],
>>> # and [4, 5, 6] will compare with [4, 2, 1].
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(input_x.shape)
(4, 3)
>>> print(output)
[[4. 2. 3.]
 [4. 5. 6.]]
>>> # case 3: If the input_x have three dimensions even more, what will happen?
>>> # The shape of input_x is (2, 4, 3),
>>> # and the length of segment_ids should be the same as the first dimension of input_x.
>>> # Because the segment_ids are different, input_x[0] will not be compared to input_x[1].
>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]],
...                            [[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_max = ops.UnsortedSegmentMax()
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(input_x.shape)
(2, 4, 3)
>>> print(output)
[[[1. 2. 3.]
  [4. 2. 0.]
  [4. 5. 6.]
  [4. 2. 1.]]
 [[1. 2. 3.]
  [4. 2. 0.]
  [4. 5. 6.]
  [4. 2. 1.]]]
>>> # case 4: It has the same input as the 3rd case.
>>> # Because num_segments is equal to 2, there are two segment ids, but only 0 is used here.
>>> # If a segment_id i is absent from segment_ids, then output[i] will be filled with
>>> # the smallest possible value of input_x's data type.
>>> segment_ids = Tensor(np.array([0, 0]).astype(np.int32))
>>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
>>> print(output)
[[[ 1.0000000e+00  2.0000000e+00  3.0000000e+00]
  [ 4.0000000e+00  2.0000000e+00  0.0000000e+00]
  [ 4.0000000e+00  5.0000000e+00  6.0000000e+00]
  [ 4.0000000e+00  2.0000000e+00  1.0000000e+00]]
 [[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
  [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]]]
class tinyms.primitives.UnsortedSegmentMin[source]

Computes the minimum of a tensor along segments.

Refer to mindspore.ops.unsorted_segment_min() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_min = ops.UnsortedSegmentMin()
>>> output = unsorted_segment_min(input_x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 2. 1.]]
class tinyms.primitives.UnsortedSegmentProd[source]

Computes the product of a tensor along segments.

Refer to mindspore.ops.unsorted_segment_prod() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_prod = ops.UnsortedSegmentProd()
>>> output = unsorted_segment_prod(input_x, segment_ids, num_segments)
>>> print(output)
[[4. 4. 3.]
 [4. 5. 6.]]
class tinyms.primitives.UnsortedSegmentSum[source]

Computes the sum of a tensor along segments.

Refer to mindspore.ops.unsorted_segment_sum() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import mindspore
>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 0.]
>>> input_x = Tensor([1, 2, 3, 4, 2, 5], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2, 3, 4], mindspore.int32)
>>> num_segments = 6
>>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 2. 5. 0.]
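
The semantics can be sketched in NumPy: each row of the input is accumulated into output[segment_id], and segment ids that never occur are left at zero. The helper unsorted_segment_sum_np below is a hypothetical illustration, not part of the API:

>>> import numpy as np
>>> def unsorted_segment_sum_np(data, segment_ids, num_segments):
...     # accumulate rows into their segments; np.add.at handles repeated ids
...     out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
...     np.add.at(out, segment_ids, data)
...     return out
>>> print(unsorted_segment_sum_np(np.array([1., 2., 3., 4.]),
...                               np.array([0, 0, 1, 2]), 4))
[3. 3. 4. 0.]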
class tinyms.primitives.Unstack(axis=0, num=None)[source]

Unstacks tensor in specified axis.

Unstacks a tensor of rank R along axis dimension, output tensors will have rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\). If \(0 \le axis\), the shape of tensor in output is \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of pack.

Parameters:
  • axis (int) – Dimension along which to unpack. Default: 0. Negative values wrap around. The range is [-R, R).

  • num (Union[None, int]) – The number of output tensors. Automatically inferred by input_x and axis if None. Default: None.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

Outputs:

A tuple of tensors, each with the same shape.

Raises:

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> unstack = ops.Unstack()
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = unstack(input_x)
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))
class tinyms.primitives.UpdateState[source]

UpdateState is used to update the side-effect state.

Inputs:
  • value (State) - the state value to be updated.

  • expr (Expression) - the expression to evaluate before state changes.

Outputs:

State, the updated state value.

class tinyms.primitives.UpperBound(out_type=mindspore.int32)[source]

Returns a tensor containing, for each element of values, the index of its upper bound within the corresponding row of the sorted input sorted_x.

Parameters:

out_type (mindspore.dtype, optional) – Specified output type. Supported types: mindspore.dtype.int32 and mindspore.dtype.int64. Default: mindspore.dtype.int32.

Inputs:
  • sorted_x (Tensor) - The input tensor whose dtype is real number. The rank must be 2. Each row of the sorted_x needs to be sorted in ascending order.

  • values (Tensor) - The input tensor whose dtype is the same as sorted_x. The rank must be 2. The shape[0] of the two inputs must be consistent.

Outputs:

Tensor, whose dtype is determined by out_type and whose shape is consistent with values.

Raises:
  • TypeError – If sorted_x is not a Tensor.

  • TypeError – If values is not a Tensor.

  • TypeError – If the type of sorted_x is not the same as that of values.

  • ValueError – If rank of the sorted_x is not equal to 2.

  • ValueError – If rank of the values is not equal to 2.

  • ValueError – If the number of rows of sorted_x is not consistent with that of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> upperbound = ops.UpperBound(out_type = mindspore.int32)
>>> sorted_x = Tensor(np.arange(12).reshape(3, 4).astype(np.int8))
>>> values = Tensor(np.array([[3], [6], [9]]).astype(np.int8))
>>> output = upperbound(sorted_x, values)
>>> print(output)
[[4]
 [3]
 [2]]
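
Row by row, this matches NumPy's searchsorted with side='right'; a sketch reproducing the example above:

>>> import numpy as np
>>> sorted_x = np.arange(12).reshape(3, 4)
>>> values = np.array([[3], [6], [9]])
>>> print(np.array([np.searchsorted(row, v, side='right')
...                 for row, v in zip(sorted_x, values)]))
[[4]
 [3]
 [2]]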
class tinyms.primitives.UpsampleNearest3D(output_size=None, scales=None)[source]

Performs nearest neighbor upsampling operation.

This operator scales up the volumetric input with the specified output_size or scales factors, using the nearest neighbor algorithm.

One of output_size or scales must be given, but not both.

Parameters:
  • output_size (Union[tuple[int], list[int]], optional) – A tuple or list of int specifying the output volumetric size. Default: None.

  • scales (Union[tuple[float], list[float]], optional) – A tuple or list of float specifying the upsampling factors. Default: None.

Inputs:
  • x (Tensor) - 5D tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Must be one of the following types: [float16, float32, float64].

Outputs:
  • y (Tensor) - Upsampled output with the same data type as x. Tensor of shape \((N, C, D_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – When output_size is not None and output_size is not list[int] or tuple[int].

  • TypeError – When scales is not None and scales is not list[float] or tuple[float].

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • ValueError – If any value of output_size is negative or zero when output_size is not empty.

  • ValueError – If any value of scales is negative or zero when scales is not empty.

  • ValueError – If shape of x is not 5D.

  • ValueError – If neither scales nor output_size is specified, or if both are specified.

  • ValueError – If the size of scales is not equal to 3 when scales is specified.

  • ValueError – If the size of output_size is not equal to 3 when output_size is specified.

Supported Platforms:

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...       .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> output_size = [3, 4, 5]
>>> net = ops.UpsampleNearest3D(output_size = output_size)
>>> output = net(x)
>>> print(output)
[[[[[ 1.  1.  2.  3.  4.]
    [ 1.  1.  2.  3.  4.]
    [ 5.  5.  6.  7.  8.]
    [ 5.  5.  6.  7.  8.]]
   [[ 1.  1.  2.  3.  4.]
    [ 1.  1.  2.  3.  4.]
    [ 5.  5.  6.  7.  8.]
    [ 5.  5.  6.  7.  8.]]
   [[ 9.  9. 10. 11. 12.]
    [ 9.  9. 10. 11. 12.]
    [13. 13. 14. 15. 16.]
    [13. 13. 14. 15. 16.]]]]]
class tinyms.primitives.UpsampleTrilinear3D(output_size=None, scales=None, align_corners=False)[source]

Performs upsampling with trilinear interpolation across 3 dimensions for a 5-dimensional input Tensor.

This operator scales up the volumetric input with the specified output_size or scales factors, using the trilinear upscaling algorithm.

Note

One of scales and output_size MUST be specified and it is an error if both are specified.

Parameters:
  • output_size (Union[tuple[int], list[int]], optional) – A tuple or list of 3 int elements \((output\_depth, output\_height, output\_width)\). Defaults to None. Only one of scales and output_size can be specified.

  • scales (Union[tuple[float], list[float]], optional) – A tuple or list of 3 float elements \((scale\_depth, scale\_height, scale\_width)\). Defaults to None.

  • align_corners (bool, optional) – An optional bool. Defaults to False. If True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values.

Inputs:
  • x (Tensor) - A 5-D input tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Must be one of the following types: float16, float32, float64.

Outputs:
  • y (Tensor) - Upsampled output with the same data type as x. Tensor of shape \((N, C, D_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – When output_size is not None and output_size is not list[int] or tuple[int].

  • TypeError – When scales is not None and scales is not list[float] or tuple[float].

  • TypeError – If dtype of x is not in [float16, float32, float64].

  • TypeError – If type of align_corners is not bool.

  • ValueError – If any value of output_size is negative or zero when output_size is not empty.

  • ValueError – If any value of scales is negative or zero when scales is not empty.

  • ValueError – If shape of x is not 5D.

  • ValueError – If neither scales nor output_size is specified, or if both are specified.

  • ValueError – If the size of scales is not equal to 3 when scales is specified.

  • ValueError – If the size of output_size is not equal to 3 when output_size is specified.

Supported Platforms:

Examples

>>> net = ops.UpsampleTrilinear3D(output_size=[4, 64, 48])
>>> in_x = Tensor(input_data=np.random.randn(2, 3, 4, 512, 256))
>>> out = net(in_x)
>>> print(out.shape)
(2, 3, 4, 64, 48)
>>>
>>> net = ops.UpsampleTrilinear3D(output_size=[2, 4, 4])
>>> in_x = Tensor(np.arange(1, 5, dtype=np.float32).reshape((1, 1, 1, 2, 2)))
>>> out = net(in_x)
>>> print(out)
[[[[[1.   1.25 1.75 2.  ]
    [1.5  1.75 2.25 2.5 ]
    [2.5  2.75 3.25 3.5 ]
    [3.   3.25 3.75 4.  ]]
   [[1.   1.25 1.75 2.  ]
    [1.5  1.75 2.25 2.5 ]
    [2.5  2.75 3.25 3.5 ]
    [3.   3.25 3.75 4.  ]]]]]

class tinyms.primitives.Xdivy[source]

Divides the first input tensor by the second input tensor element-wise. Returns zero when x is zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Inputs:
  • x (Union[Tensor, Number, bool]) - The first input is a number, or a bool, or a tensor whose data type is float16, float32, float64, complex64, complex128 or bool.

  • y (Union[Tensor, Number, bool]) - The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is float16, float32, float64, complex64, complex128 or bool.

Outputs:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x or y is not one of the following: Tensor, Number, bool.

  • TypeError – If dtype of x and y is not in [float16, float32, float64, complex64, complex128, bool].

  • ValueError – If x could not be broadcast to a tensor with shape of y.

  • RuntimeError – If a data type conversion of Parameter is required for x or y, but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xdivy = ops.Xdivy()
>>> output = xdivy(x, y)
>>> print(output)
[ 1.   2.  -0.5]
infer_dtype(x_dtype, y_dtype)[source]

Infers the output dtype of Xdivy from the input dtypes x_dtype and y_dtype.

infer_shape(x_shape, y_shape)[source]

Infers the output shape of Xdivy from the input shapes x_shape and y_shape.

infer_value(x, y)[source]

Infers the output value of Xdivy from constant inputs x and y, for constant folding.

class tinyms.primitives.Xlogy[source]

Computes the first input tensor multiplied by the logarithm of the second input tensor element-wise. Returns zero when x is zero.

Refer to mindspore.ops.xlogy() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-5, 0, 4]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> xlogy = ops.Xlogy()
>>> output = xlogy(x, y)
>>> print(output)
[-3.465736   0.        2.7725887]
class tinyms.primitives.Zeros[source]

Zeros will be deprecated in the future. Please use class mindspore.ops.zeros instead.

Creates a tensor filled with value zeros.

Creates a tensor with the shape described by the first argument and fills it with zeros of the type given by the second argument.

Inputs:
  • shape (Union[tuple[int], int]) - The specified shape of output tensor.

  • type (mindspore.dtype) - The specified type of output tensor.

Outputs:

Tensor, whose shape is given by the shape input and whose data type is given by type.

Raises:
  • TypeError – If shape is neither int nor tuple.

  • TypeError – If shape is a tuple whose elements are not all int.

Supported Platforms:

Deprecated

Examples

>>> zeros = ops.Zeros()
>>> output = zeros((2, 2), mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
class tinyms.primitives.ZerosLike[source]

Returns a Tensor filled with the value 0, with the same shape and data type as the input.

Inputs:
  • input_x (Tensor) - Input Tensor of any dimension. The data type is Number.

Outputs:

Tensor, has the same shape and data type as input_x but filled with zeros.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> zeroslike = ops.ZerosLike()
>>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = zeroslike(input_x)
>>> print(output)
[[0. 0.]
 [0. 0.]]
class tinyms.primitives.Zeta[source]

Computes the Hurwitz zeta function ζ(x, q) of the input Tensor.

Warning

This is an experimental API that is subject to change or deletion.

\[\zeta \left ( x,q \right )= \textstyle \sum_{n=0} ^ {\infty} \left ( q+n\right )^{-x}\]
Inputs:
  • x (Tensor) - A Tensor, types: float32, float64.

  • q (Tensor) - A Tensor, must have the same shape and type as x.

Outputs:

Tensor, has the same dtype and shape as the x.

Raises:
  • TypeError – If either x or q is not a Tensor.

  • TypeError – If dtype of x is neither float32 nor float64.

  • TypeError – If dtype of q is neither float32 nor float64.

  • ValueError – If the shape of x is not the same as that of q.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([10.]), mindspore.float32)
>>> q = Tensor(np.array([1.]), mindspore.float32)
>>> zeta = ops.Zeta()
>>> z = zeta(x, q)
>>> print(z)
[1.0009946]
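
With q = 1 the Hurwitz zeta function reduces to the Riemann zeta function, which the example above evaluates at x = 10. SciPy's scipy.special.zeta implements the same definition and can serve as a cross-check (a sketch, assuming SciPy is available):

>>> from scipy.special import zeta as sp_zeta
>>> print(round(float(sp_zeta(10.0, 1.0)), 7))
1.0009946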
tinyms.primitives.kernel(fn=None, reg_info=None, compile_attrs=None)[source]

The decorator of the Hybrid DSL function for the Custom Op. When a function written in the Hybrid DSL is decorated by kernel, it can be run as a usual Python function. It can also be used to create a mindspore.ops.Custom operator, with func_type “hybrid” or “pyfunc”. Creating mindspore.ops.Custom with func_type “hybrid” from a Hybrid DSL function provides automatic dtype/shape inference for free.

Parameters:
  • fn (Function) – The Python function that will be run as a custom operator. Default: None.

  • reg_info (tuple[str, dict]) – Each item represents registration information in json format. Default: None.

  • compile_attrs (Dict) – The Python object is used to distinguish the compiled function. Default: None.

Returns:

Function. If fn is not None, returns a callable function that executes the Hybrid DSL function; if fn is None, returns a decorator which, when invoked with a single fn argument, returns the same callable function as in the case where fn is not None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import ops, Tensor
>>> from mindspore.ops import kernel, DataType, CustomRegOp
...
>>> # Create a dict for the compile flags.
>>> attrs = {
...     "test1": True,
...     "test2": "good",
...     "test3": 12,
... }
>>> # Create the reg info json string.
>>> op_gpu_info = CustomRegOp() \
...     .input(0, "a") \
...     .input(0, "b") \
...     .output(0, "y") \
...     .dtype_format(DataType.F32_None, DataType.F32_None, DataType.F32_None) \
...     .target("GPU") \
...     .get_op_info()
>>>
>>> # Create inputs for the custom op.
>>> input_x = np.ones([4, 4]).astype(np.float32)
>>> input_y = np.ones([4, 4]).astype(np.float32)
...
>>> # Write a Hybrid DSL function through the decorator @kernel.
>>> # We can also pass the compile attrs and the reg info through the decorator.
>>> @kernel(reg_info=op_gpu_info, compile_attrs=attrs)
... def outer_product(a, b):
...     c = output_tensor(a.shape, a.dtype)
...
...     with block_realize(c):
...         for i0 in range(a.shape[0]):
...             for i1 in range(b.shape[1]):
...                 c[i0, i1] = 0.0
...                 for i2 in range(a.shape[1]):
...                     c[i0, i1] = c[i0, i1] + (a[i0, i2] * b[i2, i1])
...     return c
...
>>> # We can use the function directly as a python function.
>>> # In this case, the inputs should be numpy arrays.
>>> result = outer_product(input_x, input_y)
...
>>> # Create a custom op with mode "hybrid" (default value) by the Hybrid DSL function.
>>> # In this case, we will enjoy the automatic dtype/shape infer for free.
>>> # The inputs should be mindspore tensors.
>>> test_op_hybrid = ops.Custom(outer_product)
>>> output = test_op_hybrid(Tensor(input_x), Tensor(input_y))
tinyms.primitives.ms_kernel(fn=None, reg_info=None, compile_attrs=None)[source]

Same as the decorator kernel. ms_kernel will be deprecated in the future. Please use kernel instead.

Supported Platforms:

Deprecated

class tinyms.primitives.AdaptiveMaxPool2D(output_size)[source]

Performs 2D adaptive max pooling on a multi-plane input signal.

Refer to mindspore.ops.adaptive_max_pool2d() for more details.

Parameters:

output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If it is None, it means the output size is the same as the input size.

Inputs:
  • input_x (Tensor) - The input of AdaptiveMaxPool2D, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

Outputs:

Tensor, with the same type as the input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((None, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D(2)
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = ops.AdaptiveMaxPool2D((1, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> print(output[0])
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
class tinyms.primitives.Median(global_median=False, axis=0, keep_dims=False)[source]

Computes the median and its corresponding indices of input tensor in the axis dimension. If global_median is True, computes the median of all elements of tensor.

Warning

When attr global_median is True, the value of the second output tensor indices is meaningless.

Parameters:
  • global_median (bool, optional) – Whether the output tensor is the median of all input tensor elements or not. Default: False.

  • axis (int, optional) – The specified dimension to compute median. Default: 0.

  • keep_dims (bool, optional) – Whether the output tensor need to retain axis dimension or not. Default: False.

Inputs:
  • x (Tensor) - A Tensor to calculate median with. Supported dtypes: int16, int32, int64, float32 or float64.

Outputs:
  • y (Tensor) - Median, has the same dtype as the x.

    • If global_median is True, the y has only one element.

    • If keep_dims is True, the y has the same shape as the x except the size of y in dimension axis is 1.

    • Otherwise, y has one dimension fewer than x, with the axis dimension removed.

  • indices (Tensor) - Indices, has the same shape as y, with dtype int64.

Raises:
  • TypeError – If dtype of x is not one of the following: int16, int32, int64, float32, float64.

  • TypeError – If input x is not a Tensor.

  • TypeError – If global_median or keep_dims is assigned a non-boolean value.

  • TypeError – If axis is not int.

  • ValueError – If axis is not in range of [-x.dim, x.dim-1].

Supported Platforms:

GPU CPU

Examples

>>> # case 1 : common median compute
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[5, 1, 2],[3, 5, 7], [1, 6, 4]]).astype(np.int64))
>>> median = ops.Median(global_median=False, axis=0, keep_dims=False)
>>> y = median(x)
>>> print(y)
(Tensor(shape=[3], dtype=Int64, value= [3, 5, 4]), Tensor(shape=[3], dtype=Int64, value= [1, 1, 2]))
>>> # case 2 : global median compute
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 7, 6],[5, 1, 3],[9, 17, 1]]).astype(np.int32))
>>> median = ops.Median(global_median=True)
>>> y = median(x)
>>> print(y)
(Tensor(shape=[], dtype=Int32, value= 5), Tensor(shape=[], dtype=Int64, value= 0))
class tinyms.primitives.Roll(shift, axis)[source]

Rolls the elements of a tensor along an axis.

Refer to mindspore.ops.roll() for more details.

Parameters:
  • shift (Union[list(int), tuple(int), int]) – Specifies the number of places by which elements are shifted positively (towards larger indices) along the specified dimension. Negative shifts will roll the elements in the opposite direction.

  • axis (Union[list(int), tuple(int), int]) – Specifies the dimension indexes of shape to be rolled.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x.

Supported Platforms:

GPU

Examples

>>> input_x = Tensor(np.array([0, 1, 2, 3, 4]).astype(np.float32))
>>> op = ops.Roll(shift=2, axis=0)
>>> output = op(input_x)
>>> print(output)
[3. 4. 0. 1. 2.]
>>> input_x = Tensor(np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]).astype(np.float32))
>>> op = ops.Roll(shift=-1, axis=0)
>>> output = op(input_x)
>>> print(output)
[[5. 6. 7. 8. 9.]
 [0. 1. 2. 3. 4.]]
class tinyms.primitives.UniqueConsecutive(return_idx=False, return_counts=False, axis=None)[source]

Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.

Warning

This is an experimental API that is subject to change or deletion.

Refer to mindspore.ops.unique_consecutive() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 1, 1, 2]), mstype.int32)
>>> unique_consecutive = ops.UniqueConsecutive(True, True, None)
>>> output, idx, counts = unique_consecutive(x)
>>> print(output)
[1 2 3 1 2]
>>> print(idx)
[0 0 1 1 2 3 3 4]
>>> print(counts)
[2 2 1 2 1]
tinyms.primitives.abs(input)[source]

Returns absolute value of a tensor element-wise.

\[out_i = |input_i|\]
Parameters:

input (Tensor) – The input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> output = ops.abs(input)
>>> print(output)
[1. 1. 0.]
tinyms.primitives.absolute(input)[source]

Alias for mindspore.ops.abs().

Supported Platforms:

Ascend GPU CPU
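
Examples

Since absolute is an alias of mindspore.ops.abs(), a minimal sketch reusing the values from the abs() example above:

>>> input = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> output = ops.absolute(input)
>>> print(output)
[1. 1. 0.]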

tinyms.primitives.accumulate_n(x)[source]

Computes accumulation of all input tensors element-wise.

mindspore.ops.accumulate_n() is similar to mindspore.ops.addn(), but there is a significant difference between them: accumulate_n will not wait for all of its inputs to be ready before summing. That is to say, accumulate_n is able to save memory when inputs are ready at different times, since the minimum temporary storage is proportional to the output size rather than the input size.

Parameters:

x (Union(tuple[Tensor], list[Tensor])) – The input tuple or list is made up of multiple tensors whose dtype is number to be added together. Each element of tuple or list should have the same shape.

Returns:

Tensor, has the same shape and dtype as each entry of x.

Raises:
  • TypeError – If x is neither tuple nor list.

  • ValueError – If there is an input element with a different shape.

Supported Platforms:

Ascend

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = ops.accumulate_n([x, y, x, y])
>>> print(output)
[10. 14. 18.]
tinyms.primitives.acos(input)[source]

Computes arccosine of input tensors element-wise.

\[out_i = cos^{-1}(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of: float16, float32, float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = ops.acos(input)
>>> print(output)
[0.737726  1.5307857 1.2661036 0.9764105]
tinyms.primitives.acosh(input)[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]

Warning

Given an input tensor input, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf].

Parameters:

input (Tensor) – The input tensor of inverse hyperbolic cosine function.

Returns:

Tensor, has the same shape and type as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = ops.acosh(x)
>>> print(output)
[0.        0.9624237 1.7627472 5.298292 ]
tinyms.primitives.adaptive_avg_pool1d(input, output_size)[source]

Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically, the input is of shape \((N, C, L_{in})\), adaptive_avg_pool1d outputs regional average in the \(L_{in}\)-dimension. The output is of shape \((N, C, L_{out})\), where \(L_{out}\) is defined by output_size.

Note

\(L_{in}\) must be divisible by output_size.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C, L_{in})\), with float16 or float32 data type.

  • output_size (int) – the target output size \(L_{out}\).

Returns:

Tensor of shape \((N, C, L_{out})\), has the same type as input.

Raises:
  • TypeError – If output_size is not an int.

  • TypeError – If input is neither float16 nor float32.

  • ValueError – If output_size is less than 1.

  • ValueError – If length of shape of input is not equal to 3.

  • ValueError – If the last dimension of input is smaller than output_size.

  • ValueError – If the last dimension of input is not divisible by output_size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_avg_pool1d(input, output_size=2)
>>> print(output.shape)
(1, 3, 2)
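
A worked sketch with explicit values (illustrative, not from the official example set); each output element averages a block of \(L_{in}/L_{out} = 3\) consecutive inputs:

>>> input = Tensor(np.array([[[1., 2., 3., 4., 5., 6.]]]), mindspore.float32)
>>> output = ops.adaptive_avg_pool1d(input, output_size=2)
>>> print(output)
[[[2. 5.]]]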
tinyms.primitives.adaptive_avg_pool2d(input, output_size)[source]

Performs 2D adaptive average pooling on a multi-plane input signal. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input features.

The input and output data format can be “NCHW” and “CHW”. N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

For adaptive average pooling for 2D:

\[\begin{split}\begin{align} h_{start} &= floor(i * H_{in} / H_{out})\\ h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\ w_{start} &= floor(j * W_{in} / W_{out})\\ w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\ Output(i,j) &= \frac{\sum Input[h_{start}:h_{end}, w_{start}:w_{end}]}{(h_{end}- h_{start}) * (w_{end}- w_{start})} \end{align}\end{split}\]
Parameters:
  • input (Tensor) – The input of adaptive_avg_pool2d, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If None, the output size in that dimension is the same as the input size.

Returns:

Tensor, with the same type as the input.

Shape of the output is input_shape[:len(input_shape) - len(out_shape)] + out_shape.

\[\begin{split}out\_shape = \begin{cases} input\_x\_shape[-2] + output\_size[1], & \text{if output_size is (None, w);}\\ output\_size[0] + input\_x\_shape[-1], & \text{if output_size is (h, None);}\\ input\_x\_shape[-2:], & \text{if output_size is (None, None);}\\ (h, h), & \text{if output_size is h;}\\ (h, w), & \text{if output_size is (h, w)} \end{cases}\end{split}\]
Raises:
  • ValueError – If output_size is a tuple and the length of output_size is not 2.

  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If the dimension of input is less than or equal to the dimension of output_size.

Supported Platforms:

GPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> output = ops.adaptive_avg_pool2d(input, (None, 2))
>>> print(output)
[[[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]
 [[1.5 2.5]
  [4.5 5.5]
  [7.5 8.5]]]
>>> # case 2: output_size=2
>>> output = ops.adaptive_avg_pool2d(input, 2)
>>> print(output)
[[[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]
 [[3. 4.]
  [6. 7.]]]
>>> # case 3: output_size=(1, 2)
>>> output = ops.adaptive_avg_pool2d(input, (1, 2))
>>> print(output)
[[[4.5 5.5]]
 [[4.5 5.5]]
 [[4.5 5.5]]]
tinyms.primitives.adaptive_avg_pool3d(input, output_size)[source]

Performs 3D adaptive average pooling on a multi-plane input signal. That is, for any input size, the size of the specified output is \((D, H, W)\). The number of output features is equal to the number of input planes.

Suppose the last 3 dimension size of x is \((inD, inH, inW)\), the last 3 dimension size of output is \((outD, outH, outW)\).

\[\begin{split}\begin{array}{ll} \\ \forall \quad od \in [0,outD-1], oh \in [0,outH-1], ow \in [0,outW-1]\\ output[od,oh,ow] = \\ \qquad mean(x[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\ where,\\ \qquad istartD= \left\lceil \frac{od * inD}{outD} \right\rceil \\ \qquad iendD=\left\lfloor \frac{(od+1)* inD}{outD} \right\rfloor \\ \qquad istartH=\left\lceil \frac{oh * inH}{outH} \right\rceil \\ \qquad iendH=\left\lfloor \frac{(oh+1) * inH}{outH} \right\rfloor \\ \qquad istartW=\left\lceil \frac{ow * inW}{outW} \right\rceil \\ \qquad iendW=\left\lfloor \frac{(ow+1) * inW}{outW} \right\rfloor \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input of adaptive_avg_pool3d, which is a 5D or 4D Tensor.

  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((D, H, W)\), or an int D for \((D, D, D)\). \(D\), \(H\) and \(W\) can be int or None, in which case the output size is the same as that of the input.

Returns:

Tensor, with the same type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If the dimension of input is not 4D or 5D.

  • ValueError – If output_size value is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(3, 3, 4)
>>> output_size=(3, 3, 4)
>>> input_val = np.random.randn(4, 3, 5, 6, 7)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(4, 3, 3, 3, 4)
>>> # case 2: output_size=5
>>> output_size=5
>>> input_val = np.random.randn(2, 3, 8, 6, 12)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(2, 3, 5, 5, 5)
>>> # case 3: output_size=(None, 4, 5)
>>> output_size=(None, 4, 5)
>>> input_val = np.random.randn(4, 1, 9, 10, 8)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(4, 1, 9, 4, 5)
tinyms.primitives.adaptive_max_pool1d(input, output_size)[source]

Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically, the input is of shape \((N, C, L_{in})\), adaptive_max_pool1d outputs regional maximum in the \(L_{in}\)-dimension. The output is of shape \((N, C, L_{out})\), where \(L_{out}\) is defined by output_size.

Note

\(L_{in}\) must be divisible by output_size.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C, L_{in})\), with float16 or float32 data type.

  • output_size (int) – the target output size \(L_{out}\).

Returns:

Tensor of shape \((N, C, L_{out})\), has the same type as input.

Raises:
  • TypeError – If input is neither float16 nor float32.

  • TypeError – If output_size is not an int.

  • ValueError – If output_size is less than 1.

  • ValueError – If the last dimension of input is smaller than output_size.

  • ValueError – If the last dimension of input is not divisible by output_size.

  • ValueError – If length of shape of input is not equal to 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_max_pool1d(input, output_size=2)
>>> print(output.shape)
(1, 3, 2)
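
A worked sketch with explicit values (illustrative, not from the official example set); each output element takes the maximum over a block of \(L_{in}/L_{out} = 3\) consecutive inputs:

>>> input = Tensor(np.array([[[1., 2., 3., 4., 5., 6.]]]), mindspore.float32)
>>> output = ops.adaptive_max_pool1d(input, output_size=2)
>>> print(output)
[[[3. 6.]]]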
tinyms.primitives.adaptive_max_pool2d(input, output_size, return_indices=False)[source]

This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes.

The input and output data format can be “NCHW” and “CHW”. N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

\[\begin{split}\begin{align} h_{start} &= floor(i * H_{in} / H_{out})\\ h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\ w_{start} &= floor(j * W_{in} / W_{out})\\ w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\ Output(i,j) &= {\max Input[h_{start}:h_{end}, w_{start}:w_{end}]} \end{align}\end{split}\]

Note

Ascend platform only supports float16 type for input.

Parameters:
  • input (Tensor) – A 3D or 4D tensor, with float16, float32 or float64 data type.

  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If None, the output size in that dimension is the same as the input size.

  • return_indices (bool) – If return_indices is True , the indices of max value would be output. Default: False .

Returns:

Tensor, with the same dtype as the input; the output height and width are determined by output_size.

Raises:
  • TypeError – If output_size is not int or tuple.

  • TypeError – If input is not a tensor.

  • TypeError – If return_indices is not a bool.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If output_size is a tuple and the length of output_size is not 2.

  • ValueError – If the data format of input is not “NCHW” or “CHW”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> output = ops.adaptive_max_pool2d(input, (None, 2))
>>> print(output)
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> output = ops.adaptive_max_pool2d(input, 2)
>>> print(output)
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> output = ops.adaptive_max_pool2d(input, (1, 2))
>>> print(output)
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
tinyms.primitives.adaptive_max_pool3d(input, output_size, return_indices=False)[source]

Calculates the 3D adaptive max pooling for an input Tensor.

Parameters:
  • input (Tensor) – Tensor, with shape \((C, D, H, W)\) or \((N, C, D, H, W)\).

  • output_size (Union[int, tuple]) – The specified output size, which is an int that represents depth, height and width, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer. If it is None, the output size and input size of the corresponding dimension are the same.

  • return_indices (bool, optional) – If return_indices is True, the indices of max value would be output, Otherwise, it will not be output. Default: False.

Returns:

  • y (Tensor) - Tensor, with the same number of dims and data type as the input.

  • argmax (Tensor) - Tensor, the indices of max value, which has the same shape as y, with data type int32. It is returned only when return_indices is True.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the dimensions number of input is not 4 or 5.

  • TypeError – If dtype of input is not int or float.

  • ValueError – If output_size is neither an int nor a tuple with shape (3,).

Supported Platforms:

GPU CPU

Examples

>>> input = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> output_size = (1, 1, 2)
>>> output = ops.adaptive_max_pool3d(input, output_size, True)
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
[[[[33 35]]]]
tinyms.primitives.add(input, other)[source]

Adds other value to input Tensor.

\[out_{i} = input_{i} + other_{i}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as that of input and other after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If input and other are not one of the following: Tensor, number.Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: x and y are both Tensor.
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = ops.add(x, y)
>>> print(output)
[5. 7. 9.]
>>> # case 2: x is a scalar and y is a Tensor
>>> x = Tensor(1, mindspore.int32)
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> output = ops.add(x, y)
>>> print(output)
[5. 6. 7.]
>>> # the data type of x is int32, the data type of y is float32,
>>> # and the output is the data format of higher precision float32.
>>> print(output.dtype)
Float32
tinyms.primitives.addbmm(input, batch1, batch2, *, beta=1, alpha=1)[source]

Applies batch matrix multiplication to batch1 and batch2, with a reduced add step, and adds input to the result.

The optional values alpha and beta are the scale factors for the matrix-matrix product of batch1 and batch2 and for the added tensor input, respectively. If beta is 0, then input will be ignored.

\[output = \beta input + \alpha (\sum_{i=0}^{b-1} {batch1_i @ batch2_i})\]
Parameters:
  • input (Tensor) – Tensor to be added.

  • batch1 (Tensor) – The first batch of tensor to be multiplied.

  • batch2 (Tensor) – The second batch of tensor to be multiplied.

Keyword Arguments:
  • beta (Union[int, float], optional) – Multiplier for input. Default: 1.

  • alpha (Union[int, float], optional) – Multiplier for batch1 @ batch2. Default: 1.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If alpha or beta is not an int or float.

  • ValueError – If batch1, batch2 cannot apply batch matrix multiplication.

Supported Platforms:

Ascend GPU CPU

Examples

>>> m = np.ones((3, 3)).astype(np.float32)
>>> arr1 = np.arange(24).astype(np.float32).reshape((2, 3, 4))
>>> arr2 = np.arange(24).astype(np.float32).reshape((2, 4, 3))
>>> a = Tensor(arr1)
>>> b = Tensor(arr2)
>>> c = Tensor(m)
>>> output = ops.addbmm(c, a, b)
>>> print(output)
[[ 949. 1009. 1069.]
 [1285. 1377. 1469.]
 [1621. 1745. 1869.]]
tinyms.primitives.addcdiv(input, tensor1, tensor2, value=1)[source]

Performs the element-wise division of tensor tensor1 by tensor tensor2, multiplies the result by the scalar value and adds it to input.

\[y[i] = input[i] + value[i] * (tensor1[i] / tensor2[i])\]
Parameters:
  • input (Tensor) – The tensor to be added.

  • tensor1 (Tensor) – The numerator tensor.

  • tensor2 (Tensor) – The denominator tensor.

  • value (Union[Tensor, Number]) – The multiplier for tensor1/tensor2. Default: 1.

Returns:

Tensor, has the same shape and dtype as tensor1/tensor2.

Raises:
  • TypeError – If dtype of tensor1, tensor2, input is not tensor.

  • ValueError – If tensor1 could not be broadcast to a tensor with shape of tensor2.

  • ValueError – If value could not be broadcast to tensors with shapes of tensor1/tensor2.

  • ValueError – If input could not be broadcast to tensors with shapes of value*(tensor1/tensor2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_data = Tensor(np.array([1, 1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([1, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.array([4, 3, 2, 1]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> y = ops.addcdiv(input_data, x1, x2, value)
>>> print(y)
[1.25      1.6666667 2.5       5.       ]
tinyms.primitives.addcmul(input, tensor1, tensor2, value=1)[source]

Performs the element-wise product of tensor tensor1 and tensor tensor2, multiplies the result by the scalar value and adds it to input.

\[output[i] = input[i] + value[i] * (tensor1[i] * tensor2[i])\]
Parameters:
  • input (Tensor) – The tensor to be added.

  • tensor1 (Tensor) – The tensor to be multiplied.

  • tensor2 (Tensor) – The tensor to be multiplied.

  • value (Union[Tensor, Number]) – The multiplier for tensor1*tensor2. Default: 1.

Returns:

Tensor, has the same shape and dtype as tensor1*tensor2.

Raises:
  • TypeError – If dtype of tensor1, tensor2, input is not Tensor.

  • TypeError – If dtype of input is not one of: float32, float16, int32.

  • TypeError – If dtype of tensor1 or tensor2 is not one of: float32, float16, int32.

  • TypeError – If dtype of value is not one of: float32, float16, int32.

  • ValueError – If tensor1 could not be broadcast to a tensor with shape of tensor2.

  • ValueError – If value could not be broadcast to tensors with shapes of tensor1 * tensor2.

  • ValueError – If input could not be broadcast to tensors with shapes of value*(tensor1*tensor2).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_data = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> x2 = Tensor(np.array([[1, 2, 3]]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> y = ops.addcmul(input_data, x1, x2, value)
>>> print(y)
[[ 2.  3.  4.]
 [ 3.  5.  7.]
 [ 4.  7. 10.]]
tinyms.primitives.addmm(input, mat1, mat2, *, beta=1, alpha=1)[source]

Multiplies matrix mat1 and matrix mat2. The matrix input is added to the final result.

\[output = \beta input + \alpha (mat1 @ mat2)\]
Parameters:
  • input (Tensor) – Tensor to be added.

  • mat1 (Tensor) – The first tensor to be multiplied.

  • mat2 (Tensor) – The second tensor to be multiplied.

Keyword Arguments:
  • beta (Union[int, float], optional) – Multiplier for input. Default: 1.

  • alpha (Union[int, float], optional) – Multiplier for mat1 @ mat2. Default: 1.

Returns:

Tensor, has the same dtype as input.

Raises:

ValueError – If mat1, mat2 cannot apply matrix multiplication.

Supported Platforms:

Ascend GPU CPU

Examples

>>> m = np.ones((3, 3)).astype(np.float32)
>>> arr1 = np.arange(12).astype(np.float32).reshape((3, 4))
>>> arr2 = np.arange(12).astype(np.float32).reshape((4, 3))
>>> a = Tensor(arr1)
>>> b = Tensor(arr2)
>>> c = Tensor(m)
>>> output = ops.addmm(c, a, b)
>>> print(output)
[[ 43.  49.  55.]
 [115. 137. 159.]
 [187. 225. 263.]]
tinyms.primitives.addmv(x, mat, vec, *, beta=1, alpha=1)[source]

Multiplies matrix mat and vector vec. The vector x is added to the final result.

If mat is a \((N, M)\) tensor and vec is a 1-D tensor of size \(M\), then x must be broadcastable with a 1-D tensor of size \(N\). In this case, out will be a 1-D tensor of size \(N\).

The optional values beta and alpha are the scale factors for the matrix-vector product of mat and vec and for the added Tensor x, respectively. If beta is 0, then x will be ignored.

\[output = β x + α (mat @ vec)\]
Parameters:
  • x (Tensor) – Vector to be added. The shape of the tensor is \((N,)\).

  • mat (Tensor) – The first tensor to be multiplied. The shape of the tensor is \((N, M)\).

  • vec (Tensor) – The second tensor to be multiplied. The shape of the tensor is \((M,)\).

Keyword Arguments:
  • beta (scalar[int, float, bool], optional) – Multiplier for x (β). The beta must be int or float or bool. Default: 1.

  • alpha (scalar[int, float, bool], optional) – Multiplier for mat @ vec (α). The alpha must be int or float or bool. Default: 1.

Returns:

Tensor, the shape of the output tensor is \((N,)\), has the same dtype as x.

Raises:
  • TypeError – If mat, vec or x is not a Tensor.

  • TypeError – If mat and vec do not have the same dtype.

  • ValueError – If mat is not a 2-D Tensor.

  • ValueError – If vec is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2., 3.]).astype(np.float32))
>>> mat = Tensor(np.array([[2., 5., 3.], [4., 2., 2.]]).astype(np.float32))
>>> vec = Tensor(np.array([3., 2., 4.]).astype(np.float32))
>>> output = ops.addmv(x, mat, vec)
>>> print(output)
[30. 27.]
tinyms.primitives.addn(x)[source]

Computes addition of all input tensors element-wise.

All input tensors must have the same shape.

Parameters:

x (Union(tuple[Tensor], list[Tensor])) – A tuple or list composed of Tensor.

Returns:

Tensor, has the same shape and dtype as each Tensor of x.

Raises:
  • TypeError – If x is neither tuple nor list.

  • ValueError – If there are Tensors with different shapes in x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> output = ops.addn([x, y, x, y])
>>> print(output)
[10. 14. 18.]
tinyms.primitives.addr(x, vec1, vec2, *, beta=1, alpha=1)[source]

Computes the outer product of two vectors vec1 and vec2, and adds the resulting matrix to x.

Given vec1 and vec2 of sizes \(N\) and \(M\), x must be able to broadcast to a matrix of shape \((N, M)\).

beta and alpha are optional scaling factors for the outer product of vec1 and vec2, and the matrix x respectively. Setting beta to 0 will exclude x from the computation.

\[output = β x + α (vec1 ⊗ vec2)\]
Parameters:
  • x (Tensor) – The tensor to be added. The shape of the tensor is \((N, M)\).

  • vec1 (Tensor) – The first tensor to be multiplied. The shape of the tensor is \((N,)\).

  • vec2 (Tensor) – The second tensor to be multiplied. The shape of the tensor is \((M,)\).

Keyword Arguments:
  • beta (scalar[int, float, bool], optional) – Multiplier for x (β). The beta must be int or float or bool. Default: 1.

  • alpha (scalar[int, float, bool], optional) – Multiplier for vec1 ⊗ vec2 (α). The alpha must be int or float or bool. Default: 1.

Returns:

Tensor, the shape of the output tensor is \((N, M)\), has the same dtype as x.

Raises:
  • TypeError – If x, vec1 or vec2 is not a Tensor.

  • TypeError – If vec1 and vec2 do not have the same dtype.

  • ValueError – If vec1 or vec2 is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[2., 2.], [3., 2.], [3., 4.]], np.float32))
>>> vec1 = Tensor(np.array([2., 3., 2.], np.float32))
>>> vec2 = Tensor(np.array([3, 4], np.float32))
>>> output = ops.addr(x, vec1, vec2)
>>> print(output)
[[ 8. 10.]
 [12. 14.]
 [ 9. 12.]]
tinyms.primitives.adjoint(x)[source]

Computes the element-wise conjugate of the Tensor and transposes its last two dimensions.

Parameters:

x (Tensor) – Input Tensor.

Returns:

Tensor, the calculated result.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([[0. + 0.j, 1. + 1.j], [2. + 2.j, 3. + 3.j]]), mindspore.complex128)
>>> output = ops.adjoint(a)
>>> print(output)
[[0.-0.j 2.-2.j]
 [1.-1.j 3.-3.j]]
tinyms.primitives.affine_grid(theta, size, align_corners=False)[source]

Returns a 2D or 3D flow field (sampling grid) based on theta, a batch of affine matrices.

Parameters:
  • theta (Tensor) – The input tensor of flow field whose dtype is float16, float32. Input batch of affine matrices with shape \((N, 2, 3)\) for 2D grid or \((N, 3, 4)\) for 3D grid.

  • size (tuple[int]) – The target output image size, in the format \((N, C, H, W)\) for a 2D grid or \((N, C, D, H, W)\) for a 3D grid.

  • align_corners (bool, optional) – Geometrically, each pixel of the input is viewed as a square instead of a dot. If True, the extrema -1 and 1 are considered to refer to the centers of the corner pixels rather than the pixel corners. If False (the default), -1 and 1 refer to the corners of the pixels, so that sampling is independent of the image resolution. Default: False.

Returns:

Tensor, a tensor whose data type is the same as theta, and whose shape is \((N, H, W, 2)\) for a 2D grid or \((N, D, H, W, 3)\) for a 3D grid.

Raises:
  • TypeError – If theta is not a Tensor or size is not a tuple.

  • ValueError – If the shape of theta is not \((N, 2, 3)\) or \((N, 3, 4)\).

  • ValueError – If the size of size is not 4 or 5.

  • ValueError – If the shape of theta is \((N, 2, 3)\) but the size of size is not 4, or the shape of theta is \((N, 3, 4)\) but the size of size is not 5.

  • ValueError – If the size[0] is not equal to the shape[0] of theta.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> theta = Tensor([[[0.8, 0.5, 0],[-0.5, 0.8, 0]]], mindspore.float32)
>>> out_size = (1, 3, 2, 3)
>>> output = ops.affine_grid(theta, out_size, False)
>>> print(output)
[[[[-0.78333336 -0.06666666]
   [-0.25       -0.4       ]
   [ 0.28333336 -0.73333335]]
  [[-0.28333336  0.73333335]
   [ 0.25        0.4       ]
   [ 0.78333336  0.06666666]]]]
tinyms.primitives.all(input, axis=None, keep_dims=False)[source]

By default, reduces a dimension of input by the “logical AND” of all elements in that dimension; it can also reduce a dimension of input along a given axis. keep_dims controls whether the output keeps the same number of dimensions as the input.

Parameters:
  • input (Tensor[bool]) – The input Tensor. The dtype of the Tensor is bool. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Suppose the rank of input is r, axis must be in the range [-rank(input), rank(input)). Default: None, all dimensions are reduced.

  • keep_dims (bool, optional) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Returns:

Tensor, the dtype is bool.

  • If axis is None, and keep_dims is False, the output is a 0-D Tensor representing the “logical AND” of all elements in the input Tensor.

  • If axis is int, such as 2, and keep_dims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int), such as (2, 3), and keep_dims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> # case 1: Reduces a dimension by the "logical AND" of all elements in the dimension.
>>> output = ops.all(x, keep_dims=True)
>>> print(output)
[[False]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.all(x, axis=0)
>>> print(output)
[True False]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.all(x, axis=1)
>>> print(output)
[False True]
tinyms.primitives.amax(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

By default, reduces all dimensions of a tensor by returning the maximum value in input; it can also reduce a dimension of input along a specified axis. keepdims determines whether the dimensions of the output and the input are the same.

Parameters:
  • input (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of input is r, and the value range is [-r, r).

  • keepdims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the primitive value in input with the value in initial. If True, do not replace, otherwise replace. For the index of True in where, the corresponding value in initial must be assigned. Default: None, which indicates True by default.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keepdims is False, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 1, and keepdims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keepdims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.amax(x, 1, keepdims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the maximum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = ops.amax(x)
>>> print(output)
9.0
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.amax(x, 0, True)
>>> print(output)
[[[7. 7. 7. 7. 7. 7.]
  [8. 8. 8. 8. 8. 8.]
  [9. 9. 9. 9. 9. 9.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.amax(x, 1, True)
>>> print(output)
[[[3. 3. 3. 3. 3. 3.]]
 [[6. 6. 6. 6. 6. 6.]]
 [[9. 9. 9. 9. 9. 9.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = ops.amax(x, 2, True)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
tinyms.primitives.amin(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

By default, reduces all dimensions of a tensor by returning the minimum value in input; it can also reduce a dimension of input along a specified axis. keepdims determines whether the dimensions of the output and the input are the same.

Parameters:
  • input (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of input is r, and the value range is [-r, r).

  • keepdims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The maximum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the primitive value in input with the value in initial. If True, do not replace, otherwise replace. For the index of True in where, the corresponding value in initial must be assigned. Default: None, which indicates True by default.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keepdims is False, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 1, and keepdims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), set as (1, 2), and keepdims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.amin(x, 1, keepdims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by the minimum value of all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = ops.amin(x)
>>> print(output)
1.0
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.amin(x, 0, True)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]
  [2. 2. 2. 2. 2. 2.]
  [3. 3. 3. 3. 3. 3.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.amin(x, 1, True)
>>> print(output)
[[[1. 1. 1. 1. 1. 1.]]
 [[4. 4. 4. 4. 4. 4.]]
 [[7. 7. 7. 7. 7. 7.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = ops.amin(x, 2, True)
>>> print(output)
[[[1.]
  [2.]
  [3.]]
 [[4.]
  [5.]
  [6.]]
 [[7.]
  [8.]
  [9.]]]
tinyms.primitives.aminmax(input, *, axis=0, keepdims=False)[source]

Returns the minimum and maximum values along the given axis of the input tensor.

Parameters:

input (Tensor) – The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\) .

Keyword Arguments:
  • axis (int, optional) – The dimension to reduce. The value range of axis is [-rank, rank), where “rank” is the dimension of input. Default: 0.

  • keepdims (bool, optional) – Whether to maintain the reduced dimension. If True, the output keeps the same number of dimensions as the input; if False, the dimension specified by axis is removed. Default: False.

Returns:

tuple (Tensor), containing the minimum value and maximum value of the input tensor.

  • If keepdims is True, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\).

  • If keepdims is False, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output0, output1 = ops.aminmax(x)
>>> print(output0, output1)
0.0 0.7
>>> output2, output3 = ops.aminmax(x, axis=-1, keepdims=True)
>>> print(output2, output3)
[0.] [0.7]
tinyms.primitives.angle(input)[source]

Returns the element-wise argument of a complex tensor. The elements in input are considered to be complex numbers of the form a+bj, where a is the real part and b is the imaginary part. The argument returned by this function is of the form \(atan2(b, a)\).

Parameters:

input (Tensor) – The input tensor. types: complex64, complex128.

Returns:

Tensor, has the float32 or float64 type and the same shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the dtype of input is not one of: complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([-1.5 + 7.8j, 3 + 5.75j], mindspore.complex64)
>>> output = ops.angle(input)
>>> print(output)
[1.7607845 1.0899091]
tinyms.primitives.any(input, axis=None, keep_dims=False)[source]

By default, reduces a dimension of input by the “logical OR” of all elements in that dimension; it can also reduce a dimension of input along a given axis. keep_dims controls whether the output keeps the same number of dimensions as the input.

Parameters:
  • input (Tensor[bool]) – The input Tensor. The dtype of the Tensor is bool. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)], optional) – The dimensions to reduce. Suppose the rank of input is r, axis must be in the range [-rank(input), rank(input)). Default: None, all dimensions are reduced.

  • keep_dims (bool, optional) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default : False.

Returns:

Tensor, the dtype is bool.

  • If axis is None, and keep_dims is False, the output is a 0-D Tensor representing the “logical OR” of all elements in the input Tensor.

  • If axis is int, such as 2, and keep_dims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int), such as (2, 3), and keep_dims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Raises:
  • TypeError – If keep_dims is not a bool.

  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> # case 1: Reduces a dimension by the "logical OR" of all elements in the dimension.
>>> output = ops.any(x, keep_dims=True)
>>> print(output)
[[ True]]
>>> print(output.shape)
(1, 1)
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.any(x, axis=0)
>>> print(output)
[True True]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.any(x, axis=1)
>>> print(output)
[True True]
tinyms.primitives.approximate_equal(x, y, tolerance=1e-05)[source]

Returns True if abs(x-y) is smaller than tolerance element-wise, otherwise False.

\[\begin{split}out_i = \begin{cases} & \text{ if } \left | x_{i} - y_{i} \right | < \text{tolerance},\ \ True \\ & \text{ if } \left | x_{i} - y_{i} \right | \ge \text{tolerance},\ \ False \end{cases}\end{split}\]

where tolerance indicates the acceptable maximum deviation.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower precision data type will be converted to the relatively highest precision data type.

Parameters:
  • x (Tensor) – A tensor. Must be one of the following types: float32, float16. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • y (Tensor) – A tensor of the same type and shape as x.

  • tolerance (float) – The maximum deviation that two elements can be considered equal. Default: 1e-05.

Returns:

Tensor, the shape is the same as the shape of x, and the data type is bool.

Raises:
  • TypeError – If tolerance is not a float.

  • RuntimeError – If data type conversion between x and y is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> tol = 1.5
>>> x = Tensor(np.array([1, 2, 3]), mstype.float32)
>>> y = Tensor(np.array([2, 4, 6]), mstype.float32)
>>> output = ops.approximate_equal(x, y, tol)
>>> print(output)
[ True False False]
tinyms.primitives.arange(start=0, end=None, step=1, *, dtype=None)[source]

Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end.

Parameters:
  • start (Union[float, int, Tensor], optional) – The start of the interval. If Tensor, the shape must be (). Default: 0.

  • end (Union[float, int, Tensor], optional) – The end of the interval, exclusive. If Tensor, the shape must be (). Default: None. If None, it defaults to the value of start, and 0 is used as the starting value.

  • step (Union[float, int, Tensor], optional) – Number that increments start. If Tensor, the shape must be (). Default: 1.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The required data type of returned Tensor. Default: None. If the value is not specified or is None, the type with the highest precision in the start, end, and step parameters is inferred.

Returns:

A 1-D Tensor, with the same type as the inputs.

Raises:
  • TypeError – If start, end or step is not an int, a float, or a scalar Tensor (a special Tensor with shape ()) of a valid dtype.

  • ValueError – If step = 0.

  • ValueError – If start >= end when step > 0.

  • ValueError – If start <= end when step < 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> output = ops.arange(1, 6)
>>> print(output)
[1 2 3 4 5]
>>> print(output.dtype)
Int64
>>> output = ops.arange(0, 3, 1.2)
>>> print(output)
[0.  1.2 2.4]
>>> print(output.dtype)
Float32
>>> output = ops.arange(7, 1, -2)
>>> print(output)
[7 5 3]
>>> print(output.dtype)
Int64
>>> output = ops.arange(ms.Tensor(12.0, dtype=ms.float64), 2, ms.Tensor(-1.0, dtype=ms.float32))
>>> print(output)
[12. 11. 10.  9.  8.  7.  6.  5.  4.  3.]
>>> print(output.dtype)
Float64
tinyms.primitives.arccos(input)[source]

Alias for mindspore.ops.acos().

Supported Platforms:

Ascend GPU CPU
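
Examples

Since arccos is an alias of mindspore.ops.acos(), a minimal sketch reusing the values from the acos() example above:

>>> input = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = ops.arccos(input)
>>> print(output)
[0.737726  1.5307857 1.2661036 0.9764105]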

tinyms.primitives.arccosh(input)[source]

For details, please refer to mindspore.ops.acosh().
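
Examples

Since arccosh defers to mindspore.ops.acosh(), a minimal sketch reusing the values from the acosh() example above:

>>> x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = ops.arccosh(x)
>>> print(output)
[0.        0.9624237 1.7627472 5.298292 ]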

tinyms.primitives.arcsin(x)[source]

Alias for mindspore.ops.asin().

Supported Platforms:

Ascend GPU CPU
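
Examples

Since arcsin is an alias of mindspore.ops.asin(), a minimal sketch reusing the values from the asin() example later in this reference:

>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = ops.arcsin(x)
>>> print(output)
[0.8330704  0.04001067 0.30469266 0.5943858 ]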

tinyms.primitives.arcsinh(input)[source]

Alias for mindspore.ops.asinh().

Supported Platforms:

Ascend GPU CPU
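
Examples

Since arcsinh is an alias of mindspore.ops.asinh(), a minimal sketch reusing the values from the asinh() example later in this reference:

>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = ops.arcsinh(x)
>>> print(output)
[-2.3124382  1.1947632  1.8184465  5.298342 ]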

tinyms.primitives.arctan(input)[source]

For details, please refer to mindspore.ops.atan().
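
Examples

Since arctan defers to mindspore.ops.atan(), a minimal sketch reusing the values from the atan() example later in this reference:

>>> x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> output = ops.arctan(x)
>>> print(output)
[0.7853982 0.       ]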

tinyms.primitives.arctan2(input, other)[source]

For details, please refer to mindspore.ops.atan2().
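
Examples

Since arctan2 defers to mindspore.ops.atan2(), a minimal sketch reusing the values from the atan2() example later in this reference:

>>> input = Tensor(np.array([0, 1]), mindspore.float32)
>>> other = Tensor(np.array([1, 1]), mindspore.float32)
>>> output = ops.arctan2(input, other)
>>> print(output)
[0.        0.7853982]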

tinyms.primitives.arctanh(input)[source]

Alias for mindspore.ops.atanh().

Supported Platforms:

Ascend GPU CPU
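
Examples

Since arctanh is an alias of mindspore.ops.atanh(), a minimal sketch reusing the values from the atanh() example later in this reference:

>>> x = Tensor(np.array([0, -0.5]), mindspore.float32)
>>> output = ops.arctanh(x)
>>> print(output)
[ 0.         -0.54930615]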

tinyms.primitives.argmax(input, dim=None, keepdim=False)[source]

Return the indices of the maximum values of a tensor across a dimension.

Parameters:
  • input (Tensor) – Input tensor.

  • dim (Union[int, None], optional) – The dimension to reduce. If dim is None, the indices of the maximum value within the flattened input will be returned. Default: None.

  • keepdim (bool, optional) – Whether the output tensor retains the specified dimension. Ignored if dim is None. Default: False.

Returns:

Tensor, indices of the maximum values across a dimension.

Raises:

ValueError – If dim is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
>>> output = ops.argmax(x, dim=-1)
>>> print(output)
[1 0 0]
tinyms.primitives.argmin(input, axis=None, keepdims=False)[source]

Returns the indices of the minimum value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the shape of the output tensor is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters:
  • input (Tensor) – Input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, None], optional) – Axis where the Argmin operation applies to. Default: None.

  • keepdims (bool, optional) – Whether the output tensor retains the specified dimension. Ignored if axis is None. Default: False.

Returns:

Tensor, indices of the min value of input tensor across the axis.

Raises:

TypeError – If axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = ops.argmin(input_x)
>>> print(index)
2
tinyms.primitives.argsort(input, axis=-1, descending=False)[source]

Sorts the input tensor along the given dimension in the specified order and returns the sorted indices.

Parameters:
  • input (Tensor) – The input tensor to sort.

  • axis (int) – The axis to sort along. Default: -1, meaning the last axis.

  • descending (bool) – The sort order. If descending is True then the elements are sorted in descending order by value. Otherwise sort in ascending order. Default: False.

Returns:

Tensor, the indices of sorted input tensor. Data type is int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> sort = ops.argsort(x)
>>> print(sort)
[[2 1 0]
 [2 0 1]
 [0 1 2]]
tinyms.primitives.argwhere(input)[source]

Return a Tensor of the positions of all non-zero values.

Parameters:

input (Tensor) – The input tensor. The data type is Number or Bool.

Returns:

Tensor, a 2-D Tensor whose data type is int64, containing the positions of all non-zero values of the input.

Raises:

TypeError – If input is not a Tensor.
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x = Tensor(np.array([[[1,  0], [-5, 0]]]), mindspore.int32)
>>> output = ops.argwhere(x)
>>> print(output)
[[0 0 0]
 [0 1 0]]
tinyms.primitives.asin(input)[source]

Computes arcsine of input tensors element-wise.

\[out_i = sin^{-1}(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32, float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = ops.asin(x)
>>> print(output)
[0.8330704  0.04001067 0.30469266 0.5943858 ]
tinyms.primitives.asinh(x)[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(x_i)\]
Parameters:

x (Tensor) – The input tensor of inverse hyperbolic sine function.

Returns:

Tensor, has the same shape and type as x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = ops.asinh(x)
>>> print(output)
[-2.3124382  1.1947632  1.8184465  5.298342 ]
tinyms.primitives.assign(variable, value)[source]

Assigns Parameter with a value.

Args of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • variable (Parameter) – The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • value (Tensor) – The value to be assigned, has the same shape with variable.

Returns:

Tensor, has the same data type and shape as original variable.

Raises:
  • TypeError – If variable is not a Parameter.

  • TypeError – If value is not a Tensor.

  • RuntimeError – If the data type of variable and value conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> value = Tensor([2.0], mindspore.float32)
>>> variable = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="variable")
>>> ops.assign(variable, value)
>>> print(variable.asnumpy())
[2.]
tinyms.primitives.assign_add(variable, value)[source]

Updates a Parameter by adding a value to it.

Args of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation.

Note

Since variable is of data type Parameter, its data type cannot be changed, so only the type of value is allowed to be promoted to the type of variable. The supported conversion types differ between devices, so it is recommended to use the same data type when using this operator.

Parameters:
  • variable (Parameter) – The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • value (Tensor) – The value to be added to the variable. It must have the same shape as variable. It is recommended to use the same data type when using this operator.

Returns:

Tensor, has the same data type and shape as original variable.

Raises:
  • TypeError – If value is neither Number nor Tensor.

  • RuntimeError – If the data type of variable and value conversion of Parameter is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
>>> value = Tensor(np.ones([1]).astype(np.int32) * 100)
>>> ops.assign_add(variable, value)
>>> print(variable.asnumpy())
[101]
tinyms.primitives.assign_sub(variable, value)[source]

Updates a Parameter by subtracting a value from it.

Args of variable and value comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. If value is a number, the number is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation.

Note

Since variable is of data type Parameter, its data type cannot be changed, so only the type of value is allowed to be promoted to the type of variable. The supported conversion types differ between devices, so it is recommended to use the same data type when using this operator.

Parameters:
  • variable (Parameter) – The Parameter. \((N,*)\) where \(*\) means, any number of additional dimensions.

  • value (Tensor) – The value to be subtracted from the variable. It must have the same shape as variable. It is recommended to use the same data type when using this operator.

Returns:

Tensor, has the same data type and shape as original variable.

Raises:
  • TypeError – If value is neither Number nor Tensor.

  • RuntimeError – If data type conversion between variable and value is required when data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> variable = mindspore.Parameter(initializer(1, [1], mindspore.int32), name="global_step")
>>> value = Tensor(np.ones([1]).astype(np.int32) * 100)
>>> ops.assign_sub(variable, value)
>>> print(variable.asnumpy())
[-99]
tinyms.primitives.atan(input)[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = tan^{-1}(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32.

Returns:

A Tensor, has the same type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 0.0]), mindspore.float32)
>>> output = ops.atan(x)
>>> print(output)
[0.7853982 0.       ]
tinyms.primitives.atan2(input, other)[source]

Returns arctangent of input/other element-wise.

It returns \(\theta\ \in\ [-\pi, \pi]\) such that \(input = r*\sin(\theta), other = r*\cos(\theta)\), where \(r = \sqrt{input^2 + other^2}\).

Note

  • Arg input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower precision data type will be converted to relatively the highest precision data type.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Tensor, Number.number) – The input tensor or scalar. \((N,*)\) where \(*\) means, any number of additional dimensions. The data type should be one of the following types: float16, float32, float64

  • other (Tensor, Number.number) – The input tensor or scalar. It has the same shape as input.

Note

At least one of the input args should be Tensor.

Returns:

Tensor or scalar, the shape is the same as the one after broadcasting, and the data type is the same as input.

Raises:
  • TypeError – If input or other is not a Tensor or scalar.

  • RuntimeError – If data type conversion between input and other is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 1]), mindspore.float32)
>>> other = Tensor(np.array([1, 1]), mindspore.float32)
>>> output = ops.atan2(input, other)
>>> print(output)
[0.        0.7853982]
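
Per the note above, one of the two inputs may be a scalar constant. A minimal sketch, assuming a Python float for other:

>>> input = Tensor(np.array([0, 1]), mindspore.float32)
>>> output = ops.atan2(input, 1.0)  # scalar other is broadcast against input
>>> print(output)
[0.        0.7853982]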
tinyms.primitives.atanh(x)[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = \tanh^{-1}(x_{i})\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

x (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means any number of additional dimensions. The data type should be one of the following types: float16, float32.

Returns:

A Tensor, has the same type as the input.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -0.5]), mindspore.float32)
>>> output = ops.atanh(x)
>>> print(output)
[ 0.         -0.54930615]
tinyms.primitives.atleast_1d(inputs)[source]

Reshapes the Tensor(s) in inputs so that each has at least one dimension after this operation.

A scalar is converted to a 1-D Tensor; an input tensor with one or more dimensions is returned as is.

Parameters:

inputs (Union[Tensor, list[Tensor]]) – One or more input tensors.

Returns:

Tensor or list[Tensor]. If a list is returned, every element a in it satisfies a.ndim >= 1.

Raises:

TypeError – If the input is not a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.ones((2, 3)))
>>> x2 = Tensor(np.ones(()))
>>> x3 = Tensor(np.ones(5))
>>> out = ops.atleast_1d([x1, x2, x3])
>>> print(out[0].asnumpy())
[[1. 1. 1.]
 [1. 1. 1.]]
>>> print(out[1].asnumpy())
[1.]
>>> print(out[2].asnumpy())
[1. 1. 1. 1. 1.]
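
Since inputs may also be a single Tensor rather than a list, a minimal sketch of the scalar-to-1-D conversion described above:

>>> x = Tensor(5.0)          # a 0-D (scalar) tensor
>>> out = ops.atleast_1d(x)  # a single Tensor input returns a single Tensor
>>> print(out.shape)
(1,)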
tinyms.primitives.atleast_2d(inputs)[source]

Reshapes the Tensor(s) in inputs so that each has at least two dimensions after this operation.

A scalar or 1-D Tensor is converted to a 2-D Tensor; a tensor with more dimensions is returned as is.

Parameters:

inputs (Union[Tensor, list[Tensor]]) – One or more input tensors.

Returns:

Tensor or list[Tensor]. If a list is returned, every element a in it satisfies a.ndim >= 2.

Raises:

TypeError – If the input is not a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import ops
>>> x1 = np.ones((2, 3))
>>> x2 = np.ones(())
>>> x3 = np.ones(5)
>>> out = ops.atleast_2d([x1, x2, x3])
>>> print(out)
(Tensor(shape=[2, 3], dtype=Float32, value=
[[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]), Tensor(shape=[1, 1], dtype=Float32, value=
[[ 1.00000000e+00]]), Tensor(shape=[1, 5], dtype=Float32, value=
[[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]))
tinyms.primitives.atleast_3d(inputs)[source]

Reshapes the Tensor(s) in inputs so that each has at least three dimensions after this operation.

A scalar, 1-D or 2-D Tensor is converted to a 3-D Tensor; a tensor with more dimensions is returned as is.

Parameters:

inputs (Union[Tensor, list[Tensor]]) – One or more input tensors.

Returns:

Tensor or list[Tensor]. If returned a list, every element a in that list satisfies a.ndim >= 3. For example, a 1-D Tensor of shape \((N,)\) becomes a Tensor of shape \((1, N, 1)\), and a 2-D Tensor of shape \((M, N)\) becomes a tensor of shape \((M, N, 1)\).

Raises:

TypeError – If the input is not a tensor or a list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.ones((2, 3)))
>>> x2 = Tensor(np.ones(()))
>>> x3 = Tensor(np.ones(5))
>>> out = ops.atleast_3d([x1, x2, x3])
>>> print(out[0].asnumpy())
[[[1.]
  [1.]
  [1.]]

 [[1.]
  [1.]
  [1.]]]
>>> print(out[1].asnumpy())
[[[1.]]]
>>> print(out[2].asnumpy())
[[[1.]
  [1.]
  [1.]
  [1.]
  [1.]]]
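
As a shape-only sketch of the conversion rules stated in the Returns section, where \((N,)\) becomes \((1, N, 1)\) and \((M, N)\) becomes \((M, N, 1)\):

>>> x1 = Tensor(np.ones(4))       # shape (4,)
>>> x2 = Tensor(np.ones((2, 3)))  # shape (2, 3)
>>> out = ops.atleast_3d([x1, x2])
>>> print(out[0].shape, out[1].shape)
(1, 4, 1) (2, 3, 1)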
tinyms.primitives.avg_pool1d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True)[source]

Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically the input is of shape \((N_{in}, C_{in}, L_{in})\), avg_pool1d outputs regional average in the \((L_{in})\)-dimension. Given kernel size \(ks = l_{ker}\) and stride \(s = s_0\), the operation is as follows.

\[\text{output}(N_i, C_j, l) = \frac{1}{l_{ker}} \sum_{n=0}^{l_{ker}-1} \text{input}(N_i, C_j, s_0 \times l + n)\]

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C_{in}, L_{in})\).

  • kernel_size (int) – The size of kernel window used to take the average value. Default: 1.

  • stride (Union(int, tuple[int])) – The distance of kernel moving: an int number, or a tuple of one int number, that represents the width of movement. Default: 1.

  • padding (Union(int, tuple[int])) – The pad value to be filled. If padding is an integer, the paddings of left and right are the same, equal to padding. If padding is a tuple of 2 integers, the paddings of left and right equal padding[0] and padding[1] correspondingly. Default: 0.

  • ceil_mode (bool) – If True, apply ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, include the zero-padding in the averaging calculation. Default: True.

Returns:

Tensor of shape \((N, C_{out}, L_{out})\).

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If kernel_size or stride is not an int.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • ValueError – If length of shape of input_x is not equal to 3.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If padding is neither an int nor a tuple whose length is equal to 2.

  • ValueError – If value(s) of padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.avg_pool1d(input_x, kernel_size=6, stride=1)
>>> print(output.shape)
(1, 3, 1)
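
A shape-only sketch of padding and stride, assuming the standard pooling length formula \(L_{out} = \lfloor (L_{in} + 2 \times padding - kernel\_size) / stride \rfloor + 1\) when ceil_mode is False:

>>> input_x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.avg_pool1d(input_x, kernel_size=2, stride=2, padding=1)
>>> print(output.shape)  # floor((6 + 2*1 - 2) / 2) + 1 = 4
(1, 3, 4)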
tinyms.primitives.avg_pool2d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=0)[source]

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), and avg_pool2d outputs regional averages in the \((H_{in}, W_{in})\)-dimensions. Given kernel size \((k_{h}, k_{w})\) and strides \((stride[0], stride[1])\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \frac{1}{k_{h} * k_{w}} \sum_{m=0}^{k_{h}-1} \sum_{n=0}^{k_{w}-1} \text{input}(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)\]

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value. It is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving: an int number that represents both the height and width of movement, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • padding (Union(int, tuple[int])) – The pad value to be filled. If padding is an integer, the paddings of top, bottom, left and right are the same, equal to padding. If padding is a tuple of 4 integers, the paddings of top, bottom, left and right equal padding[0], padding[1], padding[2] and padding[3] correspondingly. Default: 0.

  • ceil_mode (bool) – If True, apply ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool) – If True, include the zero-padding in the averaging calculation. Default: True.

  • divisor_override (int) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

Returns:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If kernel_size or stride is neither int nor tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If divisor_override is not an int.

  • ValueError – If length of shape of input_x is not equal to 4.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 2.

  • ValueError – If padding is neither an int nor a tuple whose length is equal to 4.

  • ValueError – If value(s) of padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> output = ops.avg_pool2d(x, kernel_size=2, stride=1)
>>> print(output)
[[[[ 2.5   3.5   4.5]
   [ 6.5   7.5   8.5]]
  [[14.5  15.5  16.5]
   [18.5  19.5  20.5]]
  [[26.5  27.5  28.5]
   [30.5  31.5  32.5]]]]
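
A minimal sketch of divisor_override: with divisor_override=1 each window is summed rather than averaged, assuming the divisor simply replaces kernel_size in the formula, as described above:

>>> x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> output = ops.avg_pool2d(x, kernel_size=2, stride=1, divisor_override=1)
>>> print(output[0][0])  # 4x the averaged values of the first channel above
[[10. 14. 18.]
 [26. 30. 34.]]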
tinyms.primitives.avg_pool3d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=0)[source]

Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\), avg_pool3d outputs regional average in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

\[ \begin{align}\begin{aligned}\text{output}(N_i, C_j, d, h, w) = \frac{1}{d_{ker} * h_{ker} * w_{ker}} \sum_{l=0}^{d_{ker}-1} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1}\\\text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\end{aligned}\end{align} \]

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\). Currently supports float16 and float32 data types.

  • kernel_size (Union[int, tuple[int]], optional) – The size of kernel used to take the average value: an int number that represents the depth, height and width of the kernel simultaneously, or a tuple of three int numbers that represent depth, height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving: an int number that represents the depth, height and width of movement simultaneously, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • padding (Union(int, tuple[int]), optional) – The pad value to be filled. If padding is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to padding. If padding is a tuple of six integers, the paddings of head, tail, top, bottom, left and right equal padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] correspondingly. Default: 0.

  • ceil_mode (bool, optional) – If True, apply ceil instead of floor to compute the output shape. Default: False.

  • count_include_pad (bool, optional) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int, optional) – If specified, it will be used as divisor in the averaging calculation, otherwise kernel_size will be used. Default: 0.

Returns:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\). Has the same data type with input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If divisor_override is not an int.

  • ValueError – If length of shape of input_x is not equal to 5.

  • ValueError – If numbers in kernel_size or stride are not positive.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 3.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If element of padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(1 * 2 * 2 * 2 * 3).reshape((1, 2, 2, 2, 3)), mindspore.float16)
>>> output = ops.avg_pool3d(input_x, kernel_size=2, stride=1)
>>> print(output)
[[[[[ 5.  6.]]]
  [[[17. 18.]]]]]
tinyms.primitives.baddbmm(input, batch1, batch2, beta=1, alpha=1)[source]

The result is the sum of the input and a batch matrix-matrix product of matrices in batch1 and batch2. The formula is defined as follows:

\[\text{out}_{i} = \beta \text{input}_{i} + \alpha (\text{batch1}_{i} \mathbin{@} \text{batch2}_{i})\]
Parameters:
  • input (Tensor) – The input Tensor. When batch1 is a \((C, W, T)\) Tensor and batch2 is a \((C, T, H)\) Tensor, input must be broadcastable with \((C, W, H)\) Tensor.

  • batch1 (Tensor) – \(batch1\) in the above formula. Must be a 3-D Tensor with the same dtype as input.

  • batch2 (Tensor) – \(batch2\) in the above formula. Must be a 3-D Tensor with the same dtype as input.

  • beta (Union[float, int], optional) – multiplier for input. The default is 1.

  • alpha (Union[float, int], optional) – multiplier for \(batch1 @ batch2\). The default is 1. Arguments beta and alpha must be integers when the inputs are not of a float type; otherwise they should be real numbers.

Returns:

Tensor, has the same dtype as input, shape will be \((C, W, H)\).

Raises:
  • TypeError – The type of input, batch1, batch2 is not Tensor.

  • TypeError – The types of input, batch1, batch2 are different.

  • TypeError – If, for inputs of type FloatTensor or DoubleTensor, arguments beta and alpha are not real numbers, or, for other input types, they are not integers.

  • TypeError – If, for Baddbmm, attributes alpha and beta are not real numbers.

  • ValueError – If batch1 and batch2 are not 3-D tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([1, 3, 3]).astype(np.float32))
>>> batch1 = Tensor(np.ones([1, 3, 4]).astype(np.float32))
>>> batch2 = Tensor(np.ones([1, 4, 3]).astype(np.float32))
>>> output = ops.baddbmm(input, batch1, batch2)
>>> print(output)
[[[5. 5. 5.]
  [5. 5. 5.]
  [5. 5. 5.]]]
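
A minimal sketch of beta and alpha, reusing the all-ones tensors above, where batch1 @ batch2 is all 4s, so each output element is beta * 1 + alpha * 4:

>>> output = ops.baddbmm(input, batch1, batch2, beta=2.0, alpha=0.5)
>>> print(output)  # 2.0 * 1 + 0.5 * 4 = 4
[[[4. 4. 4.]
  [4. 4. 4.]
  [4. 4. 4.]]]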
tinyms.primitives.bartlett_window(window_length, periodic=True, *, dtype=None)[source]

Bartlett window function is a triangular-shaped weighting function used for smoothing or frequency analysis of signals in digital signal processing.

The window_length is an input tensor which determines the returned window size, and its value should be an integer. In particular, if window_length is equal to 1, the returned window contains only a single value 1.

Attr periodic determines whether the returned window removes the last duplicate value of the symmetric window, so that it can be used as a periodic window function. Therefore, if attr periodic is true, the \(N\) in the formula is \(window\_length + 1\).

\[\begin{split}w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\ 2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \\ \end{cases},\end{split}\]

where N is the full window size.

Parameters:
  • window_length (Tensor) – The size of returned window, with data type int32, int64. The input data should be an integer with a value of [0, 1000000].

  • periodic (bool, optional) – Indicates whether to return a window to be used as a periodic function or a symmetric window. Default: True.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The datatype of returned tensor. Only float16, float32 and float64 are allowed. Default: None.

Returns:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype. If dtype is None, output datatype is float32.

Raises:
  • TypeError – If window_length is not a Tensor.

  • TypeError – If the type of window_length is not one of: int32, int64.

  • TypeError – If periodic is not a bool.

  • TypeError – If dtype is not one of: float16, float32, float64.

  • ValueError – If the value range of window_length is not [0, 1000000].

  • ValueError – If the dimension of window_length is not 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(5, mstype.int32)
>>> output = ops.bartlett_window(window_length, periodic=True, dtype=mstype.float32)
>>> print(output)
[0. 0.4 0.8 0.8 0.4]
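
With periodic=False the symmetric window is returned instead; a sketch with the values expected from the formula above (\(N = 5\), so w = [0, 0.5, 1, 0.5, 0]):

>>> output = ops.bartlett_window(window_length, periodic=False, dtype=mstype.float32)
>>> print(output)
[0.  0.5 1.  0.5 0. ]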
tinyms.primitives.batch_norm(input_x, running_mean, running_var, weight, bias, training=False, momentum=0.1, eps=1e-05)[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over inputs to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is weight, \(\beta\) is bias, \(\epsilon\) is eps, \(mean\) is the mean of x, \(variance\) is the variance of x.

Warning

  • For Ascend 310, the result accuracy fails to reach 1‰ due to the square root instruction.

Note

  • If training is False, weight, bias, running_mean and running_var are Tensors.

  • If training is True, weight, bias, running_mean and running_var are Parameters.

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, C)\), with float16 or float32 data type.

  • running_mean (Union[Tensor, Parameter]) – The shape \((C,)\), has the same data type with weight.

  • running_var (Union[Tensor, Parameter]) – The shape \((C,)\), has the same data type with weight.

  • weight (Union[Tensor, Parameter]) – The shape \((C,)\), with float16 or float32 data type.

  • bias (Union[Tensor, Parameter]) – The shape \((C,)\), has the same data type with weight.

  • training (bool, optional) – If training is True, mean and variance are computed during training. If training is False, they’re loaded from checkpoint during inference. Default: False.

  • momentum (float, optional) – The hyper parameter to compute moving average for running_mean and running_var (e.g. \(new\_running\_mean = (1 - momentum) * running\_mean + momentum * current\_mean\)). The momentum value must be in [0, 1]. Default: 0.1.

  • eps (float, optional) – A small value added for numerical stability. Default: 1e-5.

Returns:

output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

Raises:
  • TypeError – If training is not a bool.

  • TypeError – If dtype of eps or momentum is not float.

  • TypeError – If input_x, weight, bias, running_mean or running_var is not a Tensor.

  • TypeError – If dtype of input_x or weight is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[1.0, 2.0], [3.0, 4.0]], dtype.float32)
>>> running_mean = Tensor([0.5, 1.5], dtype.float32)
>>> running_var = Tensor([0.1, 0.2], dtype.float32)
>>> weight = Tensor([2.0, 2.0], dtype.float32)
>>> bias = Tensor([-1.0, -1.0], dtype.float32)
>>> output = ops.batch_norm(input_x, running_mean, running_var, weight, bias)
>>> print(output)
[[ 2.1621194  1.2360122]
 [14.810596  10.180061 ]]
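
A sketch cross-checking the formula above with NumPy, reusing the tensors from the example (inference mode, so the running statistics serve as mean and variance):

>>> expected = (input_x.asnumpy() - running_mean.asnumpy()) / np.sqrt(running_var.asnumpy() + 1e-5)
>>> expected = expected * weight.asnumpy() + bias.asnumpy()
>>> print(np.allclose(expected, output.asnumpy(), rtol=1e-4))
True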
tinyms.primitives.batch_to_space_nd(input_x, block_shape, crops)[source]

Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

This operation divides the batch dimension N into blocks with block_shape; the output tensor’s N dimension is the corresponding number of blocks after division. The output tensor’s \(w_1, ..., w_M\) dimensions are the product of the original \(w_1, ..., w_M\) dimensions and block_shape, minus the given amounts cropped from each dimension, respectively.

If the input shape is \((n, c_1, ... c_k, w_1, ..., w_M)\), the output shape is \((n', c_1, ... c_k, w'_1, ..., w'_M)\), where

\[\begin{split}\begin{array}{ll} \\ n' = n//(block\_shape[0]*...*block\_shape[M-1]) \\ w'_i = w_i*block\_shape[i-1]-crops[i-1][0]-crops[i-1][1] \end{array}\end{split}\]
Parameters:
  • input_x (Tensor) – The input tensor. It must be at least 2-D (exactly 4-D on Ascend), and the batch dimension must be divisible by the product of block_shape.

  • block_shape (Union[list(int), tuple(int), int]) – The block shape of dividing block with all value greater than or equal to 1. If block_shape is a tuple or list, the length of block_shape is M corresponding to the number of spatial dimensions. If block_shape is an int, the block size of M dimensions are the same, equal to block_shape. In this case of Ascend, M must be 2.

  • crops (Union[list(int), tuple(int)]) – The crop values for spatial dimensions, containing M sublists. Each contains 2 integer values. All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to input dimension i + offset, where offset = N-M, and N is the number of input dimensions. It is required that \(input\_shape[i+offset]*block\_shape[i] > crops[i][0]+crops[i][1]\).

Returns:

Tensor, the output tensor with the same type as input.

Raises:
  • TypeError – If block_shape is not one of list, tuple, int.

  • TypeError – If crops is neither list nor tuple.

  • ValueError – If block_shape is not one dimensional when block_shape is a list or tuple.

  • ValueError – If the length of block_shape is not 2 on Ascend.

  • ValueError – If the element of block_shape is not an integer larger than or equal to 1.

  • ValueError – If shape of crops is not (M, 2), where M is the length of block_shape.

  • ValueError – If the element of crops is not an integer larger than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_shape = [2, 2]
>>> crops = [[0, 0], [0, 0]]
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = ops.batch_to_space_nd(input_x, block_shape, crops)
>>> print(output)
[[[[1.  2.]
   [3.  4.]]]]
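
A shape-only sketch of nonzero crops, reusing input_x above and applying the formula \(w'_i = w_i*block\_shape[i-1]-crops[i-1][0]-crops[i-1][1]\):

>>> output = ops.batch_to_space_nd(input_x, [2, 2], [[0, 0], [0, 1]])
>>> print(output.shape)  # (4 // 4, 1, 1*2 - 0 - 0, 1*2 - 0 - 1)
(1, 1, 2, 1)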
tinyms.primitives.bernoulli(input, p=0.5, seed=None)[source]

Randomly sets the elements of the output to 0 or 1 with probability p, following the Bernoulli distribution.

\[out_{i} \sim Bernoulli(p_{i})\]
Parameters:
  • input (Tensor) – Input Tensor. Data type must be int8, uint8, int16, int32, int64, bool, float32 or float64.

  • p (Union[Tensor, float], optional) – Success probability, representing the probability of setting 1 at the corresponding position of the current Tensor. If p is a Tensor, it must have the same shape as input, and its values must be in the range [0, 1]. Default: 0.5.

  • seed (Union[int, None], optional) – The seed value for random generating. The value of seed must be -1 or a positive integer, and -1 means using the current timestamp. Default: None, which will be treated as 0.

Returns:

output (Tensor), with the same shape and type as input.

Raises:
  • TypeError – If dtype of input is not one of: int8, uint8, int16, int32, int64, bool, float32, float64.

  • TypeError – If dtype of p is not one of: float32, float64.

  • TypeError – If dtype of seed is not int or None.

  • ValueError – If p is not in range [0, 1].

  • ValueError – If seed is less than 0 and not -1.

  • ValueError – If p is a Tensor but has different shape than input.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int8)
>>> output = ops.bernoulli(input_x, p=1.0)
>>> print(output)
[1 1 1]
>>> input_p = Tensor(np.array([0.0, 1.0, 1.0]), mindspore.float32)
>>> output = ops.bernoulli(input_x, input_p)
>>> print(output)
[0 1 1]
tinyms.primitives.bessel_i0(x)[source]

Computes the Bessel i0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i0(x)
>>> print(output)
[1.266066  1.0634835 1.0634835 1.266066]
tinyms.primitives.bessel_i0e(x)[source]

Computes the Bessel i0e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i0e(x)
>>> print(output)
[0.46575961  0.64503527  0.64503527  0.46575961]
tinyms.primitives.bessel_i1(x)[source]

Computes the Bessel i1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i1(x)
>>> print(output)
[-0.5651591  -0.25789431  0.25789431  0.5651591]
tinyms.primitives.bessel_i1e(x)[source]

Computes the Bessel i1e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
>>> output = ops.bessel_i1e(x)
>>> print(output)
[-0.20791042  -0.15642083  0.15642083  0.20791042]
tinyms.primitives.bessel_j0(x)[source]

Computes the Bessel j0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_j0(x)
>>> print(output)
[0.93846981  0.76519769  0.22389078  -0.39714981]
tinyms.primitives.bessel_j1(x)[source]

Computes the Bessel j1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_j1(x)
>>> print(output)
[0.24226846  0.44005059  0.57672481 -0.06604333]
tinyms.primitives.bessel_k0(x)[source]

Computes the Bessel k0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k0(x)
>>> print(output)
[0.92441907  0.42102444  0.11389387  0.01115968]
tinyms.primitives.bessel_k0e(x)[source]

Computes the Bessel k0e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k0e(x)
>>> print(output)
[1.52410939  1.14446308  0.84156822  0.60929767]
tinyms.primitives.bessel_k1(x)[source]

Computes the Bessel k1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k1(x)
>>> print(output)
[1.65644112  0.60190723  0.13986588  0.0124835]
tinyms.primitives.bessel_k1e(x)[source]

Computes the Bessel k1e function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_k1e(x)
>>> print(output)
[2.73100971  1.63615349  1.03347685  0.68157595]
tinyms.primitives.bessel_y0(x)[source]

Computes the Bessel y0 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_y0(x)
>>> print(output)
[-0.44451874  0.08825696  0.51037567  -0.01694074]
tinyms.primitives.bessel_y1(x)[source]

Computes the Bessel y1 function of x element-wise.

Parameters:

x (Tensor) – The input tensor. The data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([0.5, 1., 2., 4.]), mindspore.float32)
>>> output = ops.bessel_y1(x)
>>> print(output)
[-1.47147239  -0.78121282  -0.10703243  0.39792571]
tinyms.primitives.bias_add(input_x, bias)[source]

Returns the sum of the input_x and the bias Tensor. Before adding, the bias Tensor will be broadcasted to be consistent with the shape of the input_x Tensor.

Parameters:
  • input_x (Tensor) – The input tensor. The shape can be 2-5 dimensions.

  • bias (Tensor) – The bias tensor, with shape \((C)\). C must be the same as channel dimension C of input_x.

Returns:

Tensor, with the same shape and data type as input_x.

Raises:
  • TypeError – If input_x or bias is not a Tensor.

  • TypeError – If the dtypes of input_x and bias are inconsistent.

  • TypeError – If dimension of input_x is not in the range [2, 5].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3)), mindspore.float32)
>>> output = ops.bias_add(input_x, bias)
>>> print(output.shape)
(2, 3)
tinyms.primitives.binary_cross_entropy(logits, labels, weight=None, reduction='mean')[source]

Computes the binary cross entropy (a measure of the difference between two probability distributions) between the predictive value logits and the target value labels.

Set logits as \(x\), labels as \(y\), output as \(\ell(x, y)\), the weight of nth batch of binary cross entropy is \(w_n\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

Here, \(L\) indicates the loss over the whole batch, \(l_n\) indicates the loss of one sample, and \(n\) indexes a sample in the range \(1..N\). Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Warning

  • The value of logits must range from 0 to 1.

Parameters:
  • logits (Tensor) – The predictive value whose data type must be float16 or float32.

  • labels (Tensor) – The target value which has the same shape and data type as logits.

  • weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. Its shape must be able to broadcast to that of logits and labels. And it must have the same shape and data type as logits. Default: None. If set to None, the loss function will not consider any sample weights, and each sample will be treated as having equal importance when calculating the loss.

  • reduction (str, optional) – Specifies the reduction applied to the output. Its value must be one of ‘none’, ‘mean’ or ‘sum’, meaning no reduction, averaging, or summation respectively; not case-sensitive. Default: ‘mean’.

Returns:

Tensor or Scalar. Returns Tensor that has the same dtype and shape as logits if reduction is ‘none’. Otherwise, returns a scalar Tensor.

Raises:
  • TypeError – If logits, labels or weight is not a Tensor.

  • TypeError – If dtype of logits, labels or weight (if given) is neither float16 nor float32.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

  • ValueError – If shape of labels is not the same as logits or weight (if given).

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = ops.binary_cross_entropy(logits, labels, weight)
>>> print(output)
0.38240486
tinyms.primitives.binary_cross_entropy_with_logits(logits, label, weight, pos_weight, reduction='mean')[source]

Adds sigmoid activation function to input logits, and uses the given logits to compute binary cross entropy between the logits and the label.

Sets input logits as \(X\), input label as \(Y\), input weight as \(W\), output as \(L\). Then,

\[\begin{split}\begin{array}{ll} \\ p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\ L_{ij} = -[Y_{ij}log(p_{ij}) + (1 - Y_{ij})log(1 - p_{ij})] \end{array}\end{split}\]

\(i\) indicates the \(i^{th}\) sample, \(j\) indicates the category. Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

\(\ell\) indicates the method of calculating the loss. There are three methods: the first method is to provide the loss value directly, the second method is to calculate the average value of all losses, and the third method is to calculate the sum of all losses.

This operator will multiply the output by the corresponding weight. The tensor \(weight\) assigns different weights to each piece of data in the batch, and the tensor \(pos\_weight\) adds corresponding weights to the positive examples of each category.

In addition, it can trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:

\[\begin{split}\begin{array}{ll} \\ p_{ij,c} = sigmoid(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}} \\ L_{ij,c} = -[P_{c}Y_{ij,c} * log(p_{ij,c}) + (1 - Y_{ij,c})log(1 - p_{ij,c})] \end{array}\end{split}\]

where c is the class number (c>1 for multi-label binary classification, c=1 for single-label binary classification), n is the number of the sample in the batch and \(P_c\) is the weight of the positive answer for the class c. \(P_c>1\) increases the recall, \(P_c<1\) increases the precision.

Parameters:
  • logits (Tensor) – Input logits. Data type must be float16 or float32.

  • label (Tensor) – Ground truth label, has the same shape as logits. Data type must be float16 or float32.

  • weight (Tensor) – A rescaling weight applied to the loss of each batch element. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • pos_weight (Tensor) – A weight of positive examples. Must be a vector with length equal to the number of classes. It can be broadcast to a tensor with shape of logits. Data type must be float16 or float32.

  • reduction (str) – Type of reduction to be applied to loss. The optional values are ‘mean’, ‘sum’, and ‘none’, not case sensitive. If ‘none’, do not perform reduction. Default: ‘mean’.

Returns:

Tensor or Scalar, if reduction is ‘none’, it’s a tensor with the same shape and type as input logits. Otherwise, the output is a scalar.

Raises:
  • TypeError – If any of logits, label, weight or pos_weight is not a Tensor.

  • TypeError – If the data type of logits, label, weight or pos_weight is neither float16 nor float32.

  • TypeError – If the data type of reduction is not str.

  • ValueError – If weight or pos_weight can not be broadcast to a tensor with shape of logits.

  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> label = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> output = ops.binary_cross_entropy_with_logits(logits, label, weight, pos_weight)
>>> print(output)
0.3463612
tinyms.primitives.bincount(input, weights=None, minlength=0)[source]

Counts the number of occurrences of each value in input.

If you don’t specify minlength, the length of the output Tensor will be the maximum value of input plus one.

If minlength is specified, the length of the output Tensor is the maximum of (the maximum value of input plus 1) and minlength.

Each value in the output Tensor marks the number of occurrences of that index in input. If weights is specified, the output results are weighted, i.e., out[n] += weight[i] instead of out[n] += 1 for each index i where input[i] == n.

Parameters:
  • input (Tensor) – 1-d input tensor.

  • weights (Tensor, optional) – Weights, a tensor of the same shape as input. Defaults to None.

  • minlength (int, optional) – A minimum number of bins for the output tensor. Defaults to 0.

Returns:

Tensor, a tensor of shape [max(max(input)+1, minlength)] if input is non-empty; otherwise, the shape is [0].

Raises:
  • TypeError – if input or weights is not a tensor.

  • ValueError – If input is not one-dimensional, or if input and weights do not have the same shape.

  • ValueError – If minlength is a negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([2, 4, 1, 0, 0], dtype=mstype.int64)
>>> print(ops.bincount(x, minlength=7))
[2. 1. 1. 0. 1. 0. 0.]
>>> weights = Tensor([0, 0.25, 0.5, 0.75, 1], dtype=mstype.float32)
>>> print(ops.bincount(x, weights=weights))
[1.75 0.5  0.   0.   0.25]
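
The weighted form mirrors NumPy's np.bincount, which can serve as a cross-check; a sketch repeating the weighted example above:

>>> import numpy as np
>>> print(np.bincount(np.array([2, 4, 1, 0, 0]), weights=np.array([0, 0.25, 0.5, 0.75, 1])))
[1.75 0.5  0.   0.   0.25]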
tinyms.primitives.bitwise_and(input, other)[source]

Returns bitwise and of two tensors element-wise.

\[out_i = input_{i} \wedge other_{i}\]

Args of input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • input (Tensor) – The first input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • other (Tensor) – The second input tensor with the same dtype as input.

Returns:

Tensor, has the same type as the input.

Raises:

TypeError – If input or other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> other = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> output = ops.bitwise_and(input, other)
>>> print(output)
[ 0  0  1 -1  1  0  1]
tinyms.primitives.bitwise_left_shift(input, other)[source]

Perform a left bitwise shift operation on the input element-wise, where the number of bits to shift is specified by other.

\[\begin{aligned} &out_{i} =input_{i} << other_{i} \end{aligned}\]
Parameters:
  • input (Union[Tensor, Scalar]) – The input to be left shifted.

  • other (Union[Tensor, Scalar]) – The number of bits to be applied on left arithmetic shift.

Returns:

Tensor, the result after bitwise left shift.

Raises:
  • TypeError – If neither input nor other is a tensor.

  • TypeError – If either input or other is not an int or a tensor of dtype: int or uint.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1024, 2]), mindspore.int16)
>>> other = Tensor(np.array([2]), mindspore.int16)
>>> output = ops.bitwise_left_shift(input, other)
>>> print(output)
[4096    8]
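
Per the signature, one of the two arguments may be a Python scalar; a minimal sketch assuming a scalar shift amount:

>>> input = Tensor(np.array([1024, 2]), mindspore.int16)
>>> output = ops.bitwise_left_shift(input, 3)
>>> print(output)
[8192   16]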
tinyms.primitives.bitwise_or(input, other)[source]

Returns bitwise or of two tensors element-wise.

\[out_i = input_{i} \mid other_{i}\]

Args of input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • input (Tensor) – The first input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • other (Tensor) – The second input tensor with the same dtype as input.

Returns:

Tensor, has the same type as the input.

Raises:

TypeError – If input or other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> other = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> output = ops.bitwise_or(input, other)
>>> print(output)
[ 0  1  1 -1 -1  3  3]
tinyms.primitives.bitwise_right_shift(input, other)[source]

Perform a right bitwise shift operation on the input element-wise, where the number of bits to shift is specified by other.

\[\begin{aligned} &out_{i} =input_{i} >> other_{i} \end{aligned}\]
Parameters:
  • input (Union[Tensor, Scalar]) – The input to be right shifted.

  • other (Union[Tensor, Scalar]) – The number of bits to be applied on right arithmetic shift.

Returns:

Tensor, the result after bitwise right shift.

Raises:
  • TypeError – If neither input nor other is a tensor.

  • TypeError – If either input or other is not an int or a tensor of dtype: int or uint.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1024, 2]), mindspore.int16)
>>> other = Tensor(np.array([2]), mindspore.int16)
>>> output = ops.bitwise_right_shift(input, other)
>>> print(output)
[256   0]
tinyms.primitives.bitwise_xor(input, other)[source]

Returns bitwise xor of two tensors element-wise.

\[out_i = input_{i} \oplus other_{i}\]

Args of input and other comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • input (Tensor) – The first input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • other (Tensor) – The second input tensor with the same dtype as input.

Returns:

Tensor, has the same type as the input.

Raises:

TypeError – If input or other is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
>>> other = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
>>> output = ops.bitwise_xor(input, other)
>>> print(output)
[ 0  1  0  0 -2  3  2]
tinyms.primitives.blackman_window(window_length, periodic=True, *, dtype=None)[source]

Blackman window function, usually used to extract finite signal segment for FFT.

The window_length is an input tensor which determines the returned window size, and its value should be an integer. In particular, if window_length is equal to 1, the returned window contains only a single value 1.

Attr periodic determines whether the returned window removes the last duplicate value of the symmetric window, so that it can be used as a periodic window function. Therefore, if attr periodic is true, the \(N\) in the formula is \(window\_length + 1\).

\[w[n] = 0.42 - 0.5 cos(\frac{2\pi n}{N - 1}) + 0.08 cos(\frac{4\pi n}{N - 1})\]

where \(N\) is the full window size, and \(n\) is a natural number less than \(N\): 0, 1, …, N-1.

Parameters:
  • window_length (Tensor) – The size of returned window, with data type int32, int64. The input data should be an integer with a value of [0, 1000000].

  • periodic (bool, optional) – Indicates whether to return a window to be used as a periodic function or a symmetric window. Default: True.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The data type of returned tensor. Only float16, float32 and float64 are allowed. Default: None.

Returns:

A 1-D tensor of size window_length containing the window. Its datatype is set by the attr dtype. If dtype is None, output datatype is float32.

Raises:
  • TypeError – If window_length is not a Tensor.

  • TypeError – If periodic is not a bool.

  • TypeError – If dtype is not one of: float16, float32, float64.

  • TypeError – If the type of window_length is not one of: int32, int64.

  • ValueError – If the value range of window_length is not [0, 1000000].

  • ValueError – If the dimension of window_length is not 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = Tensor(10, mindspore.int32)
>>> output = ops.blackman_window(window_length, periodic=True, dtype=mindspore.float32)
>>> print(output)
[-2.9802322e-08  4.0212840e-02  2.0077014e-01  5.0978714e-01
  8.4922993e-01  1.0000000e+00  8.4922981e-01  5.0978690e-01
  2.0077008e-01  4.0212870e-02]
tinyms.primitives.block_diag(*inputs)[source]

Creates a block diagonal matrix from the provided tensors.

Parameters:

inputs (Tensor) – One or more tensors; the dimension of each tensor should be 0, 1 or 2.

Returns:

Tensor, two-dimensional with all input tensors arranged in order so that their top left and bottom right corners are diagonally adjacent. All other elements are set to 0.

Raises:
  • TypeError – If the input is not a Tensor.

  • ValueError – If the dimension of Tensor is not 0, 1 or 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([[4], [3], [2]], mstype.int32)
>>> x2 = Tensor([7, 6, 5], mstype.int32)
>>> x3 = Tensor(1, mstype.int32)
>>> x4 = Tensor([[5, 4, 3], [2, 1, 0]], mstype.int32)
>>> x5 = Tensor([[8, 7], [7, 8]], mstype.int32)
>>> out = ops.block_diag(x1, x2, x3, x4, x5)
>>> print(out.asnumpy())
[[4 0 0 0 0 0 0 0 0 0]
 [3 0 0 0 0 0 0 0 0 0]
 [2 0 0 0 0 0 0 0 0 0]
 [0 7 6 5 0 0 0 0 0 0]
 [0 0 0 0 1 0 0 0 0 0]
 [0 0 0 0 0 5 4 3 0 0]
 [0 0 0 0 0 2 1 0 0 0]
 [0 0 0 0 0 0 0 0 8 7]
 [0 0 0 0 0 0 0 0 7 8]]
tinyms.primitives.bmm(input_x, mat2)[source]

Computes matrix multiplication between two tensors by batch.

\[\text{output}[..., :, :] = \text{matrix}(input\_x[..., :, :]) * \text{matrix}(mat2[..., :, :])\]

The dim of input_x cannot be less than 3 and the dim of mat2 cannot be less than 2.

Parameters:
  • input_x (Tensor) – The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions.

  • mat2 (Tensor) – The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\).

Returns:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Raises:
  • ValueError – If dim of input_x is less than 3 or dim of mat2 is less than 2.

  • ValueError – If the length of the third dim of input_x is not equal to the length of the second dim of mat2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> input_x = Tensor(np.arange(24).reshape((2, 4, 1, 3)), ms.float32)
>>> mat2 = Tensor(np.arange(72).reshape((2, 4, 3, 3)), ms.float32)
>>> output = ops.bmm(input_x, mat2)
>>> print(output)
[[[[  15.   18.   21.]]
  [[ 150.  162.  174.]]
  [[ 447.  468.  489.]]
  [[ 906.  936.  966.]]]
 [[[1527. 1566. 1605.]]
  [[2310. 2358. 2406.]]
  [[3255. 3312. 3369.]]
  [[4362. 4428. 4494.]]]]
tinyms.primitives.bounding_box_decode(anchor_box, deltas, max_shape, means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)[source]

Decodes the bounding box locations: calculates the offsets and converts them into bounding boxes (Bbox), which are used to mark targets in subsequent images, etc.

Parameters:
  • anchor_box (Tensor) – Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • deltas (Tensor) – Delta of boxes. It has the same shape as anchor_box.

  • max_shape (tuple) – The max size limit for decoding box calculation.

  • means (tuple, optional) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple, optional) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • wh_ratio_clip (float, optional) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Returns:

Tensor, decoded boxes. It has the same data type and shape as anchor_box.

Raises:
  • TypeError – If means, stds or max_shape is not a tuple.

  • TypeError – If wh_ratio_clip is not a float.

  • TypeError – If anchor_box or deltas is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[4, 1, 2, 1], [2, 2, 2, 3]], mindspore.float32)
>>> deltas = Tensor([[3, 1, 2, 2], [1, 2, 1, 4]], mindspore.float32)
>>> output = ops.bounding_box_decode(anchor_box, deltas, max_shape=(768, 1280), means=(0.0, 0.0, 0.0, 0.0),
...                                  stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)
>>> print(output)
[[ 4.1953125  0.         0.         5.1953125]
 [ 2.140625   0.         3.859375  60.59375  ]]
tinyms.primitives.bounding_box_encode(anchor_box, groundtruth_box, means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))[source]

Encode the bounding box locations, calculate the offset between the predicted bounding boxes and the real bounding boxes, and the offset will be used as a variable for the loss.

Parameters:
  • anchor_box (Tensor) – Anchor boxes. The shape of anchor_box must be \((n, 4)\).

  • groundtruth_box (Tensor) – Ground truth boxes. It has the same shape as anchor_box.

  • means (tuple, optional) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple, optional) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Returns:

Tensor, encoded bounding boxes. It has the same data type and shape as input anchor_box.

Raises:
  • TypeError – If means or stds is not a tuple.

  • TypeError – If anchor_box or groundtruth_box is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]], mindspore.float32)
>>> groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]], mindspore.float32)
>>> output = ops.bounding_box_encode(anchor_box, groundtruth_box, means=(0.0, 0.0, 0.0, 0.0),
...                                  stds=(1.0, 1.0, 1.0, 1.0))
>>> print(output)
[[ -1.  0.25  0.  0.40551758]
 [ -1.  0.25  0.  0.40551758]]
tinyms.primitives.broadcast_to(input, shape)[source]

Broadcasts input tensor to a given shape. The dim of input shape must be smaller than or equal to that of target shape. Suppose input shape is \((x_1, x_2, ..., x_m)\), target shape is \((*, y_1, y_2, ..., y_m)\), where \(*\) means any additional dimension. The broadcast rules are as follows:

Compare the value of \(x_m\) and \(y_m\), \(x_{m-1}\) and \(y_{m-1}\), …, \(x_1\) and \(y_1\) consecutively and decide whether these shapes are broadcastable and what the broadcast result is.

If the value pairs at a specific dim are equal, then that value goes right into that dim of output shape. With an input shape \((2, 3)\), target shape \((2, 3)\) , the inferred output shape is \((2, 3)\).

If the value pairs are unequal, there are three cases:

Case 1: If the value of the target shape in the dimension is -1, the value of the output shape in the dimension is the value of the corresponding input shape in the dimension. With an input shape \((3, 3)\), target shape \((-1, 3)\), the output shape is \((3, 3)\).

Case 2: If the value of target shape in the dimension is not -1, but the corresponding value in the input shape is 1, then the corresponding value of the output shape is that of the target shape. With an input shape \((1, 3)\), target shape \((8, 3)\), the output shape is \((8, 3)\).

Case 3: If the corresponding values of the two shapes do not satisfy the above cases, it means that broadcasting from the input shape to the target shape is not supported.

So far we have determined the last m dims of the output shape; now focus on the first \(*\) dims. There are two cases:

If the first \(*\) dims of the output shape do not contain -1, then fill the input shape with ones until the lengths are the same, and then refer to Case 2 mentioned above to calculate the output shape. With target shape \((3, 1, 4, 1, 5, 9)\) and input shape \((1, 5, 9)\), the filled input shape will be \((1, 1, 1, 1, 5, 9)\) and thus the output shape is \((3, 1, 4, 1, 5, 9)\).

If the first \(*\) dims of the output shape contain -1, this -1 corresponds to a non-existing dim, so the shapes are not broadcastable. With target shape \((3, -1, 4, 1, 5, 9)\) and input shape \((1, 5, 9)\), instead of performing the dim-filling process first, it raises an error directly.

Parameters:
  • input (Tensor) – The input Tensor. Supported types are: float16, float32, int32, int8, uint8, bool.

  • shape (tuple) – The target shape to broadcast. Can be fully specified, or have -1 in one position where it will be substituted by the input tensor’s shape in that position, see example.

Returns:

Tensor, with the given shape and the same data type as input.

Raises:
  • TypeError – If shape is not a tuple.

  • ValueError – If the target and input shapes are incompatible, or if a -1 in the target shape is in an invalid location.

Supported Platforms:

Ascend GPU CPU

Examples

>>> shape = (2, 3)
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>> shape = (-1, 2)
>>> x = Tensor(np.array([[1], [2]]).astype(np.float32))
>>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 1.]
 [2. 2.]]
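
A shape-only sketch of the dim-filling case described above, where input shape \((1, 5, 9)\) is first filled to \((1, 1, 1, 1, 5, 9)\):

>>> x = Tensor(np.ones((1, 5, 9)).astype(np.float32))
>>> output = ops.broadcast_to(x, (3, 1, 4, 1, 5, 9))
>>> print(output.shape)
(3, 1, 4, 1, 5, 9)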
tinyms.primitives.cartesian_prod(*inputs)[source]

Performs a Cartesian product for a given tensor sequence. The behavior is similar to Python’s itertools.product.

Parameters:

inputs (List[Tensor]) – Tensor sequence.

Returns:

Tensor, a Cartesian product for a given tensor sequence.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([1, 2])
>>> x2 = Tensor([5])
>>> out = ops.cartesian_prod(x1, x2)
>>> print(out.asnumpy())
[[1 5]
 [2 5]]
>>> x1 = Tensor([1, 2, 3, 4])
>>> x2 = Tensor([5, 6, 7])
>>> x3 = Tensor([8, 9, 0, 1, 2])
>>> out = ops.cartesian_prod(x1, x2, x3)
>>> print(len(out))
60
tinyms.primitives.cat(tensors, axis=0)[source]

Connects input tensors along the given axis.

The input data is a tuple or a list of tensors. These tensors have the same rank \(R\). Set the given axis as \(m\), and \(0 \le m < R\). Set the number of input tensors as \(N\). For the \(i\)-th tensor \(t_i\), it has the shape of \((x_1, x_2, ..., x_{mi}, ..., x_R)\). \(x_{mi}\) is the \(m\)-th dimension of the \(t_i\). Then, the shape of the output tensor is

\[(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\]
Parameters:
  • tensors (Union[tuple, list]) – A tuple or a list of input tensors. Suppose there are two tensors in this tuple or list, namely t1 and t2. To perform concat in the axis 0 direction, except for the \(0\)-th axis, all other dimensions should be equal, that is, \(t1.shape[1] = t2.shape[1], t1.shape[2] = t2.shape[2], ..., t1.shape[R-1] = t2.shape[R-1]\), where \(R\) represents the rank of tensor.

  • axis (int) – The specified axis, whose value is in range \([-R, R)\). Default: 0.

Returns:

Tensor, the shape is \((x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\).

The data type is the same with tensors.

Raises:
  • TypeError – If axis is not an int.

  • ValueError – If the tensors in tensors have different ranks.

  • ValueError – If axis not in range \([-R, R)\).

  • RuntimeError – If the shapes of the tensors in tensors differ in any dimension other than axis.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> input_x2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = ops.cat((input_x1, input_x2))
>>> print(output)
[[0. 1.]
 [2. 1.]
 [0. 1.]
 [2. 1.]]
>>> output = ops.cat((input_x1, input_x2), 1)
>>> print(output)
[[0. 1. 0. 1.]
 [2. 1. 2. 1.]]
tinyms.primitives.cdist(x1, x2, p=2.0)[source]

Computes p-norm distance between each pair of row vectors of two input Tensors.

Parameters:
  • x1 (Tensor) – Input tensor of shape \((B, P, M)\). Letter \(B\) represents 0 or positive int number. When \(B\) is equal to 0, it means this dimension can be ignored, i.e. shape of the tensor is \((P, M)\). The supported dtype is [float32, float64] on GPU, or [float32] on CPU.

  • x2 (Tensor) – Input tensor of shape \((B, R, M)\), has the same dtype as x1.

  • p (float, optional) – P value for the p-norm distance to calculate between each vector pair, P ∈ [0,∞]. Default: 2.0.

Returns:

Tensor, p-norm distance, has the same dtype as x1, its shape is \((B, P, R)\).

Raises:
  • TypeError – If x1 or x2 is not Tensor.

  • TypeError – If dtype of x1 or x2 is not in [float32, float64] on GPU, or is not in [float32] on CPU.

  • TypeError – If p is not a float.

  • ValueError – If p is negative.

  • ValueError – If dimension of x1 is not the same as x2.

  • ValueError – If dimension of x1 or x2 is neither 2 nor 3.

  • ValueError – If the batch shape of x1 is not the same as the shape of x2.

  • ValueError – If the number of columns of x1 is not the same as the number of columns of x2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[1.0, 1.0], [2.0, 2.0]]]).astype(np.float32))
>>> y = Tensor(np.array([[[3.0, 3.0], [3.0, 3.0]]]).astype(np.float32))
>>> output = ops.cdist(x, y, 2.0)
>>> print(output)
[[[2.8284273 2.8284273]
  [1.4142137 1.4142137]]]
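For a quick cross-check of the example above, the same pairwise 2-norm distances can be computed with plain NumPy broadcasting (illustrative only):

>>> import numpy as np
>>> a = np.array([[1.0, 1.0], [2.0, 2.0]])
>>> b = np.array([[3.0, 3.0], [3.0, 3.0]])
>>> # subtract every row of b from every row of a, then take the vector 2-norm
>>> print(np.linalg.norm(a[:, None, :] - b[None, :, :], ord=2, axis=-1))
[[2.82842712 2.82842712]
 [1.41421356 1.41421356]]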
tinyms.primitives.ceil(input)[source]

Rounds a tensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil = \lfloor x_i \rfloor + 1\]
Parameters:

input (Tensor) – The input tensor with a dtype of float16 or float32.

Returns:

Tensor, has the same shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> output = ops.ceil(x)
>>> print(output)
[ 2.  3. -1.]
tinyms.primitives.celu(x, alpha=1.0)[source]

CELU activation function. Computes CELU (Continuously differentiable Exponential Linear Units) of the input tensor element-wise. The formula is defined as follows:

\[\text{CeLU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1))\]

For more details, please refer to celu.

Parameters:
  • x (Tensor) – The input of celu with data type of float16 or float32.

  • alpha (float, optional) – The \(\alpha\) value for the Celu formulation. Default: 1.0

Returns:

Tensor, has the same data type and shape as the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
>>> output = ops.celu(x, alpha=1.0)
>>> print(output)
[-0.86466473 -0.63212055  1.          2.        ]
tinyms.primitives.channel_shuffle(x, groups)[source]

Divide the channels in a tensor of shape \((*, C, H, W)\) into \(g\) groups and rearrange them as \((*, \frac{C}{g}, g, H, W)\), while keeping the original tensor shape.
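The rearrangement is equivalent to a group-wise transpose of the channel axis. A NumPy sketch of the same computation, assuming the standard reshape-transpose-reshape formulation (illustrative only):

>>> import numpy as np
>>> x = np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2)
>>> n, c, h, w = x.shape
>>> g = 2
>>> # split channels into g groups, swap the two group axes, flatten back
>>> y = x.reshape(n, g, c // g, h, w).transpose(0, 2, 1, 3, 4).reshape(n, c, h, w)
>>> print(y[0, :, 0, 0])   # channel order becomes [0, 2, 1, 3]
[ 0  8  4 12]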

Parameters:
  • x (Tensor) – Tensor to be divided, it has shape \((*, C, H, W)\), with float16, float32, int8, int16, int32, int64, uint8, uint16, uint32, uint64 data type.

  • groups (int) – Number of groups to divide channels in.

Returns:

A Tensor, has the same type as the x, and has the shape \((*, C, H, W)\).

Raises:
  • TypeError – If data type of x is not one of the following: float16, float32, int8, int16, int32, int64, uint8, uint16, uint32, uint64.

  • TypeError – If dim of x is less than 4.

  • TypeError – If groups is not a positive number.

  • ValueError – If channel number of x is not divisible by groups.

Supported Platforms:

Ascend CPU

Examples

>>> group = 2
>>> x = Tensor(np.arange(1* 4 * 2 * 2).reshape(1, 4, 2, 2).astype(np.int16))
>>> y = mindspore.ops.channel_shuffle(x, group)
>>> print(y)
[[[[ 0  1]
   [ 2  3]]
  [[ 8  9]
   [10 11]]
  [[ 4  5]
   [ 6  7]]
  [[12 13]
   [14 15]]]]
tinyms.primitives.check_valid(bboxes, img_metas)[source]

Checks whether the bounding box is in the image.

bboxes contain several sets of bounding boxes, each represented by two abscissa points \((x0, x1)\) and two ordinate points \((y0, y1)\) . img_metas provides information about the original image, including three parameters \((height, width, ratio)\) , which specify the valid boundary of the image.

when the following conditions are met:

\(x0 >= 0\)

\(y0 >= 0\)

\(x1 <= width * ratio - 1\)

\(y1 <= height * ratio - 1\)

the bounding box is considered to be within the image.
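These four conditions translate directly into a vectorized NumPy check; the sketch below (illustrative only) reproduces the result of the example at the end of this entry:

>>> import numpy as np
>>> boxes = np.linspace(0, 6, 12).reshape(3, 4)
>>> height, width, ratio = 2.0, 1.0, 3.0
>>> x0, y0, x1, y1 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
>>> valid = (x0 >= 0) & (y0 >= 0) & (x1 <= width * ratio - 1) & (y1 <= height * ratio - 1)
>>> print(valid)
[ True False False]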

Warning

The bounding box specified by bboxes and the image information specified by img_metas need to be valid, i.e.: \(x0 <= x1\) , \(y0 <= y1\) , and \((height, width, ratio)\) are all positive.

Parameters:
  • bboxes (Tensor) – Bounding boxes tensor with shape \((N, 4)\) . \(N\) indicates the number of bounding boxes, the value 4 indicates four coordinate points \((x0, y0, x1, y1)\) . Data type must be float16 or float32.

  • img_metas (Tensor) – Raw image size information with the format of \((height, width, ratio)\) , specifying the valid boundary \((height * ratio - 1, width * ratio - 1)\) . Data type must be float16 or float32.

Returns:

Tensor, with shape of \((N,)\) and dtype of bool, specifying whether the bounding boxes are in the image. True indicates valid, while False indicates invalid.

Raises:
  • TypeError – If bboxes or img_metas is not a Tensor.

  • TypeError – If dtype of bboxes or img_metas is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> output = ops.check_valid(bboxes, img_metas)
>>> print(output)
[ True False False]
tinyms.primitives.choice_with_mask(input_x, count=256, seed=None)[source]

Generates a random sample as index tensor with a mask tensor from a given tensor.

The input_x must be a tensor whose dimension is not less than 1. If its dimension is greater than or equal to 2, the first dimension specifies the number of samples. The returned index tensor denotes the index of the nonzero sample, the mask tensor denotes which elements in the index tensor are valid.

Parameters:
  • input_x (Tensor[bool]) – The input tensor. The input tensor rank must be greater than or equal to 1 and less than or equal to 5.

  • count (int, optional) – Number of items expected to get and the number must be greater than 0. Default: 256.

  • seed (int, optional) – The seed is used as an entropy source for the random number engine to generate pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output shape is 2-D.

  • mask (Tensor) - The output shape is 1-D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool_))
>>> output_y, output_mask = ops.choice_with_mask(input_x)
>>> result = output_y.shape
>>> print(result)
(256, 2)
>>> result = output_mask.shape
>>> print(result)
(256,)
tinyms.primitives.cholesky(input_x, upper=False)[source]

Computes the Cholesky decomposition of a symmetric positive-definite matrix, or of a batch of symmetric positive-definite matrices.

If upper is True, returns an upper-triangular matrix, \(U\), and the decomposition has the form:

\[A = U^TU\]

If upper is False, returns a lower-triangular matrix, \(L\), and the decomposition has the form:

\[A = LL^T\]

where A is the symmetric positive-definite matrix.

Parameters:
  • input_x (Tensor) – Tensor of shape \((*, N, N)\), where \(*\) is zero or more batch dimensions consisting of symmetric positive-definite matrices, with float32 or float64 data type.

  • upper (bool) – If upper is True, returns an upper-triangular matrix. If upper is False, returns a lower-triangular matrix. Default: False.

Returns:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If upper is not a bool.

  • TypeError – If dtype of input_x is not one of: float64, float32.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If input_x is not a square matrix or a batch of square matrices.

  • ValueError – If input_x is not symmetric positive definite.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 1.0], [1.0, 2.0]]), mindspore.float32)
>>> output = ops.cholesky(input_x, upper=False)
>>> print(output)
[[1. 0.]
 [1. 1.]]
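The factor can be verified by reconstructing the input: with upper=False the result satisfies \(A = LL^T\). A NumPy cross-check of the example above (np.linalg.cholesky also returns the lower-triangular factor):

>>> import numpy as np
>>> a = np.array([[1.0, 1.0], [1.0, 2.0]])
>>> l = np.linalg.cholesky(a)
>>> print(l)
[[1. 0.]
 [1. 1.]]
>>> print(l @ l.T)   # reconstructs the original matrix A
[[1. 1.]
 [1. 2.]]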
tinyms.primitives.cholesky_inverse(input_x, upper=False)[source]

Computes the inverse of a symmetric positive-definite matrix from its Cholesky factor.

If upper is True, the input \(U\) is an upper-triangular matrix, and the output tensor is

\[inv = (U^{T}U)^{-1}\]

If upper is False, the input \(U\) is a lower-triangular matrix, and the output tensor is

\[inv = (UU^{T})^{-1}\]

Note

The input must be either an upper-triangular matrix or a lower-triangular matrix from Cholesky decomposition.

Parameters:
  • input_x (Tensor) – The input tensor with a rank of 2. Supported dtypes: float32, float64.

  • upper (bool) – If upper is True, return an upper triangular matrix. If upper is False, return a lower-triangular matrix. Default: False.

Returns:

Tensor, has the same shape and dtype as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not one of: float32, float64.

  • ValueError – If the dimension of input_x is not equal to 2.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[2,0,0], [4,1,0], [-1,1,2]]), mindspore.float32)
>>> output = ops.cholesky_inverse(input_x)
>>> print(output)
[[ 5.8125 -2.625   0.625 ]
 [-2.625   1.25   -0.25  ]
 [ 0.625  -0.25    0.25  ]]
tinyms.primitives.chunk(input, chunks, axis=0)[source]

Cuts the input Tensor into chunks sub-tensors along the specified axis.

Note

This function may return fewer than the specified number of chunks!

Parameters:
  • input (Tensor) – A Tensor to be cut.

  • chunks (int) – Number of sub-tensors to cut.

  • axis (int, optional) – The axis along which to split the Tensor. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If argument chunks is not an int.

  • TypeError – If argument axis is not int.

  • ValueError – If argument axis is out of range of \([-input.ndim, input.ndim)\) .

  • ValueError – If argument chunks is not a positive number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(9).astype("float32")
>>> output = ops.chunk(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
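The fewer-chunks behavior noted above follows when the chunk length, taken as \(\lceil n / chunks \rceil\) elements (the usual torch-style sizing rule, assumed here), does not evenly cover the input. A small illustrative sketch of the resulting sizes:

>>> import math
>>> def chunk_sizes(n, chunks):
...     step = math.ceil(n / chunks)
...     return [min(step, n - i) for i in range(0, n, step)]
...
>>> chunk_sizes(9, 3)    # evenly divisible: exactly 3 chunks
[3, 3, 3]
>>> chunk_sizes(5, 4)    # ceil(5 / 4) = 2, so only 3 chunks come back
[2, 2, 1]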
tinyms.primitives.clamp(input, min=None, max=None)[source]

Clamps tensor values between the specified minimum value and maximum value.

Limits the value of \(input\) to a range, whose lower limit is min and upper limit is max .

\[\begin{split}out_i= \left\{ \begin{array}{align} max & \text{ if } x_i\ge max \\ x_i & \text{ if } min \lt x_i \lt max \\ min & \text{ if } x_i \le min \\ \end{array}\right.\end{split}\]

Note

  • min and max cannot be None at the same time;

  • When min is None and max is not None, the elements in Tensor larger than max will become max;

  • When min is not None and max is None, the elements in Tensor smaller than min will become min;

  • If min is greater than max, the value of all elements in Tensor will be set to max;

  • The data type of input, min and max should support implicit type conversion and cannot be bool type.

Parameters:
  • input (Union(Tensor, list[Tensor], tuple[Tensor])) – Input data, which type is Tensor or a list or tuple of Tensor. Tensors of arbitrary dimensions are supported.

  • min (Union(Tensor, float, int), optional) – The minimum value. Default: None.

  • max (Union(Tensor, float, int), optional) – The maximum value. Default: None.

Returns:

Union(Tensor, tuple[Tensor], list[Tensor]), a clipped Tensor or a tuple or a list of clipped Tensor. The data type and shape are the same as input.

Raises:
  • ValueError – If both min and max are None.

  • TypeError – If the type of input is not in Tensor or list[Tensor] or tuple[Tensor].

  • TypeError – If the type of min is not in None, Tensor, float or int.

  • TypeError – If the type of max is not in None, Tensor, float or int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: the data type of x is Tensor
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> min_value = Tensor(5, mindspore.float32)
>>> max_value = Tensor(20, mindspore.float32)
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clamp(x, min_value, max_value)
>>> print(output)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
>>> # case 2: the data type of x is list[Tensor]
>>> min_value = 5
>>> max_value = 20
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> y = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clamp([x,y], min_value, max_value)
>>> for out in output:
...     print(out)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
tinyms.primitives.clip(x, min=None, max=None)[source]

Alias for mindspore.ops.clamp() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.clip_by_global_norm(x, clip_norm=1.0, use_norm=None)[source]

Clips tensor values by the ratio of the sum of their norms.

Note

  • Input x should be a tuple or list of tensors. Otherwise, it will raise an error.

  • On the SEMI_AUTO_PARALLEL mode or AUTO_PARALLEL mode, if the input x is the gradient, the gradient norm values on all devices will be automatically aggregated by allreduce inserted after the local square sum of the gradients.

Parameters:
  • x (Union(tuple[Tensor], list[Tensor])) – Input data to clip.

  • clip_norm (Union(float, int)) – The clipping ratio, it should be greater than 0. Default: 1.0

  • use_norm (None) – The global norm. Default: None. Currently only None is supported.

Returns:

tuple[Tensor], the clipped tensors. They have the same data type as x, and each Tensor in the output tuple has the same shape as the corresponding input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> x1 = np.array([[2., 3.], [1., 2.]]).astype(np.float32)
>>> x2 = np.array([[1., 4.], [3., 1.]]).astype(np.float32)
>>> input_x = (Tensor(x1), Tensor(x2))
>>> out = ops.clip_by_global_norm(input_x, 1.0)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.98142403e-01,  4.47213590e-01],
 [ 1.49071202e-01,  2.98142403e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.49071202e-01,  5.96284807e-01],
 [ 4.47213590e-01,  1.49071202e-01]]))
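The values above can be reproduced by hand: the global norm is the 2-norm over all elements of all inputs, and every tensor is scaled by clip_norm / max(global_norm, clip_norm), which matches the printed output. A NumPy sketch (illustrative only):

>>> import numpy as np
>>> x1 = np.array([[2., 3.], [1., 2.]])
>>> x2 = np.array([[1., 4.], [3., 1.]])
>>> clip_norm = 1.0
>>> global_norm = np.sqrt(sum((t ** 2).sum() for t in (x1, x2)))
>>> print(round(global_norm, 4))
6.7082
>>> scale = clip_norm / max(global_norm, clip_norm)  # only shrinks, never grows
>>> print(np.round(x1 * scale, 6))
[[0.298142 0.447214]
 [0.149071 0.298142]]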
tinyms.primitives.clip_by_value(x, clip_value_min=None, clip_value_max=None)[source]

Clips tensor values to a specified min and max.

Limits the value of \(x\) to a range, whose lower limit is clip_value_min and upper limit is clip_value_max .

\[\begin{split}out_i= \left\{ \begin{array}{align} clip\_value\_max & \text{ if } x_i\ge clip\_value\_max \\ x_i & \text{ if } clip\_value\_min \lt x_i \lt clip\_value\_max \\ clip\_value\_min & \text{ if } x_i \le clip\_value\_min \\ \end{array}\right.\end{split}\]

Note

  • clip_value_min and clip_value_max cannot be None at the same time;

  • When clip_value_min is None and clip_value_max is not None, the elements in Tensor larger than clip_value_max will become clip_value_max;

  • When clip_value_min is not None and clip_value_max is None, the elements in Tensor smaller than clip_value_min will become clip_value_min;

  • If clip_value_min is greater than clip_value_max, the value of all elements in Tensor will be set to clip_value_max;

  • The data type of x, clip_value_min and clip_value_max should support implicit type conversion and cannot be bool type.

Parameters:
  • x (Union(Tensor, list[Tensor], tuple[Tensor])) – Input data, which type is Tensor or a list or tuple of Tensor. Tensors of arbitrary dimensions are supported.

  • clip_value_min (Union(Tensor, float, int)) – The minimum value. Default: None.

  • clip_value_max (Union(Tensor, float, int)) – The maximum value. Default: None.

Returns:

(Union(Tensor, tuple[Tensor], list[Tensor])), a clipped Tensor or a tuple or a list of clipped Tensor. The data type and shape are the same as x.

Raises:
  • ValueError – If both clip_value_min and clip_value_max are None.

  • TypeError – If the type of x is not in Tensor or list[Tensor] or tuple[Tensor].

  • TypeError – If the type of clip_value_min is not in None, Tensor, float or int.

  • TypeError – If the type of clip_value_max is not in None, Tensor, float or int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: the data type of x is Tensor
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> min_value = Tensor(5, mindspore.float32)
>>> max_value = Tensor(20, mindspore.float32)
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clip_by_value(x, min_value, max_value)
>>> print(output)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
>>> # case 2: the data type of x is list[Tensor]
>>> min_value = 5
>>> max_value = 20
>>> x = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> y = Tensor(np.array([[1., 25., 5., 7.], [4., 11., 6., 21.]]), mindspore.float32)
>>> output = ops.clip_by_value([x,y], min_value, max_value)
>>> for out in output:
...     print(out)
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
[[ 5. 20.  5.  7.]
 [ 5. 11.  6. 20.]]
tinyms.primitives.coalesce(x_indices: mindspore.common.tensor.Tensor, x_values: mindspore.common.tensor.Tensor, x_shape: mindspore.common.tensor.Tensor) → Tuple[mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor, mindspore.common.tensor.Tensor][source]

Returns the coalesced sparse tensor of the input.

Parameters:
  • x_indices (Tensor) – A 2-D Tensor, represents the indices of the nonzero elements of the input sparse tensor. Supported data type is int64. Its elements should be non-negative. The shape is \((y, x)\).

  • x_values (Tensor) – A 1-D Tensor, represents the values corresponding to the indices in x_indices. Supported data types are float16 and float32. The shape is \((x,)\).

  • x_shape (Tensor) – A 1-D Tensor, specifies the shape of the sparse tensor. Supported data type is int64. The shape is \((y,)\).

Returns:

  • y_indices (Tensor) - A 2-D Tensor, represents the indices of the nonzero elements of the sparse tensor. Data type is int64. Its elements are non-negative. The shape is \((y, z)\). z represents the number of different indices in x_indices.

  • y_values (Tensor) - A 1-D Tensor, represents the values corresponding to the indices in y_indices. Data type is the same as x_values’s. The shape is \((z,)\).

  • y_shape (Tensor) - A 1-D Tensor, specifies the shape of the sparse tensor. Data type is int64. The shape is \((y,)\).

Raises:
  • TypeError – If the data type of x_values is neither float32 nor float16.

  • TypeError – If any of the data types of x_indices and x_shape is not int64.

  • ValueError – If any of x_values and x_shape is not a 1-D tensor.

  • ValueError – If x_indices is not a 2-D tensor.

  • ValueError – If sizes of second dimension of x_indices and first dimension of x_values are not the same.

  • ValueError – If sizes of first dimension of x_indices and first dimension of x_shape are not the same.

  • ValueError – If any of the values of elements of x_indices is negative.

  • ValueError – If any of the values of elements of x_indices exceed the limit set by x_shape.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x_indices = Tensor([[0, 0, 1], [1, 1, 2]], dtype=ms.int64)
>>> x_values = Tensor([1, 5, 4], dtype=ms.float32)
>>> x_shape = Tensor([3, 3], dtype=ms.int64)
>>> y_indices, y_values, y_shape = ops.coalesce(x_indices, x_values, x_shape)
>>> print(y_indices)
[[0 1]
 [1 2]]
>>> print(y_values)
[6. 4.]
>>> print(y_shape)
[3 3]
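Coalescing simply merges duplicate indices by summing their values; the example above can be mimicked with a NumPy sketch (illustrative only, not the operator's implementation):

>>> import numpy as np
>>> idx = np.array([[0, 0, 1], [1, 1, 2]])   # (ndim, nnz) layout, as above
>>> vals = np.array([1., 5., 4.])
>>> uniq, inverse = np.unique(idx, axis=1, return_inverse=True)
>>> merged = np.zeros(uniq.shape[1])
>>> np.add.at(merged, inverse, vals)         # sum values that share an index
>>> print(uniq)
[[0 1]
 [1 2]]
>>> print(merged)
[6. 4.]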
tinyms.primitives.col2im(input_x, output_size, kernel_size, dilation, padding_value, stride)[source]

Combines an array of sliding local blocks into a large containing tensor.

Parameters:
  • input_x (Tensor) – 4D tensor with data type float16 or float32.

  • output_size (Tensor) – 1D tensor with 2 elements of data type int.

  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two ints for height and width. If type is int, it means that height equals width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]]) – The size of the dilation, should be two ints for height and width. If type is int, it means that height equals width. Default: 1.

  • padding_value (Union[int, tuple[int], list[int]]) – The size of the padding, should be two ints for height and width. If type is int, it means that height equals width. Default: 0.

  • stride (Union[int, tuple[int], list[int]]) – The size of the stride, should be two ints for height and width. If type is int, it means that height equals width. Default: 1.

Returns:

A 4D Tensor, with the same type as input_x.

Raises:
  • TypeError – If kernel_size, dilation, padding_value, stride data type is not in Union[int, tuple[int], list[int]].

  • ValueError – If kernel_size, dilation or stride value is not greater than zero, or its number of elements is more than 2.

  • ValueError – If padding_value value is less than zero, or its number of elements is more than 2.

  • ValueError – If input_x.shape[2] != kernel_size[0] * kernel_size[1].

  • ValueError – If input_x.shape[3] does not match the calculated number of sliding blocks.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)
>>> output_size = Tensor(input_data=[8, 8], dtype=mstype.int32)
>>> output = ops.col2im(x, output_size, [2, 2], [2, 2], [2, 2], [2, 2])
>>> print(output.shape)
(16, 16, 8, 8)
tinyms.primitives.column_stack(tensors)[source]

Stacks 1-D tensors as columns into a 2-D tensor. 2-D tensors are stacked as-is, like ops.hstack.

Parameters:

tensors (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. All of them must have the same shape except the axis to be concatenated.

Returns:

2-D Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> x1 = Tensor([1, 1, 1])
>>> x2 = Tensor([2, 2, 2])
>>> output = ops.column_stack((x1, x2))
>>> print(output)
[[1 2]
 [1 2]
 [1 2]]
tinyms.primitives.combinations(x, r=2, with_replacement=False)[source]

Returns all r-length subsequences of input Tensor.

When with_replacement is set to False, it works similarly to Python's itertools.combinations, and when with_replacement is set to True, it behaves like itertools.combinations_with_replacement.

Parameters:
  • x (Tensor) – One-dimensional tensors.

  • r (int, optional) – Number of elements to perform combination. Default: 2.

  • with_replacement (bool, optional) – Allow duplication or not. Default: False.

Returns:

Tensor, contains all possible combinations of elements sampled from input Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 3, -1, 0, 4])
>>> output = ops.combinations(x)
>>> print(output.asnumpy())
[[ 1  3]
 [ 1 -1]
 [ 1  0]
 [ 1  4]
 [ 3 -1]
 [ 3  0]
 [ 3  4]
 [-1  0]
 [-1  4]
 [ 0  4]]
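The pairs above follow the same ordering as Python's itertools, which this function mirrors; a quick cross-check:

>>> import itertools
>>> list(itertools.combinations([1, 3, -1, 0, 4], 2))[:3]
[(1, 3), (1, -1), (1, 0)]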
tinyms.primitives.concat(tensors, axis=0)[source]

Alias for mindspore.ops.cat()

tinyms.primitives.conj(input)[source]

Returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form a + bj, where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form a - bj.

If input is real, it is returned unchanged.

Parameters:

input (Tensor) – The input tensor. Must have numeric type.

Returns:

Tensor, has the same dtype as the input.

Raises:
  • TypeError – If the dtype of input is not a numeric type.

  • TypeError – If the input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(complex(1.3, 0.4)), mindspore.complex64)
>>> output = ops.conj(x)
>>> print(output)
(1.3-0.4j)
tinyms.primitives.conv1d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 1D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is channel number, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{j}, X_i) + b_j,\]

where \(ccor\) is the cross-correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{j}\) is a slice of kernel, and it has shape \((\text{kernel_size})\), where \(\text{kernel_size}\) is the width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\), where groups is the group number to split the input in the channel dimension.

If the pad_mode is set to be “valid”, the output width will be \(\left \lfloor{ 1 + \frac{W_{in} + \text{padding[0]} - \text{kernel_size} - (\text{kernel_size} - 1) \times(\text{dilation} - 1)} {\text { stride }}} \right \rfloor\).

where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input. For output width on other pad_mode, please refer to formula on mindspore.nn.Conv1d.

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition. More detailed introduction can be found here: ConvNets .
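As a quick sanity check on the output-width formula above, a small helper (illustrative only; padding here is the per-side amount, applied to both sides) reproduces the example at the end of this entry:

>>> import math
>>> def conv_out_len(size, kernel, stride=1, padding=0, dilation=1):
...     # 'pad' mode output length; 'valid' is the padding=0 special case
...     return math.floor(1 + (size + 2 * padding - kernel
...                            - (kernel - 1) * (dilation - 1)) / stride)
...
>>> conv_out_len(4, 2, padding=1)   # W_out of the example below: 5
5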

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when groups>1, the condition \(C_{in} = C_{out} = \text{groups}\) must be satisfied.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C_{in}, W_{in})\).

  • weight (Tensor) – Tensor of shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\), then the size of kernel is \((\text{kernel_size})\).

  • bias (Tensor) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.

  • stride (Union(int, tuple[int]), optional) – The distance of kernel moving, an int number or a tuple of one int that represents width of movement. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The width of the output will be equal to the input x divided by stride. The padding will be evenly calculated in left and right possibly. Otherwise, the last extra padding will be calculated from the right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The possible largest width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int]), optional) – Implicit paddings on both sides of input, meaning the paddings of left and right are the same, equal to padding or padding[0] when padding is a tuple of 1 integer. Default: 0.

  • dilation (Union(int, tuple[int]), optional) – Gaps between kernel elements. The data type is int or a tuple of 1 integer. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the width of input. Default: 1.

  • groups (int, optional) – Splits input into groups. Default: 1.

Returns:

Tensor, the value that applied 1D convolution. The shape is \((N, C_{out}, W_{out})\).

Raises:
  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If groups is not an int.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \((C_{out})\) .

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 1.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.arange(64).reshape((4, 4, 4)), mindspore.float32)
>>> weight = Tensor(np.arange(8).reshape((2, 2, 2)), mindspore.float32)
>>> bias = Tensor([-0.12345, 2.7683], mindspore.float32)
>>> output = ops.conv1d(x, weight, pad_mode='pad', padding=(1,), bias=bias, groups=2)
>>> print(output.shape)
(4, 2, 5)
tinyms.primitives.conv2d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 2D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is height, \(W\) is width, \(X_i\) is the \(i^{th}\) input value and \(b_i\) indicates the deviation value of the \(i^{th}\) input value. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross-correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel, and it has shape \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{ kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where groups is the group number to split the input in the channel dimension.

If the pad_mode is set to be “valid”, the output height and width will be \(\left \lfloor{ 1 + \frac{H_{in} + \text{padding[0]} + \text{padding[1]} - \text{kernel_size[0]} - (\text{kernel_size[0]} - 1) \times(\text{dilation[0]} - 1)} {\text { stride[0] }}} \right \rfloor\) and

\(\left \lfloor{1 + \frac{W_{in} + \text{padding[2]} + \text{padding[3]} - \text{kernel_size[1]} - (\text{kernel_size[1]} - 1) \times(\text{dilation[1]} - 1)} {\text { stride[1] }}} \right \rfloor\) respectively.

where \(dilation\) is the spacing between kernel elements, \(stride\) is the step length of each step, and \(padding\) is the zero-padding added to both sides of the input. For output height and width on other pad_mode, please refer to formula on mindspore.nn.Conv2d.

The first introduction can be found in paper Gradient Based Learning Applied to Document Recognition. More detailed introduction can be found here: ConvNets .

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when groups>1, the condition \(C_{in} = C_{out} = \text{groups}\) must be satisfied.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) – Tensor of shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), then the size of kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\).

  • bias (Tensor) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.

  • stride (Union(int, tuple[int]), optional) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The height and width of the output will be equal to the input x divided by stride. The padding will be evenly calculated in top and bottom, left and right possibly. Otherwise, the last extra padding will be calculated from the bottom and the right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.

    • pad: Implicit paddings on both sides of the input x. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int]), optional) – Implicit paddings on both sides of the input x. If padding is one integer, the paddings of top, bottom, left and right are the same, equal to padding. If padding is a tuple with two integers, the padding of top and bottom is padding[0], and the padding of left and right is padding[1]. Default: 0.

  • dilation (Union(int, tuple[int]), optional) – Gaps between kernel elements. The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • groups (int, optional) – Splits input into groups. Default: 1.

Returns:

Tensor, the value that applied 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If groups is not an int.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \(C_{out}\) .

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 2.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> output = ops.conv2d(x, weight)
>>> print(output.shape)
(10, 32, 30, 30)
tinyms.primitives.conv3d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 3D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\) and the output has shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is depth, and \(H, W\) are the feature height and width respectively. The output value of a layer is calculated as:

\[\operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right), \operatorname{input}\left(N_{i}, k\right))\]

where \(k\) is kernel, \(ccor\) is the cross-correlation , \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the jth channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{X}\) is the input tensor. The shape of full convolution kernel is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where groups is the number of groups to split input in the channel dimension.

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition .

Note

  1. On Ascend platform, \(groups = 1\) must be satisfied.

  2. On Ascend dilation on depth only supports the case of 1.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\).

  • weight (Tensor) – Set size of kernel as \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), then the shape is \((C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\).

  • bias (Tensor) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, it can be an int number that represents the depth, height and width of movement or a tuple of three int numbers that represent depth, height and width movement respectively. Default: 1.

  • pad_mode (str, optional) –

    Specifies padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to the input x divided by stride. The padding will be evenly calculated in head and tail, top and bottom, left and right directions possibly. Otherwise, the last extra padding will be calculated from the tail, bottom and the right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union[int, tuple[int]], optional) – The padding value to be filled. If padding is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to padding. If padding is a tuple of 3 integers, the padding of head and tail is padding[0], the padding of top and bottom is padding[1], and the padding of left and right is padding[2]. Default: 0.

  • dilation (Union[int, tuple[int]], optional) – The data type is int or a tuple of 3 integers \((dilation_d, dilation_h, dilation_w)\). Currently, dilation on depth only supports the case of 1 on Ascend backend. Specifies the dilation rate to use for dilated convolution. If set \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. The value ranges for the depth, height, and width dimensions are [1, D], [1, H], and [1, W], respectively. Default: 1.

  • groups (int, optional) – The number of groups into which the filter is divided. in_channels and out_channels must be divisible by group. Default: 1.

Returns:

Tensor, the value that applied 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in}}{\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1} {\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1} {\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1} {\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} + padding[0] + padding[1] - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[2] + padding[3] - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[4] + padding[5] - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1 }{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
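The 'valid' and 'pad' formulas can be checked against the Examples below with a small helper (illustrative only; pad holds the two per-side amounts for 'pad' mode):

>>> import math
>>> def out_dim(size, kernel, stride=1, dilation=1, pad=(0, 0), mode='valid'):
...     if mode == 'same':
...         return math.ceil(size / stride)
...     total_pad = sum(pad) if mode == 'pad' else 0
...     return (size + total_pad - dilation * (kernel - 1) - 1) // stride + 1
...
>>> [out_dim(s, k) for s, k in [(10, 4), (32, 3), (32, 3)]]   # 'valid'
[7, 30, 30]
>>> [out_dim(s, k, pad=(p, p), mode='pad')
...  for s, k, p in [(10, 4, 2), (32, 3, 1), (32, 3, 1)]]     # 'pad'
[11, 32, 32]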

Raises:
  • TypeError – If out_channel or groups is not an int.

  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \(C_{out}\).

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 3.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> output = ops.conv3d(x, weight, pad_mode="same", padding=0, stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 10, 32, 32)
>>> output = ops.conv3d(x, weight, pad_mode="valid", padding=0, stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 7, 30, 30)
>>> output = ops.conv3d(x, weight, pad_mode="pad", padding=(2, 1, 1), stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 11, 32, 32)
tinyms.primitives.conv3d_transpose(inputs, weight, pad_mode='valid', padding=0, stride=1, dilation=1, group=1, output_padding=0)[source]

Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution).

Parameters:
  • inputs (Tensor) – The gradients with respect to the output of the convolution. The shape conforms to the default data_format \((N, C_{in}, D_{out}, H_{out}, W_{out})\). Currently dout data type only supports float16 and float32.

  • weight (Tensor) – Set size of kernel is \((K_d, K_h, K_w)\), then the shape is \((C_{in}, C_{out}//group, K_d, K_h, K_w)\). Where \(group\) is the Args parameter, \(//\) is the symbol for integer division. Currently weight data type only supports float16 and float32.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “valid”.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to the input x divided by stride. The padding will be evenly calculated in head and tail, top and bottom, left and right directions possibly. Otherwise, the last extra padding will be calculated from the tail, bottom and the right side. If this mode is set, pad must be 0.

    • valid: Adopts the way of discarding. The possible largest depth, height and width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, pad and output_padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The number of pad will be padded to the input Tensor borders. pad must be greater than or equal to 0.

  • padding (Union(int, tuple[int])) – The padding value to be filled. Default: 0. If padding is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to padding. If padding is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] correspondingly.

  • stride (Union(int, tuple[int])) – The distance of kernel moving, an int number that represents the depth, height and width of movement are both strides, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • dilation (Union(int, tuple[int])) – Specifies the space to use between kernel elements. Default: 1.

  • group (int) – Splits input into groups. Default: 1. Only 1 is currently supported.

  • output_padding (Union(int, tuple[int])) – Add extra size to each dimension of the output. Default: 0.

Outputs:

Tensor, the gradients with respect to the input of convolution 3D. Tensor of shape \((N, C_{out}//group, D_{out}, H_{out}, W_{out})\), where \(group\) is the Args parameter.

Supported Platforms:

Ascend GPU CPU

Raises:
  • TypeError – If group is not an int.

  • TypeError – If stride, padding, dilation or output_padding is neither an int nor a tuple.

  • ValueError – If the rank of inputs, weight is not equal to 5.

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If inputs[1] (in_channel), weight[1] (out_channel) or weight[2:5] (kernel_size) is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ nor ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

  • TypeError – If data type of dout and weight is not float16.

Examples

>>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([16, 3, 4, 6, 2]), mindspore.float16)
>>> output = ops.conv3d_transpose(dout, weight)
>>> print(output.shape)
(32, 3, 13, 37, 33)
tinyms.primitives.coo_abs(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the absolute value of a COOTensor element-wise.

\[out_i = |x_i|\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_abs(x)
>>> print(output.values)
[1. 2.]
tinyms.primitives.coo_acos(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes arccosine of the input COOTensor element-wise.

\[out_i = cos^{-1}(x_i)\]
Parameters:

x (COOTensor) – Input COOTensor.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_acos(x)
>>> print(output.values)
[3.1415927       nan]
tinyms.primitives.coo_acosh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]

Warning

Given an input COOTensor x, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf].

Parameters:

x (COOTensor) – The input COOTensor of inverse hyperbolic cosine function.

Returns:

COOTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_acosh(x)
>>> print(output.values)
[     nan 1.316958]
tinyms.primitives.coo_add(x1: mindspore.common.sparse_tensor.COOTensor, x2: mindspore.common.sparse_tensor.COOTensor, thresh: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes the sum of x1(COOTensor) and x2(COOTensor), and return a new COOTensor based on the computed result and thresh.

Parameters:
  • x1 (COOTensor) – the first COOTensor to sum.

  • x2 (COOTensor) – the second COOTensor to sum.

  • thresh (Tensor) – A 0-D Tensor, represents the magnitude threshold that determines whether an output value/index pair takes place. Its dtype should match that of the values if they are real. If an output value is less than thresh, it will vanish.

Returns:

A COOTensor, the result of sum.

Raises:
  • ValueError – If any input(x1/x2)’s indices’s dim is not equal to 2.

  • ValueError – If any input(x1/x2)’s values’s dim is not equal to 1.

  • ValueError – If any input(x1/x2)’s shape’s dim is not equal to 1.

  • ValueError – If thresh’s dim is not equal to 0.

  • TypeError – If any input(x1/x2)’s indices’s type is not equal to int64.

  • TypeError – If any input(x1/x2)’s shape’s type is not equal to int64.

  • ValueError – If any input(x1/x2)’s indices’s length is not equal to its values’s length.

  • TypeError – If any input(x1/x2)’s values’s type is not equal to any of (int8/int16/int32/int64/float32/float64/complex64/complex128).

  • TypeError – If thresh’s type is not equal to any of (int8/int16/int32/int64/float32/float64).

  • TypeError – If x1’s indices’s type is not equal to x2’s indices’s type.

  • TypeError – If x1’s values’s type is not equal to x2’s values’s type.

  • TypeError – If x1’s shape’s type is not equal to x2’s shape’s type.

  • TypeError – If (x1/x2)’s value’s type is not matched with thresh’s type.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, COOTensor
>>> from mindspore import dtype as mstype
>>> from mindspore import context
>>> from mindspore import ops
>>> indics0 = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values0 = Tensor([1, 2], dtype=mstype.int32)
>>> shape0 = (3, 4)
>>> input0 = COOTensor(indics0, values0, shape0)
>>> indics1 = Tensor([[0, 0], [1, 1]], dtype=mstype.int64)
>>> values1 = Tensor([3, 4], dtype=mstype.int32)
>>> shape1 = (3, 4)
>>> input1 = COOTensor(indics1, values1, shape1)
>>> thres = Tensor(0, dtype=mstype.int32)
>>> out = ops.coo_add(input0, input1, thres)
>>> print(out)
COOTensor(shape=[3, 4], dtype=Int32, indices=Tensor(shape=[4, 2], dtype=Int64, value=
[[0 0]
 [0 1]
 [1 1]
 [1 2]]), values=Tensor(shape=[4], dtype=Int32, value=[3 1 4 2]))
tinyms.primitives.coo_asin(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes arcsine of the input COOTensor element-wise.

\[out_i = sin^{-1}(x_i)\]
Parameters:

x (COOTensor) – Input COOTensor. The shape of COOTensor is \((N,*)\) , where \(*\) means,any number of additional dimensions. The data type should be one of the following types: float16, float32, float64, complex64, complex128.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32, float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_asin(x)
>>> print(output.values)
[-1.5707964        nan]
tinyms.primitives.coo_asinh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(input_i)\]
Parameters:

x (COOTensor) – The input COOTensor of inverse hyperbolic sine function.

Returns:

COOTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_asinh(x)
>>> print(output.values)
[-0.8813736  1.4436355]
tinyms.primitives.coo_atan(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = tan^{-1}(x_i)\]
Parameters:

x (COOTensor) – The data type should be one of the following types: float16, float32.

Returns:

A COOTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_atan(x)
>>> print(output.values)
[-0.7853982  1.1071488]
tinyms.primitives.coo_atanh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = tanh^{-1}(x_{i})\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

x (COOTensor) – Input COOTensor. The data type should be one of the following types: float16, float32.

Returns:

A COOTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_atanh(x)
>>> print(output.values)
[-inf  nan]
tinyms.primitives.coo_ceil(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Rounds a COOTensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil = \lfloor x_i \rfloor + 1\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of float16 or float32.

Returns:

COOTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_ceil(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.coo_concat(sp_input, concat_dim=0)[source]

Concatenates the input SparseTensors (COO format) along the specified dimension.

Warning

This is an experimental API that is subject to change or deletion. Only supported on CPU now.

Parameters:
  • sp_input (Union[list(COOTensor), tuple(COOTensor)]) – The COOTensor inputs to concatenate.

  • concat_dim (scalar) – The dimension to concatenate along. The value must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor. Default: 0.

Returns:

  • output (COOTensor) - The result of concatenating the input SparseTensors along the specified dimension. For every dimension other than concat_dim, the output shape equals the input shape; along concat_dim, the output shape is the sum of all the inputs' shapes.

Raises:
  • ValueError – If only one sparse tensor is given as input.

  • ValueError – If the input COOTensor's shape dim is greater than 3. The COOTensor's shape dim size must be 2 for now.

Supported Platforms:

CPU

Examples

>>> indices0 = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values0 = Tensor([1, 2], dtype=mstype.int32)
>>> shape0 = (3, 4)
>>> input0 = COOTensor(indices0, values0, shape0)
>>> indices1 = Tensor([[0, 0], [1, 1]], dtype=mstype.int64)
>>> values1 = Tensor([3, 4], dtype=mstype.int32)
>>> shape1 = (3, 4)
>>> input1 = COOTensor(indices1, values1, shape1)
>>> concat_dim = 1
>>> out = ops.coo_concat((input0, input1), concat_dim)
>>> print(out)
COOTensor(shape=[3, 8], dtype=Int32, indices=Tensor(shape=[4, 2], dtype=Int64, value=
[[0 1]
 [0 4]
 [1 2]
 [1 5]]), values=Tensor(shape=[4], dtype=Int32, value=[1 3 2 4]))
tinyms.primitives.coo_cos(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes cosine of input element-wise.

\[out_i = cos(x_i)\]

Warning

Using float64 may cause a loss of precision.

Parameters:

x (COOTensor) – Input COOTensor.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32 or float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_cos(x)
>>> print(output.values)
[ 0.5403023  -0.41614684]
tinyms.primitives.coo_cosh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = \cosh(x_i)\]
Parameters:

x (COOTensor) – The input COOTensor of hyperbolic cosine function, its data type must be float16, float32, float64, complex64 or complex128.

Returns:

COOTensor, has the same shape as x.

Raises:
  • TypeError – If the dtype of x is not one of the following types: float16, float32, float64, complex64, complex128.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_cosh(x)
>>> print(output.values)
[1.5430807 3.7621956]
tinyms.primitives.coo_exp(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the element-wise exponential of a COOTensor.

\[out_i = e^{x_i}\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_exp(x)
>>> print(output.values)
[0.36787948 7.3890557 ]
tinyms.primitives.coo_expm1(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns exponential then minus 1 of a COOTensor element-wise.

\[out_i = e^{x_i} - 1\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of float16 or float32.

Returns:

COOTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_expm1(x)
>>> print(output.values)
[-0.63212055  6.389056  ]
tinyms.primitives.coo_floor(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Rounds a COOTensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Parameters:

x (COOTensor) – The input COOTensor, its data type must be float16, float32 or float64.

Returns:

COOTensor, has the same shape as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_floor(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.coo_inv(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes Reciprocal of input COOTensor element-wise.

\[out_i = \frac{1}{x_{i} }\]
Parameters:

x (COOTensor) – Input COOTensor. Must be one of the following types: float16, float32 or int32.

Returns:

COOTensor, has the same type and shape as input shape value.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_inv(x)
>>> print(output.values)
[-1.   0.5]
tinyms.primitives.coo_isfinite(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True\ \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_isfinite(x)
>>> print(output.values)
[ True  True]
tinyms.primitives.coo_isinf(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Inf},\ \ True \\ & \text{ if } x_{i} \ne \text{Inf},\ \ False \end{cases}\end{split}\]

where \(Inf\) means positive or negative infinity.

Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_isinf(x)
>>> print(output.values)
[False False]
tinyms.primitives.coo_isnan(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Nan} \\ & \ False,\ \text{ if } x_{i} \ne \text{Nan} \end{cases}\end{split}\]

where \(Nan\) means not a number.

Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_isnan(x)
>>> print(output.values)
[False False]
tinyms.primitives.coo_log(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the natural logarithm of a COOTensor element-wise.

\[y_i = log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

x (COOTensor) – The value must be greater than 0.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32 or float64 on GPU and CPU.

  • TypeError – If dtype of x is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_log(x)
>>> print(output.values)
[       nan 0.69314575]
tinyms.primitives.coo_log1p(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns the natural logarithm of one plus the input COOTensor element-wise.

\[out_i = {log_e}(x_i + 1)\]
Parameters:

x (COOTensor) – The input COOTensor, should have dtype of float16 or float32 and its value should be greater than -1.

Returns:

COOTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_log1p(x)
>>> print(output.values)
[     -inf 1.0986123]
tinyms.primitives.coo_neg(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns a COOTensor with the negated values of the input COOTensor element-wise.

\[out_{i} = - x_{i}\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of Number.

Returns:

COOTensor, has the same shape and dtype as input.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_neg(x)
>>> print(output.values)
[ 1. -2.]
tinyms.primitives.coo_relu(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes ReLU (Rectified Linear Unit activation function) of input coo_tensors element-wise.

It returns \(\max(x,\ 0)\) element-wise. Specifically, neurons with negative outputs are suppressed, while active neurons stay the same.

\[ReLU(x) = (x)^+ = max(0, x)\]

Note

In general, this operator is the more commonly used one. It differs from ReLUV2 in that ReLUV2 additionally outputs a mask.

Parameters:

x (COOTensor) –

Input COOTensor with shape \((N, *)\), where \(*\) means any number of additional dimensions. Its dtype is number.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_relu(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.coo_relu6(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input coo_tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Parameters:

x (COOTensor) – Input COOTensor, with float16 or float32 data type.

Returns:

COOTensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_relu6(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.coo_round(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Rounds each element of a COOTensor to the nearest integer, with ties rounded to the nearest even value (round half to even).

\[out_i \approx x_i\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape and type as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_round(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.coo_sigmoid(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{coo_sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)}\]

where \(x_i\) is an element of the x.

Parameters:

x (COOTensor) – Input COOTensor, the data type is float16, float32, float64, complex64 or complex128.

Returns:

COOTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sigmoid(x)
>>> print(output.values)
[0.26894143 0.8807971 ]
tinyms.primitives.coo_sin(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes sine of the input element-wise.

\[out_i = sin(x_i)\]
Parameters:

x (COOTensor) – Input COOTensor.

Returns:

COOTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is not float16, float32 or float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sin(x)
>>> print(output.values)
[-0.84147096  0.9092974 ]
tinyms.primitives.coo_sinh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(x_i)\]
Parameters:

x (COOTensor) – The input COOTensor of hyperbolic sine function.

Returns:

COOTensor, has the same shape as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sinh(x)
>>> print(output.values)
[-1.1752012  3.6268604]
tinyms.primitives.coo_softsign(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{1 + |x|}\]
Parameters:

x (COOTensor) – Input COOTensor, with float16 or float32 data type.

Returns:

COOTensor, with the same type and shape as the x.

Raises:
  • TypeError – If x is not a COOTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_softsign(x)
>>> print(output.values)
[-0.5        0.6666667]
tinyms.primitives.coo_sqrt(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns sqrt of a COOTensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of Number.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_sqrt(x)
>>> print(output.values)
[      nan 1.4142135]
tinyms.primitives.coo_square(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Returns square of a COOTensor element-wise.

\[out_{i} = (x_{i})^2\]
Parameters:

x (COOTensor) – The input COOTensor with a dtype of Number.

Returns:

COOTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_square(x)
>>> print(output.values)
[1. 4.]
tinyms.primitives.coo_tan(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes tangent of x element-wise.

\[out_i = tan(x_i)\]
Parameters:

x (COOTensor) – The input COOTensor.

Returns:

COOTensor, has the same shape as x.

Raises:

TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_tan(x)
>>> print(output.values)
[-1.5574077 -2.1850398]
tinyms.primitives.coo_tanh(x: mindspore.common.sparse_tensor.COOTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input COOTensor.

Parameters:

x (COOTensor) – Input COOTensor, with float16 or float32 data type.

Returns:

COOTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a COOTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = COOTensor(indices, values, shape)
>>> output = ops.coo_tanh(x)
>>> print(output.values)
[-0.7615942  0.9640276]
tinyms.primitives.copysign(x, other)[source]

Create a new floating-point tensor with the magnitude of x and the sign of other, element-wise.

Parameters:
  • x (Union[Tensor]) – Values to change the sign of.

  • other (Union[int, float, Tensor]) – The sign of other is copied to x. If x.shape != other.shape, other must be broadcastable to the shape of x (which is also the shape of the output).

Returns:

Tensor with a float dtype. Its values are the magnitudes of x combined with the sign of other, and its shape is the same as x.

Raises:

TypeError – If dtype of the input is not in the given types or the input cannot be converted to a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> import mindspore.ops as ops
>>> x = np.array([[0.3, -0.7], [0.5, 0.5]])
>>> other = np.array([[-0.4, 0.6], [0.4, -0.6]])
>>> out = ops.copysign(x, other)
>>> print(out)
[[-0.3  0.7]
 [ 0.5 -0.5]]
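
Since other may also be a plain int or float, the sign can be broadcast from a scalar. A minimal sketch of that case (hypothetical, reusing x from above); every magnitude simply receives a negative sign:

>>> out_scalar = ops.copysign(x, -1.0)
>>> print(out_scalar)
[[-0.3 -0.7]
 [-0.5 -0.5]]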
tinyms.primitives.cos(input)[source]

Computes cosine of input element-wise.

\[out_i = cos(x_i)\]

Warning

Supported dtypes are float16 and float32, and using float64 may cause a problem of missing precision.

Parameters:

input (Tensor) – The shape of tensor is \((N,*)\), where \(*\) means any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = ops.cos(x)
>>> print(output)
[0.971338 0.6748758 0.95233357 0.9959527]
tinyms.primitives.cosh(input)[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = cosh(input_i)\]
Parameters:

input (Tensor) – The input tensor of hyperbolic cosine function, its data type must be float16, float32, float64, complex64 or complex128.

Returns:

Tensor, has the same shape as input.

Raises:
  • TypeError – If the dtype of input is not one of the following types: float16, float32, float64, complex64, complex128.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = ops.cosh(x)
>>> print(output)
[1.0289385 1.364684 1.048436 1.0040528]
tinyms.primitives.cosine_embedding_loss(input1, input2, target, margin=0.0, reduction='mean')[source]

CosineEmbeddingLoss creates a criterion to measure the similarity between two tensors using cosine distance.

Given two tensors \(input1\), \(input2\), and a Tensor label \(target\) with values 1 or -1:

\[\begin{split}loss(input1, input2, target) = \begin{cases} 1-cos(input1, input2), & \text{if } target = 1\\ max(0, cos(input1, input2)-margin), & \text{if } target = -1\\ \end{cases}\end{split}\]
Parameters:
  • input1 (Tensor) – Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions.

  • input2 (Tensor) – Tensor of shape \((N, *)\), same shape and dtype as input1.

  • target (Tensor) – Contains value 1 or -1. Suppose the shape of input1 is \((x_1, x_2, x_3, ..., x_R)\), then the shape of target must be \((x_1, x_3, x_4, ..., x_R)\).

  • margin (float, optional) – Should be in [-1.0, 1.0]. Default 0.0.

  • reduction (str, optional) – Specifies which reduction to be applied to the output. It must be one of “none”, “mean”, and “sum”, meaning no reduction, reduce mean and sum on output, respectively. Default “mean”.

Returns:

Tensor or Scalar, if reduction is “none”, its shape is the same as target. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If margin is not a float.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • ValueError – If margin is not in range [-1, 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input1 = Tensor(np.array([[0.3, 0.8], [0.4, 0.3]]), mindspore.float32)
>>> input2 = Tensor(np.array([[0.4, 1.2], [-0.4, -0.9]]), mindspore.float32)
>>> target = Tensor(np.array([1, -1]), mindspore.int32)
>>> output = ops.cosine_embedding_loss(input1, input2, target)
>>> print(output)
0.0003425479
tinyms.primitives.cosine_similarity(x1, x2, dim=1, eps=1e-08)[source]

Calculates the cosine similarity between x1 and x2 along the given axis dim.

\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}\]

Note

Currently, broadcast of input is not supported.

Parameters:
  • x1 (Tensor) – The first input Tensor.

  • x2 (Tensor) – The second input Tensor.

  • dim (int, optional) – Axis for calculating cosine similarity. Default: 1.

  • eps (float, optional) – Minimal value to avoid division by zero. Default: 1e-8.

Returns:

Tensor, cosine similarity between x1 and x2.

Raises:

TypeError – If the dtype of x1 or x2 is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> x1 = ms.Tensor([[-0.0256, 0.0127, -0.2475, 0.2316, 0.8037],
...                 [0.5809, -1.2712, -0.7038, -0.2558, 0.7494]], dtype=ms.float32)
>>> x2 = ms.Tensor([[-0.6115, -0.1965, -0.8484, 0.2389, 0.2409],
...                 [1.8940, -2.1997, 0.1915, 0.0856, 0.7542]], dtype=ms.float32)
>>> output = ops.cosine_similarity(x1, x2)
>>> print(output)
[0.4843164  0.81647635]
tinyms.primitives.cov(input, *, correction=1, fweights=None, aweights=None)[source]

Given the input and weights, returns the covariance matrix (the square matrix of the covariance of each pair of variables) of input, where rows are variables and columns are observations.

The diagonal contains each variable and its own covariance. If input is a scalar or 1D vector of a single variable, its variance will be returned.

The unbiased sample covariance of the variables \(a\) and \(b\) is given by the following formula:

\[\text{cov}_w(a,b) = \frac{\sum^{N}_{i = 1}(a_{i} - \bar{a})(b_{i} - \bar{b})}{N~-~1}\]

where \(\bar{a}\) and \(\bar{b}\) are the simple means of the \(a\) and \(b\) respectively.

If fweights and/or aweights are provided, the unbiased weighted covariance is calculated, which is given by:

\[\text{cov}_w(a,b) = \frac{\sum^{N}_{i = 1}w_i(a_{i} - \mu_a^*)(b_{i} - \mu_b^*)}{\sum^{N}_{i = 1}w_i~-~1}\]

where \(w\) denotes fweights or aweights based on whichever is provided, or \(w = fweights \times aweights\) if both are provided, and \(\mu_x^* = \frac{\sum^{N}_{i = 1}w_ix_{i} }{\sum^{N}_{i = 1}w_i}\) is the weighted mean of the variable.

Warning

The values of fweights and aweights cannot be negative, and the negative weight scene result is undefined.

Note

Currently, complex number is not supported.

Parameters:

input (Tensor) – A 2D matrix, or a scalar or 1D vector of a single variable

Keyword Arguments:
  • correction (int, optional) – The difference between sample size and sample degrees of freedom. Defaults to Bessel’s correction, correction = 1 which returns the unbiased estimate, even if both fweights and aweights are specified. correction = 0 will return the simple average. Default: 1.

  • fweights (Tensor, optional) – Scalar or one-dimensional Tensor containing integer frequency weight, indicating the number of repetition of each observation vector. Its numel must equal the number of columns of input. Ignored if None. Default: None.

  • aweights (Tensor, optional) – A scalar or 1D Tensor containing float observation weights represents the importance of each observation vector. The higher the importance, the greater the corresponding value. Its numel must equal the number of columns of input. Must have floating point dtype. Ignored if None. Default: None.

Returns:

Tensor, The covariance matrix Tensor of input.

Raises:
  • ValueError – If the dimensions of input is greater than 2.

  • ValueError – If the dimensions of fweights is greater than 1.

  • ValueError – If the numel of fweights does not equal the number of columns of input.

  • ValueError – If the numel of aweights does not equal the number of columns of input.

  • ValueError – If the dimensions of aweights is greater than 1.

  • TypeError – If the dtype of input is bool.

  • TypeError – If the dtype of fweights is not an integer type.

  • TypeError – If the dtype of aweights is not a floating point type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> x = ms.Tensor([[0., 3.], [5., 5.], [7., 0.]]).T
>>> print(x)
[[0. 5. 7.]
 [3. 5. 0.]]
>>> print(ops.cov(x))
[[13.        -3.5      ]
 [-3.5        6.3333335]]
>>> print(ops.cov(x, correction=0))
[[ 8.666667  -2.3333333]
 [-2.3333333  4.2222223]]
>>> fw = ms.Tensor([5, 2, 4], dtype=ms.int64)
>>> aw = ms.Tensor([0.4588, 0.9083, 0.7616], ms.float32)
>>> print(ops.cov(x, fweights=fw, aweights=aw))
[[10.146146 -3.47241 ]
 [-3.47241   4.716825]]
tinyms.primitives.crop_and_resize(image, boxes, box_indices, crop_size, method='bilinear', extrapolation_value=0.0)[source]

Extracts crops from the input image Tensor and resizes them.

Note

Since the output shape depends on crop_size, crop_size must be constant. For now, the backward of this operator supports only the bilinear method; for other methods, it returns 0.

Parameters:
  • image (Tensor) – A 4-D Tensor representing a batch of images. It has shape \((batch, image\_height, image\_width, depth)\).

  • boxes (Tensor) – A 2-D Tensor with shape \((num\_boxes, 4)\) representing the normalized coordinates of the boxes to be cropped. The coordinates are specified in the form \([y1, x1, y2, x2]\), where \((y1, x1)\) is the first corner and \((y2, x2)\) is the second corner of the box. If \(y1 > y2\), the sampled crop is inverted upside down; the width dimension is treated similarly when \(x1 > x2\). If normalized coordinates are not in range \([0, 1]\), extrapolated input image values are used instead. Supported data type: float32.

  • box_indices (Tensor) – A 1-D Tensor of shape \(\text{num\_boxes}\) representing the batch index for each box. Supported type: int32.

  • crop_size (Tuple[int]) – A tuple of two elements: (crop_height, crop_width), representing the output size of the cropped and resized images. Only positive values are supported. Supported type: int32.

  • method (str, optional) – An optional string that specifies the sampling method for resizing. It can be “bilinear”, “nearest” or “bilinear_v2”. The option “bilinear” stands for the standard bilinear interpolation algorithm, while “bilinear_v2” may produce better results in some cases. “nearest” is the nearest neighbor interpolation algorithm. Default: “bilinear”.

  • extrapolation_value (float, optional) – An optional float value used for extrapolation, if applicable. Default: 0.0.

Returns:

A 4-D tensor of shape \((num\_boxes, crop\_height, crop\_width, depth)\) with type float32.

Raises:
  • TypeError – If image or boxes or box_indices is not a Tensor.

  • TypeError – If crop_size is not a Tuple with two int32 elements.

  • TypeError – If dtype of boxes is not float or that of box_indices is not int.

  • TypeError – If method is not a str.

  • TypeError – If extrapolation_value is not a float.

  • ValueError – If the shape rank of image is not 4.

  • ValueError – If the shape rank of boxes is not 2.

  • ValueError – If the second dim of boxes is not 4.

  • ValueError – If the shape rank of box_indices is not 1.

  • ValueError – If the first dim of box_indices is not equal to that of boxes.

  • ValueError – If existing element in box_indices is out of range [0, batch).

  • ValueError – If the data of crop_size is not positive.

  • ValueError – If method is not one of ‘bilinear’, ‘nearest’, ‘bilinear_v2’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> BATCH_SIZE = 1
>>> NUM_BOXES = 5
>>> IMAGE_HEIGHT = 256
>>> IMAGE_WIDTH = 256
>>> CHANNELS = 3
>>> image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
>>> boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
>>> box_indices = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
>>> crop_size = (24, 24)
>>> output = ops.crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_indices), crop_size)
>>> print(output.shape)
 (5, 24, 24, 3)
tinyms.primitives.cross(input, other, dim=None)[source]

Computes the cross product of input and other in dimension dim. input and other must have the same shape, and the size of their dim dimension should be 3. If dim is not specified, it is set to be the first dimension found with the size 3.

Parameters:
  • input (Tensor) – input is a tensor.

  • other (Tensor) – The other Tensor; it must have the same shape and type as input, and the size of their dim dimension should be 3.

  • dim (int, optional) – The dimension to apply the cross product in. If dim is None, it is set to the first dimension found with size 3. Default: None.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If other is not a Tensor.

  • TypeError – If the type of input is not the same as that of other.

  • ValueError – If input and other do not have the same size along dimension dim, or if that size is not 3.

  • ValueError – If input and other do not have the same shape.

  • ValueError – If dim is out of range; dim should be in [-len(input.shape), len(input.shape)-1].

Supported Platforms:

Ascend CPU

Examples

>>> # case 1: dim=None.
>>> x = Tensor([[1, 2, 3], [1, 2, 3]])
>>> other = Tensor([[4, 5, 6], [4, 5, 6]])
>>> output = ops.cross(x, other)
>>> print(output)
[[-3  6 -3]
 [-3  6 -3]]
>>> # case 2: dim=1.
>>> x = Tensor([[1, 2, 3], [1, 2, 3]])
>>> other = Tensor([[4, 5, 6], [4, 5, 6]])
>>> output = ops.cross(x, other, dim=1)
>>> print(output)
[[-3  6 -3]
 [-3  6 -3]]
tinyms.primitives.cross_entropy(input, target, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)[source]

The cross entropy loss between input and target.

The cross entropy supports two kinds of targets:

  • Class indices (int) in the range \([0, C)\) where \(C\) is the number of classes, the loss with reduction=none can be described as:

    \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}\]

    where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c \in [0, C-1]\) is the class index, with \(C\) the number of classes.

    If reduction is not ‘none’ (default ‘mean’), then

    \[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}} l_n, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
  • Probabilities (float) for each class, useful when labels beyond a single class per minibatch item are required, the loss with reduction=none can be described as:

    \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C \exp(x_{n,i})} y_{n,c}\]

    where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c \in [0, C-1]\) is the class index, with \(C\) the number of classes.

    If reduction is not ‘none’ (default ‘mean’), then

    \[\begin{split}\ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\). input is expected to be log-probabilities, data type must be float16 or float32.

  • target (Tensor) – \((N)\) or \((N, d_1, d_2, ..., d_K)\) for high-dimensional loss.

  • weight (Tensor) – A rescaling weight applied to the loss of each batch element. If not None, the shape is \((C,)\), data type must be float16 or float32. Default: None.

  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default: -100

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

  • label_smoothing (float) – Label smoothing values, a regularization tool used to prevent the model from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default value: 0.0.

Returns:

Tensor, the computed loss value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # Case 1: Indices labels
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
>>> output = ops.cross_entropy(inputs, target)
>>> # Case 2: Probability labels
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> output = ops.cross_entropy(inputs, target)
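
Both calls above keep the default reduction='mean', so the returned loss is a scalar. A quick sanity check (a sketch, assuming MindSpore's usual 0-dim representation of scalar tensors):

>>> print(output.shape)
()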
tinyms.primitives.csr_abs(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the absolute value of a CSRTensor element-wise.

\[out_i = |x_i|\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_abs(x)
>>> print(output.values)
[1. 2.]
tinyms.primitives.csr_acos(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes arccosine of input csr_tensors element-wise.

\[out_i = cos^{-1}(x_i)\]
Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_acos(x)
>>> print(output.values)
[3.1415927       nan]
tinyms.primitives.csr_acosh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes inverse hyperbolic cosine of the inputs element-wise.

\[out_i = \cosh^{-1}(input_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of inverse hyperbolic cosine function, its element must be in range [1, inf].

Returns:

CSRTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_acosh(x)
>>> print(output.values)
[     nan 1.316958]
tinyms.primitives.csr_add(a: mindspore.common.sparse_tensor.CSRTensor, b: mindspore.common.sparse_tensor.CSRTensor, alpha: mindspore.common.tensor.Tensor, beta: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes the linear combination of two input CSRTensors a and b.

\[out = alpha * a + beta * b\]

where both \(a\) and \(b\) are CSRTensor, \(alpha\) and \(beta\) are both Tensor

Note

The user needs to ensure that the input sparse matrix is valid. Otherwise, the behavior of the operator is undefined. For example, when there are multiple elements at the same position, the operator may raise an error or fail to execute.

Parameters:
  • a (CSRTensor) – Input sparse CSRTensor.

  • b (CSRTensor) – Input sparse CSRTensor.

  • alpha (Tensor) – Dense Tensor, its shape must be able to broadcast to a.

  • beta (Tensor) – Dense Tensor, its shape must be able to broadcast to b.

Returns:

CSRTensor, a CSRTensor containing the following parts.

  • indptr - Indicates the start and end point for non-zero values in each row.

  • indices - The column positions of all non-zero values of the input.

  • values - The non-zero values of the dense tensor.

  • shape - The shape of the CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> import mindspore.ops as ops
>>> a_indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> a_indices = Tensor([0, 1], dtype=mstype.int32)
>>> a_values = Tensor([1, 2], dtype=mstype.float32)
>>> shape = (2, 6)
>>> b_indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> b_indices = Tensor([0, 1], dtype=mstype.int32)
>>> b_values = Tensor([1, 2], dtype=mstype.float32)
>>> alpha = Tensor(1, mstype.float32)
>>> beta = Tensor(1, mstype.float32)
>>> csra = CSRTensor(a_indptr, a_indices, a_values, shape)
>>> csrb = CSRTensor(b_indptr, b_indices, b_values, shape)
>>> out = ops.csr_add(csra, csrb, alpha, beta)
>>> print(out)
CSRTensor(shape=[2,6], dtype=Float32,
          indptr=Tensor(shape=[3], dtype=Int32, value = [0, 1, 2]),
          indices=Tensor(shape=[2], dtype=Int32, value = [0, 1]),
          values=Tensor(shape=[2], dtype=Float32, value = [2.0, 4.0]))
tinyms.primitives.csr_asin(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes arcsine of input csr_tensors element-wise.

\[out_i = sin^{-1}(x_i)\]
Parameters:

x (CSRTensor) – Input CSRTensor. The data types should be one of the following types: float16, float32, float64.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32, float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_asin(x)
>>> print(output.values)
[-1.5707964        nan]
tinyms.primitives.csr_asinh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes inverse hyperbolic sine of the input element-wise.

\[out_i = \sinh^{-1}(input_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of inverse hyperbolic sine function.

Returns:

CSRTensor, has the same shape and type as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_asinh(x)
>>> print(output.values)
[-0.8813736  1.4436355]
tinyms.primitives.csr_atan(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes the trigonometric inverse tangent of the input element-wise.

\[out_i = tan^{-1}(x_i)\]
Parameters:

x (CSRTensor) – The data type should be one of the following types: float16, float32.

Returns:

A CSRTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_atan(x)
>>> print(output.values)
[-0.7853982  1.1071488]
tinyms.primitives.csr_atanh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes inverse hyperbolic tangent of the input element-wise.

\[out_i = \tanh^{-1}(x_{i})\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

x (CSRTensor) – Input CSRTensor. The shape is \((N, *)\), where \(*\) means any number of additional dimensions. The data type should be one of the following types: float16, float32.

Returns:

A CSRTensor, has the same type as the input.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_atanh(x)
>>> print(output.values)
[-inf  nan]
tinyms.primitives.csr_ceil(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Rounds a CSRTensor up to the closest integer element-wise.

\[out_i = \lceil x_i \rceil\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of float16 or float32.

Returns:

CSRTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16 or float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_ceil(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.csr_cos(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes cosine of input element-wise.

\[out_i = cos(x_i)\]

Warning

Currently supports the float16 and float32 data types. Using float64 may cause a loss of precision.

Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32 or float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_cos(x)
>>> print(output.values)
[ 0.5403023  -0.41614684]
tinyms.primitives.csr_cosh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes hyperbolic cosine of input element-wise.

\[out_i = \cosh(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of hyperbolic cosine function, its data type must be float16, float32, float64, complex64 or complex128.

Returns:

CSRTensor, has the same shape as x.

Raises:
  • TypeError – If the dtype of x is not one of the following types: float16, float32, float64, complex64, complex128.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_cosh(x)
>>> print(output.values)
[1.5430807 3.7621956]
tinyms.primitives.csr_div(x: mindspore.common.sparse_tensor.CSRTensor, y: mindspore.common.tensor.Tensor) → mindspore.common.tensor.Tensor[source]

Returns x / y where x is CSRTensor and y is Tensor.

Note

This function returns a dense Tensor representing the non-zero values of the resulting CSRTensor. If a CSRTensor output is expected, use the / operator directly instead. Only broadcasting a dense tensor to a sparse tensor is supported at the moment.

Parameters:
  • x (CSRTensor) – Sparse CSR Tensor.

  • y (Tensor) – Dense Tensor, its shape must be able to broadcast to x.

Returns:

Dense Tensor, represents the non-zero values of the result.

Supported Platforms:

GPU CPU
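
Examples

A minimal usage sketch (hypothetical; it reuses the CSRTensor construction seen throughout this section, and the dense divisor is a full (3, 4) tensor of twos, though any shape broadcastable to x should also work): each stored non-zero value is divided by 2.

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> x = CSRTensor(indptr, indices, values, (3, 4))
>>> y = Tensor([[2.0] * 4] * 3, dtype=mstype.float32)
>>> print(ops.csr_div(x, y))
[-0.5  1. ]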

tinyms.primitives.csr_exp(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the exponential of a CSRTensor element-wise.

\[out_i = e^{x_i}\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_exp(x)
>>> print(output.values)
[0.36787948 7.3890557 ]
tinyms.primitives.csr_expm1(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns exponential then minus 1 of a CSRTensor element-wise.

\[out_i = e^{x_i} - 1\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of float16 or float32.

Returns:

CSRTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_expm1(x)
>>> print(output.values)
[-0.63212055  6.389056  ]
tinyms.primitives.csr_floor(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Rounds a CSRTensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Parameters:

x (CSRTensor) – The input CSRTensor, its data type must be float16, float32 or float64.

Returns:

CSRTensor, has the same shape as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_floor(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.csr_inv(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes Reciprocal of input CSRTensor element-wise.

\[out_i = \frac{1}{x_{i} }\]
Parameters:

x (CSRTensor) – Input CSRTensor. Must be one of the following types: float16, float32 or int32.

Returns:

CSRTensor, has the same type and shape as input shape value.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_inv(x)
>>> print(output.values)
[-1.   0.5]
tinyms.primitives.csr_isfinite(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True\ \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_isfinite(x)
>>> print(output.values)
[ True  True]
tinyms.primitives.csr_isinf(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Inf},\ \ True \\ & \text{ if } x_{i} \ne \text{Inf},\ \ False \end{cases}\end{split}\]

where \(Inf\) means positive or negative infinity.

Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_isinf(x)
>>> print(output.values)
[False False]
tinyms.primitives.csr_isnan(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Nan} \\ & \ False,\ \text{ if } x_{i} \ne \text{Nan} \end{cases}\end{split}\]

where \(Nan\) means not a number.

Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_isnan(x)
>>> print(output.values)
[False False]
tinyms.primitives.csr_log(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the natural logarithm of a CSRTensor element-wise.

\[y_i = log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

x (CSRTensor) – The value must be greater than 0.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32 or float64 on GPU and CPU.

  • TypeError – If dtype of x is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_log(x)
>>> print(output.values)
[       nan 0.69314575]
tinyms.primitives.csr_log1p(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns the natural logarithm of one plus the input CSRTensor element-wise.

\[out_i = {log_e}(x_i + 1)\]
Parameters:

x (CSRTensor) – The input CSRTensor, with float16 or float32 data type. Its values must be greater than -1.

Returns:

CSRTensor, has the same shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_log1p(x)
>>> print(output.values)
[     -inf 1.0986123]
tinyms.primitives.csr_mm(a: mindspore.common.sparse_tensor.CSRTensor, b: mindspore.common.sparse_tensor.CSRTensor, trans_a: bool = False, trans_b: bool = False, adjoint_a: bool = False, adjoint_b: bool = False)[source]

Returns the matrix multiplication result of a CSRTensor and a right-hand matrix (dense or CSRTensor). A CSRTensor of shape [M, N] must be paired with a right matrix of shape [N, K], giving a dense matrix or CSRTensor result of shape [M, K].

Note

Currently supports the GPU backend, with the right matrix being a CSRTensor.

Parameters:
  • a (CSRTensor) – Sparse CSR Tensor, rank should be 2.

  • b (CSRTensor) – Sparse CSR Tensor, rank should be 2.

  • trans_a (bool, optional) – whether to transpose CSRTensor a. Default: False.

  • trans_b (bool, optional) – whether to transpose CSRTensor b. Default: False.

  • adjoint_a (bool, optional) – whether to adjoint CSRTensor a. Default: False.

  • adjoint_b (bool, optional) – whether to adjoint CSRTensor b. Default: False.

Returns:

CSRTensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> a_shape = (4, 5)
>>> a_indptr = Tensor([0, 1, 1, 3, 4], dtype=mstype.int32)
>>> a_indices = Tensor([0, 3, 4, 0],dtype=mstype.int32)
>>> a_values = Tensor([1.0, 5.0, -1.0, -2.0], dtype=mstype.float32)
>>> b_shape = (5, 3)
>>> b_indptr = Tensor([0, 1, 1, 3, 3, 3], dtype=mstype.int32)
>>> b_indices = Tensor([0, 0, 1],dtype=mstype.int32)
>>> b_values = Tensor([2.0, 7.0, 8.0], dtype=mstype.float32)
>>> a = CSRTensor(a_indptr, a_indices, a_values, a_shape)
>>> b = CSRTensor(b_indptr, b_indices, b_values, b_shape)
>>> c = ops.csr_mm(a, b)
>>> print(c.shape)
(4, 3)
>>> print(c.values)
[2. -4.]
>>> print(c.indptr)
[0 1 1 1 2]
>>> print(c.indices)
[0 0]
tinyms.primitives.csr_mul(x: mindspore.common.sparse_tensor.CSRTensor, y: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns x * y where x is CSRTensor and y is Tensor.

Parameters:
  • x (CSRTensor) – Sparse CSR Tensor.

  • y (Tensor) – Dense Tensor, its shape must be able to broadcast to x.

Returns:

CSRTensor.

Supported Platforms:

GPU CPU
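
Examples

A minimal usage sketch (hypothetical, built like the csr_div sketch above; the dense factor is a full (3, 4) tensor of twos): each stored non-zero value is doubled, and a CSRTensor is returned.

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> x = CSRTensor(indptr, indices, values, (3, 4))
>>> y = Tensor([[2.0] * 4] * 3, dtype=mstype.float32)
>>> output = ops.csr_mul(x, y)
>>> print(output.values)
[-2.  4.]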

tinyms.primitives.csr_mv(csr_tensor: mindspore.common.sparse_tensor.CSRTensor, dense: mindspore.common.tensor.Tensor) → mindspore.common.tensor.Tensor[source]

Sparse matrix-vector multiplication.

Parameters:
  • csr_tensor (CSRTensor) – Sparse CSR Tensor.

  • dense (Tensor) – Dense Tensor.

Returns:

Dense Tensor.

Supported Platforms:

GPU CPU
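
Examples

A minimal usage sketch (hypothetical; the dense operand is assumed to be a column vector of shape (x.shape[1], 1), as befits a matrix-vector product): multiplying by an all-ones vector sums each row, so the rows holding -1 and 2 yield those values and the empty row yields 0.

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> x = CSRTensor(indptr, indices, values, (3, 4))
>>> dense = Tensor([[1.0], [1.0], [1.0], [1.0]], dtype=mstype.float32)
>>> print(ops.csr_mv(x, dense))
[[-1.]
 [ 2.]
 [ 0.]]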

tinyms.primitives.csr_neg(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns a CSRTensor with the negated values of the input CSRTensor element-wise.

\[out_{i} = - x_{i}\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of Number.

Returns:

CSRTensor, has the same shape and dtype as input.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_neg(x)
>>> print(output.values)
[ 1. -2.]
tinyms.primitives.csr_reduce_sum(csr_tensor: mindspore.common.sparse_tensor.CSRTensor, axis: int) → mindspore.common.tensor.Tensor[source]

Reduces a dimension of a CSRTensor by summing all elements in the dimension.

Parameters:
  • csr_tensor (CSRTensor) – Sparse CSR Tensor.

  • axis (int) – Axis to be reduced.

Returns:

Dense Tensor, represents the non-zero values of the result.

Supported Platforms:

GPU CPU
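
Examples

The original entry lacks an example; here is a minimal sketch (not from the source docs), assuming the reduction semantics above. Summing over axis 1 collapses each row of a \((3, 4)\) CSRTensor into a \((3, 1)\) dense Tensor:

>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
>>> import mindspore.ops as ops
>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> x = CSRTensor(indptr, indices, values, (3, 4))
>>> output = ops.csr_reduce_sum(x, 1)  # sum the stored values in each row
>>> print(output)
[[-1.]
 [ 2.]
 [ 0.]]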

tinyms.primitives.csr_relu(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes ReLU (Rectified Linear Unit activation function) of input csr_tensors element-wise.

It returns max(x, 0) element-wise. In other words, neurons with negative output are suppressed to zero, and active neurons stay the same.

\[ReLU(x) = (x)^+ = max(0, x)\]

Note

In general, this operator is used more commonly than ReLUV2. The difference is that ReLUV2 additionally outputs a mask.

Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, with the same dtype and shape as the x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_relu(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.csr_relu6(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input csr_tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Parameters:

x (CSRTensor) – Input CSRTensor, with float16 or float32 data type.

Returns:

CSRTensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_relu6(x)
>>> print(output.values)
[0. 2.]
tinyms.primitives.csr_round(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Rounds the values of a CSRTensor element-wise to the nearest integer, with ties rounded to the nearest even value (half-to-even rounding).

\[out_i \approx x_i\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape and type as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_round(x)
>>> print(output.values)
[-1.  2.]
tinyms.primitives.csr_sigmoid(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{csr_sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)}\]

where \(x_i\) is an element of the x.

Parameters:

x (CSRTensor) – Input CSRTensor, the data type is float16, float32, float64, complex64 or complex128.

Returns:

CSRTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is not float16, float32, float64, complex64 or complex128.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sigmoid(x)
>>> print(output.values)
[0.26894143 0.8807971 ]
tinyms.primitives.csr_sin(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes sine of the input element-wise.

\[out_i = sin(x_i)\]
Parameters:

x (CSRTensor) – Input CSRTensor.

Returns:

CSRTensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is not float16, float32 or float64, complex64,

complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sin(x)
>>> print(output.values)
[-0.84147096  0.9092974 ]
tinyms.primitives.csr_sinh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor of hyperbolic sine function.

Returns:

CSRTensor, has the same shape as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sinh(x)
>>> print(output.values)
[-1.1752012  3.6268604]
tinyms.primitives.csr_softmax(logits: mindspore.common.sparse_tensor.CSRTensor, dtype: mindspore.common.dtype)[source]

Calculates the softmax of a CSRTensor.

Parameters:
  • logits (CSRTensor) – Input sparse CSRTensor.

  • dtype (dtype) – Input data type.

Returns:

CSRTensor, a CSRTensor containing

  • indptr - Indicates the start and end point for non-zero values in each row.

  • indices - The column positions of all non-zero values of the input.

  • values - The non-zero values of the dense tensor.

  • shape - The shape of the CSRTensor.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
>>> logits_indptr = Tensor([0, 4, 6], dtype=mstype.int32)
>>> logits_indices = Tensor([0, 2, 3, 4, 3, 4], dtype=mstype.int32)
>>> logits_values = Tensor([1, 2, 3, 4, 1, 2], dtype=mstype.float32)
>>> shape = (2, 6)
>>> logits = CSRTensor(logits_indptr, logits_indices, logits_values, shape)
>>> out = ops.csr_softmax(logits, dtype=mstype.float32)
>>> print(out)
CSRTensor(shape=[2, 6], dtype=Float32, indptr=Tensor(shape=[3], dtype=Int32, value=[0 4 6]),
               indices=Tensor(shape=[6], dtype=Int32, value=[0 2 3 4 3 4]),
               values=Tensor(shape=[6], dtype=Float32, value=[ 3.20586003e-02  8.71443152e-02  2.36882806e-01
               6.43914223e-01  2.68941432e-01  7.31058598e-01]))
tinyms.primitives.csr_softsign(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{1 + |x|}\]
Parameters:

x (CSRTensor) – Input CSRTensor, with float16 or float32 data type.

Returns:

CSRTensor, with the same type and shape as the x.

Raises:
  • TypeError – If x is not a CSRTensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_softsign(x)
>>> print(output.values)
[-0.5        0.6666667]
tinyms.primitives.csr_sqrt(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns sqrt of a CSRTensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of Number.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_sqrt(x)
>>> print(output.values)
[      nan 1.4142135]
tinyms.primitives.csr_square(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Returns square of a CSRTensor element-wise.

\[out_{i} = (x_{i})^2\]
Parameters:

x (CSRTensor) – The input CSRTensor with a dtype of Number.

Returns:

CSRTensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_square(x)
>>> print(output.values)
[1. 4.]
tinyms.primitives.csr_tan(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes tangent of x element-wise.

\[out_i = tan(x_i)\]
Parameters:

x (CSRTensor) – The input CSRTensor.

Returns:

CSRTensor, has the same shape as x.

Raises:

TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_tan(x)
>>> print(output.values)
[-1.5574077 -2.1850398]
tinyms.primitives.csr_tanh(x: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input CSRTensor.

Parameters:

x (CSRTensor) – Input CSRTensor, with float16 or float32 data type.

Returns:

CSRTensor, with the same type and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a CSRTensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)
>>> indices = Tensor([3, 0], dtype=mstype.int32)
>>> values = Tensor([-1, 2], dtype=mstype.float32)
>>> shape = (3, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_tanh(x)
>>> print(output.values)
[-0.7615942  0.9640276]
tinyms.primitives.csr_to_coo(tensor: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.sparse_tensor.COOTensor[source]

Converts a CSRTensor to COOTensor.

Note

Only 2-D CSRTensor is supported for now.

Parameters:

tensor (CSRTensor) – A CSRTensor, must be 2-D.

Returns:

2D COOTensor, the input tensor stored in COO format.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, CSRTensor
>>> indptr = Tensor([0, 1, 2]).astype("int32")
>>> indices = Tensor([0, 1]).astype("int32")
>>> values = Tensor([2, 1]).astype("float32")
>>> shape = (2, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_to_coo(x)
>>> print(output.indices)
[[0 0]
 [1 1]]
tinyms.primitives.csr_to_dense(csr_tensor: mindspore.common.sparse_tensor.CSRTensor) → mindspore.common.tensor.Tensor[source]

Converts a CSRTensor to its dense form.

Note

Only 2-D CSRTensor is supported for now.

Parameters:

csr_tensor (CSRTensor) – A CSRTensor, must be 2-D.

Returns:

Tensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, CSRTensor, ops
>>> indptr = Tensor([0, 1, 2]).astype("int32")
>>> indices = Tensor([0, 1]).astype("int32")
>>> values = Tensor([2, 1]).astype("float32")
>>> shape = (2, 4)
>>> x = CSRTensor(indptr, indices, values, shape)
>>> output = ops.csr_to_dense(x)
>>> print(output)
[[2. 0. 0. 0.]
 [0. 1. 0. 0.]]
tinyms.primitives.ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True)[source]

Performs greedy decoding on the logits given in inputs.

Parameters:
  • inputs (Tensor) – The input Tensor must be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_classes)\). num_classes must be num_labels + 1 classes, num_labels indicates the number of actual labels. Blank labels are reserved. Default blank label is num_classes - 1. Data type must be float32 or float64.

  • sequence_length (Tensor) – A tensor containing sequence lengths with the shape of \((batch\_size, )\). The type must be int32. Each value in the tensor must be equal to or less than max_time.

  • merge_repeated (bool) – If true, merge repeated classes in output. Default: True.

Returns:

decoded_indices (Tensor), A tensor with shape of \((total\_decoded\_outputs, 2)\). Data type is int64.

decoded_values (Tensor), A tensor with shape of \((total\_decoded\_outputs, )\), it stores the decoded classes. Data type is int64.

decoded_shape (Tensor), A tensor with shape of \((batch\_size, max\_decoded\_length)\). Data type is int64.

log_probability (Tensor), A tensor with shape of \((batch\_size, 1)\), containing sequence log-probability, has the same type as inputs.

Raises:
  • TypeError – If merge_repeated is not a bool.

  • ValueError – If length of shape of inputs is not equal to 3.

  • ValueError – If length of shape of sequence_length is not equal to 1.

  • ValueError – If value in the sequence_length is larger than max_time.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = Tensor(np.array([[[0.6, 0.4, 0.2], [0.8, 0.6, 0.3]],
...                           [[0.0, 0.6, 0.0], [0.5, 0.4, 0.5]]]), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> decoded_indices, decoded_values, decoded_shape, log_probability = ops.ctc_greedy_decoder(inputs,
...                                                                                          sequence_length)
>>> print(decoded_indices)
[[0 0]
 [0 1]
 [1 0]]
>>> print(decoded_values)
[0 1 0]
>>> print(decoded_shape)
[2 2]
>>> print(log_probability)
[[-1.2]
 [-1.3]]
tinyms.primitives.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

CTC is a loss function for sequence labeling problems, mainly used to handle the alignment between input and output labels. While traditional sequence labeling algorithms require the input and output symbols to be perfectly aligned at each time step, CTC extends the label set with a blank element. After labeling the sequence with the extended label set, every prediction sequence that the mapping function can convert into the real sequence counts as a correct prediction result, so the prediction sequence can be obtained without aligning the data first. The objective function is to maximize the sum of the probabilities of all correct prediction sequences.

The CTC algorithm is proposed in Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks.

Parameters:
  • log_probs (Tensor) – A tensor of shape \((T, N, C)\), where T is input length, N is batch size and C is number of classes (including blank).

  • targets (Tensor) – Target sequences. A tensor of shape \((N, S)\), where S is max target length.

  • input_lengths (Union[tuple, Tensor]) – Lengths of the input. A tuple or Tensor of shape \((N,)\).

  • target_lengths (Union[tuple, Tensor]) – Lengths of the target. A tuple or Tensor of shape \((N,)\).

  • blank (int, optional) – The blank label. Default: 0.

  • reduction (str, optional) – The reduction applied to the output: ‘none’ (no reduction), ‘mean’ (take the mean), or ‘sum’ (sum the values). Default: ‘mean’.

  • zero_infinity (bool, optional) – Whether to set infinite loss and correlation gradient to 0. Default: False.

Returns:

neg_log_likelihood (Tensor), A loss value with shape \((N)\) , which is differentiable with respect to each input node.

log_alpha (Tensor), The probability of possible trace of input to target with shape \((N, T, 2 * S + 1)\) .

Raises:
  • TypeError – If zero_infinity is not a bool, reduction is not string.

  • TypeError – If the dtype of log_probs is not float or double.

  • TypeError – If the dtype of targets, input_lengths or target_lengths is not int32 or int64.

  • ValueError – If the rank of log_probs is not 3.

  • ValueError – If the rank of targets is not 2.

  • ValueError – If the shape of input_lengths does not match N. N is batch size of log_probs .

  • ValueError – If the shape of target_lengths does not match N. N is batch size of log_probs .

  • TypeError – If the types of targets, input_lengths or target_lengths are different.

  • ValueError – If the value of blank is not in range [0, num_labels|C). C is number of classes of log_probs .

  • RuntimeError – If any value of input_lengths is larger than T. T is the length of log_probs.

  • RuntimeError – If any target_lengths[i] is not in range [0, input_length[i]].

Supported Platforms:

Ascend GPU CPU

Examples

>>> log_probs = Tensor(np.array([[[0.3, 0.6, 0.6]],
...                              [[0.9, 0.4, 0.2]]]).astype(np.float32))
>>> targets = Tensor(np.array([[0, 1]]), mstype.int32)
>>> input_lengths = Tensor(np.array([2]), mstype.int32)
>>> target_lengths = Tensor(np.array([1]), mstype.int32)
>>> loss, log_alpha = ops.ctc_loss(log_probs, targets, input_lengths,
...                                target_lengths, 0, 'mean', True)
>>> print(loss)
-2.2986124
>>> print(log_alpha)
[[[0.3       0.3            -inf      -inf      -inf]
  [1.2       1.8931472 1.2            -inf      -inf]]]
tinyms.primitives.cummax(input, axis)[source]

Returns a tuple (values, indices), where values is the cumulative maximum of the input Tensor along the dimension axis, and indices is the index location of each maximum value.

\[\begin{split}\begin{array}{ll} \\ y_{i} = max(x_{1}, x_{2}, ... , x_{i}) \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input Tensor, rank of input > 0.

  • axis (int) – The dimension to do the operation over. The value of axis must be in the range [-input.ndim, input.ndim - 1].

Returns:

tuple[Tensor], a tuple of 2 Tensors containing the cumulative maximum of the elements and their indices. The shape of each output tensor is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If axis is out the range of [-input.ndim, input.ndim - 1].

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> output = ops.cummax(x, axis=0)
>>> print(output[0])
[[ 3.  4.  6. 10.]
 [ 3.  6.  7. 10.]
 [ 4.  6.  8. 10.]
 [ 4.  6.  8. 10.]]
>>> print(output[1])
[[0 0 0 0]
 [0 1 1 0]
 [2 1 2 0]
 [2 1 2 0]]
tinyms.primitives.cummin(input, axis)[source]

Returns a tuple (values, indices), where values is the cumulative minimum of the input Tensor along the dimension axis, and indices is the index location of each minimum value.

\[\begin{split}\begin{array}{ll} \\ y_{i} = min(x_{1}, x_{2}, ... , x_{i}) \end{array}\end{split}\]
Parameters:
  • input (Tensor) – The input Tensor, rank of input > 0.

  • axis (int) – The dimension to do the operation over. The value of axis must be in the range [-input.ndim, input.ndim - 1].

Returns:

tuple[Tensor], a tuple of 2 Tensors containing the cumulative minimum of the elements and their indices. The shape of each output tensor is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If axis is out the range of [-input.ndim, input.ndim - 1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> a = Tensor([-0.2284, -0.6628,  0.0975,  0.2680, -1.3298, -0.4220], mindspore.float32)
>>> output = ops.cummin(a, axis=0)
>>> print(output[0])
[-0.2284 -0.6628 -0.6628 -0.6628 -1.3298 -1.3298]
>>> print(output[1])
[0 1 1 1 4 4]
tinyms.primitives.cumprod(input, dim, dtype=None)[source]

Computes the cumulative product of the input tensor along dimension dim. For example, if input is a vector of size N, the result is also a vector of size N, with elements given by:

\[y_i = x_1 * x_2 * x_3 * ... * x_i\]
Parameters:
  • input (Tensor[Number]) – The input tensor of shape \((N,*)\), where \(*\) means any number of additional dimensions.

  • dim (int) – The dimensions to compute the cumulative product. Only constant value is allowed.

  • dtype (mindspore.dtype, optional) – The desired data type of output. If not specified, remains the same as the original Tensor. Default: None.

Returns:

Tensor, has the same shape and dtype as the input unless dtype is specified.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3], np.float32))
>>> output = ops.cumprod(x, 0)
>>> print(output)
[1. 2. 6.]
tinyms.primitives.cumsum(x, axis, dtype=None)[source]

Computes the cumulative sum of input Tensor along axis.

\[y_i = x_1 + x_2 + x_3 + ... + x_i\]

Note

On Ascend, the dtype of x only supports int8, uint8, int32, float16 or float32 for static shapes. For dynamic shapes, the dtype of x only supports int32, float16 or float32.

Parameters:
  • x (Tensor) – The input Tensor to accumulate.

  • axis (int) – Axis along which the cumulative sum is computed.

  • dtype (mindspore.dtype, optional) – The desired dtype of returned Tensor. If specified, the input Tensor will be cast to dtype before the computation. This is useful for preventing overflows. If not specified, stay the same as original Tensor. Default: None.

Returns:

Tensor, the shape of the output Tensor is consistent with the input Tensor’s.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([[3, 4, 6, 10], [1, 6, 7, 9], [4, 3, 8, 7], [1, 3, 7, 9]]).astype(np.float32))
>>> # case 1: along the axis 0
>>> y = ops.cumsum(x, 0)
>>> print(y)
[[ 3.  4.  6. 10.]
 [ 4. 10. 13. 19.]
 [ 8. 13. 21. 26.]
 [ 9. 16. 28. 35.]]
>>> # case 2: along the axis 1
>>> y = ops.cumsum(x, 1)
>>> print(y)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
tinyms.primitives.deformable_conv2d(x, weight, offsets, kernel_size, strides, padding, bias=None, dilations=(1, 1, 1, 1), groups=1, deformable_groups=1, modulated=True)[source]

Given 4D tensor inputs x, weight and offsets, compute a 2D deformable convolution. The deformable convolution operation can be expressed as follow:

Deformable Convolution v1:

\[y(p)=\sum_{k=1}^{K}w_{k}\cdot x(p+p_{k}+\Delta{p_{k}})\]

Deformable Convolution v2:

\[y(p)=\sum_{k=1}^{K}w_{k}\cdot x(p+p_{k}+\Delta{p_{k}})\cdot \Delta{m_{k}}\]

Where \(\Delta{p_{k}}\) and \(\Delta{m_{k}}\) are the learnable offset and modulation scalar for the k-th location. For details, please refer to Deformable ConvNets v2: More Deformable, Better Results and Deformable Convolutional Networks.

Parameters:
  • x (Tensor) – A 4D tensor of input image. With the format “NCHW”, the shape is \((N, C_{in}, H_{in}, W_{in})\). Dtype: float16 or float32.

  • weight (Tensor) – A 4D tensor of learnable filters. Must have the same type as x. The shape is \((C_{out}, C_{in} / groups, H_{f}, W_{f})\).

  • offsets (Tensor) – A 4D tensor of x-y coordinates offset and mask. With the format “NCHW”, the shape is \((batch, 3 * deformable\_groups * H_{f} * W_{f}, H_{out}, W_{out})\). Note the C dimension is stored in the order of (offset_x, offset_y, mask). Must have the same type as x.

  • kernel_size (tuple[int]) – A tuple of 2 integers. The size of kernel.

  • strides (tuple[int]) – A tuple of 4 integers. The stride of the sliding window for each dimension of input. The dimension order is interpreted according to the data format of x. The N and C dimensions must be set to 1.

  • padding (tuple[int]) – A tuple of 4 integers. The number of pixels to add to each (top, bottom, left, right) side of the input.

  • bias (Tensor, optional) – An 1D tensor of additive biases to the filter outputs. The shape is \((C_{out})\). Defaults to None.

  • dilations (tuple[int], optional) – A tuple of 4 integers. The dilation factor for each dimension of input. The dimension order is interpreted according to the data format of x. The N and C dimensions must be set to 1. Defaults to (1, 1, 1, 1).

  • groups (int, optional) – An integer of type int32. The number of blocked connections from input channels to output channels. In_channels and out_channels must both be divisible by groups. Defaults to 1.

  • deformable_groups (int, optional) – An integer of type int32. The number of deformable group partitions. In_channels must be divisible by deformable_groups. Defaults to 1.

  • modulated (bool, optional) – Specifies version of DeformableConv2D, True means v2, False means v1, currently only supports v2. Defaults to True.

Returns:

Tensor, A 4D Tensor of output feature map. With the same type as x. With the format “NCHW”, the shape is \((N, C_{out}, H_{out}, W_{out})\).

\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[0] + padding[1] - (H_{f} - 1) \times \text{dilations[2]} - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[2] + padding[3] - (W_{f} - 1) \times \text{dilations[3]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ \end{array}\end{split}\]

Raises:
  • TypeError – If strides, padding, kernel_size or dilations is not a tuple with integer elements.

  • TypeError – If modulated is not a bool.

  • ValueError – If the tuple size of strides, padding, kernel_size or dilations is not expected.

  • ValueError – If the N or C dimension of strides or dilations is not set to 1.

  • ValueError – If modulated is not set to True.

Warning

This is an experimental API that is subject to change or deletion.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones((4, 3, 10, 10)), mstype.float32)
>>> kh, kw = 3, 3
>>> weight = Tensor(np.ones((5, 3, kh, kw)), mstype.float32)
>>> offsets = Tensor(np.ones((4, 3 * kh * kw, 8, 8)), mstype.float32)
>>> output = ops.deformable_conv2d(x, weight, offsets, (kh, kw), (1, 1, 1, 1), (0, 0, 0, 0))
>>> print(output.shape)
(4, 5, 8, 8)
tinyms.primitives.deg2rad(x)[source]

Converts angles in degrees to angles in radians element-wise.

Parameters:

x (Tensor) – The input tensor. With float16, float32 or float64 data type.

Returns:

Tensor, has the same dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x isn’t float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[90.0, -90.0], [180.0, -180.0], [270.0, -270.0]]).astype(np.float32))
>>> output = ops.deg2rad(x)
>>> print(output)
[[ 1.5707964 -1.5707964]
 [ 3.1415927 -3.1415927]
 [ 4.712389  -4.712389 ]]
tinyms.primitives.dense_to_sparse_coo(tensor: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.COOTensor[source]

Convert a Tensor to COOTensor.

Note

Only 2-D tensor is supported for now.

Parameters:

tensor (Tensor) – A dense tensor, must be 2-D.

Returns:

COOTensor, a sparse representation of the original dense tensor, containing the following parts.

  • indices (Tensor): 2-D integer tensor, indicates the positions of values of the dense tensor.

  • values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.

  • shape (tuple(int)): the shape of the COOTensor, is the same as the original dense tensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore as ms
>>> x = Tensor([[1, 0], [-5, 0]], ms.float32)
>>> output = ops.dense_to_sparse_coo(x)
>>> print(output.indices)
[[0 0]
 [1 0]]
>>> print(output.values)
[ 1. -5.]
>>> print(output.shape)
(2, 2)
tinyms.primitives.dense_to_sparse_csr(tensor: mindspore.common.tensor.Tensor) → mindspore.common.sparse_tensor.CSRTensor[source]

Convert a Tensor to CSRTensor.

Note

Only 2-D tensor is supported for now.

Parameters:

tensor (Tensor) – A dense tensor, must be 2-D.

Returns:

CSRTensor, a sparse representation of the original dense tensor, containing the following parts.

  • indptr (Tensor): 1-D integer tensor, indicates the start and end point for values in each row.

  • indices (Tensor): 1-D integer tensor, indicates the column positions of all non-zero values of the input.

  • values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.

  • shape (tuple(int)): the shape of the CSRTensor, is the same as the original dense tensor.

Supported Platforms:

GPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore as ms
>>> x = Tensor([[1, 0], [-5, 0]], ms.float32)
>>> output = ops.dense_to_sparse_csr(x)
>>> print(output.indptr)
[0 1 2]
>>> print(output.indices)
[0 0]
>>> print(output.shape)
(2, 2)
tinyms.primitives.derivative(fn, primals, order)[source]

This function calculates the higher-order differentiation of a given composite function. To compute the order-th order derivatives, the original inputs and order must be provided together. In particular, the first-order derivative component of the input is set to 1, while the other components are set to 0.

Note

If primals is Tensor of int type, it will be converted to Tensor of float type.

Parameters:
  • fn (Union[Cell, function]) – Function to do TaylorOperation.

  • primals (Union[Tensor, tuple[Tensor]]) – The inputs to fn.

  • order (int) – For each Tensor, the order-th order of derivative of output with respect to the inputs will be figured out.

Returns:

Tuple, tuple of out_primals and out_series.

  • out_primals (Union[Tensor, list[Tensor]]) - The output of fn(primals).

  • out_series (Union[Tensor, list[Tensor]]) - The order-th order of derivative of output with respect to the inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.sin = ops.Sin()
...         self.exp = ops.Exp()
...     def construct(self, x):
...         out1 = self.sin(x)
...         out2 = self.exp(out1)
...         return out2
>>> primals = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> order = 3
>>> net = Net()
>>> out_primals, out_series = ops.derivative(net, primals, order)
>>> print(out_primals, out_series)
[[2.319777  2.4825778]
 [1.1515628 0.4691642]] [[-4.0515366   3.6724353 ]
 [ 0.5053504  -0.52061415]]
tinyms.primitives.det(input)[source]

Computes the determinant of one or more square matrices.

Parameters:

input (Tensor) – A matrix to be calculated, its shape should be \([..., M, M]\) who must have at least two dimensions, and the last two dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

Returns:

Tensor. The shape is \(input.shape[:-2]\), and the dtype is the same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input not float32, float64, complex64 or complex128.

  • ValueError – If the last two dimensions of input is not same size.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> output = ops.det(input)
>>> print(output)
[-16.5 21. ]

tinyms.primitives.diag(input)[source]

Constructs a diagonal tensor with a given diagonal values.

Assume input has dimensions \((D_1,... D_k)\) , the output is a tensor of rank 2k with dimensions \((D_1,..., D_k, D_1,..., D_k)\) where: \(output[i_1,..., i_k, i_1,..., i_k] = input[i_1,..., i_k]\) and 0 everywhere else.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, has the same dtype as the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> input_x = Tensor([1, 2, 3, 4]).astype('int32')
>>> output = ops.diag(input_x)
>>> print(output)
[[1 0 0 0]
 [0 2 0 0]
 [0 0 3 0]
 [0 0 0 4]]
tinyms.primitives.diag_embed(input, offset=0, dim1=-2, dim2=-1)[source]

Creates a tensor whose diagonals are filled by input; the remaining elements are filled with 0. If the shape of input is \([x_{0}, x_{1}, ..., x_{n-1}, x_{n}]\), the output shape is obtained by inserting \(x_{n}+|offset|\) into the vector \([x_{0}, x_{1}, ..., x_{n-1}]\) at positions dim1 and dim2.

Parameters:
  • input (Tensor) – Values to fill diagonal.

  • offset (int, optional) –

    Offset of the diagonal. \(offset=0\) refers to the main diagonal. Default: 0.

    • If \(offset>0\), fill the diagonals that are offset units upward from the main diagonal.

    • If \(offset<0\), fill the diagonals that are |offset| units downward from the main diagonal.

  • dim1 (int, optional) – The first dimension in input with respect to which to fill diagonal. Default: -2.

  • dim2 (int, optional) – The second dimension in input with respect to which to fill diagonal. Default: -1.

Returns:

Tensor, has the same dtype as input, but the shape of output is one dimension higher than the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not supported.

  • TypeError – If offset is not an int.

  • TypeError – If dim1 or dim2 is not an int.

  • ValueError – If the dimension of input is not 1D-6D.

  • ValueError – If dim1 is not in range of [-len(input.shape) - 1, len(input.shape)].

  • ValueError – If dim2 is not in range of [-len(input.shape) - 1, len(input.shape)].

  • ValueError – If dim1 and dim2 are identical.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2,3,4]), mindspore.float32)
>>> output = ops.diag_embed(x)
>>> print(output)
[[2. 0. 0.]
 [0. 3. 0.]
 [0. 0. 4.]]
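
A short continuation of the example above (not from the source docs), illustrating the offset parameter: with \(offset=1\) the values land on the first superdiagonal, and the trailing dimensions grow from 3 to \(3+|1|=4\):

>>> output = ops.diag_embed(x, offset=1)  # x = [2, 3, 4] from the example above
>>> print(output)
[[0. 2. 0. 0.]
 [0. 0. 3. 0.]
 [0. 0. 0. 4.]
 [0. 0. 0. 0.]]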
tinyms.primitives.diagflat(input, offset=0)[source]

Creates a 2-D Tensor whose diagonal is the flattened input.

Parameters:
  • input (Tensor) – Input Tensor, which is flattened and set as the diagonal of the output.

  • offset (int, optional) –

    offset controls which diagonal to choose. Default: 0.

    • When offset is zero, the diagonal chosen is the main diagonal.

    • When offset is a positive integer, the diagonal chosen is up the main diagonal.

    • When offset is a negative integer, the diagonal chosen is down the main diagonal.

Returns:

The 2-D Tensor, whose diagonal is the flattened input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 2], mindspore.float32)
>>> output = ops.diagflat(x, 1)
>>> print(output)
[[0. 1. 0.]
 [0. 0. 2.]
 [0. 0. 0.]]
tinyms.primitives.diagonal(input, offset=0, dim1=0, dim2=1)[source]

Returns specified diagonals of input.

If input is 2-D, returns the diagonal of input with the given offset. If input has more than two dimensions, the axes specified by dim1 and dim2 determine the 2-D sub-arrays whose diagonals are returned. In this case, the dim1 and dim2 dimensions of input are removed, and a new last dimension containing the diagonal elements is inserted.

Parameters:
  • input (Tensor) – Array from which the diagonals are taken.

  • offset (int, optional) – Offset of the diagonal from the main diagonal. Can be positive or negative. Default: 0.

  • dim1 (int, optional) – Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0).

  • dim2 (int, optional) – Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis (1).

Returns:

Tensor. If input is 2-D, a 1-D array containing the diagonal. If input.ndim > 2, the dimensions specified by dim1 and dim2 are removed, and a new axis corresponding to the diagonal is inserted at the end.

Raises:

ValueError – If the input tensor has fewer than two dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[0, 1], [2, 3]], mstype.float32)
>>> output = ops.diagonal(x)
>>> print(output)
[0. 3.]
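
A short continuation of the example above (not from the source docs), illustrating the offset parameter: positive offsets select superdiagonals, negative offsets select subdiagonals:

>>> print(ops.diagonal(x, offset=1))   # first superdiagonal of [[0, 1], [2, 3]]
[1.]
>>> print(ops.diagonal(x, offset=-1))  # first subdiagonal
[2.]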
tinyms.primitives.diff(x, n=1, axis=-1, prepend=None, append=None)[source]

Computes the n-th discrete difference along a specified axis of a given input x.

The first difference is calculated as \(out[i] = x[i+1] - x[i]\) along the specified axis. To compute higher differences, the function is called recursively using the output from the previous iteration as input.

Note

Zero-shaped Tensors are not supported; a ValueError is raised if an empty Tensor is encountered. A Tensor with any dimension of size 0 is considered empty; for example, Tensors with shapes \((0,)\) or \((1, 2, 0, 4)\) are all empty.

Parameters:
  • x (Tensor) – Input tensor. Signed integers are fully supported; floats and complex numbers are partially supported.

  • n (int, optional) – The number of times values are differenced. If zero, the input is returned as-is. Currently only 1 is supported. Default: 1.

  • axis (int, optional) – The axis along which the difference is taken, default is the last axis. Default: -1.

  • prepend (Tensor, optional) – Values to prepend to x along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axis. Otherwise the dimension and shape must match x except along axis. Default: None.

  • append (Tensor, optional) – Values to append to x along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array along all other axis. Otherwise the dimension and shape must match x except along axis. Default: None.

Returns:

Tensor, the n-th differences of input. The shape of the output is the same as x except along axis where the size is reduced by n. The type of the output is the same as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 3, -1, 0, 4])
>>> out = ops.diff(x)
>>> print(out.asnumpy())
[ 2 -4  1  4]
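
A short continuation of the example above (not from the source docs), illustrating the prepend parameter: the prepended value is concatenated along axis before differencing, so the output keeps the original length:

>>> out = ops.diff(x, prepend=Tensor([0]))  # differences of [0, 1, 3, -1, 0, 4]
>>> print(out.asnumpy())
[ 1  2 -4  1  4]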
tinyms.primitives.digamma(input)[source]

Computes the gradient of the lgamma function on input, i.e. the digamma function.

\[P(input) = grad(\ln \Gamma(input))\]

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

input (Tensor) – The input tensor. With type of float16 or float32 or float64.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1.5, 0.5, 9]).astype(np.float16))
>>> output = ops.digamma(x)
>>> print(output)
[ 0.0365 -1.964   2.14  ]
tinyms.primitives.dist(input, other, p=2)[source]

Computes the batched \(p\)-norm distance between each pair of row vectors in the two input collections.

Note

Since only normalization for integer \(p\)-normal form is supported in MindSpore, a type error will be raised if \(p\) is not an integer.

Parameters:
  • input (Tensor) – The first input tensor. The dtype must be float16 or float32.

  • other (Tensor) – The second input tensor. The dtype must be float16 or float32.

  • p (int, optional) – The order of norm. p is greater than or equal to 0. Default: 2.

Returns:

Tensor, has the same dtype as input, whose shape is \((1)\).

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If dtype of input or other is neither float16 nor float32.

  • TypeError – If p is not a non-negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1.0, 1.0], [2.0, 2.0]]])
>>> input_y = Tensor([[[3.0, 3.0], [3.0, 3.0]]])
>>> out = ops.dist(input_x, input_y)
>>> print(out.asnumpy())
3.1622777
tinyms.primitives.div(input, other, *, rounding_mode=None)[source]

Divides the first input tensor by the second input tensor in floating-point type element-wise.

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = input_{i} / other_{i}\]
Parameters:
  • input (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • other (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Keyword Arguments:

rounding_mode (str, optional) –

Type of rounding applied to the result. Three types are defined as,

  • None: Default behavior, which is the same as true division in Python or true_divide in NumPy.

  • ”floor”: Rounds the division of the inputs down, which is the same as floor division in Python or floor_divide in NumPy.

  • ”trunc”: Rounds the division of the inputs towards zero, which is the same as C-style integer division.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and other is not one of the following: Tensor, Number, bool.

  • ValueError – If rounding_mode value is not None, “floor” or “trunc”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> output = ops.div(x, y)
>>> print(output)
[0.25 0.4 0.5]
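
A short continuation (not from the source docs), illustrating the rounding_mode keyword described above; ‘floor’ rounds the quotient down while ‘trunc’ rounds it towards zero, which differ for negative results:

>>> x = Tensor(np.array([7.0, -7.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 2.0]), mindspore.float32)
>>> print(ops.div(x, y, rounding_mode='floor'))  # floor(3.5) = 3, floor(-3.5) = -4
[ 3. -4.]
>>> print(ops.div(x, y, rounding_mode='trunc'))  # trunc(3.5) = 3, trunc(-3.5) = -3
[ 3. -3.]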
tinyms.primitives.divide(input, other, *, rounding_mode=None)[source]

Alias for mindspore.ops.div() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.dot(input, other)[source]

Computes the dot product between samples in two tensors.

Parameters:
  • input (Tensor) – First tensor in Dot op with datatype float16 or float32, The rank must be greater than or equal to 2.

  • other (Tensor) – Second tensor in Dot op with datatype float16 or float32, The rank must be greater than or equal to 2.

Returns:

Tensor, dot product of input and other.

Raises:
  • TypeError – If type of input and other are not the same.

  • TypeError – If dtype of input or other is not float16 or float32.

  • ValueError – If rank of input or other less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.ones(shape=[2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[3. 3.]]
 [[3. 3.]]]
>>> print(output.shape)
(2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]]
  [[3. 3.]]]]
>>> print(output.shape)
(1, 2, 1, 2)
>>> input = Tensor(np.ones(shape=[1, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[3. 3.]
   [3. 3.]]
  [[3. 3.]
   [3. 3.]]]]
>>> print(output.shape)
(1, 2, 2, 2)
>>> input = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> other = Tensor(np.ones(shape=[2, 1, 3, 2]), mindspore.float32)
>>> output = ops.dot(input, other)
>>> print(output)
[[[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]
 [[[[3. 3.]]
   [[3. 3.]]]
  [[[3. 3.]]
   [[3. 3.]]]]]
>>> print(output.shape)
(3, 2, 2, 1, 2)
tinyms.primitives.dropout(input, p=0.5, training=True, seed=None)[source]

During training, randomly zeroes some of the elements of the input tensor with probability p from a Bernoulli distribution. This reduces neuron correlation and helps avoid overfitting. The meaning of the probability here is the opposite of that in ops.Dropout and nn.Dropout.

Parameters:
  • input (Tensor) – The input of Dropout, a Tensor of any shape with data type of float16 or float32.

  • p (float, optional) – The dropping rate, between 0 and 1, e.g. p = 0.1, means dropping out 10% of input units. Default: 0.5.

  • training (bool) – Apply dropout if is True. Default: True.

  • seed (int, optional) – The seed used as the entropy source for the random number engine that generates pseudo-random numbers. Default: None, which is treated as 0.

Returns:

  • output (Tensor) - Zeroed tensor, with the same shape and data type as input.

Raises:
  • TypeError – If p is not a float.

  • TypeError – If dtype of input is neither float16 nor float32.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(((20, 16), (50, 50)), mindspore.float32)
>>> output = ops.dropout(input, p=0.5)
>>> print(output.shape)
(2, 2)
tinyms.primitives.dropout1d(input, p=0.5, training=True)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 3-dimensional tensor with shape \(NCL\), the channel feature map refers to a 1-dimensional feature map with shape \(L\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 1D tensor input[i,j]. Each channel is zeroed out independently on every forward call, based on the Bernoulli distribution with probability p.

This technique is described in the paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting, where it is shown to effectively reduce overfitting and prevent neuron co-adaptation. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors .

dropout1d can improve the independence between channel feature maps.

Parameters:
  • input (Tensor) – A tensor with shape \((N, C, L)\) or \((C, L)\), where N is the batch size, C is the number of channels, L is the feature length. The data type must be int8, int16, int32, int64, float16, float32 or float64.

  • p (float, optional) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means an 80% chance of clearing. Default: 0.5.

  • training (bool, optional) – Apply dropout if is True. Default: True.

Returns:

Tensor, output, with the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the data type of p is not float.

  • ValueError – If p is out of the range [0.0, 1.0].

  • ValueError – If input shape is not 2D or 3D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.random.randn(4, 3), mindspore.float32)
>>> output = ops.dropout1d(input_x, 0.5)
>>> print(output.shape)
(4, 3)
tinyms.primitives.dropout2d(input, p=0.5, training=True)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 4-dimensional tensor with shape \(NCHW\), the channel feature map refers to a 2-dimensional feature map with shape \(HW\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 2D tensor input[i,j]. Each channel is zeroed out independently on every forward call, based on the Bernoulli distribution with probability p. This technique is described in the paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting, where it is shown to effectively reduce overfitting and prevent neuron co-adaptation. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors .

dropout2d can improve the independence between channel feature maps.

Parameters:
  • input (Tensor) – A 4D tensor with shape \((N, C, H, W)\), where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width. The data type must be int8, int16, int32, int64, float16, float32 or float64.

  • p (float) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means dropping out 80% of channels. Default: 0.5.

  • training (bool) – If training is True, applying dropout, otherwise, not applying. Default: True.

Returns:

Tensor, output, with the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not int8, int16, int32, int64, float16, float32 or float64.

  • TypeError – If the data type of p is not float.

  • ValueError – If p is out of the range [0.0, 1.0].

  • ValueError – If input shape is not 4D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output = ops.dropout2d(input, 0.5)
>>> print(output.shape)
(2, 1, 2, 3)
tinyms.primitives.dropout3d(input, p=0.5, training=True)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 5-dimensional tensor with shape \(NCDHW\), the channel feature map refers to a 3-dimensional feature map with shape \(DHW\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 3D tensor input[i,j]. Each channel is zeroed out independently on every forward call, based on the Bernoulli distribution with probability p.

dropout3d can improve the independence between channel feature maps.

Parameters:
  • input (Tensor) – A 5D tensor with shape \((N, C, D, H, W)\), where N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width. The data type must be int8, int16, int32, int64, float16, float32 or float64.

  • p (float) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means dropping out 80% of channels. Default: 0.5.

  • training (bool) – If training is True, applying dropout, otherwise, not applying. Default: True.

Returns:

Tensor, output, with the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not int8, int16, int32, int64, float16, float32 or float64.

  • TypeError – If the data type of p is not float.

  • ValueError – If p is out of the range [0.0, 1.0].

  • ValueError – If input shape is not 5D.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
>>> output = ops.dropout3d(input, 0.5)
>>> print(output.shape)
(2, 1, 2, 1, 2)
tinyms.primitives.dsplit(input, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors along the 3rd axis. It is equivalent to ops.tensor_split with \(axis=2\) .

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – See argument in mindspore.ops.tensor_split().

Returns:

A list of sub-tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(6).reshape((1, 2, 3)).astype('float32')
>>> output = ops.dsplit(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[1, 2, 1], dtype=Float32, value=[[[ 0.00000000e+00], [ 3.00000000e+00]]]),
 Tensor(shape=[1, 2, 1], dtype=Float32, value=[[[ 1.00000000e+00], [ 4.00000000e+00]]]),
 Tensor(shape=[1, 2, 1], dtype=Float32, value=[[[ 2.00000000e+00], [ 5.00000000e+00]]]))
tinyms.primitives.dstack(inputs)[source]

Stacks tensors along the third axis.

1-D tensors of shape \((N,)\) are reshaped to \((1,N,1)\), and 2-D tensors of shape \((M,N)\) are reshaped to \((M,N,1)\) before concatenation.

Parameters:

inputs (Union(List[Tensor], Tuple[Tensor])) – A sequence of tensors. The tensors must have the same shape along all but the third axis. 1-D or 2-D tensors must have the same shape.

Returns:

Stacked Tensor, will be at least 3-D. The output shape is similar to the output of numpy.dstack() function.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.arange(1, 7).reshape(2, 3))
>>> x2 = Tensor(np.arange(7, 13).reshape(2, 3))
>>> out = ops.dstack([x1, x2])
>>> print(out.asnumpy())
[[[ 1.  7.]
  [ 2.  8.]
  [ 3.  9.]]
 [[ 4. 10.]
  [ 5. 11.]
  [ 6. 12.]]]
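
A short continuation (not from the source docs) illustrating the 1-D case described above, assuming 1-D inputs are promoted automatically: two \((3,)\) tensors are reshaped to \((1, 3, 1)\) and stacked into a \((1, 3, 2)\) result:

>>> a = Tensor(np.array([1., 2., 3.]))
>>> b = Tensor(np.array([4., 5., 6.]))
>>> out = ops.dstack([a, b])
>>> print(out.shape)
(1, 3, 2)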
tinyms.primitives.dyn_shape(input_x)[source]

Returns the shape of the input tensor.

Parameters:

input_x (Tensor) – The input Tensor.

Returns:

Tensor, the shape of input_x .

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> output = ops.dyn_shape(input_x)
>>> print(output)
[3 2 1]
tinyms.primitives.eig(A)[source]

Computes the eigenvalues and eigenvectors of a square matrix(batch square matrices).

Warning

This is an experimental API that is subject to change or deletion.

Parameters:

A (Tensor) – Square matrices of shape \((*, N, N)\), with float32, float64, complex64 or complex128 data type.

Returns:

  • eigen_values (Tensor) - Shape \((*, N)\). eigenvalues of the corresponding matrix. The eigenvalues may not have an order.

  • eigen_vectors (Tensor) - Shape \((*, N, N)\). The columns of the eigenvectors represent normalized (unit-length) eigenvectors of the corresponding eigenvalues.

Raises:
  • TypeError – If dtype of A is not one of: float64, float32, complex64 or complex128.

  • TypeError – If A is not a Tensor.

  • ValueError – If A is not a square(batch squares).

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[1.0, 0.0], [0.0, 2.0]]), mindspore.float32)
>>> u, v = ops.eig(input_x)
>>> print(u)
[1.+0.j 2.+0.j]
>>> print(v)
[[1.+0.j 0.+0.j]
 [0.+0.j 1.+0.j]]
tinyms.primitives.einsum(equation, *operands)[source]

According to the Einstein summation convention (einsum), the products of input tensor elements are summed along the specified dimensions. You can use this operator to perform diagonal, reduce-sum, transpose, matmul, mul, inner product operations, etc.

Note

The sublist format is also supported. For example, ops.einsum(op1, sublist1, op2, sublist2, …, sublist_out). In this format, equation can be derived by the sublists which are made up of Python’s Ellipsis and list of integers in [0, 52). Each operand is followed by a sublist and an output sublist is at the end.

Parameters:
  • equation (str) – Notation based on the Einstein summation convention that represents the operation you want to perform. The value can contain only letters, commas, an ellipsis and an arrow. The letters represent input tensor dimensions, commas separate tensors, the ellipsis indicates tensor dimensions that you do not care about, the left of the arrow indicates the input tensors, and the right of it indicates the desired output dimensions.

  • operands (Tensor) – Input tensor used for calculation. The dtype of the tensor must be the same.

Returns:

Tensor, the shape of it can be obtained from the equation , and the dtype is the same as input tensors.

Raises:
  • TypeError – If equation is invalid, or the equation does not match the input tensor.

  • ValueError – If the number in sublist is not in [0, 52) in sublist format.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> equation = "i->"
>>> output = ops.einsum(equation, x)
>>> print(output)
[7.]
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> equation = "i,i->i"
>>> output = ops.einsum(equation, x, y)
>>> print(output)
[ 2.  8. 12.]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> y = Tensor(np.array([[2.0, 3.0], [1.0, 2.0], [4.0, 5.0]]), mindspore.float32)
>>> equation = "ij,jk->ik"
>>> output = ops.einsum(equation, x, y)
>>> print(output)
[[16. 22.]
 [37. 52.]]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->ji"
>>> output = ops.einsum(equation, x)
>>> print(output)
[[1. 4.]
 [2. 5.]
 [3. 6.]]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "ij->j"
>>> output = ops.einsum(equation, x)
>>> print(output)
[5. 7. 9.]
>>> x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> equation = "...->"
>>> output = ops.einsum(equation, x)
>>> print(output)
[21.]
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 1.0]), mindspore.float32)
>>> equation = "j,i->ji"
>>> output = ops.einsum(equation, x, y)
>>> print(output)
[[ 2.  4.  1.]
 [ 4.  8.  2.]
 [ 6. 12.  3.]]
>>> x = mindspore.Tensor([1, 2, 3, 4], mindspore.float32)
>>> y = mindspore.Tensor([1, 2], mindspore.float32)
>>> output = ops.einsum(x, [..., 1], y, [..., 2], [..., 1, 2])
>>> print(output)
[[1. 2.]
 [2. 4.]
 [3. 6.]
 [4. 8.]]
tinyms.primitives.elu(input_x, alpha=1.0)[source]

Exponential Linear Unit activation function.

Applies the exponential linear unit function element-wise. The activation function is defined as:

\[\begin{split}\text{ELU}(x)= \left\{ \begin{array}{align} \alpha(e^{x} - 1) & \text{if } x \le 0\\ x & \text{if } x \gt 0\\ \end{array}\right.\end{split}\]

Where \(x\) is an element of the input Tensor input_x and \(\alpha\) is the alpha parameter, which determines the smoothness of ELU.

Parameters:
  • input_x (Tensor) – The input of ELU is a Tensor of any dimension with data type of float16 or float32.

  • alpha (float, optional) – The alpha value of ELU, a float. Only 1.0 is currently supported. Default: 1.0.

Returns:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If alpha is not a float.

  • TypeError – If dtype of input_x is neither float16 nor float32.

  • ValueError – If alpha is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.elu(x)
>>> print(output)
[[-0.63212055  4.         -0.99966455]
 [ 2.         -0.99326205  9.        ]]
tinyms.primitives.equal(input, other)[source]

Computes the equivalence between two tensors element-wise.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i} = y_{i} \\ & \text{False, if } x_{i} \ne y_{i} \end{cases}\end{split}\]

Note

  • input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, the shapes of them could be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, Number]) – The first input is a number or a tensor whose data type is number.

  • other (Union[Tensor, Number]) – The second input is a number when the first input is a tensor or a tensor whose data type is number. The data type is the same as the first input.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: The shape of two inputs are different
>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> output = ops.equal(x, 2.0)
>>> print(output)
[False True False]
>>> # case 2: The shape of two inputs are the same
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> output = ops.equal(x, y)
>>> print(output)
[ True  True False]
tinyms.primitives.erf(input)[source]

Computes the Gauss error function of input element-wise.

\[erf(x)=\frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Parameters:

input (Tensor) – The input tensor of the Gaussian error function. Its data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> output = ops.erf(x)
>>> print(output)
[-0.8427168   0.          0.8427168   0.99530876  0.99997765]
tinyms.primitives.erfc(input)[source]

Computes the complementary error function of input element-wise.

\[erfc(x) = 1 - \frac{2} {\sqrt{\pi}} \int\limits_0^{x} e^{-t^{2}} dt\]
Parameters:

input (Tensor) – The input tensor with a dtype of float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> output = ops.erfc(x)
>>> print(output)
[1.8427168e+00 1.0000000e+00 1.5728319e-01 4.6912432e-03 2.2351742e-05]
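Since \(erfc(x) = 1 - erf(x)\), the two functions can be cross-checked against each other; a minimal sketch (the tolerance check is illustrative):

>>> # erf(x) + erfc(x) should equal 1 element-wise, up to float rounding
>>> x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> total = ops.erf(x) + ops.erfc(x)
>>> print(np.allclose(total.asnumpy(), 1.0))
True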
tinyms.primitives.erfinv(input)[source]

Returns the result of the inverse error function with input, which is defined in the range (-1, 1) as:

\[erfinv(erf(x)) = x\]

where \(x\) is the input.

Parameters:

input (Tensor) – The input tensor, with data type float16, float32 or float64.

Returns:

Tensor, has the same shape and dtype as input.

Raises:

TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)
>>> output = ops.erfinv(x)
>>> print(output)
[ 0.          0.47695306 -1.1630805 ]
tinyms.primitives.exp(input)[source]

Returns exponential of a tensor element-wise.

\[out_i = e^{x_i}\]
Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.exp(x)
>>> print(output)
[ 2.718282  7.389056 54.598152]
tinyms.primitives.exp2(input)[source]

Computes base two exponential of Tensor input element-wise.

\[out_i = 2^{input_i}\]
Parameters:

input (Tensor) – Input tensor.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 3, 4]), mindspore.float32)
>>> output = ops.exp2(x)
>>> print(output)
[ 4.  8. 16.]
tinyms.primitives.expand(input_x, size)[source]

Returns a new tensor where singleton dimensions of the input are expanded to a larger size.

Note

  • If the size for a dimension is -1, it means no change for the size of that dimension.

  • When a Tensor is expanded to a larger number of dimensions, the new dimensions are prepended at the front, and for these new dimensions the size cannot be -1.

Parameters:
  • input_x (Tensor) – A Tensor to be expanded. The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • size (Tensor) – The expanded shape of input_x.

Returns:

y (Tensor) - Tensor after expansion whose shape is size.

Raises:
  • TypeError – If input_x or size is not Tensor.

  • TypeError – If the type of size is not one of the following dtype: int16, int32, int64.

  • ValueError – If the size of size is less than the size of input_x.shape.

  • ValueError – If size is not a 1-D tensor.

  • ValueError – If the expanded size is not equal to the existing shape of input_x at a dimension that is not 1.

  • ValueError – If the expanded size < 0 and it is in a leading position, corresponding to a non-existing dimension in input_x.

  • ValueError – If the number of elements of output is more than 1000000.

Supported Platforms:

Ascend CPU

Examples

>>> input_x = Tensor(np.array([[2], [3], [4]]), mindspore.float32)
>>> size = Tensor(np.array([3,4]), mindspore.int32)
>>> y = ops.expand(input_x, size)
>>> print(y)
[[2. 2. 2. 2.]
 [3. 3. 3. 3.]
 [4. 4. 4. 4.]]
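When size has more entries than input_x has dimensions, new leading dimensions are prepended, as the Note above describes; a minimal sketch:

>>> # (3, 1) is broadcast to (3, 4), then a new leading dimension of 2 is added
>>> input_x = Tensor(np.array([[2], [3], [4]]), mindspore.float32)
>>> size = Tensor(np.array([2, 3, 4]), mindspore.int32)
>>> print(ops.expand(input_x, size).shape)
(2, 3, 4)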
tinyms.primitives.expand_dims(input_x, axis)[source]

Adds an additional dimension to input_x at the given axis.

Note

If the specified axis is a negative number, the index is counted backward from the end and starts at 1.

Parameters:
  • input_x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (int) – Specifies the dimension index at which to expand the shape of input_x. The value of axis must be in the range [-input_x.ndim-1, input_x.ndim]. Only constant value is allowed.

Returns:

Tensor, the shape of tensor is \((1, x_1, x_2, ..., x_R)\) if the value of axis is 0. It has the same data type as input_x.

Raises:
  • TypeError – If axis is not an int.

  • ValueError – If axis is not in the valid range \([-a.ndim-1, a.ndim]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.expand_dims(input_tensor, 0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
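A negative axis counts backward from the end, so -1 appends a trailing dimension; a minimal sketch reusing input_tensor above:

>>> # (2, 2) becomes (2, 2, 1)
>>> output = ops.expand_dims(input_tensor, -1)
>>> print(output.shape)
(2, 2, 1)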
tinyms.primitives.expm1(input)[source]

Returns exponential then minus 1 of a tensor element-wise.

\[out_i = e^{x_i} - 1\]
Parameters:

input (Tensor) – The input tensor with a dtype of float16 or float32.

Returns:

Tensor, has the same shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.0, 1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.expm1(x)
>>> print(output)
[ 0.        1.718282  6.389056 53.598152]
tinyms.primitives.eye(n, m=None, dtype=None)[source]

Creates a tensor with ones on the diagonal and zeros in the rest.

Note

Combine with the ReverseV2 operator to obtain an anti-diagonal Tensor; note that ReverseV2 currently supports only the Ascend and GPU platforms.

Parameters:
  • n (int) – The number of rows of returned tensor. Constant value only.

  • m (int) – The number of columns of returned tensor. Constant value only. Default: if None, the number of columns is as the same as n.

  • dtype (mindspore.dtype) – MindSpore’s dtype, the data type of the returned tensor. The data type can be bool or Number. Default: None, the data type of the returned tensor is mindspore.float32.

Returns:

Tensor, a tensor with ones on the diagonal and zeros elsewhere. The shape of the output depends on the inputs n and m, and its data type depends on the input dtype.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.eye(2, 2, mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = ops.eye(1, 2, mindspore.float64)
>>> print(output)
[[1. 0.]]
>>> print(output.dtype)
Float64
>>> output = ops.eye(2, dtype=mindspore.int32)
>>> print(output)
[[1 0]
 [0 1]]
>>> print(output.dtype)
Int32
>>> output = ops.eye(2)
>>> print(output)
[[1. 0.]
 [0. 1.]]
>>> print(output.dtype)
Float32
tinyms.primitives.fast_gelu(x)[source]

Fast Gaussian Error Linear Units activation function.

FastGeLU is defined as follows:

\[\text{output} = \frac {x} {1 + \exp(-1.702 * \left| x \right|)} * \exp(0.851 * (x - \left| x \right|)),\]

where \(x\) is the element of the input.

Parameters:

x (Tensor) – Input to compute the FastGeLU with data type of float16 or float32.

Returns:

Tensor, with the same type and shape as x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.fast_gelu(x)
>>> print(output)
[[-1.5418735e-01  3.9921875e+00 -9.7473649e-06]
 [ 1.9375000e+00 -1.0052517e-03  8.9824219e+00]]
tinyms.primitives.fill(type, shape, value)[source]

Create a Tensor of the specified shape and fill it with the specified value.

Parameters:
  • type (mindspore.dtype) –

    The specified type of output tensor. The data type only supports bool_ and number .

  • shape (Union(Tensor, tuple[int])) – The specified shape of output tensor.

  • value (Union(Tensor, number.Number, bool)) – Value to fill the returned tensor.

Returns:

Tensor.

Raises:

TypeError – If shape is not a tuple or a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.fill(mindspore.float32, (2, 2), 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ops.fill(mindspore.float32, (3, 3), 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
tinyms.primitives.fills(x, value)[source]

fills is deprecated, please use ops.fill instead.

tinyms.primitives.flatten(input, order='C', *, start_dim=1, end_dim=-1)[source]

Flatten a tensor along dimensions from start_dim to end_dim.

Parameters:
  • input (Tensor) – The input Tensor.

  • order (str, optional) – Only ‘C’ and ‘F’ are supported. ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. Default: ‘C’.

Keyword Arguments:
  • start_dim (int, optional) – The first dimension to flatten. Default: 1.

  • end_dim (int, optional) – The last dimension to flatten. Default: -1.

Returns:

Tensor. If no dimensions are flattened, returns the original input, otherwise return the flattened Tensor. If input is a 0-dimensional Tensor, a 1-dimensional Tensor will be returned.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If order is not string type.

  • ValueError – If order is string type, but not ‘C’ or ‘F’.

  • TypeError – If start_dim or end_dim is not int.

  • ValueError – If start_dim is greater than end_dim after canonicalization.

  • ValueError – If start_dim or end_dim is not in range of [-input.dim, input.dim-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> output = ops.flatten(input_x)
>>> print(output.shape)
(1, 24)
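start_dim and end_dim allow flattening only a sub-range of dimensions; a minimal sketch reusing input_x above:

>>> # flatten only dimensions 2 and onward: (1, 2, 3, 4) -> (1, 2, 12)
>>> output = ops.flatten(input_x, start_dim=2)
>>> print(output.shape)
(1, 2, 12)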
tinyms.primitives.flip(input, dims)[source]

Reverses the order of elements in a tensor along the given axis.

The shape of the tensor is preserved, but the elements are reordered.

Parameters:
  • input (Tensor) – Input tensor.

  • dims (Union[list[int], tuple[int]]) – The axis or axes along which to flip. Flipping is performed on all the axes specified in the tuple. If dims contains negative integers, the axis is counted from the last to the first.

Returns:

Tensor, with the entries of dims reversed.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.arange(1, 9).reshape((2, 2, 2)))
>>> output = ops.flip(input, (0, 2))
>>> print(output)
[[[6 5]
  [8 7]]
 [[2 1]
  [4 3]]]
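Negative entries in dims count from the last axis, as described above; a minimal sketch reusing input:

>>> # -1 refers to the last axis of the (2, 2, 2) tensor
>>> output = ops.flip(input, (-1,))
>>> print(output)
[[[2 1]
  [4 3]]
 [[6 5]
  [8 7]]]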
tinyms.primitives.fliplr(input)[source]

Flips the elements of each row in the left/right direction, while preserving the columns of the input tensor.

Parameters:

input (Tensor) – Input tensor.

Returns:

Tensor after the flip.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.arange(1, 9).reshape((2, 2, 2)))
>>> output = ops.fliplr(input)
>>> print(output)
[[[3 4]
  [1 2]]
 [[7 8]
  [5 6]]]
tinyms.primitives.flipud(input)[source]

Flips the elements of each column in the up/down direction, while preserving the rows of the input tensor.

Parameters:

input (Tensor) – Input array.

Returns:

Tensor after the flip.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.arange(1, 9).reshape((2, 2, 2)))
>>> output = ops.flipud(input)
>>> print(output)
[[[5 6]
  [7 8]]
 [[1 2]
  [3 4]]]
tinyms.primitives.float_power(input, exponent)[source]

Computes input to the power of exponent. For real number types, input and exponent are cast to mindspore.float64 for the calculation. Complex-type calculation is currently not supported.

Parameters:
  • input (Union[Tensor, Number]) – The first input is a tensor or a number.

  • exponent (Union[Tensor, Number]) – The second input, if the first input is Tensor, the second input can be Number or Tensor. Otherwise, it must be a Tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting. For the complex type, the return value type is the same as the input type. For the real number type, the return value type is mindspore.float64.

Raises:
  • TypeError – If neither input nor exponent is a Tensor.

  • TypeError – If the data type of input or exponent is not in Tensor and Number.

Supported Platforms:

GPU CPU

Examples

>>> input = Tensor(np.array([-1.5, 0., 2.]))
>>> output = ops.float_power(input, 2)
>>> print(output)
[2.25 0.   4.  ]
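The promotion of real inputs to mindspore.float64 can be observed on the previous output; a minimal sketch:

>>> # real-number inputs are cast to float64 for the calculation
>>> print(output.dtype)
Float64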
tinyms.primitives.floor(input)[source]

Rounds a tensor down to the closest integer element-wise.

\[out_i = \lfloor x_i \rfloor\]
Parameters:

input (Tensor) – The input tensor, its data type must be float16, float32 or float64.

Returns:

Tensor, has the same shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not in [float16, float32, float64].

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> output = ops.floor(x)
>>> print(output)
[ 1.  2. -2.]
tinyms.primitives.floor_div(x, y)[source]

Divides the first input tensor by the second input tensor element-wise and round down to the closest integer.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = \text{floor}( \frac{x_i}{y_i})\]

where the \(floor\) indicates the Floor operator, for more details, please refer to the mindspore.ops.Floor operator.

Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.floor_div(x, y)
>>> print(output)
[ 0  1 -1]
tinyms.primitives.floor_mod(x, y)[source]

Computes the remainder of division element-wise. It’s a flooring divide. E.g. \(floor(x / y) * y + mod(x, y) = x\).

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be both bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[out_{i} = x_{i} - \text{floor}(x_{i} / y_{i}) \cdot y_{i}\]

where the \(floor\) indicates the Floor operator, for more details, please refer to the mindspore.ops.Floor operator.

Warning

  • Data of input y should not be 0, or the maximum value of its dtype will be returned.

  • When an element of the input exceeds 2048, the operator’s accuracy cannot guarantee an error within two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as \((D_1, D_2, ..., D_n)\), then \(D_1*D_2*...*D_n \le 1000000\) and \(n \le 8\).

Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision of the two inputs.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.floor_mod(x, y)
>>> print(output)
[2 1 2]
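The flooring-divide identity \(floor(x / y) * y + mod(x, y) = x\) can be verified with the same inputs; a minimal sketch:

>>> # reconstruct x from floor_div and floor_mod
>>> reconstructed = ops.floor_div(x, y) * y + ops.floor_mod(x, y)
>>> print(reconstructed)
[ 2  4 -1]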
tinyms.primitives.fmax(input, other)[source]

Computes the maximum of input tensors element-wise.

\[output_i = \max(input_i, other_i)\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • Shapes of input and other should be able to broadcast.

  • If one of the elements to be compared is NaN, the other element is returned.

Parameters:
  • input (Tensor) – The first tensor. The supported dtypes are: float16, float32, float64, int32, int64.

  • other (Tensor) – The second tensor. The supported dtypes are: float16, float32, float64, int32, int64.

Returns:

A Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input or other is not Tensor.

  • TypeError – If dtype of input or other is not one of: float16, float32, float64, int32, int64.

  • ValueError – If the shape of input and other can not broadcast.

Supported Platforms:

CPU

Examples

>>> x1 = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> x2 = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.fmax(x1, x2)
>>> print(output)
[4. 5. 6.]
tinyms.primitives.fmin(input, other)[source]

Computes the minimum of input tensors element-wise.

\[output_i = min(input_i, other_i)\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • Shapes of input and other should be able to broadcast.

  • If one of the elements to be compared is NaN, the other element is returned.

Parameters:
  • input (Tensor) – The first tensor. The supported dtypes are: float16, float32, float64, int32, int64.

  • other (Tensor) – The second tensor. The supported dtypes are: float16, float32, float64, int32, int64.

Returns:

A Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input or other is not Tensor.

  • TypeError – If dtype of input or other is not one of: float16, float32, float64, int32, int64.

  • ValueError – If the shape of input and other can not broadcast.

Supported Platforms:

Examples

>>> input = Tensor(np.array([1.0, 5.0, 3.0]), mstype.float32)
>>> other = Tensor(np.array([4.0, 2.0, 6.0]), mstype.float32)
>>> output = ops.fmin(input, other)
>>> print(output)
[1. 2. 3.]
tinyms.primitives.fmod(input, other)[source]

Computes the floating-point remainder of the division operation input/other.

\[out = input - n * other\]

Where \(n\) is \(input/other\) with its fractional part truncated. The returned value has the same sign as input and is less than other in magnitude.

Parameters:
  • input (Union[Tensor, Number]) – the dividend.

  • other (Union[Tensor, Number]) – the divisor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-4., -3.5, 0, 3.5, 4]), mindspore.float32)
>>> output = ops.fmod(input, 2.5)
>>> print(output)
[-1.5 -1.   0.   1.   1.5]
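Because \(n\) is truncated rather than floored, the result takes the sign of input, unlike mindspore.ops.floor_mod; a minimal sketch:

>>> # fmod keeps the dividend's sign: -3 - trunc(-1.5) * 2 = -1
>>> x = Tensor(np.array([-3., 3.]), mindspore.float32)
>>> print(ops.fmod(x, 2.0))
[-1.  1.]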
tinyms.primitives.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)[source]

Combines an array of sliding local blocks into a large containing tensor.

Warning

  • Currently, only 4-D output tensors (batched image-like tensors) are supported.

Parameters:
  • input (Tensor) – 4-D Tensor with data type float16 or float32.

  • output_size (Tensor) – 1D tensor with 2 elements of data type int.

  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two ints for height and width. If an int is given, the height equals the width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]], optional) – The size of the dilation, should be two ints for height and width. If an int is given, the height equals the width. Default: 1.

  • padding (Union[int, tuple[int], list[int]], optional) – The size of the padding, should be two ints for height and width. If an int is given, the height equals the width. Default: 0.

  • stride (Union[int, tuple[int], list[int]], optional) – The size of the stride, should be two ints for height and width. If an int is given, the height equals the width. Default: 1.

Returns:

A Tensor with the same type as input, in the format \((N, C, H, W)\).

Raises:
  • TypeError – If the data type of kernel_size, dilation, padding, or stride is not int, tuple, or list.

  • ValueError – If the kernel_size, dilation, or stride value is not greater than zero, or has more than 2 elements.

  • ValueError – If the padding value is less than zero or has more than 2 elements.

  • ValueError – If input.shape[2] != kernel_size[0] * kernel_size[1].

  • ValueError – If input.shape[3] does not match the calculated number of sliding blocks.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)
>>> output_size = Tensor(input_data=[8, 8], dtype=mstype.int32)
>>> output = ops.fold(x, output_size, [2, 2], [2, 2], [2, 2], [2, 2])
>>> print(output.shape)
(16, 16, 8, 8)
tinyms.primitives.frac(x)[source]

Calculates the fractional part of each element in the input.

Parameters:

x (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and type as input.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.common import dtype as mstype
>>> import mindspore.ops as ops
>>> x = Tensor([2, 4.2, -2.5], mstype.float16)
>>> output = ops.frac(x)
>>> print(output)
[ 0.      0.1992 -0.5   ]
tinyms.primitives.fractional_max_pool2d(input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

Applies the 2D FractionalMaxPool operation over input. The output Tensor shape can be determined by either output_size or output_ratio, and the step size is determined by _random_samples. output_size and output_ratio cannot be used at the same time.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

Parameters:
  • input (Tensor) – Tensor of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\), with float16, float32, float64, int32, int64 data type.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. The value must be a positive integer.

  • output_size (Union[int, tuple[int]], optional) – The shape of the target output_size, is an int number that represents height and width, or a tuple of two int numbers that represent height and width respectively. The value must be a positive integer. Default: None.

  • output_ratio (Union[float, tuple[float]], optional) – The ratio of target output shape to input shape. Specifying the size of the output tensor by using a ratio of the input size. Data type: float16, float32, double, and value is between (0, 1). Default: None.

  • return_indices (bool, optional) – Whether to return the indices of max value. Default: False.

  • _random_samples (Tensor, optional) – The random step of FractionalMaxPool2d, which is a 3D tensor. Tensor of data type: float16, float32, double, and value is between (0, 1). Supported shape \((N, C, 2)\) or \((1, C, 2)\). Default: None.

Returns:

  • y (Tensor) - Has the same type as the input. Has the shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\) , where \((H_{out}, W_{out})\) = output_size or \((H_{out}, W_{out})\) = output_ratio * \((H_{in}, W_{in})\).

  • argmax (Tensor) - The indices along with the outputs, which is a Tensor, with the same shape as the y and int64 data type. It will output only when return_indices is True.

Raises:
  • TypeError – If data type of input is not one of the following: float16, float32, float64, int32, int64.

  • TypeError – If data type of _random_samples is not one of the following: float16, float32, float64.

  • ValueError – If kernel_size is not a number and kernel_size is not a tuple of length 2.

  • ValueError – If output_size is not a number and output_size is not a tuple of length 2.

  • ValueError – If kernel_size + output_size - 1 is larger than the corresponding dimension of input.

  • ValueError – If the dimension of _random_samples is not 3.

  • ValueError – if output_size and output_ratio are None at the same time.

  • ValueError – If the first dimension size of input and _random_samples is not equal.

  • ValueError – If the second dimension size of input and _random_samples is not equal.

  • ValueError – If the third dimension size of _random_samples is not 2.

Supported Platforms:

CPU

Examples

>>> input = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.9581, 0.4760, 0.9014, 0.8522, 0.3664,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([1, 1, 5, 5]), mstype.float32)
>>> _random_samples = Tensor(np.array([[[0.8, 0.8]]]), mstype.float32)
>>> y, argmax = ops.fractional_max_pool2d(input, kernel_size=2, output_size=(2, 2),
...                                       _random_samples=_random_samples, return_indices=True)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
>>> y, argmax = ops.fractional_max_pool2d(input, kernel_size=2, output_ratio=(0.5, 0.5),
...                                       _random_samples=_random_samples, return_indices=True)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
tinyms.primitives.fractional_max_pool3d(input, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

Applies the 3D FractionalMaxPool operation over input. The output Tensor shape can be determined by either output_size or output_ratio, and the step size is determined by _random_samples. output_size and output_ratio cannot be used at the same time.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

The input and output data format can be “NCDHW”. N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width.

Parameters:
  • input (Tensor) – The input of FractionalMaxPool3d, which is a 4D or 5D tensor. Tensor of data type: float16, float32, double, int32, int64. Supported shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer.

  • output_size (Union[int, tuple[int]], optional) – The Shape of the target output_size, is an int number that represents depth, height and width, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer. Default: None.

  • output_ratio (Union[float, tuple[float]], optional) – The ratio of target output shape to input shape. Specifying the size of the output tensor by using a ratio of the input size. Data type: float16, float32, double, and value is between (0, 1). Default: None.

  • return_indices (bool, optional) – Whether to return the indices of max value. Default: False.

  • _random_samples (Tensor, optional) – The random step of FractionalMaxPool3d, which is a 3D tensor. Tensor of data type: float16, float32, double, and value is between (0, 1). Supported shape \((N, C, 3)\) or \((1, C, 3)\) .

Returns:

  • y (Tensor) - A tensor, the output of FractionalMaxPool3d. Has the same data type with input. Has the shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\) , where \((D_{out}, H_{out}, W_{out})\) = output_size or \((D_{out}, H_{out}, W_{out})\) = output_ratio * \((D_{in}, H_{in}, W_{in})\) .

  • argmax (Tensor) - The indices along with the outputs, which is a Tensor, with the same shape as the y and int32 data type. It will output only when return_indices is True.

Raises:
  • TypeError – If input is not a 4D or 5D tensor.

  • TypeError – If _random_samples is not a 3D tensor.

  • TypeError – If data type of input is not float16, float32, double, int32, int64.

  • TypeError – If dtype of _random_samples is not float16, float32, double.

  • TypeError – If dtype of argmax is not int32, int64.

  • ValueError – If output_size is a tuple and if output_size length is not 3.

  • ValueError – If kernel_size is a tuple and if kernel_size length is not 3.

  • ValueError – If numbers in output_size or kernel_size is not positive.

  • ValueError – if output_size and output_ratio are None at the same time.

  • ValueError – If the first dimension size of input and _random_samples is not equal.

  • ValueError – If the second dimension size of input and _random_samples is not equal.

  • ValueError – If the third dimension size of _random_samples is not 3.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...            .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> _random_samples = Tensor(np.array([0.7, 0.7, 0.7]).reshape([1, 1, 3]), mstype.float32)
>>> output, argmax = ops.fractional_max_pool3d(x, kernel_size=(1, 1, 1), output_size=(1, 1, 3),
...                                            _random_samples=_random_samples, return_indices=True)
>>> print(output)
[[[[[13. 14. 16.]]]]]
>>> print(argmax)
[[[[[12 13 15]]]]]
>>> output, argmax = ops.fractional_max_pool3d(x, kernel_size=(1, 1, 1), output_ratio=(0.5, 0.5, 0.5),
...                                            _random_samples=_random_samples, return_indices=True)
>>> print(output)
[[[[[13. 16.]]]]]
>>> print(argmax)
[[[[[12 15]]]]]
tinyms.primitives.full(size, fill_value, *, dtype=None)[source]

Create a Tensor of the specified shape and fill it with the specified value.

Parameters:
  • size (Union(tuple[int], list[int])) – The specified shape of output tensor.

  • fill_value (number.Number) – Value to fill the returned tensor. Complex numbers are not supported for now.

Keyword Arguments:

dtype (mindspore.dtype) – The specified type of output tensor. bool_ and number are supported, for details, please refer to mindspore.dtype . Default: None.

Returns:

Tensor.

Raises:
  • TypeError – If size is not a tuple or list.

  • ValueError – The element in size is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.full((2, 2), 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> output = ops.full((3, 3), 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
tinyms.primitives.full_like(input, fill_value, *, dtype=None)[source]

Return a Tensor of the same shape as input and filled with fill_value.

Parameters:
  • input (Tensor) – input Tensor and the output Tensor have the same shape as input.

  • fill_value (Number) – Value to fill the returned Tensor. Complex numbers are not supported for now.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified type of output tensor. bool_ and number are supported, for details, please refer to mindspore.dtype . Default: None.

Returns:

Tensor.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([[0, 1], [2, 1]], dtype=mindspore.int32)
>>> output = ops.full_like(input, 1)
>>> print(output)
[[1. 1.]
 [1. 1.]]
>>> input = Tensor([[0, 1, 1], [2, 1, 2], [1, 3, 4]], dtype=mindspore.int32)
>>> output = ops.full_like(input, 0)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
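The dtype keyword overrides the type inferred from input; a minimal sketch reusing input above:

>>> # fill with 1 but force a float32 result instead of the input's int32
>>> output = ops.full_like(input, 1, dtype=mindspore.float32)
>>> print(output.dtype)
Float32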
tinyms.primitives.gamma(shape, alpha, beta, seed=None)[source]

Generates random numbers according to the Gamma random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated.

  • alpha (Tensor) – The \(\alpha\) distribution parameter. It should be greater than 0 with float32 data type.

  • beta (Tensor) – The \(\beta\) distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of alpha and beta. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If neither alpha nor beta is a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha and beta is not float32.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: alpha_shape is (2, 2)
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> # case 2: alpha_shape is (2, 3), so shape is (3, 1, 3)
>>> shape = (3, 1, 3)
>>> alpha = Tensor(np.array([[1, 3, 4], [2, 5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> # case 3: beta_shape is (1, 2), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([1.0, 2]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(output)
[[[ 2.2132034  5.8855834]
  [ 3.3981476  7.5805717]]
 [[ 3.3981476  7.5805717]
  [ 3.7190282 19.941492 ]]
 [[ 2.9512358  2.5969937]
  [ 3.786061   5.160872 ]]]
>>> # case 4: beta_shape is (2, 1), the output is different.
>>> shape = (3, 1, 2)
>>> alpha = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> beta = Tensor(np.array([[1.0], [2.0]]), mindspore.float32)
>>> output = ops.gamma(shape, alpha, beta, seed=5)
>>> result = output.shape
>>> print(output)
[[[ 5.6085486  7.8280783]
  [15.97684   16.116285 ]]
 [[ 1.8347423  1.713663 ]
  [ 3.2434065 15.667398 ]]
 [[ 4.2922077  7.3365674]
  [ 5.3876944 13.159832 ]]]
tinyms.primitives.gather(input_params, input_indices, axis, batch_dims=0)[source]

Returns the slice of the input tensor corresponding to the elements of input_indices on the specified axis.

The following figure shows the typical calculation process of Gather:

[Figure: Gather.png — the Gather calculation process]

where params represents the input input_params, and indices represents the index to be sliced input_indices.

Note

  1. The value of input_indices must be in the range [0, input_params.shape[axis]); the result is undefined out of range.

  2. The data type of input_params cannot be bool_ on Ascend platform currently.

Parameters:
  • input_params (Tensor) – The original Tensor. The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_indices (Tensor) – Index tensor to be sliced, the shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. The data type can be int32 or int64.

  • axis (int) – Specifies the dimension index to gather indices. It must be greater than or equal to batch_dims.

  • batch_dims (int) – Specifies the number of batch dimensions. It must be less than or equal to the rank of input_indices. Default: 0.

Returns:

Tensor, the shape of tensor is \(input\_params.shape[:axis] + input\_indices.shape[batch\_dims:] + input\_params.shape[axis + 1:]\).

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If input_params is not a tensor.

  • TypeError – If input_indices is not a tensor of type int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: input_indices is a Tensor with shape (5, ).
>>> input_params = Tensor(np.array([1, 2, 3, 4, 5, 6, 7]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 4, 2, 6]), mindspore.int32)
>>> axis = 0
>>> output = ops.gather(input_params, input_indices, axis)
>>> print(output)
[1. 3. 5. 3. 7.]
>>> # case2: input_indices is a Tensor with shape (2, 2). When the input_params has one dimension,
>>> # the output shape is equal to the input_indices shape.
>>> input_indices = Tensor(np.array([[0, 2], [2, 6]]), mindspore.int32)
>>> axis = 0
>>> output = ops.gather(input_params, input_indices, axis)
>>> print(output)
[[1. 3.]
 [3. 7.]]
>>> # case3: input_indices is a Tensor with shape (2, ) and
>>> # input_params is a Tensor with shape (3, 4) and axis is 0.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> axis = 0
>>> output = ops.gather(input_params, input_indices, axis)
>>> print(output)
[[ 1.  2.  3.  4.]
 [ 9. 10. 11. 12.]]
>>> # case4: input_indices is a Tensor with shape (2, ) and
>>> # input_params is a Tensor with shape (3, 4) and axis is 1, batch_dims is 1.
>>> input_params = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]), mindspore.float32)
>>> input_indices = Tensor(np.array([0, 2, 1]), mindspore.int32)
>>> axis = 1
>>> batch_dims = 1
>>> output = ops.gather(input_params, input_indices, axis, batch_dims)
>>> print(output)
[ 1.  7. 10.]
tinyms.primitives.gather_d(x, dim, index)[source]

Gathers elements along an axis specified by dim.

Refer to mindspore.ops.gather_elements() for more detail.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = ops.gather_d(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
tinyms.primitives.gather_elements(input, dim, index)[source]

Gathers elements along an axis specified by dim.

For a 3-D tensor, the output is:

output[i][j][k] = x[index[i][j][k]][j][k]  # if dim == 0

output[i][j][k] = x[i][index[i][j][k]][k]  # if dim == 1

output[i][j][k] = x[i][j][index[i][j][k]]  # if dim == 2

input and index have the same number of dimensions, and all dimensions except dim have the same size. If dim = i and input is an n-D tensor with shape \((z_0, z_1, ..., z_i, ..., z_{n-1})\), then index must be an n-D tensor with shape \((z_0, z_1, ..., y, ..., z_{n-1})\) where \(y \ge 1\), and the output will have the same shape as index.

Parameters:
  • input (Tensor) – The input tensor.

  • dim (int) – The axis along which to index. It must be int32 or int64. The value range is [-input.ndim, input.ndim).

  • index (Tensor) – The indices of elements to gather. It can be one of the following data types: int32, int64. The value range of each index element is [-input.shape(dim), input.shape(dim)).

Returns:

Tensor, has the same shape as index tensor, the shape of tensor is \((z_0, z_1, ..., y, ..., z_{n-1})\), and has the same data type with input.

Raises:
  • TypeError – If dtype of dim or index is neither int32 nor int64.

  • ValueError – If length of shape of input is not equal to length of shape of index.

  • ValueError – If the size of the dimension except dim is not equal between input and index.

  • ValueError – If the value of dim is not in the expected range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
>>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
>>> dim = 1
>>> output = mindspore.ops.gather_elements(x, dim, index)
>>> print(output)
[[1 1]
 [4 3]]
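Index values may also be negative, counting from the end of dimension dim, per the value range stated above; a minimal sketch reusing x:

>>> # -1 selects the last element along dim 1
>>> index = Tensor(np.array([[-1, -1], [0, 0]]), mindspore.int32)
>>> output = mindspore.ops.gather_elements(x, 1, index)
>>> print(output)
[[2 2]
 [3 3]]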
tinyms.primitives.gather_nd(input_x, indices)[source]

Gathers slices from a tensor by indices.

Using given indices to gather slices from a tensor with a specified shape.

indices is a K-dimensional integer tensor. Suppose it is a (K-1)-dimensional tensor, each element of which defines a slice of input_x:

\[output[(i_0, ..., i_{K-2})] = input\_x[indices[(i_0, ..., i_{K-2})]]\]

The last dimension of indices cannot exceed the rank of input_x: \(indices.shape[-1] <= input\_x.rank\).

Parameters:
  • input_x (Tensor) – The target tensor to gather values.

  • indices (Tensor) – The index tensor, with int32 or int64 data type.

Returns:

Tensor, has the same type as input_x and the shape is \(indices\_shape[:-1] + input\_x\_shape[indices\_shape[-1]:]\).

Raises:

ValueError – If length of shape of input_x is less than the last dimension of indices.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> output = ops.gather_nd(input_x, indices)
>>> print(output)
[-0.1  0.5]
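When indices.shape[-1] is smaller than the rank of input_x, entire slices are gathered, following the output shape rule above; a minimal sketch reusing input_x:

>>> # each index selects a whole row of the (2, 3) input
>>> indices = Tensor(np.array([[1], [0]]), mindspore.int32)
>>> output = ops.gather_nd(input_x, indices)
>>> print(output)
[[ 0.4  0.5 -3.2]
 [-0.1  0.3  3.6]]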
tinyms.primitives.gaussian_nll_loss(x, target, var, full=False, eps=1e-06, reduction='mean')[source]

Gaussian negative log likelihood loss.

The target values are considered to be samples from a Gaussian distribution whose expectation and variance are predicted by a neural network. For target values modeled on a Gaussian distribution, with x recording the expectations and var recording the variances (all elements positive), the calculated loss is:

\[\text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var}, \ \text{eps}\right)\right) + \frac{\left(\text{x} - \text{target}\right)^2} {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}\]

where \(eps\) is used for the stability of \(\log\). When \(full=True\), a constant term is added to the loss. If the shapes of var and x are not the same (due to a homoscedastic assumption), their shapes must allow correct broadcasting.

Parameters:
  • x (Tensor) – Tensor of shape \((N, *)\) or \((*)\) where \(*\) means any number of additional dimensions.

  • target (Tensor) – Tensor of shape \((N, *)\) or \((*)\), same shape as the x, or same shape as the x but with one dimension equal to 1 (to allow broadcasting).

  • var (Tensor) – Tensor of shape \((N, *)\) or \((*)\), same shape as x, or same shape as the x but with one dimension equal to 1, or same shape as the x but with one fewer dimension (to allow for broadcasting).

  • full (bool, optional) – Include the constant term in the loss calculation. When \(full=True\), the constant term will be \(const = 0.5*log(2\pi)\). Default: False.

  • eps (float, optional) – Used to improve the stability of log function must be greater than 0. Default: 1e-6.

  • reduction (str, optional) – Apply specific reduction method to the output: “none”, “mean”, or “sum”. Default: “mean”.

Returns:

Tensor or Tensor scalar, the computed loss depending on \(reduction\).

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> import mindspore.common.dtype as mstype
>>> arr1 = np.arange(8).reshape((4, 2))
>>> arr2 = np.array([2, 3, 1, 4, 6, 4, 4, 9]).reshape((4, 2))
>>> x = Tensor(arr1, mstype.float32)
>>> var = Tensor(np.ones((4, 1)), mstype.float32)
>>> target = Tensor(arr2, mstype.float32)
>>> output = ops.gaussian_nll_loss(x, target, var)
>>> print(output)
1.4374993
Reference:

Nix, D. A. and Weigend, A. S., “Estimating the mean and variance of the target probability distribution”, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138.

tinyms.primitives.gcd(input, other)[source]

Computes the greatest common divisor of the input tensors element-wise. The shapes of the two inputs should be broadcastable, and their data type should be one of: int32, int64.

Parameters:
  • input (Tensor) – The first input tensor.

  • other (Tensor) – The second input tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher digits of the two inputs.

Raises:
  • TypeError – If data type input or other is not int32 or int64.

  • ValueError – If shape of two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([7, 8, 9]))
>>> x2 = Tensor(np.array([14, 6, 12]))
>>> y = ops.gcd(x1, x2)
>>> print(y)
[7 2 3]
tinyms.primitives.ge(x, y)[source]

Computes the boolean value of \(x >= y\) element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

  • If the input Tensors can be broadcast, the lower-dimensional input is extended to match the corresponding higher dimension of the other input by copying values along that dimension.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}>=y_{i} \\ & \text{False, if } x_{i}<y_{i} \end{cases}\end{split}\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.ge(x, y)
>>> print(output)
[ True  True False]
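As noted above, one input may be a scalar constant; a minimal sketch reusing x:

>>> # compare each element of x against the scalar 2
>>> output = ops.ge(x, 2)
>>> print(output)
[False  True  True]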
tinyms.primitives.gelu(input_x, approximate='none')[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). And also please refer to BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

When approximate argument is none, GeLU is defined as follows:

\[GELU(x_i) = x_i*P(X < x_i),\]

where \(P\) is the cumulative distribution function of the standard Gaussian distribution, \(x_i\) is the input element.

When approximate argument is tanh, GeLU is estimated with:

\[GELU(x_i) = 0.5 * x_i * (1 + \tanh(\sqrt{2 / \pi} * (x_i + 0.044715 * x_i^3)))\]
Parameters:
  • input_x (Tensor) – The input of the activation function GeLU, the data type is float16, float32 or float64.

  • approximate (str) – The gelu approximation algorithm to use. Acceptable values are ‘none’ and ‘tanh’. Default: ‘none’.

Returns:

Tensor, with the same type and shape as input_x.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not float16, float32 or float64.

  • ValueError – If approximate is neither ‘none’ nor ‘tanh’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1.0, 2.0, 3.0], mindspore.float32)
>>> result = ops.gelu(x)
>>> print(result)
[0.841192 1.9545976 2.9963627]
tinyms.primitives.geqrf(input)[source]

Decomposes a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R. The process is called QR decomposition: \(A = QR\).

Both Q and R matrices are stored in the same output tensor y. The elements of R are stored on and above the diagonal, whereas elementary reflectors (or Householder vectors) implicitly defining matrix Q are stored below the diagonal.

This function returns two tensors (y, tau).

Parameters:

input (Tensor) – Tensor of shape \((*, m, n)\), input must be a matrix greater than or equal to 2D, with dtype of float32, float64, complex64, complex128.

Returns:

  • y (Tensor) - Tensor of shape \((*, m, n)\), has the same dtype as input.

  • tau (Tensor) - Tensor of shape \((*, p)\) with \(p = min(m, n)\), has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the dtype of input is not one of: float32, float64, complex64, complex128.

  • ValueError – If input dimension is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-2.0, -1.0], [1.0, 2.0]]).astype(np.float32))
>>> y, tau = ops.geqrf(input_x)
>>> print(y)
[[ 2.236068   1.7888544]
 [-0.236068   1.3416407]]
>>> print(tau)
[1.8944271 0.       ]
tinyms.primitives.ger(input, vec2)[source]

Ger product of input and vec2. Calculates the outer product of the two arrays: if input is a 1D Tensor of shape \((m,)\) and vec2 is a 1D Tensor of shape \((n,)\), then the output is a 2D Tensor of shape \((m, n)\).

Note

Currently Ascend does not support float64 data input.

Parameters:
  • input (Tensor) – input Tensor, with dtype of float16, float32 or float64.

  • vec2 (Tensor) – input Tensor, with dtype of float16, float32 or float64, must have the same dtype as input.

Returns:

Tensor, output matrix with the same dtype as inputs. With input shape \((m,)\) and vec2 shape of \((n,)\), the output has shape \((m, n)\).

Raises:
  • TypeError – If input or vec2 is not a 1-D Tensor.

  • TypeError – If the dtype of input and vec2 is not float16, float32 or float64.

  • TypeError – If the dtype of input and vec2 are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([1., 2., 3., 4.], mindspore.float32)
>>> vec2 = Tensor([1., 2., 3.], mindspore.float32)
>>> output = ops.ger(input, vec2)
>>> print(output)
[[ 1.  2.  3.]
 [ 2.  4.  6.]
 [ 3.  6.  9.]
 [ 4.  8. 12.]]
tinyms.primitives.get_grad(gradients, identifier)[source]

When return_ids of mindspore.grad() is set to True, use its return value as gradients. Then find the specific gradient from gradients according to identifier .

As for gradient, two typical cases are included:

  1. identifier is the position of the specific tensor to get gradient.

  2. identifier is a parameter of a network.

Parameters:
  • gradients (Union[tuple[int, Tensor], tuple[tuple, tuple]]) – The return value of mindspore.grad() when return_ids is set to True.

  • identifier (Union[int, Parameter]) – The position number of a tensor, or a parameter that is used in mindspore.grad().

Returns:

The gradient of the tensor on the position or in the parameter that specified by the identifier.

Raises:
  • RuntimeError – If gradient is not found.

  • TypeError – If the type of the arguments does not belong to the required ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> from mindspore import grad, get_grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> out_grad = grad(net, grad_position=(1, 2), return_ids=True)(x, y, z)
>>> output = get_grad(out_grad, 1)
>>> print(output)
[0. 6.]
tinyms.primitives.glu(x, axis=-1)[source]

Computes GLU (Gated Linear Unit activation function) of input tensors.

\[{GLU}(a, b)= a \otimes \sigma(b)\]

where \(a\) is the first half of the input matrices and \(b\) is the second half.

Here \(\sigma\) is the sigmoid function, and \(\otimes\) is the Hadamard product. See Language Modeling with Gated Convolutional Networks.

Parameters:
  • x (Tensor) – Tensor to be split. Its dtype is Number, and its shape is \((\ast_1, N, \ast_2)\), where \(\ast\) means any number of additional dimensions.

  • axis (int, optional) – the axis to split the input. It must be int. Default: -1, the last axis of x.

Returns:

Tensor, the same dtype as the x, with the shape \((\ast_1, M, \ast_2)\) where \(M=N/2\).

Supported Platforms:

Ascend CPU

Examples

>>> input = Tensor([[0.1,0.2,0.3,0.4],[0.5,0.6,0.7,0.8]])
>>> output = ops.glu(input)
>>> print(output)
[[0.05744425 0.11973753]
 [0.33409387 0.41398472]]
tinyms.primitives.grad(fn, grad_position=0, weights=None, has_aux=False, return_ids=False)[source]

A wrapper function to generate the gradient function for the input function.

As for gradient, three typical cases are included:

  1. gradient with respect to inputs. In this case, grad_position is not None while weights is None.

  2. gradient with respect to weights. In this case, grad_position is None while weights is not None.

  3. gradient with respect to inputs and weights. In this case, grad_position and weights are not None.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[NoneType, int, tuple[int]]) – Index to specify which inputs to be differentiated. If int, get the gradient with respect to a single input. If tuple, get the gradients with respect to the selected inputs. grad_position begins with 0. If None, no derivative of any input will be computed, and in this case, weights is required. Default: 0.

  • weights (Union[ParameterTuple, Parameter, list[Parameter]]) – The parameters of the training network that need to calculate the gradient. weights can be obtained through weights = net.trainable_params(). Default: None.

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs will be returned directly. This means fn must return more than one output in this case. Default: False.

  • return_ids (bool) – Whether to pair each returned gradient with an identifier: the index of the input being differentiated, or the name of the parameter of the training network. If True, every gradient in the output will be replaced by a tuple of that index or parameter name and the gradient. Default: False.

Returns:

Function, the gradient function to calculate the gradient for the input function or cell. For example, as for out1, out2 = fn(*args), when has_aux is set to True, the gradient function will return outputs like (gradient, out2), where out2 does not contribute to the differentiation; otherwise it returns gradient alone. When return_ids is set to True, the format of the output is the same as when return_ids is False, but every gradient in the output is replaced by a tuple of the position id or parameter name and its gradient.

Raises:
  • ValueError – If both grad_position and weights are None.

  • TypeError – If the type of the arguments does not belong to the required ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> from mindspore import grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> output = grad(net, grad_position=(1, 2))(x, y, z)
>>> print(output)
(Tensor(shape=[2], dtype=Float32, value=[ 0.00000000e+00,  6.00000000e+00]),
 Tensor(shape=[2], dtype=Float32, value=[-2.00000000e+00,  6.00000000e+00]))
>>>
>>> # Function object to be differentiated
>>> def fn(x, y, z):
...     res = x * ops.exp(y) * ops.pow(z, 2)
...     return res, z
>>> x = Tensor([3, 3], mindspore.float32)
>>> y = Tensor([0, 0], mindspore.float32)
>>> z = Tensor([5, 5], mindspore.float32)
>>> gradient, aux = grad(fn, (1, 2), None, True)(x, y, z)
>>> print(gradient)
(Tensor(shape=[2], dtype=Float32, value= [ 7.50000000e+01,  7.50000000e+01]),
 Tensor(shape=[2], dtype=Float32, value= [ 3.00000000e+01,  3.00000000e+01]))
>>> print(aux)
(Tensor(shape=[2], dtype=Float32, value= [ 5.00000000e+00,  5.00000000e+00]),)
>>>
>>> # For given network to be differentiated with both inputs and weights, there are 4 cases.
>>> net = nn.Dense(10, 1)
>>> loss_fn = nn.MSELoss()
>>> def forward(inputs, labels):
...     logits = net(inputs)
...     loss = loss_fn(logits, labels)
...     return loss, logits
>>> inputs = Tensor(np.random.randn(16, 10).astype(np.float32))
>>> labels = Tensor(np.random.randn(16, 1).astype(np.float32))
>>> weights = net.trainable_params()
>>> # Case 1: gradient with respect to inputs.
>>> # Aux value does not contribute to the gradient.
>>> grad_fn = grad(forward, grad_position=(0, 1), weights=None, has_aux=True)
>>> inputs_gradient, (aux_logits,) = grad_fn(inputs, labels)
>>> print(len(inputs_gradient))
2
>>> print(aux_logits.shape)
(16, 1)
>>>
>>> # Case 2: gradient with respect to weights.
>>> grad_fn = grad(forward, grad_position=None, weights=weights, has_aux=True)
>>> params_gradient, (aux_logits,) = grad_fn(inputs, labels)
>>> print(len(weights), len(params_gradient))
2 2
>>> print(aux_logits.shape)
(16, 1)
>>>
>>> # Case 3: gradient with respect to inputs and weights.
>>> grad_fn = grad(forward, grad_position=0, weights=weights, has_aux=False)
>>> inputs_gradient, params_gradient = grad_fn(inputs, labels)
>>> print(len(weights), len(params_gradient))
2 2
>>> # Case 4: return the gradient with ids.
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, ops
>>> from mindspore import grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> output = grad(net, grad_position=(1, 2), return_ids = True)(x, y, z)
>>> print(output)
((1, Tensor(shape=[2], dtype=Float32, value=[ 0.00000000e+00,  6.00000000e+00])),
 (2, Tensor(shape=[2], dtype=Float32, value=[-2.00000000e+00,  6.00000000e+00])))
tinyms.primitives.greater(input, other)[source]

Computes the boolean value of \(input > other\) element-wise.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number, a bool, or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.greater(x, y)
>>> print(output)
[False True False]
tinyms.primitives.greater_equal(input, other)[source]

Computes the boolean value of \(input \geq other\) element-wise.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number, a bool, or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.greater_equal(x, y)
>>> print(output)
[True True False]
tinyms.primitives.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=False)[source]

Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. Only spatial (4-D) and volumetric (5-D) input is supported.

In the spatial (4-D) case, for input with shape \((N, C, H_{in}, W_{in})\) and grid with shape \((N, H_{out}, W_{out}, 2)\), the output will have shape \((N, C, H_{out}, W_{out})\).

For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies input pixel locations x and y, which are used to interpolate the output value output[n, :, h, w]. In the case of 5-D inputs, grid[n, d, h, w] specifies the x, y, z pixel locations for interpolating output[n, :, d, h, w]. The mode argument specifies the “nearest”, “bilinear” or “bicubic” (supported in the 4-D case only) interpolation method to sample the input pixels.

grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of \([-1, 1]\).

If grid has values outside the range of \([-1, 1]\), the corresponding outputs are handled as defined by padding_mode. If padding_mode is set to be “zeros”, use \(0\) for out-of-bound grid locations. If padding_mode is set to be “border”, use border values for out-of-bound grid locations. If padding_mode is set to be “reflection”, use values at locations reflected by the border for out-of-bound grid locations. For location far away from the border, it will keep being reflected until becoming in bound.
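
For intuition, a grid that stores each output pixel’s own normalized coordinates reproduces the input. A minimal sketch, assuming the 4-D case with align_corners=True (the np.meshgrid construction here is illustrative, not part of the API):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> N, C, H, W = 1, 1, 2, 2
>>> xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
>>> identity = Tensor(np.stack([xs, ys], axis=-1)[None].astype(np.float32))  # (N, H, W, 2)
>>> inp = Tensor(np.arange(N * C * H * W).reshape(N, C, H, W).astype(np.float32))
>>> out = ops.grid_sample(inp, identity, mode='bilinear', padding_mode='zeros', align_corners=True)
>>> # With align_corners=True, grid values -1 and 1 land on the corner pixel
>>> # centers, so `out` reproduces `inp` exactly.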

Parameters:
  • input (Tensor) – input with shape of \((N, C, H_{in}, W_{in})\) (4-D case) or \((N, C, D_{in}, H_{in}, W_{in})\) (5-D case) and dtype of float32 or float64.

  • grid (Tensor) – flow-field with shape of \((N, H_{out}, W_{out}, 2)\) (4-D case) or \((N, D_{out}, H_{out}, W_{out}, 3)\) (5-D case) and same dtype as input.

  • mode (str) – An optional string specifying the interpolation method. The optional values are “bilinear”, “nearest” or “bicubic”. Default: “bilinear”. Note: bicubic supports only 4-D input. When mode is “bilinear” and the input is 5-D, the interpolation mode used internally is actually trilinear; when the input is 4-D, the interpolation mode is legitimately bilinear.

  • padding_mode (str) – An optional string specifying the pad method. The optional values are “zeros”, “border” or “reflection”. Default: “zeros”.

  • align_corners (bool) – An optional bool. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. Default: False.

Returns:

Tensor, dtype is the same as input and whose shape is \((N, C, H_{out}, W_{out})\) (4-D) and \((N, C, D_{out}, H_{out}, W_{out})\) (5-D).

Raises:
  • TypeError – If input or grid is not a Tensor.

  • TypeError – If the dtypes of input and grid are inconsistent.

  • TypeError – If the dtype of input or grid is not a valid type.

  • TypeError – If align_corners is not a boolean value.

  • ValueError – If the rank of input or grid is not equal to 4(4-D case) or 5(5-D case).

  • ValueError – If the first dimension of input is not equal to that of grid.

  • ValueError – If the last dimension of grid is not equal to 2(4-D case) or 3(5-D case).

  • ValueError – If mode is not a string or not one of “bilinear”, “nearest” or “bicubic”.

  • ValueError – If padding_mode is not a string or not one of “zeros”, “border” or “reflection”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.arange(16).reshape((2, 2, 2, 2)).astype(np.float32))
>>> grid = Tensor(np.arange(0.2, 1, 0.1).reshape((2, 2, 1, 2)).astype(np.float32))
>>> output = ops.grid_sample(input_x, grid, mode='bilinear', padding_mode='zeros',
...                          align_corners=True)
>>> print(output)
[[[[ 1.9      ]
   [ 2.1999998]]
  [[ 5.9      ]
   [ 6.2      ]]]
 [[[10.5      ]
   [10.8      ]]
  [[14.5      ]
   [14.8      ]]]]
tinyms.primitives.gt(x, y)[source]

Compare the value of the input parameters \(x,y\) element-wise, and the output result is a bool value.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}>y_{i} \\ & \text{False, if } x_{i}<=y_{i} \end{cases}\end{split}\]

Note

  • The inputs x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, their dtypes cannot both be bool, and their shapes can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

  • If the input Tensors can be broadcast, the lower-dimensional input is extended to the corresponding higher dimensions of the other input by copying its values.

Parameters:
  • x (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number, a bool, or a tensor whose data type is number or bool_.

  • y (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.gt(x, y)
>>> print(output)
[False True False]
tinyms.primitives.gumbel_softmax(logits, tau=1, hard=False, dim=-1)[source]

Returns the samples from the Gumbel-Softmax distribution and optionally discretizes. If hard = True, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across dim.

Parameters:
  • logits (Tensor) – Unnormalized log probabilities. The data type must be float16 or float32.

  • tau (float) – The scalar temperature, which is a positive number. Default: 1.0.

  • hard (bool) – if True, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd. Default: False.

  • dim (int) – Dim for softmax to compute. Default: -1.

Returns:

Tensor, has the same dtype and shape as logits.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> output = ops.gumbel_softmax(input_x, 1.0, True, -1)
>>> print(output.shape)
(2, 3)
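
The example above only checks the shape. As a further sketch of the soft path (hard=False): each slice along dim is a probability distribution, so it sums to approximately 1; the sampling is random, so individual values differ between runs.

>>> soft = ops.gumbel_softmax(input_x, 1.0, False, -1)
>>> print(soft.shape)
(2, 3)
>>> # Each row of `soft` sums to ~1.0 (up to floating-point rounding), whereas
>>> # with hard=True each row is exactly one-hot.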
tinyms.primitives.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None)[source]

Returns the Hamming window.

\[w[n] = \alpha - \beta \cos \left( \frac{2 \pi n}{N - 1} \right),\]

where \(N\) is the full window size.

Parameters:
  • window_length (int) – The size of the returned window. Must be a non-negative integer.

  • periodic (bool, optional) – If True, return a periodic window. If False, return a symmetric window. Default: True.

  • alpha (float, optional) – The coefficient α. Default: 0.54.

  • beta (float, optional) – The coefficient β. Default: 0.46.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The output window data type. Default: None.

Returns:

Tensor, a 1-D tensor of size (window_length) containing the window.

Raises:
  • TypeError – If window_length is a negative integer.

  • TypeError – If periodic is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> print(ops.hamming_window(6, False))
[0.08 0.39785218 0.91214782  0.91214782  0.39785218 0.08]
tinyms.primitives.hann_window(window_length, periodic=True, *, dtype=None)[source]

Generates a Hann Window.

The Hann window is defined as

\[w(n) = \frac{1}{2} - \frac{1}{2} \cos\left(\frac{2\pi{n}}{M-1}\right),\qquad 0 \leq n \leq M-1\]
Parameters:
  • window_length (int) – Length of window.

  • periodic (bool, optional) – When set to True, generates a periodic window for spectral analysis. When set to False, generates a symmetric window for filter design. Default: True.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The output window data type, it must be float. Default: None.

Returns:

Tensor, a Hann window.

Raises:
  • TypeError – If window_length is not an integer.

  • TypeError – If periodic is not a variable of Boolean type.

  • ValueError – If window_length is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> window_length = 5
>>> out = ops.hann_window(window_length)
>>> print(out.asnumpy())
[0.        0.3454915 0.9045085 0.9045085 0.3454915]
tinyms.primitives.hardshrink(x, lambd=0.5)[source]

Hard Shrink activation function. Calculates the output according to the input elements.

The formula is defined as follows:

\[\begin{split}\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • x (Tensor) – The input of Hard Shrink with data type of float16 or float32.

  • lambd (float) – The threshold \(\lambda\) defined by the Hard Shrink formula. Default: 0.5.

Returns:

Tensor, has the same data type and shape as the input x.

Raises:
  • TypeError – If lambd is not a float.

  • TypeError – If x is not a tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[ 0.5,  1,  2.0], [0.0533,0.0776,-2.1233]]), mindspore.float32)
>>> output = ops.hardshrink(x)
>>> print(output)
[[ 0.      1.      2.    ]
 [ 0.      0.     -2.1233]]
tinyms.primitives.hardsigmoid(input)[source]

Hard sigmoid activation function.

Applies hard sigmoid activation element-wise. The input is a Tensor with any valid shape.

Hard sigmoid is defined as:

\[\text{hsigmoid}(x_{i}) = max(0, min(1, \frac{x_{i} + 3}{6}))\]

where \(x_i\) is an element of the input Tensor.

Parameters:

input (Tensor) – Hard Sigmoid input, with float16, float32 or float64 data type.

Returns:

A Tensor whose dtype and shape are the same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([ -3.5,  0,  4.3]), mindspore.float32)
>>> output = ops.hardsigmoid(x)
>>> print(output)
[0.  0.5 1. ]
tinyms.primitives.hardswish(x)[source]

Applies hswish-type activation element-wise. The input is a Tensor with any valid shape.

Hard swish is defined as:

\[\text{hswish}(x_{i}) = x_{i} * \frac{ReLU6(x_{i} + 3)}{6}\]

where \(x_i\) is an element of the input Tensor.

Parameters:

x (Tensor) – The input to compute the Hard Swish.

Returns:

Tensor, has the same data type and shape as the input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> output = ops.hardswish(x)
>>> print(output)
[-0.3333  -0.3333  0  1.666  0.6665]
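
As a check against the formula above: for \(x_i = -1\), \(ReLU6(-1 + 3) = 2\), giving \(-1 \cdot 2/6 \approx -0.3333\); for \(x_i = 2\), \(ReLU6(2 + 3) = 5\), giving \(2 \cdot 5/6 \approx 1.666\).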
tinyms.primitives.hardtanh(input, min_val=-1.0, max_val=1.0)[source]

Applies the hardtanh activation function element-wise. The activation function is defined as:

\[\begin{split}\text{hardtanh}(input) = \begin{cases} max\_val, & \text{ if } input > max\_val \\ min\_val, & \text{ if } input < min\_val \\ input, & \text{ otherwise. } \end{cases}\end{split}\]

Linear region range \([min\_val, max\_val]\) can be adjusted using min_val and max_val.

Parameters:
  • input (Tensor) – Input Tensor.

  • min_val (Union[int, float]) – Minimum value of the linear region range. Default: -1.0.

  • max_val (Union[int, float]) – Maximum value of the linear region range. Default: 1.0.

Returns:

Tensor, with the same dtype and shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of min_val is neither float nor int.

  • TypeError – If dtype of max_val is neither float nor int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([-1, -2, 0, 2, 1], mindspore.float16)
>>> output = ops.hardtanh(x, min_val=-1.0, max_val=1.0)
>>> print(output)
[-1. -1.  0.  1.  1.]
tinyms.primitives.heaviside(input, values)[source]

Computes the Heaviside step function for each element in input.

\[\begin{split}\text { heaviside }(\text { input, values })=\left\{\begin{array}{ll} 0, & \text { if input }<0 \\ \text { values, } & \text { if input }=0 \\ 1, & \text { if input }>0 \end{array}\right.\end{split}\]
Parameters:
  • input (Tensor) – The input tensor. With real number data type.

  • values (Tensor) – The values to use where input is zero. values can be broadcast with input. input should have the same dtype as values.

Returns:

Tensor, has the same type as input and values.

Raises:
  • TypeError – If input or values is not Tensor.

  • TypeError – If the data types of input and values are different.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-5., 1., 0., 2., 0.]))
>>> values = Tensor(np.array([3.]))
>>> y = ops.heaviside(input, values)
>>> print(y)
[0. 1. 3. 1. 3.]
tinyms.primitives.hinge_embedding_loss(inputs, targets, margin=1.0, reduction='mean')[source]

Measures the Hinge Embedding Loss given an input Tensor inputs and a label Tensor targets (containing 1 or -1).

The loss function for \(n\)-th sample in the mini-batch is

\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]

and the total loss function is

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(L = \{l_1,\dots,l_N\}^\top\).

Parameters:
  • inputs (Tensor) – Predicted values, represented as \(x\) in the formula.

  • targets (Tensor) – Label values, represented as \(y\) in the formula. Has the same shape as inputs, contains -1 or 1.

  • margin (float, int) – Threshold defined by Hinge Embedding Loss \(margin\). Represented as \(\Delta\) in the formula. Default: 1.0.

  • reduction (str) – Specify the computing method to be applied to the outputs: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

Returns:

Tensor or scalar Tensor, the computed loss depending on reduction.

Raises:
  • TypeError – If inputs is not a Tensor.

  • TypeError – If targets is not a Tensor.

  • TypeError – If margin is not a float or int.

  • ValueError – If targets does not have the same shape as inputs or they could not broadcast to each other.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.common.dtype as mstype
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> arr1 = np.array([0.9, -1.2, 2, 0.8, 3.9, 2, 1, 0, -1]).reshape((3, 3))
>>> arr2 = np.array([1, 1, -1, 1, -1, 1, -1, 1, 1]).reshape((3, 3))
>>> logits = Tensor(arr1, mstype.float32)
>>> labels = Tensor(arr2, mstype.float32)
>>> loss = ops.hinge_embedding_loss(logits, labels, margin=1.0, reduction='mean')
>>> print(loss)
0.16666666
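
As a check against the formulas above: entries with label 1 contribute their own values (0.9 - 1.2 + 0.8 + 2 + 0 - 1), entries with label -1 contribute \(\max\{0, 1 - x_n\}\) (all zero here, since each such \(x_n \geq 1\)), so the sum is 1.5 and the mean over 9 elements is approximately 0.1667.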
tinyms.primitives.histc(input, bins=100, min=0.0, max=0.0)[source]

Computes the histogram of a tensor.

The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

Elements lower than min or higher than max are ignored.

Parameters:
  • input (Tensor) – the input tensor. Supported dtypes: float16, float32, int32.

  • bins (int, optional) – Number of histogram bins. If specified, must be positive. Default: 100.

  • min (int, float, optional) – The lower end of the range (inclusive). Default: 0.0.

  • max (int, float, optional) – The upper end of the range (inclusive). Default: 0.0.

Returns:

Tensor, 1-D Tensor with type int32.

Supported Platforms:

Ascend CPU

Examples

>>> x = Tensor([1., 2, 1])
>>> y = ops.histc(x, bins=4, min=0.0, max=3.0)
>>> print(y)
[0 2 1 0]
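
As a check: with range \([0, 3]\) and 4 bins, each bin has width 0.75; both 1s fall into \([0.75, 1.5)\) and the 2 falls into \([1.5, 2.25)\), giving \([0, 2, 1, 0]\).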
tinyms.primitives.hsplit(input, indices_or_sections)[source]

Splits a tensor into multiple sub-tensors horizontally. It is equivalent to ops.tensor_split with \(axis=1\) .

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – See argument in mindspore.ops.tensor_split().

Returns:

A list of sub-tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(6).reshape((2, 3)).astype('float32')
>>> output = ops.hsplit(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[2, 1], dtype=Float32, value=[[ 0.00000000e+00], [ 3.00000000e+00]]),
 Tensor(shape=[2, 1], dtype=Float32, value=[[ 1.00000000e+00], [ 4.00000000e+00]]),
 Tensor(shape=[2, 1], dtype=Float32, value=[[ 2.00000000e+00], [ 5.00000000e+00]]))
tinyms.primitives.hstack(tensors)[source]

Stacks tensors in sequence horizontally. This is equivalent to concatenation along the second axis, except for 1-D tensors where it concatenates along the first axis.

Parameters:

tensors (Union[Tensor, tuple, list]) – A sequence of 1-D or 2-D tensors. The tensors must have the same shape along all but the second axis, except 1-D tensors which can be any length.

Returns:

Stacked Tensor, formed by stacking the given tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> x1 = Tensor([1, 1, 1])
>>> x2 = Tensor([2, 2, 2])
>>> output = ops.hstack((x1, x2))
>>> print(output)
[1. 1. 1. 2. 2. 2.]
tinyms.primitives.huber_loss(input, target, reduction='mean', delta=1.0)[source]

Calculates the error between the predicted value and the target value, combining the advantages of L1 loss and MSE loss.

Assuming that \(x\) and \(y\) are 1-D Tensors of length \(N\) and the reduction parameter is set to “none”, the loss of \(x\) and \(y\) is calculated without dimensionality reduction. The formula is as follows:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top\]

with

\[\begin{split}l_n = \begin{cases} 0.5 * (x_n - y_n)^2, & \text{if } |x_n - y_n| < delta; \\ delta * (|x_n - y_n| - 0.5 * delta), & \text{otherwise. } \end{cases}\end{split}\]

where \(N\) is the batch size.

If reduction is “mean” or “sum”, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{"mean";}\\ \operatorname{sum}(L), & \text{if reduction} = \text{"sum".} \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – Predicted value, Tensor of any dimension.

  • target (Tensor) – Target value, which usually has the same dtype and shape as the input. However, if the shape of target differs from the shape of input, the two must be broadcastable to each other.

  • reduction (str) – Type of reduction to be applied to loss. The optional values are “mean”, “sum” and “none”. Default: “mean”.

  • delta (Union[int, float]) – The threshold to change between two type of loss. The value must be greater than zero. Default: 1.0.

Returns:

Tensor or Scalar, if reduction is “none”, return a Tensor with same shape and dtype as input. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If input or target is not a Tensor.

  • TypeError – If dtype of delta is neither float nor int.

  • ValueError – If delta is less than or equal to 0.

  • ValueError – If reduction is not one of “none”, “mean”, “sum”.

  • ValueError – If input and target have different shapes and cannot be broadcasted to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([1, 2, 10, 2], mindspore.float32)
>>> target = Tensor([1, 5, 1, 20], mindspore.float32)
>>> output = ops.huber_loss(x, target, reduction="mean", delta=2)
>>> print(output)
13.5
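
As a check against the formulas above: the absolute errors are \([0, 3, 9, 18]\); with \(delta = 2\), the first element uses the quadratic branch (\(0.5 \cdot 0^2 = 0\)) and the rest use the linear branch (\(2 \cdot (|d| - 1)\)), giving \([0, 4, 16, 34]\) with mean \(54 / 4 = 13.5\).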
tinyms.primitives.hypot(input, other)[source]

Computes the hypotenuse of the input tensors element-wise, treating them as the legs of a right triangle. The shapes of the two inputs should be broadcastable, and their data type should be one of: float32, float64.

\[out_i = \sqrt{input_i^2 + other_i^2}\]
Parameters:
  • input (Tensor) – The first input tensor.

  • other (Tensor) – The second input tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is one with higher precision in the two inputs.

Raises:
  • TypeError – If the data type of input or other is neither float32 nor float64.

  • ValueError – If the shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([3., 5., 7.]))
>>> other = Tensor(np.array([4., 12., 24.]))
>>> y = ops.hypot(input, other)
>>> print(y)
[ 5. 13. 25.]
tinyms.primitives.i0(input)[source]

Alias for mindspore.ops.bessel_i0().

Supported Platforms:

GPU CPU
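
Since this alias entry carries no example of its own, here is a minimal sketch in the style of the other entries; the values noted in the comment are the modified Bessel function \(I_0\) evaluated at the inputs, not captured output.

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0., 1., 2.]), mindspore.float32)
>>> output = ops.i0(x)  # elementwise I0(x): approximately [1.0, 1.2661, 2.2796]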

tinyms.primitives.igamma(input, other)[source]

Calculates the lower regularized incomplete Gamma function.

If we define input as a and other as x, the lower regularized incomplete Gamma function is defined as:

\[P(a, x) = Gamma(a, x) / Gamma(a) = 1 - Q(a, x)\]

where

\[Gamma(a, x) = \int_0^x t^{a-1} \exp(-t) dt\]

is the lower incomplete Gamma function.

Above, \(Q(a, x)\) is the upper regularized incomplete Gamma function.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • input (Tensor) – The first input tensor. With type of float32 or float64.

  • other (Tensor) – The second input tensor. With float32 or float64 type. other should have the same dtype as input.

Returns:

Tensor, has the same dtype as input and other.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input or other is neither float32 nor float64.

  • TypeError – If other has different dtype with input.

  • ValueError – If input could not be broadcast to a tensor with shape of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> output = ops.igamma(a, x)
>>> print(output)
[0.593994 0.35276785 0.21486944 0.13337152]
tinyms.primitives.igammac(input, other)[source]

Calculates upper regularized incomplete Gamma function.

If we define input as a and other as x, the upper regularized incomplete Gamma function is defined as:

\[Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\]

where

\[Gamma(a, x) = \int_{x}^{\infty} t^{a-1} exp(-t) dt\]

is the upper incomplete Gamma function.

Above, \(P(a, x)\) is the lower regularized incomplete Gamma function.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • input (Tensor) – The first input tensor. With type of float32 or float64.

  • other (Tensor) – The second input tensor. With float32 or float64 type. other should have the same dtype as input.

Returns:

Tensor, has the same dtype as input and other.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input or other is neither float32 nor float64.

  • TypeError – If other has different dtype with input.

  • ValueError – If input could not be broadcast to a tensor with shape of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> output = ops.igammac(a, x)
>>> print (output)
[0.40600586 0.6472318 0.7851304 0.8666283]
tinyms.primitives.imag(input)[source]

Returns a new tensor containing the imaginary values of the input. If input is real, it returns zeros.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, the shape is the same as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.asarray(1.3 + 0.4j), mindspore.complex64)
>>> output = ops.imag(x)
>>> print(output)
0.4
tinyms.primitives.index_add(x, indices, y, axis, use_lock=True, check_index_bound=True)[source]

Adds tensor y to the specified axis and indices of Parameter x. The axis should be in [0, x.ndim - 1], and the values of indices should be in [0, x.shape[axis] - 1] along the axis dimension.

Parameters:
  • x (Parameter) – The input Parameter to add to.

  • indices (Tensor) – Add the values of x and y along the dimension of the axis according to the specified index values, with data type int32. indices must be 1-D with the same size as the size of y in the axis dimension. The values of indices should be in [0, b), where b is the size of x in the axis dimension.

  • y (Tensor) – The input tensor with the values to add. Must have the same data type as x. The shape must be the same as that of x except for the axis-th dimension.

  • axis (int) – The dimension along which to index.

  • use_lock (bool) – Whether to enable a lock to protect the updating process of variable tensors. If true, when updating the value of x, this process will be protected by a lock by using atomic operation. If false, the result may be unpredictable. Default: True.

  • check_index_bound (bool) – If true, check index boundary. If false, don’t check index boundary. Default: True.

Returns:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If x is not a Parameter.

  • TypeError – If indices or y is not a Tensor.

  • ValueError – If axis is out of x rank’s range.

  • ValueError – If x rank is not the same as y rank.

  • ValueError – If shape of indices is not 1D or size of indices is not equal to dimension of y[axis].

  • ValueError – If the shape of y is not the same as that of x except for the axis-th dimension.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> x = Parameter(Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32), name="name_x")
>>> indices = Tensor(np.array([0, 2]), mindspore.int32)
>>> y = Tensor(np.array([[0.5, 1.0], [1.0, 1.5], [2.0, 2.5]]), mindspore.float32)
>>> output = ops.index_add(x, indices, y, 1)
>>> print(output)
[[ 1.5  2.   4. ]
 [ 5.   5.   7.5]
 [ 9.   8.  11.5]]
tinyms.primitives.index_fill(x, axis, index, value)[source]

Fills elements of the input Tensor x along the axis dimension with the input value, at the positions selected in the order given in index.

Parameters:
  • x (Tensor) – Input Tensor. The supported data type is Number or Bool.

  • axis (Union[int, Tensor]) – Dimension along which to fill the input Tensor. Only supports an int number or a 0-dimensional Tensor, whose data type is int32 or int64.

  • index (Tensor) – Indices of the input Tensor to fill in. The dtype must be int32.

  • value (Union[bool, int, float, Tensor]) – Value to fill the returned Tensor. If value is a Tensor, it must be a 0-dimensional Tensor and has the same dtype as x. Otherwise, the value will be cast to a 0-dimensional Tensor with the same data type as x.

Returns:

Tensor, has the same dtype and shape as input Tensor.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If axis is neither int number nor Tensor.

  • TypeError – When axis is a Tensor, its dtype is not int32 or int64.

  • TypeError – If index is not a Tensor.

  • TypeError – If dtype of index is not int32.

  • TypeError – If value is not a bool, int, float, or Tensor.

  • TypeError – When value is a Tensor, the dtype of x and value are not the same.

  • ValueError – If axis is a Tensor and its rank is not equal to 0.

  • ValueError – If the rank of index is greater than 1.

  • ValueError – When value is a Tensor and its rank is not equal to 0.

  • RuntimeError – If the value of axis is out the range of [-x.ndim, x.ndim - 1].

  • RuntimeError – If the values of index are out the range of [-x.shape[axis], x.shape[axis]-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32))
>>> index = Tensor([0, 2], mindspore.int32)
>>> value = Tensor(-2.0, mindspore.float32)
>>> y = ops.index_fill(x, 1, index, value)
>>> print(y)
[[-2. 2. -2.]
 [-2. 5. -2.]
 [-2. 8. -2.]]
tinyms.primitives.index_select(input, axis, index)[source]

Generates a new Tensor that accesses the values of input along the specified axis dimension using the indices specified in index. The new Tensor has the same number of dimensions as input, with the size of the axis dimension being equal to the length of index, and the size of all other dimensions will be unchanged from the original input Tensor.

Note

The value of index must be in the range of [0, input.shape[axis]), the result is undefined out of range.

Parameters:
  • input (Tensor) – The input Tensor.

  • axis (int) – The dimension to be indexed.

  • index (Tensor) – A 1-D Tensor with the indices to access in input along the specified axis.

Returns:

Tensor, has the same dtype as input Tensor.

Raises:
  • TypeError – If input or index is not a Tensor.

  • TypeError – If axis is not an int.

  • ValueError – If the value of axis is out the range of [-input.ndim, input.ndim - 1].

  • ValueError – If the dimension of index is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> import numpy as np
>>> input = Tensor(np.arange(16).astype(np.float32).reshape(2, 2, 4))
>>> print(input)
[[[ 0.  1.  2.  3.]
  [ 4.  5.  6.  7.]]
 [[ 8.  9. 10. 11.]
  [12. 13. 14. 15.]]]
>>> index = Tensor([0,], mindspore.int32)
>>> y = ops.index_select(input, 1, index)
>>> print(y)
[[[ 0.  1.  2.  3.]]
 [[ 8.  9. 10. 11.]]]
tinyms.primitives.inner(input, other)[source]

Returns the inner product of two tensors.

For 1-D tensors (without complex conjugation), returns the ordinary inner product of vectors.

For higher dimensions, returns a sum product over the last axis.

Note

If input or other is a Tensor scalar, mindspore.ops.inner() will be the same as mindspore.ops.mul() .

Parameters:
  • input (Tensor) – First input.

  • other (Tensor) – Second input.

Returns:

Tensor, the result of the inner product.

Raises:

ValueError – If neither input nor other is a scalar and the last dimensions of the two input tensors do not match.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1: 2 1D tensors
>>> input = ms.Tensor([1, 2, 3], mstype.float32)
>>> y = ms.Tensor([4, 5, 6], mstype.float32)
>>> output = ops.inner(input, y)
>>> print(output)
32
>>> # case2: Tensor scalar and tensor
>>> input = ms.Tensor([[[1, 2, 3], [3, 2, 1]], [[4, 5, 6], [4, 5, 6]]], mstype.float32)
>>> y = ms.Tensor(2, mstype.float32)
>>> output = ops.inner(input, y)
>>> print(output)
[[[ 2.  4.  6.]
  [ 6.  4.  2.]]
 [[ 8. 10. 12.]
  [ 8. 10. 12.]]]
>>> # case3: Two tensors
>>> input = ms.Tensor([[[1, 2, 3], [3, 2, 1]], [[4, 5, 6], [4, 5, 6]]], mstype.float32)
>>> y = ms.Tensor([[2, 3, 4], [4, 3, 2]], mstype.float32)
>>> output = ops.inner(input, y)
>>> print(output)
[[[20. 16.]
  [16. 20.]]
 [[47. 43.]
  [47. 43.]]]
tinyms.primitives.inplace_add(x, v, indices)[source]

Adds v into specified rows of x. Computes y = x; y[i,] += v.

Note

indices refers to the left-most dimension.

Parameters:
  • x (Tensor) – The first input is a tensor whose data type is float16, float32, float64 or int32.

  • v (Tensor) – The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as indices’ size. It has the same data type with x.

  • indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to add with v. It is an integer or a tuple, whose value is in [0, the first dimension size of x).

Returns:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple whose elements are not all int.

  • ValueError – If the rank of x is not equal to the rank of v.

  • ValueError – If the length of indices is not equal to v.shape[0].

  • ValueError – If the values of indices are not in range of [0, x.shape[0]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> output = ops.inplace_add(x, input_v, indices)
>>> print(output)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
tinyms.primitives.inplace_index_add(var, indices, updates, axis)[source]

Adds Tensor updates to specified axis and indices of Tensor var element-wise.

Parameters:
  • var (Parameter) – The input Parameter to add to, with data type uint8, int8, int16, int32, float16, float32, float64.

  • indices (Tensor) – The indices along axis to perform the addition. A 1D Tensor of shape \((updates.shape[axis],)\); every value should be in range \([0, var.shape[axis])\) with data type int32.

  • updates (Tensor) – The input Tensor with the value to add. Must have same data type as var. The shape must be the same as var except the axis th dimension.

  • axis (int) – The dimension along which to index. It should be in range \([0, var.ndim)\).

Returns:

Tensor, updated result, has the same shape and dtype as var.

Raises:
  • TypeError – If var is not a Parameter.

  • TypeError – If indices or updates is not a Tensor.

  • ValueError – If axis is out of valid range.

  • ValueError – If var rank is not the same as updates rank.

  • ValueError – If shape of indices is not \((updates.shape[axis],)\).

  • ValueError – If the shape of updates is not the same as that of var except for the axis-th dimension.

Supported Platforms:

Ascend CPU

Examples

>>> var = Parameter(Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32))
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> var = ops.inplace_index_add(var, indices, updates, axis=0)
>>> print(var)
[[1.5 3. ]
 [4.  5.5]
 [5.  6. ]]
tinyms.primitives.inplace_sub(x, v, indices)[source]

Subtracts v from specified rows of x. Computes \(y = x\); \(y[i,] -= input\_v\).

Note

indices refers to the left-most dimension.

Parameters:
  • x (Tensor) – The first input is a tensor whose data type is float16, float32, float64 or int32. Tensors of arbitrary dimensions are supported.

  • v (Tensor) – The second input is a tensor that has the same dimension sizes as x except the first dimension, which must be the same as indices’ size. It has the same data type as x.

  • indices (Union[int, tuple]) – Indices into the left-most dimension of x, and determines which rows of x to subtract with v. It is an int or tuple, whose value is in [0, the first dimension size of x).

Returns:

Tensor, has the same shape and dtype as x.

Raises:
  • TypeError – If indices is neither int nor tuple.

  • TypeError – If indices is a tuple whose elements are not all int.

  • ValueError – If the rank of x is not equal to the rank of v.

  • ValueError – If the length of indices is not equal to v.shape[0].

  • ValueError – If the values of indices are not in range of [0, x.shape[0]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> input_v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> output = ops.inplace_sub(x, input_v, indices)
>>> print(output)
[[0.5 1. ]
 [2.  2.5]
 [5.  6. ]]
tinyms.primitives.inplace_update(x, v, indices)[source]

Updates specified values in x to v according to indices.

Warning

This is an experimental API that is subject to change or deletion.

Note

indices can only be indexed along the highest dimension.

Parameters:
  • x (Tensor) – A tensor which to be inplace updated. It can be one of the following data types: float32, float16 and int32.

  • v (Tensor) – A tensor with the same type as x and the same dimension size as x except the first dimension, which must be the same as the size of indices.

  • indices (Union[int, tuple[int], Tensor]) – Determines which rows of x to update with v. It is an int, a tuple of ints, or a 1-D Tensor, whose values are in [-x.shape[0], x.shape[0]). If it is a tuple or Tensor, the size of indices should be the same as the first dimension of v.

Returns:

Tensor, with the same type and shape as the input x.

Raises:
  • TypeError – If indices is neither int nor tuple nor Tensor.

  • TypeError – If indices is a tuple or Tensor, but its element is not an int.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> indices = (0, 1)
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
>>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
>>> output = ops.inplace_update(x, v, indices)
>>> print(output)
[[0.5 1. ]
 [1.  1.5]
 [5.  6. ]]
tinyms.primitives.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)[source]

Resizes the input Tensor to the given size or scale_factor using one of the interpolation algorithms.

Parameters:
  • input (Tensor) – Tensor to be resized. Input tensor must be a 3-D, 4-D, or 5-D tensor with shape \((N, C, [optional D], [optional H], W)\) , with data type of float.

  • size (Union[int, tuple[int], list[int]], optional) – The target size. If size is a tuple or list, its length must be equal to input.ndim - 2 (the number of spatial dimensions of input). One and only one of size and scale_factor can be set to None. Default: None.

  • scale_factor (Union[float, tuple[float], list[float]], optional) – The scale factor of the new size of the tensor. If scale_factor is a tuple or list, its length must be equal to input.ndim - 2. One and only one of size and scale_factor can be set to None. Default: None.

  • mode (str) – The sampling algorithm. One of ‘nearest’(3D and 4D), ‘linear’ (3D only), ‘bilinear’ (4D only), ‘bicubic’ (4D only), ‘area’, ‘nearest-exact’(3D and 4D). Default: ‘nearest’.

  • align_corners (bool) –

    If True, rescale input by \((new\_height - 1) / (height - 1)\), which exactly aligns the corners of data and resized data. If False, rescale by \(new\_height / height\).

    old_i = new_length != 1 ? new_i * (old_length - 1) / (new_length - 1) : 0   # 'align_corners' = True

    old_i = new_length > 1 ? (new_i + 0.5) * old_length / new_length - 0.5 : 0  # 'align_corners' = False
    

    This is only valid for ‘linear’, ‘bilinear’, or ‘bicubic’ modes. Default: False.

  • recompute_scale_factor (bool, optional) – Whether to recompute the scale. If True, the target size is first calculated from scale_factor, and the input is then resized to that size. If False, the value of size or scale_factor is used directly for interpolation. Default: None.

Note

The ‘nearest-exact’ mode is the same as the nearest-neighbor interpolation algorithm used in scikit-image and PIL. The ‘nearest’ mode produces the same results as the INTER_NEAREST interpolation algorithm used in OpenCV.

Args Support List and Supported Platforms:

mode          | input.dim | align_corners | scale_factor | device
------------- | --------- | ------------- | ------------ | ----------------
nearest       | 3         | -             | ×            | Ascend, GPU, CPU
nearest       | 4         | -             | ×            | Ascend, GPU, CPU
linear        | 3         | √             | ×            | GPU, CPU
bilinear      | 4         | √             | ×            | Ascend, GPU, CPU
bicubic       | 4         | √             | ×            | GPU, CPU
area          | 3         | -             | √            | Ascend, GPU, CPU
area          | 4         | -             | √            | GPU
area          | 5         | -             | √            | GPU, CPU
nearest-exact | 3         | -             | ×            | Ascend, CPU
nearest-exact | 4         | -             | ×            | Ascend, CPU

  • - indicates that there is no such parameter.

  • × indicates that this parameter is not currently supported.

  • √ indicates that this parameter is supported.

Returns:

Tensor, resized, whose dimensions and dtype are the same as input.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If both size and scale_factor are not empty.

  • ValueError – If both size and scale_factor are empty.

  • ValueError – When size is a tuple or list, its length is not equal to input.ndim - 2.

  • ValueError – When scale_factor is a tuple or list, its length is not equal to input.ndim - 2.

  • ValueError – If mode is not in the list of supported modes.

  • ValueError – If input.ndim is not in the list of supported dimensions for the corresponding mode.

  • ValueError – If size is not empty while recompute_scale_factor is not empty.

  • ValueError – If scale_factor is not in the corresponding list of supported values.

  • ValueError – If align_corners is not in the corresponding list of supported values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor([[[1, 2, 3], [4, 5, 6]]], mindspore.float32)
>>> output = ops.interpolate(input, size=(6,), mode='nearest')
>>> print(output)
[[[1. 1. 2. 2. 3. 3.]
  [4. 4. 5. 5. 6. 6.]]]
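
As a further sketch grounded in the two index formulas above (assuming a backend where ‘linear’ mode is supported; see the table):

>>> x = Tensor([[[1., 2., 3.]]], mindspore.float32)
>>> out_true = ops.interpolate(x, size=(5,), mode='linear', align_corners=True)
>>> # old_i = new_i * (3 - 1) / (5 - 1), so the values are [1., 1.5, 2., 2.5, 3.]
>>> out_false = ops.interpolate(x, size=(5,), mode='linear', align_corners=False)
>>> # old_i = (new_i + 0.5) * 3 / 5 - 0.5 (clamped at the edges), so the values
>>> # are [1., 1.4, 2., 2.6, 3.]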
tinyms.primitives.intopk(x1, x2, k)[source]

Determines whether the targets are in the top k predictions.

Parameters:
  • x1 (Tensor) – A 2-D Tensor that defines the predictions of a batch of samples, with float16 or float32 data type.

  • x2 (Tensor) – A 1-D Tensor that defines the labels of a batch of samples, with int32 data type. The size of x2 must be equal to the first dimension of x1. The values of x2 cannot be negative and must be less than the size of x1’s second dimension.

  • k (int) – Specifies the number of top elements to be used for computing precision along the last dimension.

Returns:

Tensor, 1-D with type bool and the same shape as x2. For sample i labeled in x2, if its label is among the top k predictions in x1, the value is True; otherwise it is False.

Raises:
  • TypeError – If k is not an int.

  • TypeError – If x1 or x2 is not a Tensor.

  • TypeError – If dtype of x1 is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.array([[1, 8, 5, 2, 7], [4, 9, 1, 3, 5]]), mindspore.float32)
>>> x2 = Tensor(np.array([1, 3]), mindspore.int32)
>>> output = ops.intopk(x1, x2, 3)
>>> print(output)
[ True  False]
tinyms.primitives.inv(x)[source]

Computes the reciprocal of the input tensor element-wise.

\[out_i = \frac{1}{x_{i} }\]
Parameters:

x (Tensor) – Tensor of any dimension. Must be one of the following types: float16, float32 or int32.

Returns:

Tensor, has the same type and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not one of float16, float32, int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0.25, 0.4, 0.31, 0.52]), mindspore.float32)
>>> output = ops.inv(x)
>>> print(output)
[4.        2.5       3.2258065 1.923077 ]
tinyms.primitives.inverse(input)[source]

Compute the inverse of the input matrix.

Parameters:

input (Tensor) – The matrix to invert. input must have at least two dimensions, and the last two dimensions must be of the same size.

Returns:

Tensor, has the same type and shape as input.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the size of the last two dimensions of input is not the same.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor([[1., 2.], [3., 4.]], mstype.float32)
>>> print(ops.inverse(x))
[[-2.   1. ]
 [ 1.5 -0.5]]
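
As a check: the determinant is \(1 \cdot 4 - 2 \cdot 3 = -2\), and dividing the adjugate \([[4, -2], [-3, 1]]\) by \(-2\) gives \([[-2, 1], [1.5, -0.5]]\).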
tinyms.primitives.invert(x)[source]

Flips all bits of input tensor element-wise.

\[out_i = \sim x_{i}\]
Parameters:

x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type should be one of the following types: int16, uint16.

Returns:

Tensor, has the same shape as x.

Raises:

TypeError – If dtype of x is neither int16 nor uint16.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> output = ops.invert(x)
>>> print(output)
[-26 -5 -14 -10]
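
As a check: for int16 values in two’s complement, flipping all bits of \(n\) yields \(-n - 1\), so 25 maps to -26, 4 to -5, and so on.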
tinyms.primitives.iou(anchor_boxes, gt_boxes, mode='iou')[source]

Calculates intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

\[ \begin{align}\begin{aligned}\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\\\text{IOF} = \frac{\text{Area of Overlap}}{\text{Area of Ground Truth}}\end{aligned}\end{align} \]

Warning

In Ascend, only computation of float16 data is supported. To avoid overflow, the input length and width are scaled by 0.2 internally.

Parameters:
  • anchor_boxes (Tensor) – Anchor boxes, tensor of shape \((N, 4)\) . “N” indicates the number of anchor boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be either float16, float32 or float64.

  • gt_boxes (Tensor) – Ground truth boxes, tensor of shape \((M, 4)\) . “M” indicates the number of ground truth boxes, and the value “4” refers to “x0”, “y0”, “x1”, and “y1”. Data type must be either float16, float32 or float64.

  • mode (string) – The mode is used to specify the calculation method, now supporting ‘iou’ (intersection over union) or ‘iof’ (intersection over foreground) mode. Default: ‘iou’.

Returns:

Tensor, the ‘iou’ values, tensor of shape \((M, N)\) , with the same data type as anchor_boxes.

Raises:

KeyError – When mode is not ‘iou’ or ‘iof’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
>>> mode = 'iou'
>>> output = ops.iou(anchor_boxes, gt_boxes, mode)
>>> print(output.shape)
(3, 3)
tinyms.primitives.is_complex(input)[source]

Return True if the data type of the tensor is complex, otherwise return False.

Parameters:

input (Tensor) – The input tensor.

Returns:

Bool, return whether the data type of the tensor is complex.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops, Tensor
>>> from mindspore import dtype as mstype
>>> input = Tensor([1, 1+1j, 2+2j], mstype.complex64)
>>> output = ops.is_complex(input)
>>> print(output)
True
tinyms.primitives.is_floating_point(input)[source]

Judges whether the data type of input is a floating point data type, i.e., one of mindspore.float64, mindspore.float32 or mindspore.float16.

Parameters:

input (Tensor) – The input Tensor.

Returns:

Bool. If the dtype of input is a floating point data type, return True. Otherwise, return False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = ms.Tensor([1, 2, 3], ms.float32)
>>> y = ms.Tensor([1, 2, 3], ms.int64)
>>> output = ops.is_floating_point(x)
>>> output2 = ops.is_floating_point(y)
>>> print(output)
True
>>> print(output2)
False
tinyms.primitives.is_tensor(obj)[source]

Check whether the input object is a mindspore.Tensor .

Parameters:

obj (Object) – input object.

Returns:

Bool. Return True if obj is a Tensor, otherwise, return False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> a = Tensor([1.9, 2.2, 3.1])
>>> ops.is_tensor(a)
True
tinyms.primitives.isclose(x1, x2, rtol=1e-05, atol=1e-08, equal_nan=False)[source]

Returns a new Tensor with boolean elements representing whether each element of x1 is “close” to the corresponding element of x2. Closeness is defined as:

\[|x1 - x2| \leq atol + rtol \times |x2|\]
Parameters:
  • x1 (Tensor) – First Tensor to compare, with data type belongs to float32, float16, int32.

  • x2 (Tensor) – Second Tensor to compare, with data type belongs to float32, float16, int32.

  • rtol (float, optional) – Relative tolerance. Default: 1e-05.

  • atol (float, optional) – Absolute tolerance. Default: 1e-08.

  • equal_nan (bool, optional) – If True, then two NaNs will be considered equal. Default: False.

Returns:

A bool Tensor, with the shape as broadcasted result of the input x1 and x2.

Raises:
  • TypeError – If either of x1 and x2 is not Tensor.

  • TypeError – If either of x1 and x2 is not float16, float32 or int32.

  • TypeError – If either of atol and rtol is not float.

  • TypeError – If equal_nan is not bool.

  • TypeError – If the dtype of x1 is not the same as that of x2.

  • ValueError – If x1 and x2 can not be broadcast.

  • ValueError – If either of atol and rtol is less than zero.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([1.3, 2.1, 3.2, 4.1, 5.1]), mindspore.float16)
>>> other = Tensor(np.array([1.3, 3.3, 2.3, 3.1, 5.1]), mindspore.float16)
>>> output = ops.isclose(input, other)
>>> print(output)
[ True False False False  True]
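
Because NaN is never close to anything by default, equal_nan controls whether matching NaN positions count as close. A minimal sketch following the definition above:

>>> a = Tensor(np.array([float('nan'), 1.0]), mindspore.float16)
>>> b = Tensor(np.array([float('nan'), 1.0]), mindspore.float16)
>>> print(ops.isclose(a, b))
[False  True]
>>> print(ops.isclose(a, b, equal_nan=True))
[ True  True]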
tinyms.primitives.isfinite(x)[source]

Determines which elements are finite for each position.

\[\begin{split}out_i = \begin{cases} & \text{ if } x_{i} = \text{Finite},\ \ True \\ & \text{ if } x_{i} \ne \text{Finite},\ \ False \end{cases}\end{split}\]
Parameters:

x (Tensor) – The input tensor. \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = ops.isfinite(x)
>>> print(output)
[False  True False]
tinyms.primitives.isinf(input)[source]

Determines which elements are inf or -inf for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Inf} \\ & \ False,\ \text{ if } x_{i} \ne \text{Inf} \end{cases}\end{split}\]

where \(Inf\) means infinity.

Parameters:

input (Tensor) – The input tensor. \((N, *)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = ops.isinf(x)
>>> print(output)
[False False  True]
tinyms.primitives.isnan(x)[source]

Determines which elements are NaN for each position.

\[\begin{split}out_i = \begin{cases} & \ True,\ \text{ if } x_{i} = \text{Nan} \\ & \ False,\ \text{ if } x_{i} \ne \text{Nan} \end{cases}\end{split}\]

where \(Nan\) means not a number.

Parameters:

x (Tensor) – The input tensor.

Returns:

Tensor, has the same shape of input, and the dtype is bool.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> output = ops.isnan(x)
>>> print(output)
[ True False False]
tinyms.primitives.isneginf(input)[source]

Tests element-wise for negative infinity.

Parameters:

input (Tensor) – Input Tensor.

Returns:

Tensor, true where input is negative infinity, false otherwise.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> output = ops.isneginf(Tensor([[-float("inf"), float("inf")], [1, -float("inf")]], mstype.float32))
>>> print(output)
[[ True False]
 [False  True]]
tinyms.primitives.isposinf(input)[source]

Tests element-wise for positive infinity.

Parameters:

input (Tensor) – Input values.

Returns:

Tensor, true where input is positive infinity, false otherwise.

Raises:

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> output = ops.isposinf(Tensor([[-float("inf"), float("inf")], [1, float("inf")]], mstype.float32))
>>> print(output)
[[False  True]
 [False  True]]
tinyms.primitives.isreal(input)[source]

Tests element-wise for real number. A complex value is considered real when its imaginary part is 0.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, true where input is real number, false otherwise.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import ops, Tensor
>>> from mindspore import dtype as mstype
>>> x = Tensor([1, 1+1j, 2+0j], mstype.complex64)
>>> output = ops.isreal(x)
>>> print(output)
[ True False  True]
tinyms.primitives.jacfwd(fn, grad_position=0, has_aux=False)[source]

Compute Jacobian via forward mode, corresponding to forward-mode differentiation. When number of outputs is much greater than that of inputs, it’s better to calculate Jacobian via forward mode than reverse mode to get better performance.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[int, tuple[int]], optional) – If int, get the gradient with respect to single input. If tuple, get the gradients with respect to selected inputs. ‘grad_position’ begins with 0. Default: 0.

  • has_aux (bool, optional) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs will be returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

Function, returns the Jacobian function for the input function or cell. For example, for out1, out2 = fn(*args), when has_aux is set to True, the gradient function will return outputs like (Jacobian, out2), where out2 does not contribute to the differentiation; otherwise it will return only the Jacobian.

Raises:

TypeError – If grad_position or has_aux does not belong to required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import jacfwd
>>> from mindspore import Tensor
>>> class MultipleInputsMultipleOutputsNet(nn.Cell):
...     def construct(self, x, y, z):
...         return x ** 2 + y ** 2 + z ** 2, x * y * z
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> z = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> net = MultipleInputsMultipleOutputsNet()
>>> jac, aux = jacfwd(net, grad_position=0, has_aux=True)(x, y, z)
>>> print(jac)
[[[[ 2.,  0.]
   [ 0.,  0.]]
  [[ 0.,  4.]
   [ 0.,  0.]]]
 [[[ 0.,  0.]
   [ 6.,  0.]]
  [[ 0.,  0.]
   [ 0.,  8.]]]]
>>> print(aux)
[[ 1.  4.]
 [ 9. 16.]]
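
As a check on the Jacobian layout, the output dimensions are stacked first and the input dimensions last, so for the (2, 2)-shaped first output differentiated with respect to the (2, 2)-shaped x above:

>>> print(jac.shape)
(2, 2, 2, 2)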
tinyms.primitives.jacrev(fn, grad_position=0, has_aux=False)[source]

Compute Jacobian via reverse mode, corresponding to reverse-mode differentiation. When number of inputs is much greater than that of outputs, it’s better to calculate Jacobian via reverse mode than forward mode to get better performance.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[int, tuple[int]], optional) – If int, get the gradient with respect to single input. If tuple, get the gradients with respect to selected inputs. ‘grad_position’ begins with 0. Default: 0.

  • has_aux (bool, optional) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs will be returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

Function, returns the Jacobian function for the input function or cell. For example, for out1, out2 = fn(*args), when has_aux is set to True, the gradient function will return outputs like (Jacobian, out2), where out2 does not contribute to the differentiation; otherwise it will return only the Jacobian.

Raises:

TypeError – If grad_position or has_aux does not belong to required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import jacrev
>>> from mindspore import Tensor
>>> class MultipleInputsMultipleOutputsNet(nn.Cell):
...     def construct(self, x, y, z):
...         return x ** 2 + y ** 2 + z ** 2, x * y * z
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> z = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> net = MultipleInputsMultipleOutputsNet()
>>> jac, aux = jacrev(net, grad_position=0, has_aux=True)(x, y, z)
>>> print(jac)
[[[[ 2.,  0.]
   [ 0.,  0.]]
  [[ 0.,  4.]
   [ 0.,  0.]]]
 [[[ 0.,  0.]
   [ 6.,  0.]]
  [[ 0.,  0.]
   [ 0.,  8.]]]]
>>> print(aux)
[[ 1.  4.]
 [ 9. 16.]]
tinyms.primitives.jet(fn, primals, series)[source]

This function is designed to calculate the higher-order differentiation of a given composite function. To compute the first to n-th order derivatives, the original inputs and the first to n-th order derivatives of the original inputs must be provided together. Generally, it is recommended to set the first-order derivative values to 1 and the others to 0, which corresponds to the derivative of the original input with respect to itself.

Note

If primals is Tensor of int type, it will be converted to Tensor of float type.

Parameters:
  • fn (Union[Cell, function]) – Function to do TaylorOperation.

  • primals (Union[Tensor, tuple[Tensor]]) – The inputs to fn.

  • series (Union[Tensor, tuple[Tensor]]) – If tuple, its length and type should be the same as inputs. For each Tensor, if the length of its first dimension is i, the 1st to i-th order derivatives of the output with respect to the inputs will be computed.

Returns:

Tuple, tuple of out_primals and out_series.

  • out_primals (Union[Tensor, list[Tensor]]) - The output of fn(primals).

  • out_series (Union[Tensor, list[Tensor]]) - The 1st to i-th order derivatives of the output with respect to the inputs, where i is the length of the first dimension of series.

Raises:
  • TypeError – If primals is not a tensor or tuple of tensors.

  • TypeError – If type of primals is not the same as type of series.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.sin = ops.Sin()
...         self.exp = ops.Exp()
...     def construct(self, x):
...         out1 = self.sin(x)
...         out2 = self.exp(out1)
...         return out2
>>> primals = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> series = Tensor(np.array([[[1, 1], [1, 1]], [[0, 0], [0, 0]], [[0, 0], [0, 0]]]).astype(np.float32))
>>> net = Net()
>>> out_primals, out_series = ops.jet(net, primals, series)
>>> print(out_primals, out_series)
[[2.319777  2.4825778]
 [1.1515628 0.4691642]] [[[ 1.2533808  -1.0331168 ]
  [-1.1400385  -0.3066662 ]]
 [[-1.2748207  -1.8274734 ]
  [ 0.966121    0.55551505]]
 [[-4.0515366   3.6724353 ]
  [ 0.5053504  -0.52061415]]]
tinyms.primitives.jvp(fn, inputs, v, has_aux=False)[source]

Compute the jacobian-vector-product of the given network. jvp matches forward-mode differentiation.

Parameters:
  • fn (Union[Function, Cell]) – The function or net that takes Tensor inputs and returns single Tensor or tuple of Tensors.

  • inputs (Union[Tensor, tuple[Tensor], list[Tensor]]) – The inputs to fn .

  • v (Union[Tensor, tuple[Tensor], list[Tensor]]) – The vector in jacobian-vector-product. The shape and type of v should be the same as inputs .

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs will be returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

  • net_output (Union[Tensor, tuple[Tensor]]) - The output of fn(inputs) . In particular, when has_aux is set to True, net_output is the first output of fn(inputs) .

  • jvp (Union[Tensor, tuple[Tensor]]) - The result of the jacobian-vector-product.

  • aux_value (Union[Tensor, tuple[Tensor]], optional) - Only returned when has_aux is True; it contains the second to the last outputs of fn(inputs) . In particular, aux_value does not contribute to the gradient.

Raises:

TypeError – If inputs or v does not belong to required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import jvp
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> class Net(nn.Cell):
...     def construct(self, x, y):
...         return x**3 + y
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> v = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> output = jvp(Net(), (x, y), (v, v))
>>> print(output[0])
[[ 2. 10.]
 [30. 68.]]
>>> print(output[1])
[[ 4. 13.]
 [28. 49.]]
>>>
>>> def fn(x, y):
...     return x ** 3 + y, y
>>> output, jvp_out, aux = jvp(fn, (x, y), (v, v), has_aux=True)
>>> print(output)
[[ 2. 10.]
 [30. 68.]]
>>> print(jvp_out)
[[ 4. 13.]
 [28. 49.]]
>>> print(aux)
[[ 1. 2.]
 [3. 4.]]
tinyms.primitives.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None)[source]

Generates a Kaiser window, which is also known as the Kaiser-Bessel window.

The Kaiser window is defined as

\[w(n) = \frac{I_{0}\left( \beta\sqrt{1 - \frac{4n^{2}}{(M - 1)^{2}}} \right)}{I_{0}(\beta)}\]

with

\[- \frac{M - 1}{2} \leq n \leq \frac{M - 1}{2}\]

where \(I_0\) is the modified zeroth-order Bessel function.

Parameters:
  • window_length (int) – Length of window.

  • periodic (bool, optional) – When set to True, generates a periodic window for spectral analysis. When set to False, generates a symmetric window for filter design. Default: True.

  • beta (float, optional) – Shape parameter, when beta gets large, the window narrows. Default: 12.0.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The output window data type, it must be float. Default: None.

Returns:

Tensor, a Kaiser window.

Raises:
  • TypeError – If window_length or beta is not an integer.

  • TypeError – If periodic is not a variable of Boolean type.

  • ValueError – If window_length is negative.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> window_length = 5
>>> out = ops.kaiser_window(window_length)
>>> print(out.asnumpy())
[5.27734413e-05 1.01719688e-01 7.92939834e-01 7.92939834e-01
 1.01719688e-01]
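
A symmetric window for filter design can be requested with periodic=False; the output length is still window_length. A minimal sketch checking only the shape:

>>> out = ops.kaiser_window(window_length, periodic=False)
>>> print(out.shape)
(5,)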
tinyms.primitives.kl_div(logits, labels, reduction='mean')[source]

Computes the Kullback-Leibler divergence between the logits and the labels.

For input tensors \(x\) and \(target\) with the same shape, the updating formulas of KLDivLoss algorithm are as follows,

\[L(x, target) = target \cdot (\log target - x)\]

Then,

\[\begin{split}\ell(x, target) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{batchmean}(L), & \text{if reduction} = \text{'batchmean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

where \(x\) represents logits. \(target\) represents labels. \(\ell(x, target)\) represents output.

Note

  • Currently it does not support float64 input on Ascend.

  • The output aligns with the mathematical definition of Kullback-Leibler divergence only when reduction is set to ‘batchmean’.

Parameters:
  • logits (Tensor) – The input Tensor. The data type must be float16, float32 or float64.

  • labels (Tensor) – The label Tensor which has the same shape and data type as logits.

  • reduction (str) – Specifies the reduction to be applied to the output. Its value must be one of ‘none’, ‘mean’, ‘batchmean’ or ‘sum’. Default: ‘mean’.

Returns:

Tensor or Scalar, if reduction is ‘none’, then output is a tensor and has the same shape as logits. Otherwise, it is a scalar.

Raises:
  • TypeError – If reduction is not a str.

  • TypeError – If neither logits nor labels is a Tensor.

  • TypeError – If dtype of logits or labels is not float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> logits = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> labels = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> output = mindspore.ops.kl_div(logits, labels, 'mean')
>>> print(output)
-0.23333333
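
The other reduction modes follow from the formula above, with the convention \(0 \cdot \log 0 = 0\): ‘none’ keeps the per-element losses and ‘sum’ adds them up. A sketch under that convention:

>>> print(mindspore.ops.kl_div(logits, labels, 'none'))
[ 0.  -0.7  0. ]
>>> print(mindspore.ops.kl_div(logits, labels, 'sum'))
-0.7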
tinyms.primitives.kron(x, y)[source]

Computes the Kronecker product of x and y, denoted by \(\otimes\).

If x is a \((a_{0} \times a_{1} \times ... \times a_{n})\) Tensor and y is a \((b_{0} \times b_{1} \times ... \times b_{n})\) Tensor, the result will be a \((a_{0}*b_{0} \times a_{1}*b_{1} \times ... \times a_{n}*b_{n})\) Tensor with the following entries:

\[(x \otimes y)_{k_{0},k_{1},...,k_{n}} = x_{i_{0},i_{1},...,i_{n}} * y_{j_{0},j_{1},...,j_{n}},\]

where \(k_{t} = i_{t} * b_{t} + j_{t}\) for \(0 \leq t \leq n\). If one Tensor has fewer dimensions than the other it is unsqueezed until it has the same number of dimensions.

Note

Supports real-valued and complex-valued inputs.

Parameters:
  • x (Tensor) – Input Tensor, has the shape \((r0, r1, ... , rN)\).

  • y (Tensor) – Input Tensor, has the shape \((s0, s1, ... , sN)\).

Returns:

Tensor, has the shape \((r0 * s0, r1 * s1, ... , rN * sN)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]])).astype(np.float32)
>>> y = Tensor(np.array([[-1, -2, -3], [-4, -6, -8]])).astype(np.float32)
>>> output = ops.kron(x, y)
>>> print(output)
[[  0.   0.   0.  -1.  -2.  -3.  -2.  -4.  -6.]
 [  0.   0.   0.  -4.  -6.  -8.  -8. -12. -16.]
 [ -3.  -6.  -9.  -4.  -8. -12.  -5. -10. -15.]
 [-12. -18. -24. -16. -24. -32. -20. -30. -40.]]
tinyms.primitives.l1_loss(input, target, reduction='mean')[source]

Calculate the mean absolute error between the input value and the target value.

Assuming that the \(x\) and \(y\) are 1-D Tensor, length \(N\), reduction is set to “none” , then calculate the loss of \(x\) and \(y\) without dimensionality reduction.

The formula is as follows:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad \text{with } l_n = \left| x_n - y_n \right|,\]

where \(N\) is the batch size.

If reduction is mean or sum, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – Predicted value, Tensor of any dimension.

  • target (Tensor) – Target value, usually has the same shape as the input. If input and target have different shape, make sure they can broadcast to each other.

  • reduction (str, optional) – Type of reduction to be applied to loss. The optional value is “mean”, “sum” or “none”. Default: “mean”.

Returns:

Tensor or Scalar, if reduction is “none”, return a Tensor with same shape and dtype as input. Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If target is not a Tensor.

  • ValueError – If reduction is not one of “none”, “mean” or “sum”.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> from mindspore import dtype as mstype
>>> x = ms.Tensor([[1, 2, 3], [4, 5, 6]], mstype.float32)
>>> target = ms.Tensor([[6, 5, 4], [3, 2, 1]], mstype.float32)
>>> output = ops.l1_loss(x, target, reduction="mean")
>>> print(output)
3.0
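
With reduction="none" the element-wise absolute errors \(|x_n - y_n|\) are returned unreduced; for the inputs above:

>>> print(ops.l1_loss(x, target, reduction="none"))
[[5. 3. 1.]
 [1. 3. 5.]]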
tinyms.primitives.laplace(shape, mean, lambda_param, seed=None)[source]

Generates random numbers according to the Laplace random number distribution. It is defined as:

\[\text{f}(x;\mu,\lambda) = \frac{1}{2\lambda}\exp(-\frac{|x-\mu|}{\lambda}),\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter, which specifies the location of the peak. With float32 data type.

  • lambda_param (Tensor) – The parameter used for controlling the variance of this random distribution. The variance of Laplace distribution is equal to twice the square of lambda_param. With float32 data type.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be the broadcasted shape of input shape and shapes of mean and lambda_param. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore import ops as ops
>>> shape = (2, 3)
>>> mean = Tensor(1.0, mindspore.float32)
>>> lambda_param = Tensor(1.0, mindspore.float32)
>>> output = ops.laplace(shape, mean, lambda_param, seed=5)
>>> print(output.shape)
(2, 3)
tinyms.primitives.lcm(input, other)[source]

Computes least common multiplier of input tensors element-wise. The shape of two inputs should be broadcastable, and data type of them should be one of: int32, int64

Parameters:
  • input (Tensor) – The first input tensor.

  • other (Tensor) – The second input tensor.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with the higher precision of the two inputs.

Raises:
  • TypeError – If data type of input or other is not int32 or int64.

  • ValueError – If shapes of the two inputs are not broadcastable.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([7, 8, 9]))
>>> other = Tensor(np.array([14, 6, 12]))
>>> y = ops.lcm(input, other)
>>> print(y)
[14 24 36]
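
Broadcasting works as stated above, for example against a single-element tensor (lcm(7, 3) = 21, lcm(8, 3) = 24, lcm(9, 3) = 9):

>>> z = ops.lcm(input, Tensor(np.array([3])))
>>> print(z)
[21 24  9]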
tinyms.primitives.ldexp(x, other)[source]

Multiplies input Tensor by \(2^{other}\) element-wise.

It takes two arguments, a mantissa x and an exponent other, and returns their product as a floating-point number:

\[out_{i} = x_{i} * ( 2 ^{other_{i}} )\]

Note

This function is commonly used to construct floating-point numbers from their component parts, or to scale a floating-point number by a power of two.

Parameters:
  • x (Tensor) – The input Tensor.

  • other (Tensor) – A Tensor of integers that represent exponents.

Returns:

Tensor, the output Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> x = Tensor(np.array([1.]), mindspore.float32)
>>> other = Tensor(np.array([1, 2, 3, 4]), mindspore.int32)
>>> out = ops.ldexp(x, other)
>>> print(out)
[ 2.  4.  8. 16.]
>>> x = Tensor(np.array([[1.], [2]]), mindspore.float32)
>>> other = Tensor(np.array([[1.], [2]]), mindspore.int32)
>>> out = ops.ldexp(x, other)
>>> print(out)
[[2.]
 [8.]]
tinyms.primitives.le(x, y)[source]

Computes the boolean value of \(x <= y\) element-wise.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}<=y_{i} \\ & \text{False, if } x_{i}>y_{i} \end{cases}\end{split}\]

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • x (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • y (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.le(x, y)
>>> print(output)
[ True False  True]
tinyms.primitives.leaky_relu(input, alpha=0.2)[source]

leaky_relu activation function. Elements of input that are less than 0 are multiplied by alpha .

The activation function is defined as:

\[\text{leaky_relu}(input) = \begin{cases}input, &\text{if } input \geq 0; \cr {\alpha} * input, &\text{otherwise.}\end{cases}\]

where \(\alpha\) represents the alpha parameter.

For more details, see Rectifier Nonlinearities Improve Neural Network Acoustic Models.

Parameters:
  • input (Tensor) – The input of leaky_relu is a Tensor of any dimension.

  • alpha (Union[int, float]) – Slope of the activation function when the element of input is less than 0. Default: 0.2.

Returns:

Tensor, has the same type and shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If alpha is not a float or an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> print(ops.leaky_relu(x, alpha=0.2))
[[-0.2  4.  -1.6]
 [ 2.  -1.   9. ]]
tinyms.primitives.lerp(input, end, weight)[source]

Does a linear interpolation of two tensors input and end based on a float or tensor weight.

If weight is a tensor, the shapes of three inputs need to be broadcast; If weight is a float, the shapes of input and end need to be broadcast.

\[output_{i} = input_{i} + weight_{i} * (end_{i} - input_{i})\]
Parameters:
  • input (Tensor) – The tensor with the starting points. Data type must be float16 or float32.

  • end (Tensor) – The tensor with the ending points. Data type must be the same as input.

  • weight (Union[float, Tensor]) – The weight for the interpolation formula. Must be a float or a scalar tensor with float16 or float32 data type.

Returns:

Tensor, has the same type and shape as input.

Raises:
  • TypeError – If input or end is not a tensor.

  • TypeError – If weight is neither scalar(float) nor tensor.

  • TypeError – If dtype of input or end is neither float16 nor float32.

  • TypeError – If dtype of weight is neither float16 nor float32 when it is a tensor.

  • TypeError – If input and end have different data types.

  • TypeError – If input, end and weight have different data types when weight is a tensor.

  • ValueError – If end could not be broadcast to a tensor with shape of input.

  • ValueError – If weight could not be broadcast to tensors with shapes of input and end when it is a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> end = Tensor(np.array([10., 10., 10., 10.]), mindspore.float32)
>>> output = ops.lerp(input, end, 0.5)
>>> print(output)
[5.5 6.  6.5 7. ]
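
weight may also be passed as a scalar tensor instead of a Python float, which gives the same interpolation:

>>> weight = Tensor(0.5, mindspore.float32)
>>> output = ops.lerp(input, end, weight)
>>> print(output)
[5.5 6.  6.5 7. ]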
tinyms.primitives.less(x, y)[source]

Computes the boolean value of \(x < y\) element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i}<y_{i} \\ & \text{False, if } x_{i}>=y_{i} \end{cases}\end{split}\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.less(x, y)
>>> print(output)
[False False  True]
tinyms.primitives.less_equal(input, other)[source]

Computes the boolean value of \(input <= other\) element-wise.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } input_{i}<=other_{i} \\ & \text{False, if } input_{i}>other_{i} \end{cases}\end{split}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, Number, bool]) –

    The first input is a Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, Number, bool]) – The second input, when the first input is a Tensor, the second input should be a Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> other = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.less_equal(x, other)
>>> print(output)
[ True False  True]
tinyms.primitives.lgamma(input)[source]

Computes the natural logarithm of the absolute value of the gamma function on input.

\[\text{out}_{i} = \ln \Gamma(|\text{input}_{i}|)\]
Parameters:

input (Tensor) – The input tensor. With type of float16 or float32 or float64.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.5, 3.2, 8.5]), mindspore.float32)
>>> output = ops.lgamma(x)
>>> print(output)
[0.5723649 0.8854049 9.549267 ]
tinyms.primitives.linearize(fn, inputs)[source]

Produces a linear approximation to fn using jvp() and partial evaluation. This function is mainly useful if you want to apply jvp multiple times.

Parameters:
  • fn (Union[Function, Cell]) – The function or net that takes Tensor inputs and returns single tensor or tuple of Tensors.

  • inputs (Union[Tensor, Tuple or List of Tensors]) – The inputs to fn.

Returns:

Tuple, tuple of output and jvp_fn.

  • netout (Tensor or Tuple of Tensors) - The output of “fn(inputs)”.

  • jvp_fn (Function) - The function that evaluates the Jacobian-vector product.

Raises:

TypeError – If the input is not a tensor or tuple or list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, Parameter, ops
>>> from mindspore import nn
>>> from mindspore.ops.functional import linearize
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = ops.MatMul()
...     def construct(self, x, y):
...         out = self.matmul(x, y)
...         return out
>>> x = Tensor(np.array([[1, 2, 3], [3, 4, 5]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4], [5, 6]]).astype(np.float32))
>>> v = (Tensor(np.array([[1, 1, 1], [1, 1, 1]]).astype(np.float32)),
...      Tensor(np.array([[1, 1], [1, 1], [0, 0]]).astype(np.float32)))
>>> output, jvp_fn = linearize(Net(), (x, y))
>>> print(output)
[[22. 28.]
 [40. 52.]]
>>> jvp = jvp_fn(v)
>>> print(jvp)
[[12. 15.]
 [16. 19.]]
tinyms.primitives.linspace(start, end, steps)[source]

Returns a Tensor with steps evenly spaced values in the interval from start to end (including both start and end); the length of the output Tensor is steps.

\[\begin{split}\begin{aligned} &step = (end - start)/(steps - 1)\\ &output = [start, start+step, start+2*step, ... , end] \end{aligned}\end{split}\]
Parameters:
  • start (Union[Tensor, int, float]) – Start value of interval. The tensor data type must be float32 or float64 and with shape of 0-D.

  • end (Union[Tensor, int, float]) – Last value of interval. The tensor data type must be float32 or float64 and with shape of 0-D.

  • steps (Union[Tensor, int]) – Number of ticks in the interval, inclusive of start and end. Must be positive int number or 0D int32/int64 Tensor.

Returns:

Tensor, has the same dtype as start, and the shape of \((steps)\).

Raises:
  • TypeError – If start or end is not a Tensor.

  • TypeError – If dtype of start or dtype of end is not float32 or float64.

  • ValueError – If shape of start or shape of end is not 0-D.

  • TypeError – If steps is not int or 0D int32/int64 Tensor.

  • ValueError – If steps is not positive int number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> start = Tensor(1, mindspore.float32)
>>> end = Tensor(10, mindspore.float32)
>>> steps = 5
>>> output = ops.linspace(start, end, steps)
>>> print(output)
[ 1.    3.25  5.5   7.75 10.  ]
tinyms.primitives.log(input)[source]

Returns the natural logarithm of a tensor element-wise.

\[y_i = log_e(x_i)\]

Warning

If the input value of operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

input (Tensor) – Input Tensor of any dimension. The value must be greater than 0.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64 on CPU.

  • TypeError – If dtype of input is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.log(x)
>>> print(output)
[0.        0.6931472 1.3862944]
tinyms.primitives.log10(input)[source]

Returns a new Tensor by taking the base 10 logarithm of the elements in the input Tensor.

\[y_i = log_{10}(input_i)\]

Warning

If the input value of operator log10 is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

input (Tensor) – Input Tensor of any dimension. Each element of the Tensor must be greater than 0.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64 on CPU and GPU, if dtype of input is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, 10]).astype(np.float16))
>>> output = ops.log10(x)
>>> print(output)
[0.301 0.602 1.   ]
tinyms.primitives.log1p(input)[source]

Returns the natural logarithm of one plus the input tensor element-wise.

\[out_i = {log_e}(input_i + 1)\]
Parameters:

input (Tensor) – The input tensor. With float16 or float32 data type. The value must be greater than -1. \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> output = ops.log1p(x)
>>> print(output)
[0.6931472 1.0986123 1.609438 ]
tinyms.primitives.log2(input)[source]

Returns a new Tensor by taking the base 2 logarithm of the elements in the input Tensor.

\[y_i = log_2(input_i)\]

Warning

If the input value of operator log2 is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected.

Parameters:

input (Tensor) – Input Tensor of any dimension. The value must be greater than 0.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32 or float64 on CPU and GPU, if dtype of input is not float16 or float32 on Ascend.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, 8]).astype(np.float16))
>>> output = ops.log2(x)
>>> print(output)
[1. 2. 3.]
tinyms.primitives.log_matrix_determinant(input)[source]

log_matrix_determinant is deprecated, please use matrix_solve instead.

tinyms.primitives.log_softmax(logits, axis=-1)[source]

Applies the Log Softmax function to the input tensor on the specified axis. Supposes a slice in the given axis, \(x\) for each element \(x_i\), the Log Softmax function is shown as follows:

\[\text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),\]

where \(N\) is the length of the Tensor.

Parameters:
  • logits (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

  • axis (int) – The axis to perform the Log softmax operation. Default: -1.

Returns:

Tensor, with the same type and shape as the logits.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If dtype of logits is neither float16 nor float32.

  • ValueError – If axis is not in range [-len(logits.shape), len(logits.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.log_softmax(logits)
>>> print(output)
[-4.4519143 -3.4519143 -2.4519143 -1.4519144 -0.4519144]
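
The operation is applied independently to each slice along axis, so a slice whose entries are all equal maps to \(\log(1/N)\) in every position. A small sketch with the default axis=-1:

>>> logits2 = Tensor(np.array([[2., 2.], [2., 2.]]), mindspore.float32)
>>> print(ops.log_softmax(logits2))
[[-0.6931472 -0.6931472]
 [-0.6931472 -0.6931472]]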
tinyms.primitives.log_uniform_candidate_sampler(true_classes, num_true=1, num_sampled=5, unique=True, range_max=5, seed=0)[source]

Generates random labels with a log-uniform distribution for sampled_candidates.

Randomly samples a tensor of sampled classes from the range of integers [0, range_max).

Parameters:
  • true_classes (Tensor) – The target classes. With data type of int64 and shape \((batch\_size, num\_true)\) .

  • num_true (int) – The number of target classes per training example. Default: 1.

  • num_sampled (int) – The number of classes to randomly sample. Default: 5.

  • unique (bool) – Determines whether sample with rejection. If unique is True, all sampled classes in a batch are unique. Default: True.

  • range_max (int) – The number of possible classes. When unique is True, range_max must be greater than or equal to num_sampled. Default: 5.

  • seed (int) – Random seed, must be non-negative. Default: 0.

Returns:

Tuple of 3 Tensors.

  • sampled_candidates (Tensor) - A Tensor with shape \((num\_sampled,)\) and the same type as true_classes.

  • true_expected_count (Tensor) - A Tensor with the same shape as true_classes and type float32.

  • sampled_expected_count (Tensor) - A Tensor with the same shape as sampled_candidates and type float32.

Raises:
  • TypeError – If neither num_true nor num_sampled is an int.

  • TypeError – If unique is not a bool.

  • TypeError – If neither range_max nor seed is an int.

  • TypeError – If true_classes is not a Tensor.

Supported Platforms:

Ascend CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> output1, output2, output3 = ops.log_uniform_candidate_sampler(
... Tensor(np.array([[1, 7], [0, 4], [3, 3]])), 2, 5, True, 5)
>>> print(output1, output2, output3)
[3 2 0 4 1]
[[0.92312991 0.49336370]
 [0.99248987 0.65806371]
 [0.73553443 0.73553443]]
[0.73553443 0.82625800 0.99248987 0.65806371 0.92312991]
tinyms.primitives.logaddexp(input, other)[source]

Computes the logarithm of the sum of exponentiations of the inputs.

\[out_i = log(exp(input_i) + exp(other_i))\]
Parameters:
  • input (Tensor) – Input Tensor. The dtype of input must be float.

  • other (Tensor) – Input Tensor. The dtype of other must be float. If the shape of input is not equal to the shape of other, they must be broadcastable to a common shape (which becomes the shape of the output).

Returns:

Tensor.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input or other is not float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.array([1, 2, 3]).astype(np.float16))
>>> x2 = Tensor(np.array(2).astype(np.float16))
>>> output = ops.logaddexp(x1, x2)
>>> print(output)
[2.312 2.693 3.312]
tinyms.primitives.logaddexp2(input, other)[source]

Computes the logarithm of the sum of exponentiations in base of 2 of the inputs.

\[out_i = log_2(2^{input_i} + 2^{other_i})\]
Parameters:
  • input (Tensor) – Input tensor. The dtype of input must be float.

  • other (Tensor) – Input tensor. The dtype of other must be float. If input.shape != other.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

Returns:

Tensor.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input or other is not float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.array([2, 4, 8]).astype(np.float16))
>>> x2 = Tensor(np.array([2]).astype(np.float16))
>>> output = ops.logaddexp2(x1, x2)
>>> print(output)
[3. 4.32 8.02]
tinyms.primitives.logdet(input)[source]

Calculates log determinant of one or a batch of square matrices.

Parameters:

input (Tensor) – Input Tensor of any dimension.

Returns:

Tensor, the log determinant of input. If the matrix determinant is smaller than 0, nan will be returned. If the matrix determinant is 0, -inf will be returned.

Raises:

TypeError – If dtype of input is not float32, float64, complex64 or complex128.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> a = Tensor([[[8, 9], [1, 2]], [[5, 6], [3, 4]]], mindspore.float32)
>>> output = ops.logdet(a)
>>> print(output)
[1.9459091 0.6931454]
tinyms.primitives.logical_and(input, other)[source]

Computes the “logical AND” of two tensors element-wise.

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

\[out_{i} = input_{i} \wedge other_{i}\]

Note

LogicalAnd supports broadcasting.

Parameters:
  • input (Union[Tensor, bool]) – The first input is a bool or a tensor whose data type can be implicitly converted to bool.

  • other (Union[Tensor, bool]) – The second input is a bool when the first input is a tensor or a tensor whose data type can be implicitly converted to bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> output = ops.logical_and(x, y)
>>> print(output)
[ True False False]
tinyms.primitives.logical_not(input)[source]

Computes the “logical NOT” of a tensor element-wise.

\[out_{i} = \neg input_{i}\]
Parameters:

input (Tensor) – The input tensor. \((N,*)\) where \(*\) means,any number of additional dimensions.

Returns:

Tensor, the shape is the same as the input, and the dtype is bool.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> output = ops.logical_not(x)
>>> print(output)
[False  True False]
tinyms.primitives.logical_or(input, other)[source]

Computes the “logical OR” of two tensors element-wise.

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one bool. When the inputs are two tensors, the shapes of them could be broadcast, and the data types of them must be bool. When the inputs are one tensor and one bool, the bool object could only be a constant, and the data type of the tensor must be bool.

\[out_{i} = input_{i} \vee other_{i}\]

Note

LogicalOr supports broadcasting.

Parameters:
  • input (Union[Tensor, bool]) – The first input is a bool or a tensor whose data type can be implicitly converted to bool.

  • other (Union[Tensor, bool]) – The second input is a bool when the first input is a tensor or a tensor whose data type can be implicitly converted to bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:

TypeError – If neither input nor other is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> output = ops.logical_or(x, y)
>>> print(output)
[ True  True  True]
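
As noted above, one input may be a constant bool, which broadcasts against the tensor; OR-ing with True yields True everywhere:

>>> output = ops.logical_or(x, True)
>>> print(output)
[ True  True  True]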
tinyms.primitives.logical_xor(input, other)[source]

Computes the “logical XOR” of two tensors element-wise.

\[out_{i} = input_{i} \oplus other_{i}\]
Parameters:
  • input (Tensor) – The first input is a tensor whose data type can be implicitly converted to bool.

  • other (Tensor) – The second input is a tensor whose data type can be implicitly converted to bool to compute XOR with the first input.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:
  • TypeError – If neither input nor other is a Tensor whose data type is bool.

  • ValueError – If the shape of two inputs cannot be broadcast.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> output = ops.logical_xor(x, y)
>>> print(output)
[False  True  True]
tinyms.primitives.logit(input, eps=None)[source]

Calculate the logit of a tensor element-wise. When eps is not None, elements of input are clamped to [eps, 1-eps]. When eps is None, the input is not clamped.

\[\begin{split}\begin{align} y_{i} & = \ln(\frac{z_{i}}{1 - z_{i}}) \\ z_{i} & = \begin{cases} input_{i} & \text{if eps is None} \\ \text{eps} & \text{if } input_{i} \lt \text{eps} \\ input_{i} & \text{if } \text{eps} \leq input_{i} \leq 1 - \text{eps} \\ 1 - \text{eps} & \text{if } input_{i} \gt 1 - \text{eps} \end{cases} \end{align}\end{split}\]
Parameters:
  • input (Tensor) – The input tensor.

  • eps (float, optional) – The epsilon. If eps is not None, the input clamp bound is defined as [eps, 1-eps]; otherwise, the input is not clamped. Default: None.

Returns:

Tensor, with the same shape and dtype as the input.

Raises:
  • TypeError – If eps is not a float.

  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.1, 0.2, 0.3]).astype(np.float32))
>>> output = ops.logit(x, eps=1e-5)
>>> print(output)
[-2.1972246 -1.3862944 -0.8472978]
tinyms.primitives.logsigmoid(x)[source]

Applies logsigmoid activation element-wise. The input is a Tensor with any valid shape.

Logsigmoid is defined as:

\[\text{logsigmoid}(x_{i}) = log(\frac{1}{1 + \exp(-x_i)}),\]

where \(x_{i}\) is the element of the input.

Parameters:

x (Tensor) – The input of LogSigmoid with data type of float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = ops.logsigmoid(x)
>>> print(output)
[-0.31326166 -0.12692806 -0.04858734]
tinyms.primitives.logspace(start, end, steps, base=10, *, dtype=mindspore.float32)[source]

Returns a Tensor whose value is evenly spaced on a logarithmic scale.

\[\begin{split}\begin{aligned} &step = (end - start)/(steps - 1)\\ &output = [base^{start}, base^{start + 1 * step}, ... , base^{start + (steps-2) * step}, base^{end}] \end{aligned}\end{split}\]

Note

  • Input base must be an integer.

Parameters:
  • start (Union[float, Tensor]) – Start value of interval.

  • end (Union[float, Tensor]) – End value of interval.

  • steps (int) – The steps must be a non-negative integer.

  • base (int, optional) – The base must be a non-negative integer. Default: 10.

  • dtype (mindspore.dtype, optional) – The dtype of output. Default: mstype.float32.

Returns:

Tensor with shape \((steps,)\). Its data type is set by the attr ‘dtype’.

Raises:
  • TypeError – If start is not a float or a Tensor.

  • TypeError – If end is not a float or a Tensor.

  • TypeError – If steps is not an int.

  • TypeError – If base is not an int.

  • ValueError – If steps is not a non-negative integer.

  • ValueError – If base is not a non-negative integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> start = Tensor(1, mindspore.float32)
>>> end = Tensor(10, mindspore.float32)
>>> output = ops.logspace(start, end, steps = 10, base = 10, dtype=mstype.float32)
>>> print(output)
[1.e+01 1.e+02 1.e+03 1.e+04 1.e+05 1.e+06 1.e+07 1.e+08 1.e+09 1.e+10]
tinyms.primitives.logsumexp(input, axis, keep_dims=False)[source]

Reduces a dimension of a tensor by calculating the exponential of all elements in the dimension, then calculating the logarithm of the sum.

\[logsumexp(input) = \log(\sum(e^{input-input_{max}})) + input_{max}\]
Parameters:
  • input (Tensor) – The input tensor. With float16 or float32 data type.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

  • keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False.

Returns:

Tensor, has the same dtype as the input.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.logsumexp(x, 1, keep_dims=True)
>>> print(output.shape)
(3, 1, 5, 6)
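
A concrete value check against the identity above: for x = [1, 2, 3] reduced over axis 0, the result is \(3 + \log(1 + e^{-1} + e^{-2}) \approx 3.407606\) (up to float32 rounding):

>>> y = Tensor(np.array([1., 2., 3.]), mindspore.float32)
>>> print(ops.logsumexp(y, 0))
3.407606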
tinyms.primitives.lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False)[source]

Applying 1D LPPooling operation on an input Tensor can be regarded as forming a 1D input plane.

Typically the input is of shape \((N, C, L_{in})\) or \((C, L_{in})\), the output is of shape \((N, C, L_{out})\) or \((C, L_{out})\).

\[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor\]

The operation is as follows.

\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]
Parameters:
  • x (Tensor) – Tensor of shape \((N, C, L_{in})\) or \((C, L_{in})\).

  • norm_type (Union[int, float]) –

    Type of normalization, represents p in the formula, can not be 0,

    • if p = 1, the result obtained is the sum of the elements in the pooling kernel (proportional to average pooling).

    • if p = \(\infty\), the result is the result of maximum pooling.

  • kernel_size (int) – The size of kernel window.

  • stride (int) – The distance the kernel moves, an int number that represents the width of the movement. If the value is None, the default value kernel_size is used.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Returns:

  • output (Tensor) - LPPool1d result, with shape \((N, C, L_{out})\) or \((C, L_{out})\). It has the same data type as x, where

    \[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor\]

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If kernel_size or stride is not an int.

  • TypeError – If ceil_mode is not a bool.

  • TypeError – If norm_type is neither float nor int.

  • ValueError – If norm_type is equal to 0.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If length of shape of x is not equal to 2 or 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
>>> out = ops.lp_pool1d(x, norm_type=1, kernel_size=3, stride=1, ceil_mode=False)
>>> print(out)
[[[ 3.  6.]
  [15. 18.]
  [27. 30.]]
 [[39. 42.]
  [51. 54.]
  [63. 66.]]]
tinyms.primitives.lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False)[source]

Applying 2D LPPooling operation on an input Tensor can be regarded as forming a 2D input plane.

Typically the input is of shape \((N, C, H_{in}, W_{in})\) and the output is of shape \((N, C, H_{out}, W_{out})\). The operation is as follows.

\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]
Parameters:
  • x (Tensor) – Tensor of shape \((N, C, H_{in}, W_{in})\).

  • norm_type (Union[int, float]) –

    Type of normalization, represents p in the formula, can not be 0,

    • if p = 1, the result obtained is the sum of the elements in the pooling kernel (proportional to average pooling).

    • if p = \(\infty\), the result is the result of maximum pooling.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel window. The data type of kernel_size must be int and the value represents the height and width, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively, if the value is None, the default value kernel_size is used.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Returns:

  • output (Tensor) - LPPool2d result, with shape \((N, C, H_{out}, W_{out})\). It has the same data type as x, where

    \[H_{out} = \left\lfloor\frac{H_{in} - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
    \[W_{out} = \left\lfloor\frac{W_{in} - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If kernel_size or stride is neither int nor tuple.

  • TypeError – If ceil_mode is not a bool.

  • TypeError – If norm_type is neither float nor int.

  • ValueError – If norm_type is equal to 0.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
>>> out = ops.lp_pool2d(x, norm_type=1, kernel_size=3, stride=1, ceil_mode=False)
>>> print(out)
[[[[  54.   63.   72.]
   [  99.  108.  117.]]
  [[ 234.  243.  252.]
   [ 279.  288.  297.]]
  [[ 414.  423.  432.]
   [ 459.  468.  477.]]]
 [[[ 594.  603.  612.]
   [ 639.  648.  657.]]
  [[ 774.  783.  792.]
   [ 819.  828.  837.]]
  [[ 954.  963.  972.]
   [ 999. 1008. 1017.]]]]
tinyms.primitives.lrn(x, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5, norm_region='ACROSS_CHANNELS')[source]

Local Response Normalization.

\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]

where \(a_{c}\) indicates the value of the pixel corresponding to \(c\) in the feature map, \(n/2\) indicates depth_radius, \(k\) indicates bias, \(\alpha\) indicates alpha, and \(\beta\) indicates beta.

Parameters:
  • depth_radius (int) – Half-width of the 1-D normalization window with the shape of 0-D. Default: 5.

  • bias (float) – An offset (usually positive to avoid dividing by 0). Default: 1.0.

  • alpha (float) – A scale factor, usually positive. Default: 1.0.

  • beta (float) – An exponent. Default: 0.5.

  • norm_region (str) – Specifies normalization region. Options: “ACROSS_CHANNELS”. Default: “ACROSS_CHANNELS”.

  • x (Tensor) – A 4-D Tensor with float16 or float32 data type.

Returns:

Tensor, with the same shape and data type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> output = ops.lrn(input_x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
tinyms.primitives.lstsq(input, A)[source]

Computes the solutions of the least squares and minimum norm problems of a full-rank matrix \(x\) of size \((m \times n)\) and a matrix \(a\) of size \((m \times k)\).

If \(m \geq n\), lstsq solves the least-squares problem:

\[\begin{array}{ll} \min_y & \|xy-a\|_2. \end{array}\]

If \(m < n\), lstsq solves the least-norm problem:

\[\begin{array}{llll} \min_y & \|y\|_2 & \text{subject to} & xy = a. \end{array}\]

where y is the returned tensor.

Parameters:
  • input (Tensor) – The \((m \times n)\) matrix equivalent to \(x\) in above. The input tensor whose data type is float16, float32 or float64.

  • A (Tensor) – The \((m \times k)\) matrix equivalent to \(a\) in above. The input tensor whose data type is float16, float32 or float64.

Returns:

Tensor, the least squares or minimum norm problems solution, which has shape \((n \times k)\). The data type is the same with input.

Raises:
  • TypeError – If input or A is not a Tensor.

  • TypeError – If dtype of input or A is not one of: float16, float32, float64.

  • TypeError – If the dtypes of input and A are not the same.

  • ValueError – If the dimension of input is not equal to 2.

  • ValueError – If the dimension of A is not equal to 2 or 1.

  • ValueError – If the length of input_dims[0] is not equal to the length of A_dims[0].

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[2, 1, 5], [3, 5, 1], [1, 1, 1]]), mindspore.float32)
>>> a = Tensor(np.array([[10, 5], [15, 8], [7, 4]]), mindspore.float32)
>>> output = ops.lstsq(x, a)
>>> print(output)
[[17.000002  11.000002 ]
 [-6.5000005 -4.500001 ]
 [-3.500002  -2.5000017]]
tinyms.primitives.lt(input, other)[source]

Alias for mindspore.ops.less() .

Supported Platforms:

Ascend GPU CPU
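
A minimal usage sketch of the alias (assuming the usual imports; lt(input, other) computes input < other element-wise):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> other = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> output = ops.lt(input, other)
>>> print(output)
[False False  True]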

tinyms.primitives.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True)[source]

Converts LU_data and LU_pivots back into P, L and U matrices, where P is a permutation matrix, L is a lower triangular matrix, and U is an upper triangular matrix. Typically, LU_data and LU_pivots are generated from the LU decomposition of a matrix.

Parameters:
  • LU_data (Tensor) – The packed LU factorization data. A Tensor of shape \((*, M, N)\), where * is batch dimensions. The dim of LU_data must be equal to or greater than 2.

  • LU_pivots (Tensor) – The packed LU factorization pivots. A Tensor of shape \((*, min(M, N))\), where * is batch dimensions, with data type int8, uint8, int16, int32, int64.

  • unpack_data (bool, optional) – A flag indicating if the LU_data should be unpacked. If False, then the returned L and U are None. Default: True.

  • unpack_pivots (bool, optional) – A flag indicating if the LU_pivots should be unpacked into a permutation matrix P. If False, then the returned P is None. Default: True.

Returns:

  • pivots (Tensor) - The permutation matrix of the LU factorization. The shape is \((*, M, M)\), and the dtype is the same as LU_data.

  • L (Tensor) - The L matrix of the LU factorization. The dtype is the same as LU_data.

  • U (Tensor) - The U matrix of the LU factorization. The dtype is the same as LU_data.

Raises:
  • TypeError – If the dtype of LU_data is int, uint or float.

  • TypeError – If the dtype of LU_pivots is not one of the following: int8, uint8, int16, int32, int64.

  • ValueError – If the dimension of LU_data is less than 2.

  • ValueError – If the dimension of LU_pivots is less than 1.

  • ValueError – If the size of the last dimension of LU_pivots is not equal to the minimum of the sizes of the last two dimensions of LU_data.

  • ValueError – If the batch dimensions of LU_data do not match the batch dimensions of LU_pivots.

  • ValueError – On the CPU platform, if the values of LU_pivots are out of the range \([1, LU_data.shape[-2])\).

  • RuntimeError – On the Ascend platform, if the values of LU_pivots are out of the range \([1, LU_data.shape[-2])\).

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> LU_data = Tensor(np.array([[[-0.3806, -0.4872,  0.5536],
...                             [-0.1287,  0.6508, -0.2396],
...                             [ 0.2583,  0.5239,  0.6902]],
...                            [[ 0.6706, -1.1782,  0.4574],
...                             [-0.6401, -0.4779,  0.6701],
...                             [ 0.1015, -0.5363,  0.6165]]]), mstype.float64)
>>> LU_pivots = Tensor(np.array([[1, 3, 3],
...                              [2, 3, 3]]), mstype.int32)
>>> pivots, L, U = ops.lu_unpack(LU_data, LU_pivots)
>>> print(pivots)
[[[1. 0. 0.]
  [0. 0. 1.]
  [0. 1. 0.]]
 [[0. 0. 1.]
  [1. 0. 0.]
  [0. 1. 0.]]]
>>> print(L)
[[[ 1.       0.       0.]
  [-0.1287   1.       0.]
  [ 0.2583   0.5239   1.]]
 [[ 1.0000   0.       0.]
  [-0.6401   1.       0.]
  [ 0.1015  -0.5363   1.]]]
>>> print(U)
[[[-0.3806  -0.4872   0.5536]
  [ 0.       0.6508  -0.2396]
  [ 0.       0.       0.6902]]
 [[ 0.6706  -1.1782   0.4574]
  [ 0.      -0.4779   0.6701]
  [ 0.       0.       0.6165]]]
tinyms.primitives.make_row_tensor(indices, values, dense_shape)[source]

Calls make_row_tensor_inner in this function.

tinyms.primitives.make_sparse_tensor(indices, values, dense_shape)[source]

Calls make_coo_tensor in this function.

tinyms.primitives.margin_ranking_loss(input1, input2, target, margin=0.0, reduction='mean')[source]

MarginRankingLoss creates a criterion that measures the ranking loss between two tensors given a target.

For details, please refer to mindspore.nn.MarginRankingLoss.
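
For reference, the commonly cited element-wise definition of this criterion (a sketch stated here for orientation, not restated from the API above) is:

\[loss(x1, x2, y) = \max(0, -y * (x1 - x2) + margin)\]

where x1 and x2 correspond to input1 and input2, y is target taking values 1 or -1, and reduction then averages or sums the element-wise losses.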

tinyms.primitives.masked_fill(input_x, mask, value)[source]

Fills elements of Tensor with value where mask is True. The shapes of input_x and mask need to be the same or broadcastable.

Parameters:
  • input_x (Tensor) – The source Tensor whose data type is one of bool, uint8, int8, int16, int32, int64, float16, float32, float64, complex64, complex128.

  • mask (Tensor[bool]) – The boolean mask.

  • value (Union[float, Tensor]) – The value to fill in with, whose dtype must be the same as that of input_x.

Returns:

Tensor, has the same type and shape as input_x.

Raises:
  • TypeError – If dtype of mask is not bool.

  • TypeError – If input_x or mask is not a Tensor.

  • ValueError – If the shapes of input_x and mask cannot be broadcast.

  • TypeError – If dtype of input_x or value is not one of bool, uint8, int8, int16, int32, int64, float16, float32, float64, complex64, complex128.

  • TypeError – If dtype of value is different from that of input_x.

  • TypeError – If value is neither float number nor Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
>>> mask = Tensor(np.array([True, True, False, True]), mindspore.bool_)
>>> output = ops.masked_fill(input_x, mask, 0.5)
>>> print(output)
[0.5 0.5 3.  0.5]
tinyms.primitives.masked_select(input, mask)[source]

Returns a new 1-D Tensor which indexes the input tensor according to the boolean mask. The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • mask (Tensor[bool]) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

Returns:

A 1-D Tensor, with the same type as input.

Raises:
  • TypeError – If input or mask is not a Tensor.

  • TypeError – If dtype of mask is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> mask = Tensor(np.array([1, 0, 1, 0]), mindspore.bool_)
>>> output = ops.masked_select(x, mask)
>>> print(output)
[1 3]
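
Since the shapes only need to be broadcastable, a lower-rank mask also works; a minimal sketch:

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.int64)
>>> mask = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> output = ops.masked_select(x, mask)
>>> print(output)
[1 3 4 6]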
tinyms.primitives.matmul(input, other)[source]

Returns the matrix product of two tensors.

Note

Numpy arguments out, casting, order, subok, signature, and extobj are not supported. On both GPU and CPU, the supported dtypes are np.float16 and np.float32.

Parameters:
  • input (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

  • other (Tensor) – Input tensor, scalar not allowed. The last dimension of input must be the same size as the second last dimension of other. And the shape of input and other could be broadcast.

Returns:

Tensor or scalar, the matrix product of the inputs. This is a scalar only when both input and other are 1-D vectors.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of other, or if a scalar value is passed in.

  • ValueError – If the shape of input and other could not broadcast together.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1 : Reasonable application of broadcast mechanism
>>> input = Tensor(np.arange(2*3*4).reshape(2, 3, 4), mindspore.float32)
>>> other = Tensor(np.arange(4*5).reshape(4, 5), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[[[  70.   76.   82.   88.   94.]
  [ 190.  212.  234.  256.  278.]
  [ 310.  348.  386.  424.  462.]]
 [[ 430.  484.  538.  592.  646.]
  [ 550.  620.  690.  760.  830.]
  [ 670.  756.  842.  928. 1014.]]]
>>> print(output.shape)
(2, 3, 5)
>>> # case 2 : the rank of `input` is 1
>>> input = Tensor(np.ones([1, 2]), mindspore.float32)
>>> other = Tensor(np.ones([2,]), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
[2.]
>>> print(output.shape)
(1,)
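
As noted in the Returns section, the result is a scalar only when both inputs are 1-D; a minimal sketch (the dot product 1*4 + 2*5 + 3*6 = 32):

>>> # case 3 : both inputs are 1-D vectors
>>> input = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> other = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> output = ops.matmul(input, other)
>>> print(output)
32.0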
tinyms.primitives.matrix_band_part(x, lower, upper)[source]

Copies a tensor, setting everything outside a central band in each innermost matrix to zero.

Parameters:
  • x (Tensor) – Input tensor. \((*, m, n)\) where \(*\) means, any number of additional dimensions. The data type must be float16, float32, float64, int32 or int64.

  • lower (Union[int, Tensor]) – Number of subdiagonals to keep. The data type must be int32 or int64. If negative, keep entire lower triangle.

  • upper (Union[int, Tensor]) – Number of superdiagonals to keep. The data type must be int32 or int64. If negative, keep entire upper triangle.

Returns:

Tensor, has the same type and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is not one of float16, float32, float64, int32 or int64.

  • TypeError – If lower is neither a number nor a Tensor.

  • TypeError – If upper is neither a number nor a Tensor.

  • TypeError – If dtype of lower is neither int32 nor int64.

  • TypeError – If dtype of upper is neither int32 nor int64.

  • ValueError – If the rank of x is less than 2.

  • ValueError – If the shape of lower is not equal to 0D.

  • ValueError – If the shape of upper is not equal to 0D.

Supported Platforms:

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones([2, 4, 4]).astype(np.float32))
>>> output = ops.matrix_band_part(x, 2, 1)
>>> print(output)
[[[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]
 [[1. 1. 0. 0.]
  [1. 1. 1. 0.]
  [1. 1. 1. 1.]
  [0. 1. 1. 1.]]]
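
A sketch of the negative-bound behaviour described above: lower = -1 keeps the entire lower triangle, while upper = 0 keeps nothing above the main diagonal.

>>> x = Tensor(np.ones([4, 4]).astype(np.float32))
>>> output = ops.matrix_band_part(x, -1, 0)
>>> print(output)
[[1. 0. 0. 0.]
 [1. 1. 0. 0.]
 [1. 1. 1. 0.]
 [1. 1. 1. 1.]]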
tinyms.primitives.matrix_determinant(input)[source]

matrix_determinant is deprecated, please use det instead.
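
A minimal migration sketch (assuming the usual imports; the determinant of [[1, 2], [3, 4]] is 1*4 - 2*3 = -2):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[1.0, 2.0], [3.0, 4.0]]), mindspore.float32)
>>> output = ops.det(input)
>>> print(output)
-2.0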

tinyms.primitives.matrix_diag(x, k=0, num_rows=-1, num_cols=-1, padding_value=0, align='RIGHT_LEFT')[source]

Returns a Tensor with the contents in x as the k[0]-th to k[1]-th diagonals of a matrix, with everything else padded with padding_value. num_rows and num_cols specify the dimensions of the innermost matrix of the output. If neither is specified, the op assumes the innermost matrix of the output Tensor is square and infers its size from k and the innermost dimension of x. If only one of num_rows and num_cols is specified, the operator derives the smallest legal value for the other dimension of the output. Moreover, when only one diagonal is given (k is an integer or k[0] == k[1]), the dimensions of x up to the second innermost form the batch size. Otherwise, the second innermost dimension is not part of the batch size.

Parameters:
  • x (Tensor) – The diagonal Tensor.

  • k (Union[int, Tensor], optional) – Diagonal offsets. A Tensor of type int32. Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. The value must be in the range of the given or derived num_rows and num_cols, meaning the value of k must be in (-num_rows, num_cols). Default: 0.

  • num_rows (Union[int, Tensor], optional) – The number of rows of the output Tensor. A Tensor of type int32 with only one value. If num_rows is -1, the innermost matrix of the output Tensor is a square matrix, and the real number of rows is derived from the other inputs, that is, \(num\_rows = x.shape[-1] - min(k[1], 0)\). Otherwise, the value must be equal to or greater than \(x.shape[-1] - min(k[1], 0)\). Default: -1.

  • num_cols (Union[int, Tensor], optional) – The number of columns of the output Tensor. A Tensor of type int32 with only one value. If num_cols is -1, the innermost matrix of the output Tensor is a square matrix, and the real number of columns is derived from the other inputs, that is, \(num\_cols = x.shape[-1] + max(k[0], 0)\). Otherwise, the value must be equal to or greater than \(x.shape[-1] + max(k[0], 0)\). Default: -1.

  • padding_value (Union[int, float, Tensor], optional) – The number to fill the area outside the specified diagonal band. A Tensor with only one value. Have the same dtype as x. Default: 0.

  • align (str, optional) –

    specifies how superdiagonals and subdiagonals should be aligned. Supported values:”RIGHT_LEFT”, “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Default: “RIGHT_LEFT”.

    • When set to “RIGHT_LEFT”, the alignment of superdiagonals will be towards the right side (padding the row on the left), while subdiagonals will be towards the left side (padding the row on the right)

    • When set to “LEFT_RIGHT”, the alignment of superdiagonals will be towards the left side (padding the row on the right), while subdiagonals will be towards the right side (padding the row on the left)

    • When set to “LEFT_LEFT”, the alignment of both superdiagonals and subdiagonals will be towards the left side(padding the row on the right).

    • When set to “RIGHT_RIGHT”, the alignment of both superdiagonals and subdiagonals will be towards the right side(padding the row on the left).

Returns:

A Tensor. Has the same type as x. Suppose x has r dimensions with shape \((I, J, ..., M, N)\) . The output Tensor has rank r + 1 with shape \((I, J, ..., M, num_rows, num_cols)\) when only one diagonal is given (k is an integer or k[0] == k[1]). Otherwise, it has rank r with shape \((I, J, ..., num_rows, num_cols)\) .

Raises:
  • TypeError – If x is not Tensor.

  • TypeError – If input x and padding_value are not the same dtype.

  • TypeError – If k, num_rows or num_cols is not int32 dtype.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If rank of num_rows, num_cols or padding_value is not equal to 0.

  • ValueError – If size of k is not equal to 1 or 2.

  • ValueError – If the value of k is not in (-num_rows, num_cols).

  • ValueError – If k[1] is less than k[0] when k[0] != k[1].

  • ValueError – If the rank of x is less than 1 when k is an integer or k[0] == k[1].

  • ValueError – If the rank of x is less than 2 when k[0] != k[1].

  • ValueError – If x.shape[-2] is not equal to k[1] - k[0] + 1 when k[0] != k[1].

  • ValueError – If num_rows and num_cols do not match the dimensions of x and the values of k.

  • ValueError – If align is not a string or not in the valid set of values.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> x = Tensor(np.array([[8, 9, 0],
...                      [1, 2, 3],
...                      [0, 4, 5]]), mindspore.float32)
>>> k = Tensor(np.array([-1, 1]), mindspore.int32)
>>> num_rows = Tensor(np.array(3), mindspore.int32)
>>> num_cols = Tensor(np.array(3), mindspore.int32)
>>> padding_value = Tensor(np.array(11), mindspore.float32)
>>> output = ops.matrix_diag(x, k, num_rows, num_cols, padding_value, align='LEFT_RIGHT')
>>> print(output)
[[ 1.  8. 11.]
 [ 4.  2.  9.]
 [11.  5.  3.]]
>>> print(output.shape)
(3, 3)
tinyms.primitives.matrix_diag_part(x, k=0, padding_value=0, align='RIGHT_LEFT')[source]

Returns the diagonal part of the input tensor, i.e. a tensor with the k[0]-th to k[1]-th diagonals of x. Some diagonals are shorter than max_diag_len and need to be padded. Inputs k and padding_value must be const Tensors in Graph mode.

Parameters:
  • x (Tensor) – The input Tensor with rank r, where r >= 2.

  • k (Union[int, Tensor], optional) – A Tensor of type int32. Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. The value of k is restricted: it must be in (-x.shape[-2], x.shape[-1]). Default: 0.

  • padding_value (Union[int, float, Tensor], optional) – A Tensor with only one value. Have the same dtype as x. The number to fill the area outside the specified diagonal band. Default: 0.

  • align (str, optional) – An optional string from: “RIGHT_LEFT”(default), “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row).

Returns:

A Tensor. Has the same type as x. Assume x has r dimensions \((I, J, ..., L, M, N)\) . Let max_diag_len be the maximum length among all diagonals to be extracted, \(max\_diag\_len = min(M + min(k[1], 0), N + min(-k[0], 0))\) Let num_diags be the number of diagonals to extract, \(num\_diags = k[1] - k[0] + 1\). If \(num\_diags == 1\), the output tensor is of rank r - 1 with shape \((I, J, ..., L, max\_diag\_len)\) Otherwise, the output tensor has rank r with dimensions \((I, J, ..., L, num\_diags, max\_diag\_len)\) .

Raises:
  • TypeError – If x is not Tensor.

  • TypeError – If input x and padding_value are not the same dtype.

  • TypeError – If k is not int32 dtype.

  • ValueError – If align is not a string or not in the valid range.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If rank of padding_value is not equal to 0.

  • ValueError – If the rank of x is less than 2.

  • ValueError – If the size of k is not equal to 1 or 2.

  • ValueError – If k[1] is less than k[0] when the size of k is 2.

  • ValueError – If the value of k is not in (-x.shape[-2], x.shape[-1]).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[1, 2, 3, 4],
...                      [5, 6, 7, 8],
...                      [9, 8, 7, 6]]), mindspore.float32)
>>> k = Tensor(np.array([1, 3]), mindspore.int32)
>>> padding_value = Tensor(np.array(9), mindspore.float32)
>>> output = ops.matrix_diag_part(x, k, padding_value, align='RIGHT_LEFT')
>>> print(output)
[[9. 9. 4.]
 [9. 3. 8.]
 [2. 7. 6.]]
>>> print(output.shape)
(3, 3)
tinyms.primitives.matrix_exp(input)[source]

Computes the exponential of a single or a batch of square matrices.

\[matrix\_exp(x) = \sum_{k=0}^{\infty} \frac{1}{k !} x^{k} \in \mathbb{K}^{n \times n}\]

where \(x\) corresponds to input .

Parameters:

input (Tensor) – The shape of tensor is \((*, n, n)\) where * is zero or more batch dimensions. Must be one of the following types: float16, float32, float64, complex64, complex128.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If the dtype of input is not one of the following dtype: float16, float32, float64, complex64, complex128.

  • ValueError – If the rank of input is less than 2.

  • ValueError – If the size of last two dimensions of input are not equal.

Supported Platforms:

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[1, 2], [0, 1]]), mindspore.float32)
>>> output = ops.matrix_exp(input)
>>> print(output)
[[2.7182817 5.436563 ]
 [0.        2.7182817]]
tinyms.primitives.matrix_power(input, n)[source]

Raises a square matrix to the (integer) power n .

  • When \(n=0\) , returns the identity matrix, which has the same shape as input .

  • When \(n<0\) and input is invertible, returns the inverse of input to the power of \(-n\) .

Parameters:
  • input (Tensor) – A 3-D Tensor. Supported data types are float16 and float32. The shape is \((b, m, m)\), representing b square matrices of size \(m \times m\).

  • n (int) – The exponent, a required int.

Returns:

A 3-D Tensor. Data type and shape are the same as those of input.

Raises:
  • TypeError – If the data type of n is not int.

  • TypeError – If the data type of input is neither float32 nor float16.

  • TypeError – If input is not a Tensor.

  • ValueError – If input is not a 3-D tensor.

  • ValueError – If shape[1] and shape[2] of input are not the same.

  • ValueError – If n is negative and input contains singular matrices.

Supported Platforms:

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> input = Tensor([[[0, 1], [-1, 0]], [[1, 0], [0, -1]]], dtype=ms.float32)
>>> y = ops.matrix_power(input, 2)
>>> print(y)
[[[-1.  0.]
  [-0. -1.]]
 [[ 1.  0.]
  [ 0.  1.]]]
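
A sketch of the \(n=0\) case described above, reusing the same input (every matrix in the batch becomes the identity):

>>> y = ops.matrix_power(input, 0)
>>> print(y)
[[[1. 0.]
  [0. 1.]]
 [[1. 0.]
  [0. 1.]]]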
tinyms.primitives.matrix_set_diag(x, diagonal, k=0, align='RIGHT_LEFT')[source]

Returns a batched matrix tensor with new batched diagonal values. Given x and diagonal, this operation returns a tensor with the same shape and values as x, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal. Some diagonals are shorter than max_diag_len and need to be padded. The diagonal \(shape[-2]\) must be equal to num_diags calculated by \(k[1] - k[0] + 1\). The diagonal \(shape[-1]\) must be equal to the longest diagonal value max_diag_len calculated by \(min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\). Let x have r + 1 dimensions \((I, J, ..., L, M, N)\) . The diagonal tensor has rank r with shape \((I, J, ..., L, max\_diag\_len)\) when k is an integer or \(k[0] == k[1]\). Otherwise, it has rank r + 1 with shape \((I, J, ... L, num\_diags, max\_diag\_len)\) .

Parameters:
  • x (Tensor) – Rank r + 1, where r >= 1.

  • diagonal (Tensor) – A Tensor. Have the same dtype as x. Rank r when k is an integer or \(k[0] == k[1]\). Otherwise, it has rank r + 1.

  • k (Union[int, Tensor], optional) – An int32 scalar or int32 Tensor. Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. The value of k is restricted: it must be in \((-x.shape[-2], x.shape[-1])\). Input k must be a const Tensor in Graph mode. Default: 0.

  • align (str, optional) – An optional string from: “RIGHT_LEFT”(default), “LEFT_RIGHT”, “LEFT_LEFT”, “RIGHT_RIGHT”. Align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row).

Returns:

Tensor, the same type as x. Let x have r + 1 dimensions \((I, J, ..., L, M, N)\) . The output is a tensor of rank r + 1 with dimensions \((I, J, ..., L, M, N)\) , the same as input x.

Raises:
  • TypeError – If input x or diagonal is not Tensor.

  • TypeError – If input x and diagonal are not the same dtype.

  • TypeError – If k is not int32 dtype.

  • ValueError – If align is not a string or not in the valid range.

  • ValueError – If rank of k is not equal to 0 or 1.

  • ValueError – If the rank of x is less than 2.

  • ValueError – If size of k is not equal to 1 or 2.

  • ValueError – If k[1] is less than k[0] when the size of k is 2.

  • ValueError – If the rank of diagonal does not match the rank of input x.

  • ValueError – If the shape of diagonal does not match the shape of input x.

  • ValueError – If the diagonal \(shape[-2]\) is not equal to num_diags calculated by \(k[1]-k[0]+1\).

  • ValueError – If the value of k is not in \((-x.shape[-2], x.shape[-1])\).

  • ValueError – If the diagonal.shape[-1] is not equal to the max_diag_len calculated by \(min(x.shape[-2] + min(k[1], 0), x.shape[-1] + min(-k[0], 0))\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[7, 7, 7, 7],
...                      [7, 7, 7, 7],
...                      [7, 7, 7, 7]]), mindspore.float32)
>>> diagonal = Tensor(np.array([[0, 9, 1],
...                             [6, 5, 8],
...                             [1, 2, 3],
...                             [4, 5, 0]]), mindspore.float32)
>>> k = Tensor(np.array([-1, 2]), mindspore.int32)
>>> align = 'RIGHT_LEFT'
>>> output = ops.matrix_set_diag(x, diagonal, k, align)
>>> print(output)
[[1. 6. 9. 7.]
 [4. 2. 5. 1.]
 [7. 5. 3. 8.]]
>>> print(output.shape)
(3, 4)
tinyms.primitives.matrix_solve(matrix, rhs, adjoint=False)[source]

Solves systems of linear equations.

\[\begin{split}\begin{aligned} &matrix[..., M, M] * x[..., M, K] = rhs[..., M, K]\\ &adjoint(matrix[..., M, M]) * x[..., M, K] = rhs[..., M, K] \end{aligned}\end{split}\]

Warning

On GPU, if the matrix is singular (not invertible), an error may be reported or an unknown result may be returned.

Parameters:
  • matrix (Tensor) – The shape of tensor is \((..., M, M)\) .

  • rhs (Tensor) – The shape of tensor is \((..., M, K)\) . rhs must have the same dtype as matrix.

  • adjoint (bool) – Indicating whether to solve with matrix or its (block-wise) adjoint. Default: False.

Returns:

x (Tensor), whose dtype and shape are the same as those of rhs.

Raises:
  • TypeError – If adjoint is not the type of bool.

  • TypeError – If the type of matrix is not one of the following dtype: mstype.float16, mstype.float32, mstype.float64, mstype.complex64, mstype.complex128.

  • TypeError – If the type of matrix is not the same as that of rhs.

  • ValueError – If the rank of matrix is less than 2.

  • ValueError – If the dimension of matrix is not the same as rhs.

  • ValueError – If the innermost two dimensions of matrix are not the same.

  • ValueError – If the innermost two dimensions of rhs do not match matrix.

  • ValueError – If matrix is singular (not invertible).

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> matrix = Tensor([[5, 4], [3, 1]], mindspore.float32)
>>> rhs = Tensor([[7], [2]], mindspore.float32)
>>> result = ops.matrix_solve(matrix, rhs)
>>> print(result)
[[0.14285707]
 [1.5714287 ]]
tinyms.primitives.max(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

Calculates the maximum value along with the given axis for the input tensor. It returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple maximum values, the index of the first maximum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input”.

Also see: mindspore.ops.ArgMaxWithValue.

Parameters:
  • input (Tensor) – The input tensor, can be any dimension. Complex tensor is not supported for now.

  • axis (int) – The dimension to reduce. Default: None.

  • keepdims (bool) – Whether to reduce dimension, if true, the output will keep same dimension with the input, the output will reduce dimension if false. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The minimum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the value in input with the value in initial. If True, do not replace; otherwise replace. For positions where where is False, the corresponding value in initial must be provided. Default: None, which indicates True by default.

Returns:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input tensor.

  • values (Tensor) - The maximum value of the input tensor, with the same shape as index, and the same dtype as input.

  • index (Tensor) - The index for the maximum value of the input tensor, with dtype int32. If keepdims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output, index = ops.max(x, keepdims=True)
>>> print(output, index)
0.7 0
tinyms.primitives.max_pool2d(x, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

Performs a 2D max pooling on the input Tensor.

Typically the input is a Tensor with shape \((N_{in}, C_{in}, H_{in}, W_{in})\), outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel_size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • x (Tensor) – Tensor of shape \((N_{in}, C_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement, or a tuple of two int numbers that represent height and width of movement respectively. Default: kernel_size.

  • padding (Union[int, tuple[int]]) – The amount of padding applied to the input. An int number pads height and width by the same amount, while a tuple of two int numbers specifies the padding of height and width respectively. Default: 0.

  • dilation (Union[int, tuple[int]]) – Controls the spacing between the elements within the kernel. Default: 1.

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • return_indices (bool) – Whether to output the indices of max value. Default: False.

Returns:

If return_indices is False, return a Tensor output, else return a tuple (output, argmax).

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64. It is returned only when return_indices is True.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 4.

  • TypeError – If kernel_size , stride , padding or dilation is not int or tuple.

  • ValueError – If kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • TypeError – If ceil_mode is not bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(20 * 16 * 50 * 32).reshape((20, 16, 50, 32)), mindspore.float32)
>>> output_tensor, argmax = ops.max_pool2d(x, kernel_size=(3, 2), stride=(2, 1), return_indices=True)
>>> print(output_tensor.shape)
(20, 16, 24, 31)
>>> print(argmax.shape)
(20, 16, 24, 31)
tinyms.primitives.max_pool3d(x, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)[source]

Performs a 3D max pooling on the input Tensor.

Typically the input is a Tensor with shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel_size \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows:

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • x (Tensor) – Tensor of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\) with data type of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: kernel_size.

  • padding (Union[int, tuple[int]]) – The amount of padding applied to the input. An int number pads depth, height and width by the same amount, while a tuple of three int numbers specifies the padding of depth, height and width respectively. Default: 0.

  • dilation (Union[int, tuple[int]]) – Controls the spacing between the elements within the kernel. Default: 1.

  • ceil_mode (bool) – Whether to use ceil instead of floor to calculate output shape. Default: False.

  • return_indices (bool) – Whether to output the indices of max value. Default: False.

Returns:

If return_indices is False, return a Tensor output, else return a tuple (output, argmax).

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64. It is returned only when return_indices is True.

Raises:
  • TypeError – If x is not a Tensor.

  • ValueError – If length of shape of x is not equal to 5.

  • TypeError – If kernel_size , stride , padding or dilation is not int or tuple.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If padding is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(2 * 1 * 2 * 2 * 2).reshape((2, 1, 2, 2, 2)), mindspore.float32)
>>> output_tensor, argmax = ops.max_pool3d(x, kernel_size=2, stride=1, padding=1, return_indices=True)
>>> print(output_tensor.shape)
(2, 1, 3, 3, 3)
>>> print(argmax.shape)
(2, 1, 3, 3, 3)
tinyms.primitives.max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

Computes the inverse of max_pool1d.

max_unpool1d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, H_{in})\) or \((C, H_{in})\), and the output is of shape \((N, C, H_{out})\) or \((C, H_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H_{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ \end{array}\end{split}\]
Parameters:
  • x (Tensor) – The input Tensor to invert. Tensor of shape \((N, C, H_{in})\) or \((C, H_{in})\).

  • indices (Tensor) – Index of the maximum values. Its shape must be the same as that of the input x. Values of indices must belong to \([0, H_{in} - 1]\). Data type must be int32 or int64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving. If stride is 0, (0) or None, then stride is equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0.

  • output_size (tuple[int], optional) – The output shape. Default: None. If output_size == (), the output shape is computed from kernel_size, stride and padding. If output_size != (), output_size must be \((N, C, H)\) , \((C, H)\) or \((H)\) and must belong to \([(N, C, H_{out} - stride[0]), (N, C, H_{out} + stride[0])]\).

Returns:

Tensor, with shape \((N, C, H_{out})\) or \((C, H_{out})\), with the same data type with x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride, padding (0 and (0) are also supported) or kernel_size are not positive.

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If the rank of x is not 2 or 3.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If the length of output_size is not 0, 2 or 3.

  • ValueError – If output_size is not close to the output size computed from kernel_size, stride and padding.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[2, 4, 6, 8]]).astype(np.float32))
>>> indices = Tensor(np.array([[1, 3, 5, 7]]).astype(np.int64))
>>> output = ops.max_unpool1d(x, indices, kernel_size=2, stride=2, padding=0)
>>> print(output.asnumpy())
[[0. 2. 0. 4. 0. 6. 0. 8.]]
tinyms.primitives.max_unpool2d(x, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

Computes the inverse of max_pool2d.

max_unpool2d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\), and the output is of shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H_{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ W_{out} = (W_{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\ \end{array}\end{split}\]
Parameters:
  • x (Tensor) – The input Tensor to invert. Tensor of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).

  • indices (Tensor) – Max values’ index represented by the indices. Its shape must be the same as that of the input x. Values of indices must belong to \([0, H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement, or a tuple of two int numbers that represent height and width of movement respectively. If stride is None, then stride is equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of height and width are the same, equal to padding. If padding is a tuple of two integers, the padding of height and width equal to padding[0] and padding[1] correspondingly.

  • output_size (tuple[int], optional) – The target output size. Default: None. If output_size == (), then the shape of output computed by kernel_size, stride and padding. If output_size != (), then output_size must be \((N, C, H, W)\) , \((C, H, W)\) or \((H, W)\) and output_size must belong to \([(N, C, H_{out} - stride[0], W_{out} - stride[1]), (N, C, H_{out} + stride[0], W_{out} + stride[1])]\).

Returns:

Tensor, with shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), with the same data type with x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride, padding (0 and (0, 0) are also supported) or kernel_size are not positive.

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If kernel_size, stride or padding is a tuple whose length is not equal to 2.

  • ValueError – If the rank of x is not 3 or 4.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If the length of output_size is not 0, 3 or 4.

  • ValueError – If output_size is not close to the output size computed from kernel_size, stride and padding.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[[[0, 1], [8, 9]]]]).astype(np.float32))
>>> indices = Tensor(np.array([[[[0, 1], [2, 3]]]]).astype(np.int64))
>>> output = ops.max_unpool2d(x, indices, kernel_size=1, stride=1, padding=0)
>>> print(output.asnumpy())
[[[[0. 1.]
   [8. 9.]]]]
tinyms.primitives.max_unpool3d(x, indices, kernel_size, stride=None, padding=0, output_size=None)[source]

Computes the inverse of mindspore.ops.max_pool3d().

max_unpool3d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\), and the output is of shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D_{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ H_{out} = (H_{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\ W_{out} = (W_{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\ \end{array}\end{split}\]
Parameters:
  • x (Tensor) – The input Tensor to invert. Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).

  • indices (Tensor) – Max values’ index represented by the indices. Its shape must be the same as that of the input x. Values of indices must belong to \([0, D_{in} \times H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. If stride is None, then stride is equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of depth, height and width are the same, equal to padding. If padding is a tuple of three integers, the padding of depth, height and width equal to padding[0], padding[1] and padding[2] correspondingly.

  • output_size (tuple[int], optional) – The output size. Default: None. If output_size == (), then the shape of output computed by kernel_size, stride and padding. If output_size != (), then output_size must be \((N, C, D, H, W)\) or \((C, D, H, W)\) or \((D, H, W)\) and output_size must belong to \([(N, C, D_{out} - stride[0], H_{out} - stride[1], W_{out} - stride[2]), (N, C, D_{out} + stride[0], H_{out} + stride[1], W_{out} + stride[2])]\).

Returns:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), with the same data type with x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride, padding (0 and (0, 0, 0) are also supported) or kernel_size are not positive.

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If kernel_size, stride or padding is a tuple whose length is not equal to 3.

  • ValueError – If the rank of x is not 4 or 5.

  • ValueError – If the length of output_size is not 0, 4 or 5.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If output_size is not close to the output size computed from kernel_size, stride and padding.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[[[[0, 1], [8, 9]]]]]).astype(np.float32))
>>> indices = Tensor(np.array([[[[[0, 1], [2, 3]]]]]).astype(np.int64))
>>> output = ops.max_unpool3d(x, indices, kernel_size=2, stride=1, padding=0)
>>> print(output)
[[[[[0. 1. 8.]
    [9. 0. 0.]
    [0. 0. 0.]]
   [[0. 0. 0.]
    [0. 0. 0.]
    [0. 0. 0.]]]]]
tinyms.primitives.maximum(x, y)[source]

Computes the maximum of input tensors element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Broadcasting is supported.

  • If one of the elements being compared is a NaN, then that element is returned.

\[output_i = max(x_i, y_i)\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.maximum(x, y)
>>> print(output)
[4. 5. 6.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.maximum(x, y)
>>> print(output.dtype)
Float32
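
As the notes state, NaN propagates through the comparison; a sketch (exact print formatting may vary):

>>> # case 3 : one of the compared elements is NaN
>>> x = Tensor(np.array([1.0, float('nan'), 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.maximum(x, y)
>>> print(output)
[ 4. nan  6.]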
tinyms.primitives.mean(x, axis=None, keep_dims=False)[source]

Reduces all dimensions of a tensor by averaging all elements by default, or reduces a dimension of x along the specified axis. keep_dims determines whether the dimensions of the output and the input are the same.

Parameters:
  • x (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of x is r, and the value range is [-r,r).

  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keep_dims is False, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, e.g. set as 1, and keep_dims is False, the shape of output is \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int), e.g. set as (1, 2), and keep_dims is False, the shape of output is \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keep_dims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.mean(x, 1, keep_dims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by averaging all elements in the dimension.
>>> x = Tensor(np.array([[[2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2]],
... [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
... [[6, 6, 6, 6, 6, 6], [8, 8, 8, 8, 8, 8], [10, 10, 10, 10, 10, 10]]]),
... mindspore.float32)
>>> output = ops.mean(x)
>>> print(output)
5.0
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along the axis 0
>>> output = ops.mean(x, 0, True)
>>> print(output)
[[[4. 4. 4. 4. 4. 4.]
  [5. 5. 5. 5. 5. 5.]
  [6. 6. 6. 6. 6. 6.]]]
>>> # case 3: Reduces a dimension along the axis 1
>>> output = ops.mean(x, 1, True)
>>> print(output)
[[[2. 2. 2. 2. 2. 2.]]
 [[5. 5. 5. 5. 5. 5.]]
 [[8. 8. 8. 8. 8. 8.]]]
>>> # case 4: Reduces a dimension along the axis 2
>>> output = ops.mean(x, 2, True)
>>> print(output)
[[[ 2.]
  [ 2.]
  [ 2.]]
 [[ 4.]
  [ 5.]
  [ 6.]]
 [[ 6.]
  [ 8.]
  [10.]]]
tinyms.primitives.median(input, axis=-1, keepdims=False)[source]

Computes the median and indices of input tensor.

Parameters:
  • input (Tensor) – A Tensor of any dimension whose data type is int16, int32, int64, float32 or float64.

  • axis (int, optional) – The dimension need to reduce. Default: -1.

  • keepdims (bool, optional) – Whether the output tensor need to retain axis dimension or not. Default: False.

Returns:

y (Tensor), has the same dtype as the input. If keepdims is True, y has the same shape as the input except that the shape of y in the dimension axis is 1. Otherwise, y has one dimension fewer than input: the axis dimension is removed.

indices (Tensor), has the same shape as the y, but dtype is int64.

Raises:
  • TypeError – If dtype of input is not one of the following: int16, int32, int64, float32, float64.

  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not an int.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is not in the range of [-input.dim, input.dim - 1].

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0.57, 0.11, 0.21], [0.38, 0.50, 0.57], [0.36, 0.16, 0.44]]).astype(np.float32))
>>> y = ops.median(x, axis=0, keepdims=False)
>>> print(y)
(Tensor(shape=[3], dtype=Float32, value= [ 3.79999995e-01,  1.59999996e-01,  4.39999998e-01]),
Tensor(shape=[3], dtype=Int64, value= [1, 2, 2]))
tinyms.primitives.meshgrid(*inputs, indexing='xy')[source]

Generates coordinate matrices from given coordinate tensors.

Given N one-dimensional coordinate tensors, returns a tuple outputs of N N-D coordinate tensors for evaluating expressions on an N-D grid.

Parameters:

inputs (List[Tensor]) – List of 1-D tensors. The length of inputs should be greater than 1. The data type is Number.

Keyword Arguments:

indexing (str, optional) – Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. Valid options: ‘xy’ or ‘ij’. In the 2-D case with inputs of length M and N, the outputs are of shape \((N, M)\) for ‘xy’ indexing and \((M, N)\) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape \((N, M, P)\) for ‘xy’ indexing and \((M, N, P)\) for ‘ij’ indexing. Default: ‘xy’.

Returns:

Tensors, a tuple of N N-D Tensor objects. The data type is the same as the inputs.

Raises:
  • TypeError – If indexing is not a str or inputs is not a tuple.

  • ValueError – If indexing is neither ‘xy’ nor ‘ij’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
>>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
>>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
>>> output = ops.meshgrid(x, y, z, indexing='xy')
>>> print(output)
(Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]],
  [[1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3],
   [4, 4, 4, 4, 4]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5],
   [5, 5, 5, 5, 5]],
  [[6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6],
   [6, 6, 6, 6, 6]],
  [[7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7],
   [7, 7, 7, 7, 7]]]),
 Tensor(shape=[3, 4, 5], dtype=Int32, value=
 [[[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]],
  [[8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2],
   [8, 9, 0, 1, 2]]]))
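
For comparison, a sketch with matrix (‘ij’) indexing on the same inputs; per the shape rule above, each output becomes \((M, N, P)\):

>>> output = ops.meshgrid(x, y, z, indexing='ij')
>>> print(output[0].shape)
(4, 3, 5)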
tinyms.primitives.min(input, axis=None, keepdims=False, *, initial=None, where=None)[source]

Calculates the minimum value along with the given axis for the input tensor. It returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Warning

  • If there are multiple minimum values, the index of the first minimum value is used.

  • The value range of “axis” is [-dims, dims - 1]. “dims” is the dimension length of “input”.

Parameters:
  • input (Tensor) – The input tensor, can be any dimension. Complex tensor is not supported for now.

  • axis (int) – The dimension to reduce. Default: None.

  • keepdims (bool) – Whether to reduce dimension, if true the output will keep the same dimension as the input, the output will reduce dimension if false. Default: False.

Keyword Arguments:
  • initial (scalar, optional) – The maximum value of an output element. Must be present to allow computation on empty slice. Default: None.

  • where (Tensor[bool], optional) – A Tensor indicating whether to replace the primitive value in input with the value in initial. If True, do not replace, otherwise replace. For the index of True in where, the corresponding value in initial must be assigned. Default: None, which indicates True by default.

Returns:

tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input tensor.

  • values (Tensor) - The minimum value of the input tensor, with the same shape as index, and the same dtype as input.

  • index (Tensor) - The index for the minimum value of the input tensor, with dtype int32. If keepdims is true, the shape of output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\) .

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
>>> output, index = ops.min(x, keepdims=True)
>>> print(output, index)
0.0 0
tinyms.primitives.minimum(x, y)[source]

Computes the minimum of input tensors element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

  • Their shapes are supposed to be broadcastable.

  • If one of the elements being compared is a NaN, then that element is returned.

\[output_i = min(x_i, y_i)\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • ValueError – If x and y are not the same shape after broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1 : same data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.minimum(x, y)
>>> print(output)
[1. 2. 3.]
>>> # case 2 : different data type
>>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.int32)
>>> y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> output = ops.minimum(x, y)
>>> print(output.dtype)
Float32
tinyms.primitives.mirror_pad(input_x, paddings, mode)[source]

Pads the input tensor according to the paddings and mode.

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • paddings (Tensor) – paddings must be a constant tensor. Its value is a matrix (list) of shape (N, 2), where N is the rank of the input data. All elements of paddings are of int type. For the input in the D-th dimension, paddings[D, 0] indicates how many elements to extend ahead of the input tensor in the D-th dimension, and paddings[D, 1] indicates how many elements to extend behind it. Both paddings[D, 0] and paddings[D, 1] must be no greater than input_x.dim_size(D) if mode is “SYMMETRIC”, or input_x.dim_size(D) - 1 if mode is “REFLECT”.

  • mode (str) – Specifies the padding mode. The optional values are “REFLECT” and “SYMMETRIC”.

Returns:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it fills by symmetrically copying across the axis of symmetry. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the output is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]]. For a more intuitive understanding, please see the example below.

  • If mode is “SYMMETRIC”, the filling method is similar to “REFLECT”. It also copies according to the symmetry axis, except that it includes the symmetry axis itself. If the input_x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the output is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]]. For a more intuitive understanding, please see the example below.

Raises:
  • TypeError – If input_x or paddings is not a Tensor.

  • TypeError – If mode is not a str.

  • ValueError – If paddings.size is not equal to 2 * rank of input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> input_x = Tensor([[1,2,3], [4,5,6], [7,8,9]])
>>> mode = "REFLECT"
>>> paddings = Tensor([[1, 1], [2, 2]])
>>> output = ops.mirror_pad(input_x, paddings, mode)
>>> print(output)
[[6 5 4 5 6 5 4]
 [3 2 1 2 3 2 1]
 [6 5 4 5 6 5 4]
 [9 8 7 8 9 8 7]
 [6 5 4 5 6 5 4]]
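
A sketch of the “SYMMETRIC” mode on the same input; the expected values follow directly from the description above:

>>> output = ops.mirror_pad(input_x, paddings, "SYMMETRIC")
>>> print(output)
[[2 1 1 2 3 3 2]
 [2 1 1 2 3 3 2]
 [5 4 4 5 6 6 5]
 [8 7 7 8 9 9 8]
 [8 7 7 8 9 9 8]]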
tinyms.primitives.mish(x)[source]

Computes MISH (A Self-Regularized Non-Monotonic Neural Activation Function) of the input tensor element-wise.

The function is shown as follows:

\[\text{output} = x * \tanh(\log(1 + \exp(x)))\]

See more details in A Self Regularized Non-Monotonic Neural Activation Function.

Parameters:

x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Returns:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.mish(input_x)
>>> print(output)
[[-3.0340147e-01  3.9974129e+00 -2.6831190e-03]
 [ 1.9439590e+00 -3.3576239e-02  8.9999990e+00]]
tinyms.primitives.mm(input, mat2)[source]

Returns the matrix product of two arrays. If input is a \((n \times m)\) Tensor, mat2 is a \((m \times p)\) Tensor, out will be a \((n \times p)\) Tensor.

Note

This function cannot support broadcasting. Refer to mindspore.ops.matmul() instead if you need a broadcastable function.

Parameters:
  • input (Tensor) – The first matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

  • mat2 (Tensor) – The second matrix of matrix multiplication. The last dimension of input must be the same size as the first dimension of mat2.

Returns:

Tensor, the matrix product of the inputs.

Raises:
  • ValueError – If the last dimension of input is not the same size as the second-to-last dimension of mat2.

  • ValueError – If input or mat2 is not a matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> x1 = ms.Tensor(np.random.rand(2, 3))
>>> x2 = ms.Tensor(np.random.rand(3, 4))
>>> out = ops.mm(x1, x2)
>>> print(out.shape)
(2, 4)
tinyms.primitives.moveaxis(x, source, destination)[source]

Alias for ops.movedim. Moves axis of an array from source to destination.

Refer to mindspore.ops.movedim() for more detail.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops, Tensor
>>> import numpy as np
>>> x = Tensor(np.zeros((3, 4, 5)))
>>> output = ops.moveaxis(x, 0, -1)
>>> print(output.shape)
(4, 5, 3)
tinyms.primitives.movedim(x, source, destination)[source]

Moves axis of an array from source to destination.

Other axes remain in their original order.

Parameters:
  • x (Tensor) – The tensor array whose axis should be reordered.

  • source (Union[int, sequence[int]]) – Original positions of the axis to move. These must be unique.

  • destination (Union[int, sequence[int]]) – Destination positions for each of the original axis. These must also be unique.

Returns:

Tensor, array with moved axis.

Raises:

ValueError – If the axes are out of the range of [-a.ndim, a.ndim), or if the axes contain duplicates.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case1 : moving single axis
>>> from mindspore import ops, Tensor
>>> import numpy as np
>>> x = Tensor(np.zeros((3, 4, 5)))
>>> output = ops.movedim(x, 0, -1)
>>> print(output.shape)
(4, 5, 3)
>>> # case 2 : moving multiple axes
>>> from mindspore import ops, Tensor
>>> import numpy as np
>>> x = Tensor(np.zeros((3, 4, 5)))
>>> output = ops.movedim(x, (0, 2), (1, 2))
>>> print(output.shape)
(4, 3, 5)
tinyms.primitives.mse_loss(input, target, reduction='mean')[source]

Calculates the mean squared error between the predicted value and the label value.

For detailed information, please refer to mindspore.nn.MSELoss.

Parameters:
  • input (Tensor) – Tensor of any dimension.

  • target (Tensor) – The input label. Tensor of any dimension, same shape as the input in common cases. However, the shape of target may differ from the shape of input as long as they can be broadcast to each other.

  • reduction (str, optional) – Type of reduction to be applied to loss. The optional values are “mean”, “none” and “sum”. Default: “mean”.

Returns:

Tensor, the loss with float data type. The output is a scalar if reduction is ‘mean’ or ‘sum’, while the shape of the output is the broadcasted shape if reduction is ‘none’.

Raises:
  • ValueError – If reduction is not one of ‘none’, ‘mean’ or ‘sum’.

  • ValueError – If input and target have different shapes and cannot be broadcasted.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = ops.mse_loss(logits, labels, reduction='none')
>>> print(output)
[[0. 1. 4.]
 [0. 0. 1.]]
tinyms.primitives.msort(input)[source]

Sorts the elements in Tensor in ascending order of value along its first dimension.

ops.msort(t) is equivalent to ops.Sort(axis=0)(t)[0]. See also mindspore.ops.Sort().

Parameters:

input (Tensor) – The input to sort, with float16 or float32 data type.

Returns:

A tensor whose values are the sorted values, with the same shape and data type as input.

Raises:

TypeError – If dtype of input is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), ms.float16)
>>> output = ops.msort(input)
>>> print(output)
[[4. 2. 1.]
 [5. 6. 3.]
 [8. 9. 7.]]
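
The equivalence with ops.Sort noted above can be checked directly (a sketch reusing input and output from the example):

>>> values, _ = ops.Sort(axis=0)(input)
>>> print((values == output).all())
True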
tinyms.primitives.mul(input, other)[source]

Multiplies two tensors element-wise.

\[out_{i} = input_{i} * other_{i}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and other are not one of the following: Tensor, number.Number, bool.

  • ValueError – If input and other are not the same shape.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> output = ops.mul(x, y)
>>> print(output)
[ 4. 10. 18.]
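
The tensor-and-scalar case from the note above works the same way; a minimal sketch reusing x from the example (the scalar must be a constant):

>>> output = ops.mul(x, 2)
>>> print(output)
[2. 4. 6.]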
tinyms.primitives.multi_margin_loss(input, target, p=1, margin=1, weight=None, reduction='mean')[source]

Hinge loss for optimizing a multi-class classification.

Optimizes a multi-class classification hinge loss (margin-based loss) between input and output.

For each mini-batch sample, the loss in terms of the 1D input \(x\) and scalar output \(y\) is:

\[\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}\]

where \(i \in \{0, \cdots, x.size(0)-1\}\) and \(i \ne y\).

Parameters:
  • input (Tensor) – Input, with shape \((N, C)\). Data type only supports float32, float16 or float64. It is \(x\) in the above formula.

  • target (Tensor) – Ground truth labels, with shape \((N,)\). Data type only supports int64. The value of target should be non-negative and less than C. It is \(y\) in the above formula.

  • p (int, optional) – The norm degree for pairwise distance. Should be 1 or 2. Default: 1.

  • margin (int, optional) – A parameter to change pairwise distance. Default: 1.

  • weight (Tensor, optional) – The rescaling weight to each class with shape \((C,)\). Data type only support float16, float32 or float64. Default: None.

  • reduction (str, optional) –

    Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

    • ’none’: no reduction will be applied.

    • ’mean’: the sum of the output will be divided by the number of elements in the output.

    • ’sum’: the output will be summed.

Returns:

Tensor. If reduction is ‘none’, returns a Tensor with the same shape as target. Otherwise, it is a scalar.

Raises:
  • TypeError – If dtype of p or target is not int.

  • TypeError – If dtype of margin is not int.

  • TypeError – If dtype of reduction is not str.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • TypeError – If dtype of weight and input is not the same.

  • ValueError – If p is not 1 or 2.

  • ValueError – If reduction is not one of {‘none’,’sum’,’mean’}.

  • ValueError – If shape[0] of input is not equal to shape[0] of target.

  • ValueError – If shape[1] of input is not equal to shape[0] of weight.

  • ValueError – If rank of weight is not 1, or rank of target is not 1, or rank of input is not 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = Tensor(np.ones(shape=[3, 3]), mindspore.float32)
>>> target = Tensor(np.array([1, 2, 1]), mindspore.int64)
>>> weight = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> output = ops.multi_margin_loss(inputs, target, weight=weight)
>>> print(output)
0.6666667
tinyms.primitives.multilabel_margin_loss(input, target, reduction='mean')[source]

Hinge loss for optimizing a multi-label classification.

Creates a criterion that optimizes a multi-label multi-classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 2D Tensor of target class indices). For each sample in the mini-batch:

\[\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}\]

where \(x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\), \(y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}\), \(0 \leq y[j] \leq \text{x.size}(0)-1\), and \(i \neq y[j]\) for all \(i\) and \(j\). \(y\) and \(x\) must have the same size. The criterion only considers a contiguous block of non-negative targets that starts at the front. This allows for different samples to have variable amounts of target classes.

Parameters:
  • input (Tensor) – Predict data. Tensor of shape \((C)\) or \((N, C)\), where \(N\) is the batch size and \(C\) is the number of classes. Data type must be float16 or float32.

  • target (Tensor) – Ground truth data, with the same shape as input, data type must be int32 and label targets padded by -1.

  • reduction (str, optional) –

    Apply specific reduction method to the output: ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

    • ’none’: no reduction will be applied.

    • ’mean’: the sum of the output will be divided by the number of elements in the output.

    • ’sum’: the output will be summed.

Returns:

  • outputs (Union[Tensor, Scalar]) - The loss of MultilabelMarginLoss. If reduction is “none”, its shape is \((N)\). Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If input or target is not a Tensor.

  • TypeError – If dtype of input is neither float16 nor float32.

  • TypeError – If dtype of target is not int32.

  • ValueError – If length of shape of input is neither 1 nor 2.

  • ValueError – If shape of input is not the same as target.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

Ascend GPU

Examples

>>> inputs = Tensor(np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[1, 2, 0, 3], [2, 3, -1, 1]]), mindspore.int32)
>>> output = ops.multilabel_margin_loss(inputs, target)
>>> print(output)
0.325
tinyms.primitives.multilabel_soft_margin_loss(input, target, weight=None, reduction='mean')[source]

Calculates the MultiLabelSoftMarginLoss. The multi-label soft margin loss is a commonly used loss function in multi-label classification tasks where an input sample can belong to multiple classes. Given an input \(input\) and binary labels \(output\) of size \((N,C)\), where \(N\) denotes the number of samples and \(C\) denotes the number of classes.

\[\mathcal{loss\left( input , output \right)} = - \frac{1}{N}\frac{1}{C}\sum_{i = 1}^{N} \sum_{j = 1}^{C}\left(output_{ij}\log\frac{1}{1 + e^{- input_{ij}}} + \left( 1 - output_{ij} \right)\log\frac{e^{-input_{ij}}}{1 + e^{-input_{ij}}} \right)\]

where \(input_{ij}\) represents the predicted score of sample \(i\) for class \(j\). \(output_{ij}\) represents the binary label of sample \(i\) for class \(j\), where sample \(i\) belongs to class \(j\) if \(output_{ij}=1\) , and sample \(i\) does not belong to class \(j\) if \(output_{ij}=0\). For a multi-label classification task, each sample may have multiple labels with a value of 1 in the binary label \(output\). weight will multiply to the loss of each class if given.

Parameters:
  • input (Tensor) – A tensor of shape (N, C), where N is batch size and C is number of classes.

  • target (Tensor) – The label target Tensor which has the same shape as input.

  • weight (Union[Tensor, int, float]) – The manual rescaling weight given to each class. Default: None.

  • reduction (str) – Specifies which reduction to be applied to the output. It must be one of ‘none’, ‘mean’, and ‘sum’, meaning no reduction, reduce mean and sum on output, respectively. Default: ‘mean’.

Returns:

Tensor, with the same data type as input. If reduction is ‘none’, its shape is (N); otherwise the output is a scalar.

Raises:

ValueError – If the rank of input or target is not 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor([[0.3, 0.6, 0.6], [0.9, 0.4, 0.2]])
>>> target = Tensor([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
>>> loss = ops.multilabel_soft_margin_loss(input, target, reduction='mean')
>>> print(loss.asnumpy())
0.84693956
tinyms.primitives.multinomial(input, num_samples, replacement=True, seed=None)[source]

Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • input (Tensor) – The input tensor containing probabilities, must be 1 or 2 dimensions, with float32 data type.

  • num_samples (int) – Number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not, default: True.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None.

Returns:

Tensor, with the same number of rows as input. The number of sampled indices in each row is num_samples. The dtype is int32.

Raises:
  • TypeError – If input is not a Tensor or its dtype is not float32.

  • TypeError – If num_samples is not an int.

  • TypeError – If seed is neither an int nor None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> # case 1: The output is random, and the length of the output is the same as num_sample.
>>> input = Tensor([0, 9, 4, 0], mindspore.float32)
>>> output = ops.multinomial(input, 2)
>>> # print(output)
>>> # [1 2] or [2 1]
>>> # Over repeated runs, [1 2] appears more often than [2 1], because the value at
>>> # index 1 is larger than the value at index 2.
>>> print(len(output))
2
>>> # case 2: The output is random, and the length of the output is the same as num_sample.
>>> # replacement is True by default, so the same index can be drawn more than once.
>>> # Entries with weight 0 (indices 0 and 3 here) are never drawn.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4)
>>> print(output)
[1 1 2 1]
>>> # case 3: The output is random, num_samples equals the length of input (4), and
>>> # replacement is True, so the same elements can be drawn multiple times.
>>> input = Tensor([0, 9, 4, 0], mstype.float32)
>>> output = ops.multinomial(input, 4, True)
>>> print(output)
[1 1 2 2]
tinyms.primitives.multinomial_with_replacement(x, seed, offset, numsamples, replacement=False)[source]

Returns a tensor where each row contains numsamples indices sampled from the multinomial distribution with replacement. It is different from multinomial in that it allows the same outcome to be chosen multiple times.

Note

The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum.

Parameters:
  • x (Tensor) – the input tensor containing the cumsum of probabilities, must be 1 or 2 dimensions. Must be one of the following types: float16, float32, float64.

  • seed (int) – If seed is set to be -1, and offset is set to be 0, the random number generator is seeded by a random seed. Otherwise, it is seeded by the given seed.

  • offset (int) – Offset used to avoid seed collision.

  • numsamples (int) – the number of samples to draw.

  • replacement (bool, optional) – Whether to draw with replacement or not. Default: False.

Returns:

Tensor with the same rows as x, each row has numsamples sampled indices.

Raises:
  • TypeError – If x is not a 1D or 2D Tensor.

  • TypeError – If dtype of x is not float16, float32 or float64.

  • TypeError – If numsamples is not an int.

  • TypeError – If replacement is not a bool.

  • ValueError – If numsamples is greater than x_shape[-1] when replacement is False.

  • ValueError – If the sum of any row of x is less than 0.

  • ValueError – If any element of any row of x is less than 0.

  • ValueError – If numsamples is less than or equal to 0.

Supported Platforms:

CPU

Examples

>>> x = Tensor([[0., 9., 4., 0.]], mstype.float32)
>>> output = ops.multinomial_with_replacement(x, 2, 5, 2, True)
>>> print(output)
[[1 1]]
tinyms.primitives.multiply(input, other)[source]

Alias for mindspore.ops.mul().

Supported Platforms:

Ascend GPU CPU
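
A minimal sketch, mirroring the mul example above, since multiply simply forwards to mul:

>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> print(ops.multiply(x, y))
[ 4. 10. 18.]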

tinyms.primitives.mv(mat, vec)[source]

Multiplies matrix mat and vector vec.

If mat is a Tensor of shape \((N, M)\) and vec is a 1-D Tensor of size \(M\), then out will be a 1-D Tensor of size \(N\).

Parameters:
  • mat (Tensor) – Input matrix of shape \((N, M)\).

  • vec (Tensor) – Input vector of shape \((M,)\).

Returns:

Tensor, the shape of the output Tensor is \((N,)\).

Raises:
  • TypeError – If mat or vec is not a Tensor.

  • ValueError – If mat is not a 2-D Tensor or vec is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> mat = Tensor(np.array([[3., 4.], [1., 6.], [1., 3.]]).astype(np.float32))
>>> vec = Tensor(np.array([1., 2.]).astype(np.float32))
>>> output = ops.mv(mat, vec)
>>> print(output)
[11. 13. 7.]
tinyms.primitives.mvlgamma(input, p)[source]

Returns the results of the multivariate log-gamma function with dimension p element-wise.

The mathematical calculation process of Mvlgamma is shown as follows:

\[\log (\Gamma_{p}(input))=C+\sum_{i=1}^{p} \log (\Gamma(input-\frac{i-1}{2}))\]

where \(C = \log(\pi) \times \frac{p(p-1)}{4}\) and \(\Gamma(\cdot)\) is the Gamma function.

Parameters:
  • input (Tensor) – The input tensor of the multivariate log-gamma function, which must be one of the following types: float32, float64. The shape is \((N,*)\), where \(*\) means any number of additional dimensions. And the value of any element in input must be greater than \((p - 1) / 2\).

  • p (int) – The number of dimensions. And the value of p must be greater than or equal to 1.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If dtype of input is neither float32 nor float64.

  • TypeError – If p is not an int.

  • ValueError – If p is less than 1.

  • ValueError – If not all elements of input are greater than \((p - 1) / 2\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)
>>> y = ops.mvlgamma(x, p=3)
>>> print(y)
[[2.694925 5.402975 9.140645]
 [5.402975 1.596312 13.64045]]
tinyms.primitives.nan_to_num(input, nan=0.0, posinf=None, neginf=None)[source]

Replace the NaN, positive infinity and negative infinity values in ‘input’ with the specified values in nan, posinf and neginf respectively.

Parameters:
  • input (Tensor) – The shape of tensor is \((input_1, input_2, ..., input_R)\). With float32 or float16 data type.

  • nan (float) – The value to replace ‘NaN’ with. Default: 0.0.

  • posinf (float) – the value to replace positive infinity values with. Default: None, replacing positive infinity with the maximum value supported by the data type of input.

  • neginf (float) – the value to replace negative infinity values with. Default: None, replacing negative infinity with the minimum value supported by the data type of input.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16 or float32.

Supported Platforms:

Ascend CPU

Examples

>>> input = Tensor(np.array([float('nan'), float('inf'), -float('inf'), 5.0]), mindspore.float32)
>>> output = ops.nan_to_num(input, 1.0, 2.0, 3.0)
>>> print(output)
[1.  2.  3.  5.0]
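
When posinf and neginf are left as None, infinities are replaced with the extreme values of the input dtype; for float32 these are ±3.4028235e+38. A sketch of the default behavior, reusing input from the example:

>>> output = ops.nan_to_num(input)
>>> # expected values: [0.0, 3.4028235e+38, -3.4028235e+38, 5.0]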
tinyms.primitives.nanquantile(input, q, axis=None, keepdims=False)[source]

This operator is derived from mindspore.ops.quantile() that ‘ignores’ NaN values. It computes quantiles as though the input has no NaN values. If all values in a reduced dimension are NaN then the quantiles for that reduction will be NaN.

Refer to mindspore.ops.quantile() for more details.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). Supported dtypes: float32, float64.

  • q (Union[float, Tensor]) – A scalar or 1D tensor of quantile values in the range [0, 1]. Supported dtypes: float32, float64.

  • axis (int, optional) – The dimension to reduce. By default, axis is None resulting in the input tensor being flattened before computation. Default: None.

  • keepdims (bool, optional) – Whether the output tensor has dim retained or not. Default: False.

Returns:

Tensor, has the same dtype as the input.

Suppose the shape of input is \((x_0, x_1, ..., x_i, ..., x_R)\), axis = \(i\), and m is the element count of q.

  • If q is scalar and keepdims is True, the shape of output is \((x_0, x_1, ..., 1, ..., x_R)\).

  • If q is scalar and keepdims is False, the shape of output is \((x_0, x_1, ..., x_R)\).

  • If q is a 1D Tensor and keepdims is True, the shape of output is \((m, x_0, x_1, ..., 1, ..., x_R)\).

  • If q is a 1D Tensor and keepdims is False, the shape of output is \((m, x_0, x_1, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If q is not a Tensor or float.

  • TypeError – If dtype of input is not float32 or float64.

  • TypeError – If dtype of q is not float32 or float64.

  • TypeError – If dtype of input and the dtype of q is different.

  • ValueError – If values of q are not in the range [0, 1].

  • ValueError – If axis is out of range.

Supported Platforms:

Examples

>>> x = Tensor(np.array([0.0700, -0.5446,  0.9214]), mindspore.float32)
>>> q = Tensor(np.array([0, 0.5, 1]), mindspore.float32)
>>> output = ops.nanquantile(x, q)
>>> print(output.asnumpy())
[-0.5446  0.07  0.9214]
tinyms.primitives.nansum(input, axis=None, keepdims=False, *, dtype=None)[source]

Computes sum of input over a given dimension, treating NaNs as zero.

Parameters:
  • input (Tensor) – The input Tensor.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Supposed the rank of input is r, axis must be in the range [-rank(input), rank(input)). Default: None, all dimensions are reduced.

  • keepdims (bool, optional) – Whether the output Tensor keeps dimensions or not. Default: False.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The dtype of output Tensor. Default: None.

Returns:

Tensor, the sum of input over the given dimension axis, treating NaNs as zero.

  • If axis is None, keepdims is False, the output is a 0-D Tensor representing the sum of all elements in the input Tensor.

  • If axis is int, set as 2, and keepdims is False, the shape of output is \((input_1, input_3, ..., input_R)\).

  • If axis is tuple(int) or list(int), set as (2, 3), and keepdims is False, the shape of output is \((input_1, input_4, ..., input_R)\).

Raises:
  • TypeError – If input is not Tensor.

  • TypeError – If keepdims is not a bool.

  • TypeError – If the dtype of input or dtype is complex type.

  • ValueError – If axis is not in [-rank(input), rank(input)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[float("nan"), 2, 3], [1, 2, float("nan")]]), mindspore.float32)
>>> output1 = ops.nansum(x, axis=0, keepdims=False, dtype=mindspore.float32)
>>> output2 = ops.nansum(x, axis=0, keepdims=True, dtype=mindspore.float32)
>>> print(output1)
[1. 4. 3.]
>>> print(output2)
[[1. 4. 3.]]
tinyms.primitives.narrow(input, axis, start, length)[source]

Returns a narrowed tensor from the input tensor: along the dimension axis, it spans from start to start + length.

Parameters:
  • input (Tensor) – the tensor to narrow.

  • axis (int) – the axis along which to narrow.

  • start (int) – the starting position along axis.

  • length (int) – the length of the output along axis.

Returns:

Tensor.

  • output (Tensor) - The narrowed tensor.

Raises:

TypeError – If the input is not a tensor or tuple or list of tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> from mindspore import Tensor
>>> x = Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], mindspore.int32)
>>> output = ops.narrow(x, 0, 0, 2)
>>> print(output)
[[ 1 2 3]
 [ 4 5 6]]
>>> output = ops.narrow(x, 1, 1, 2)
>>> print(output)
[[ 2 3]
 [ 5 6]
 [ 8 9]]
tinyms.primitives.ne(x, y)[source]

Computes the non-equivalence of two tensors element-wise.

Note

  • Inputs of x and y comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, the shapes of them could be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[\begin{split}out_{i} =\begin{cases} & \text{True, if } x_{i} \ne y_{i} \\ & \text{False, if } x_{i} = y_{i} \end{cases}\end{split}\]
Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number or a bool or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number or a bool when the first input is a tensor or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • TypeError – If neither x nor y is a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> output = ops.ne(x, 2.0)
>>> print(output)
[ True False  True]
>>>
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> output = ops.ne(x, y)
>>> print(output)
[False False  True]
tinyms.primitives.neg(input)[source]

Returns a tensor with negative values of the input tensor element-wise.

\[out_{i} = - input_{i}\]
Parameters:

input (Tensor) – The input tensor with a dtype of Number.

Returns:

Tensor, has the same shape and dtype as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = ops.neg(input)
>>> print(output)
[-1.  -2.   1.  -2.   0.   3.5]
tinyms.primitives.negative(input)[source]

Alias for mindspore.ops.neg() .

Supported Platforms:

Ascend GPU CPU
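
A minimal sketch, mirroring the neg example above:

>>> input = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> print(ops.negative(input))
[-1.  -2.   1.  -2.   0.   3.5]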

tinyms.primitives.nextafter(input, other)[source]

Returns the next representable floating-point value after input towards other element-wise.

Say there are two float32 numbers \(a\) and \(b\), and let \(eps\) be the smallest representable increment of the float32 data type. If \(a < b\), then the next representable value of \(a\) towards \(b\) is \(a+eps\), and the next representable value of \(b\) towards \(a\) is \(b-eps\).

\[out_{i} = nextafter({input_{i}, other_{i}})\]
Parameters:
  • input (Tensor) – The first input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

  • other (Tensor) – The second input tensor. The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions. Must be one of the following types: float32, float64.

Returns:

Tensor, has the same shape and data type as input.

Raises:
  • TypeError – If input or other is not a Tensor.

  • TypeError – If the dtype of input and other is not one of: float32, float64.

  • TypeError – If the dtypes of input and other are not same.

  • ValueError – If input’s shape is not the same as other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_ = Tensor(np.asarray([0.0]), mindspore.float32)
>>> other_ = Tensor(np.asarray([0.1]), mindspore.float32)
>>> output_ = ops.nextafter(input_, other_)
>>> print(output_)
[1.e-45]
tinyms.primitives.nll_loss(inputs, target, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)[source]

Gets the negative log likelihood loss between inputs and target.

The nll loss with reduction=none can be described as:

\[\ell(x, t)=L=\left\{l_{1}, \ldots, l_{N}\right\}^{\top}, \quad l_{n}=-w_{t_{n}} x_{n, t_{n}}, \quad w_{c}=\text { weight }[c] \cdot \mathbb{1} \{c \not= \text{ignore_index}\},\]

where \(x\) is the inputs, \(t\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c\), ranging over \([0, C-1]\), is the class index, where \(C\) is the number of classes.

If reduction is not ‘none’ (default ‘mean’), then

\[\begin{split}\ell(x, t)=\left\{\begin{array}{ll} \sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{t n}} l_{n}, & \text { if reduction }=\text { 'mean', } \\ \sum_{n=1}^{N} l_{n}, & \text { if reduction }=\text { 'sum' } \end{array}\right.\end{split}\]
Parameters:
  • inputs (Tensor) – \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\). inputs is expected to be log-probabilities, data type must be float16 or float32.

  • target (Tensor) – \((N)\) or \((N, d_1, d_2, ..., d_K)\) for high-dimensional loss, data type must be int32.

  • weight (Tensor) – A rescaling weight applied to the loss of each batch element. If not None, the shape is \((C,)\). The data type must be float16 or float32. Default: None.

  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default: -100

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

  • label_smoothing (float) – Label smoothing values, a regularization tool used to prevent the model from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default value: 0.0.

Returns:

Tensor, the computed loss value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
>>> output = ops.nll_loss(inputs, target)
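A small deterministic sketch (hand-picked log-probabilities, not from the original example set): with reduction='mean' and no weight, the loss is the mean of \(-x_{n, t_n}\) over the batch, here \((1.0 + 0.5) / 2\).

>>> logits = mindspore.Tensor(np.array([[-1.0, -2.0], [-3.0, -0.5]]), mindspore.float32)
>>> target = mindspore.Tensor(np.array([0, 1]), mindspore.int32)
>>> print(ops.nll_loss(logits, target))
0.75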
tinyms.primitives.nonzero(input)[source]

Return a Tensor of the positions of all non-zero values.

Parameters:

input (Tensor) – The shape of Tensor is \((x_1, x_2, ..., x_R)\). The data type is int, float or bool.

Returns:

Tensor, a 2-D Tensor whose data type is int64, containing the positions of all non-zero values of the input.

Raises:

TypeError – If input is not Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor(np.array([[[1,  0], [-5, 0]]]), mindspore.int32)
>>> output = ops.nonzero(x)
>>> print(output)
[[0 0 0]
 [0 1 0]]
>>> x = Tensor(np.array([1, 0, 2, 0, 3]), mindspore.int32)
>>> output = ops.nonzero(x)
>>> print(output)
[[0]
 [2]
 [4]]
tinyms.primitives.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None)[source]

Returns the matrix norm or vector norm of a given tensor.

ord is the calculation mode of norm. The following norm modes are supported.

ord                 | norm for matrices             | norm for vectors
------------------- | ----------------------------- | ------------------------------
None (default)      | Frobenius norm                | 2-norm (see below)
‘fro’               | Frobenius norm                | -- not supported --
‘nuc’               | nuclear norm                  | -- not supported --
inf                 | max(sum(abs(x), dim=1))       | max(abs(x))
-inf                | min(sum(abs(x), dim=1))       | min(abs(x))
0                   | -- not supported --           | sum(x != 0)
1                   | max(sum(abs(x), dim=0))       | as below
-1                  | min(sum(abs(x), dim=0))       | as below
2                   | largest singular value        | as below
-2                  | smallest singular value       | as below
other int or float  | -- not supported --           | sum(abs(x)^ord)^(1 / ord)

Note

Currently, complex numbers are not supported.

Parameters:
  • A (Tensor) – Tensor of shape \((*, n)\) or \((*, m, n)\) where * is zero or more batch dimensions.

  • ord (Union[int, float, inf, -inf, 'fro', 'nuc'], optional) – norm’s mode. refer to the table above for behavior. Default: None.

  • dim (Union[int, Tuple(int)], optional) –

    calculate the dimension of vector norm or matrix norm. Default: None.

    • When dim is int, it will be calculated by vector norm.

    • When dim is a 2-tuple, it will be calculated by matrix norm.

    • If dim is None and ord is None, A will be flattened to 1D and the 2-norm of the vector will be calculated.

    • If dim is None and ord is not None, A must be 1D or 2D.

  • keepdim (bool) – whether the output Tensor retains the original dimension. Default: False.

Keyword Arguments:

dtype (mindspore.dtype, optional) – When set, A will be converted to the specified type, dtype, before execution, and dtype of returned Tensor will also be dtype. Default: None.

Returns:

Tensor, the result of norm calculation on the specified dimension, dim, has the same dtype as A.

Raises:
  • ValueError – If dim is out of range.

  • TypeError – If dim is neither an int nor a tuple of int.

  • TypeError – If A is a vector and ord is a str.

  • ValueError – If A is a matrix and ord is not a valid mode.

  • ValueError – If A is a matrix and ord is an integer but not in [1, -1, 2, -2].

  • ValueError – If two elements of dim are the same after normalization.

  • ValueError – If any element of dim is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> x = ops.arange(-12, 13, dtype=ms.float32)
>>> y = x.reshape(5, 5)
>>> print(ops.norm(x))
36.05551
>>> print(ops.norm(x, float('inf')))
12.0
>>> print(ops.norm(x, float('-inf')))
0.0
>>> print(ops.norm(x, 0))
24.0
>>> print(ops.norm(x, 1))
156.0
>>> print(ops.norm(x, -1))
0.0
>>> print(ops.norm(x, 2))
36.05551
>>> print(ops.norm(x, -2))
0.0
>>> print(ops.norm(x, 3))
23.000631
>>> print(ops.norm(x, -3))
0.0
>>> print(ops.norm(y))
36.05551
>>> print(ops.norm(y, 'fro'))
36.05551
>>> print(ops.norm(y, 'nuc'))
42.42641
>>> print(ops.norm(y, float('inf')))
50.0
>>> print(ops.norm(y, float('-inf')))
6.0
>>> print(ops.norm(y, 1))
32.0
>>> print(ops.norm(y, -1))
30.0
>>> print(ops.norm(y, 2))
35.355343
>>> m = ms.Tensor([[1., -1., 2.], [-2., 3., -4.]])
>>> print(ops.norm(m, dim=0))
[2.236068  3.1622777 4.472136 ]
>>> print(ops.norm(m, dim=1))
[2.4494898 5.3851647]
>>> print(ops.norm(m, ord=1, dim=1))
[4. 9.]
>>> n = ops.arange(27, dtype=ms.float32).reshape(3, 3, 3)
>>> print(ops.norm(n, dim=(1, 2)))
[14.282857 39.76179  66.45299 ]
>>> print(ops.norm(n[0, :, :]), ops.norm(n[1, :, :]), ops.norm(n[2, :, :]))
14.282857 39.76179 66.45299
tinyms.primitives.normal(shape, mean, stddev, seed=None)[source]

Generates random numbers according to the Normal (or Gaussian) random number distribution.

Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Union[Tensor, int, float]) – The mean μ distribution parameter, which specifies the location of the peak, with data type in [int8, int16, int32, int64, float16, float32].

  • stddev (Union[Tensor, int, float]) – The deviation σ distribution parameter. It should be greater than 0, with data type in [int8, int16, int32, int64, float16, float32].

  • seed (int) – Seed is used as entropy source for the Random number engines to generate pseudo-random numbers. The value must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and shapes of mean and stddev. The dtype is float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> shape = (3, 1, 2)
>>> mean = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 3)
>>> shape = (3, 1, 3)
>>> mean = Tensor(np.array([[1, 2, 3], [3, 4, 3], [3, 5, 6]]), mindspore.float32)
>>> stddev = Tensor(1.0, mindspore.float32)
>>> output = ops.normal(shape, mean, stddev, seed=5)
>>> result = output.shape
>>> print(result)
(3, 3, 3)
tinyms.primitives.not_equal(input, other)[source]

Alias for mindspore.ops.ne() .

Supported Platforms:

Ascend GPU CPU
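
A minimal sketch, mirroring the ne example above:

>>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> print(ops.not_equal(x, 2.0))
[ True False  True]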

tinyms.primitives.numel(input)[source]

Returns a Scalar of type int that represents the total number of elements in the Tensor.

Parameters:

input (Tensor) – Input Tensor.

Returns:

int. A scalar representing the total number of elements in the Tensor.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> print(ops.numel(input_x))
4
tinyms.primitives.one_hot(indices, depth, on_value, off_value, axis=-1)[source]

Computes a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters:
  • indices (Tensor) – A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\). Data type must be uint8, int32 or int64.

  • depth (int) – A scalar defining the depth of the one-hot dimension.

  • on_value (Union[Tensor, int, float]) – A value to fill in output when indices[j] = i. Support uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, bool, complex64, complex128.

  • off_value (Union[Tensor, int, float]) – A value to fill in output when indices[j] != i. Has the same data type as on_value.

  • axis (int) – Position to insert the value. e.g. If shape of self is \((N, C)\), and axis is -1, the output shape will be \((N, C, depth)\), If axis is 0, the output shape will be \((depth, N, C)\). Default: -1.

Returns:

Tensor, one-hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)\).

Raises:
  • TypeError – If axis or depth is not an int.

  • TypeError – If dtype of indices is not uint8, int32 or int64.

  • TypeError – If indices, on_value or off_value is not a Tensor.

  • ValueError – If axis is not in range [-1, ndim].

  • ValueError – If depth is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> output = ops.one_hot(indices, depth, on_value, off_value, axis=-1)
>>> print(output)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
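
To see how axis places the new dimension, a sketch (indices2, out_last and out_first are illustrative names; depth, on_value and off_value are reused from the example above):

>>> indices2 = Tensor(np.array([0, 2]), mindspore.int32)
>>> out_last = ops.one_hot(indices2, depth, on_value, off_value, axis=-1)
>>> out_first = ops.one_hot(indices2, depth, on_value, off_value, axis=0)
>>> print(out_last.shape, out_first.shape)
(2, 3) (3, 2)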
tinyms.primitives.ones(shape, dtype=None)[source]

Creates a tensor filled with value ones.

Creates a tensor with shape described by the first argument and fills it with value ones in type of the second argument.

Parameters:
  • shape (Union[tuple[int], int]) – The specified shape of output tensor. Only constant positive int is allowed.

  • dtype (mindspore.dtype) – The specified type of output tensor. If dtype is None, mindspore.float32 will be used. Default: None.

Returns:

Tensor, with the specified shape and dtype, filled with ones.

Raises:

TypeError – If shape is neither tuple nor int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.ones((2, 2), mindspore.float32)
>>> print(output)
[[1. 1.]
 [1. 1.]]
tinyms.primitives.ones_like(input, *, dtype=None)[source]

Returns a Tensor with a value of 1 and its shape is the same as the input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, has the same shape as input but filled with ones.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = ops.ones_like(x)
>>> print(output)
[[1 1]
 [1 1]]
tinyms.primitives.orgqr(input, input2)[source]

Calculates the explicit representation of the orthogonal matrix \(Q\) returned by mindspore.ops.Geqrf.

Take the case of input without a batch dimension as an example: this computes the first \(N\) columns of a product of Householder matrices. Suppose the input input is a matrix of size \((M, N)\) after Householder transformation. When the diagonal of input is set to 1, every column of the lower triangular part of input is denoted as \(w_j\) for \(j=1, \ldots, M\), and this function returns the first \(N\) columns of the matrix

\[H_{1} H_{2} \ldots H_{k} \quad \text { with } \quad H_{j}=\mathrm{I}_{M}-\tau_{j} w_{j} w_{j}^{\mathrm{H}}\]

where \(\mathrm{I}_{M}\) is the \(M\)-dimensional identity matrix. When \(w\) is complex, \(w^{\mathrm{H}}\) is the conjugate transpose; otherwise it is the transpose. The output matrix is the same size as the input matrix input. \(\tau\) corresponds to input2.

Parameters:
  • input (Tensor) – Tensor of shape \((*, M, N)\), indicating 2D or 3D matrices, with float32, float64, complex64 and complex128 data type.

  • input2 (Tensor) – Tensor of shape \((*, K)\), where K is less than or equal to N, indicating the reflecting coefficient in Householder transformation, which have the same type as input.

Returns:

Tensor, has the same shape and data type as input.

Raises:
  • TypeError – If input or input2 are not Tensors.

  • TypeError – If dtype of input and input2 is not one of: float64, float32, complex64, complex128.

  • ValueError – If input and input2 have different batch size.

  • ValueError – If input.shape[-2] < input.shape[-1].

  • ValueError – If input.shape[-1] < input2.shape[-1].

  • ValueError – If rank(input) - rank(input2) != 1.

  • ValueError – If rank(input) is not equal to 2 or 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[-114.6, 10.9, 1.1], [-0.304, 38.07, 69.38], [-0.45, -0.17, 62.]]),
... mindspore.float32)
>>> input2 = Tensor(np.array([1.55, 1.94, 0.0]), mindspore.float32)
>>> y = ops.orgqr(input, input2)
>>> print(y)
[[-0.54999995 -0.2128925   0.8137956 ]
 [ 0.47119996 -0.8752807   0.08240613]
 [ 0.69749993  0.42560163  0.57772595]]
tinyms.primitives.outer(input, vec2)[source]

Return outer product of input and vec2. If input is a vector of size \(n\) and vec2 is a vector of size \(m\) , then output must be a matrix of shape \((n, m)\) .

Note

This function does not broadcast.

Parameters:
  • input (Tensor) – 1-D input vector.

  • vec2 (Tensor) – 1-D input vector.

Returns:

out (Tensor, optional), 2-D matrix, the outer product of two vectors.

Raises:
  • TypeError – If input or vec2 is not a Tensor.

  • ValueError – If input or vec2 is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> input = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> vec2 = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> out = ops.outer(input, vec2)
>>> print(out)
[[1 2 3]
 [2 4 6]
 [3 6 9]]
tinyms.primitives.pad(input_x, padding, mode='constant', value=None)[source]

Pads the input tensor according to the padding.

Parameters:
  • input_x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions.

  • padding (Union[tuple[int], list[int], Tensor]) –

    Filling position of pad. \(\left\lfloor\frac{\text{len(padding)}}{2}\right\rfloor\) dimensions of input_x will be padded.

    Example: to pad only the last dimension of the input tensor, then padding has the form \((\text{padding_left}, \text{padding_right})\);

    Example: to pad the last 2 dimensions of the input tensor, then use \((\text{padding_left}, \text{padding_right}\), \(\text{padding_top}, \text{padding_bottom})\);

    Example: to pad the last 3 dimensions, use \((\text{padding_left}, \text{padding_right}\), \(\text{padding_top}, \text{padding_bottom}\), \(\text{padding_front}, \text{padding_back})\) and so on.

  • mode (str, optional) –

    Pad filling mode, “constant”, “reflect” or “replicate”. Default: “constant”.

    For “constant” mode, please refer to mindspore.nn.ConstantPad1d as an example to understand this filling pattern and extend the padding pattern to n dimensions.

    For “reflect” mode, please refer to mindspore.nn.ReflectionPad1d as an example to understand this filling pattern. The reflect mode is used to pad the last two dimensions of 3D or 4D input, or the last dimension of 2D or 3D input.

    For “replicate” mode, please refer to mindspore.nn.ReplicationPad1d as an example to understand this filling pattern. The replicate mode is used to pad the last three dimensions of 4D or 5D input, the last two dimensions of 3D or 4D input, or the last dimension of 2D or 3D input.

  • value (Union[int, float, None], optional) – Valid only in “constant” mode. Set the padding value in “constant” mode. If the value is None, 0 is used as the default padding value.

Returns:

Tensor, the tensor after padding.

Raises:
  • TypeError – If padding is not a tuple of int, a list of int or a Tensor.

  • TypeError – If input_x is not a Tensor.

  • ValueError – If length of padding is not even.

  • ValueError – If length of padding is greater than 6.

  • ValueError – If mode is not “constant” and value not None.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> x = ms.Tensor(np.arange(1 * 2 * 2 * 2).reshape((1, 2, 2, 2)), dtype=ms.float64)
>>> output = ops.pad(x, [1, 0, 0, 1], mode='constant', value=6.0)
>>> print(output)
[[[[6. 0. 1.]
   [6. 2. 3.]
   [6. 6. 6.]]
  [[6. 4. 5.]
   [6. 6. 7.]
   [6. 6. 6.]]]]
>>> output1 = ops.pad(x, (1, 0, 0, 1), mode='reflect')
>>> print(output1)
[[[[1. 0. 1.]
   [3. 2. 3.]
   [1. 0. 1.]]
  [[5. 4. 5.]
   [7. 6. 7.]
   [5. 4. 5.]]]]
>>> output2 = ops.pad(x, (1, 1, 2, 1), mode='replicate')
>>> print(output2)
[[[[0. 0. 1. 1.]
   [0. 0. 1. 1.]
   [0. 0. 1. 1.]
   [2. 2. 3. 3.]
   [2. 2. 3. 3.]]
  [[4. 4. 5. 5.]
   [4. 4. 5. 5.]
   [4. 4. 5. 5.]
   [6. 6. 7. 7.]
   [6. 6. 7. 7.]]]]
tinyms.primitives.padding(x, pad_dim_size=8)[source]

Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0.

Parameters:
  • x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The rank of x must be at least 2. The last dimension of x must be 1. The data type is Number.

  • pad_dim_size (int) – The value of the last dimension of x to be extended, which must be positive. Default: 8.

Returns:

Tensor, with the same dtype as x; its last dimension is extended from 1 to pad_dim_size, padded with zeros.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
>>> pad_dim_size = 4
>>> output = ops.padding(x, pad_dim_size)
>>> print(output)
[[ 8.  0.  0.  0.]
 [10.  0.  0.  0.]]
tinyms.primitives.pdist(input, p=2.0)[source]

Calculates the distance between every pair of row vectors in the input using the p-norm. If the input input is a 2D Tensor with shape \((N, M)\), the output must be a 1D Tensor with shape \((N * (N - 1) / 2,)\). If input has batch dimension with shape \((*B, N, M)\), then the output must be a Tensor with shape \((*B, N * (N - 1) / 2)\).

\[y[n] = \sqrt[p]{\sum_{k} {\mid x_{i,k} - x_{j,k} \mid}^p}\]

where \(x_{i}, x_{j}\) are two different row vectors in the input.

Parameters:
  • input (Tensor) – Input tensor of shape \((*B, N, M)\). \(*B\) is batch size, one-dim or multi-dim. dtype: float16, float32 or float64.

  • p (float) – The order of norm distance, \(p∈[0, ∞)\). Default: 2.0.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • TypeError – If p is not a float.

  • ValueError – If p is a negative float.

  • ValueError – If dimension of input is less than 2.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]).astype(np.float32))
>>> y = ops.pdist(x, p=2.0)
>>> print(y)
[1.4142135 2.828427 1.4142135]
tinyms.primitives.permute(input, axis)[source]

Permutes the dimensions of the input tensor according to input axis .

Parameters:
  • input (Tensor) – Input Tensor.

  • axis (Union[tuple(int), int]) – Permute will permute the tensor to the input axis order.

Returns:

Tensor, has the same dimension as input tensor, with axis suitably permuted.

Raises:
  • ValueError – If axis is None.

  • ValueError – If the number of elements of axis is not equal to input ndim.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> print(ops.permute(input_x, input_perm))
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
tinyms.primitives.pinv(x, *, atol=None, rtol=None, hermitian=False)[source]

Computes the (Moore-Penrose) pseudo-inverse of a matrix, using SVD. If \(x=U*S*V^{T}\), then the pseudo-inverse of x is \(x^{+}=V*S^{+}*U^{T}\), where \(S^{+}\) takes the reciprocal of each non-zero element on the diagonal of S and leaves the zeros in place. Batch matrices are supported: if x is a batch matrix, the output has the same batch dimensions when atol or rtol is a float. If atol or rtol is a Tensor, its shape must be broadcastable to the shape of the singular values of x as returned by x.svd. If x.shape is \((B, M, N)\) and the shape of atol or rtol is \((K, B)\), the output shape is \((K, B, N, M)\). When hermitian is True, only the real domain is supported for now; x is treated as a real symmetric matrix, so it is not checked internally, and only the lower triangular part is used in the computations. Singular values of x (or norms of the eigenvalues when hermitian is True) below the threshold (\(max(atol, \sigma \cdot rtol)\), with \(\sigma\) the largest singular value or eigenvalue) are set to zero and are not used in the computations. If rtol is not specified and x is a matrix of dimensions (M, N), then rtol is set to \(rtol=max(M, N)*\varepsilon\), where \(\varepsilon\) is the eps value of x.dtype. If rtol is not specified and atol is specified with a value larger than zero, rtol is set to zero.

Note

This function uses svd internally, (or eigh , when hermitian = True). So it has the same problem as these functions. For details, see the warnings in svd() and eigh().

Parameters:

x (Tensor) –

A matrix to be calculated. Only float32 and float64 Tensor dtypes are supported. The shape is \((*, M, N)\), where * is zero or more batch dimensions.

  • When hermitian is True, batch dimensions are not supported for now.

Keyword Arguments:
  • atol (float, Tensor) – absolute tolerance value. Default: None.

  • rtol (float, Tensor) – relative tolerance value. Default: None.

  • hermitian (bool) – An optional bool. x is assumed to be symmetric if real. Default: False.

Outputs:
  • output (Tensor) - same type as input. Shape is \((*, N, M)\), * is zero or more batch dimensions.

Raises:
Supported Platforms:

CPU

Examples

>>> x = Tensor([[4., 0.], [0., 5.]], mindspore.float32)
>>> output = ops.pinv(x)
>>> print(output)
[[0.25  0. ]
[0.  0.2 ]]
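A sketch of the hermitian path (assuming that, for an invertible symmetric matrix, the pseudo-inverse coincides with the ordinary inverse, here 1/3 * [[2, -1], [-1, 2]]):

>>> x = Tensor([[2., 1.], [1., 2.]], mindspore.float32)
>>> output = ops.pinv(x, hermitian=True)
>>> print(output.shape)
(2, 2)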
tinyms.primitives.pixel_shuffle(input, upscale_factor)[source]

Applies the PixelShuffle operation over input input which implements sub-pixel convolutions with stride \(1/r\) . For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network .

Typically, the input is of shape \((*, C \times r^2, H, W)\) , and the output is of shape \((*, C, H \times r, W \times r)\), where r is an upscale factor and * is zero or more batch dimensions.

Parameters:
  • input (Tensor) – Tensor of shape \((*, C \times r^2, H, W)\). The dimension of input is larger than 2, and the length of the third-to-last dimension must be divisible by the square of upscale_factor.

  • upscale_factor (int) – factor to shuffle the input Tensor, and is a positive integer. upscale_factor is the above-mentioned \(r\).

Returns:

  • output (Tensor) - Tensor of shape \((*, C, H \times r, W \times r)\) .

Raises:
  • ValueError – If upscale_factor is not a positive integer.

  • ValueError – If the length of third to last dimension is not divisible by upscale_factor squared.

  • TypeError – If the dimension of input is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(3 * 2 * 9 * 4 * 4).reshape((3, 2, 9, 4, 4))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = ops.pixel_shuffle(input_x, 3)
>>> print(output.shape)
(3, 2, 1, 12, 12)
tinyms.primitives.pixel_unshuffle(input, downscale_factor)[source]

Applies the PixelUnshuffle operation over input input which is the inverse of PixelShuffle. For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network .

Typically, the input is of shape \((*, C, H \times r, W \times r)\) , and the output is of shape \((*, C \times r^2, H, W)\) , where r is a downscale factor and * is zero or more batch dimensions.

Parameters:
  • input (Tensor) – Tensor of shape \((*, C, H \times r, W \times r)\). The dimension of input is larger than 2, and the lengths of the second-to-last and last dimensions must be divisible by downscale_factor.

  • downscale_factor (int) – factor to unshuffle the input Tensor, and is a positive integer. downscale_factor is the above-mentioned \(r\).

Returns:

  • output (Tensor) - Tensor of shape \((*, C \times r^2, H, W)\) .

Raises:
  • ValueError – If downscale_factor is not a positive integer.

  • ValueError – If the length of second to last dimension or last dimension is not divisible by downscale_factor .

  • TypeError – If the dimension of input is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(8 * 8).reshape((1, 1, 8, 8))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = ops.pixel_unshuffle(input_x, 2)
>>> print(output.shape)
(1, 4, 4, 4)
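
Since pixel_unshuffle inverts pixel_shuffle, a round trip restores the original shape (a sketch):

>>> x = mindspore.Tensor(np.arange(1 * 9 * 2 * 2).reshape((1, 9, 2, 2)), mindspore.dtype.int32)
>>> shuffled = ops.pixel_shuffle(x, 3)
>>> restored = ops.pixel_unshuffle(shuffled, 3)
>>> print(shuffled.shape, restored.shape)
(1, 1, 6, 6) (1, 9, 2, 2)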
tinyms.primitives.poisson(shape, mean, seed=None)[source]

ops.poisson is deprecated; please use mindspore.ops.random_poisson instead. Generates random numbers according to the Poisson random number distribution.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • shape (tuple) – The shape of random tensor to be generated. The format is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • mean (Tensor) – The mean μ distribution parameter. It should be greater than 0 with float32 data type.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input “shape” and shapes of mean. The dtype is float32.

Raises:
  • TypeError – If shape is not a tuple.

  • TypeError – If mean is not a Tensor with float32 dtype.

  • TypeError – If seed is not an int.

Supported Platforms:

deprecated

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # case 1: It can be broadcast.
>>> shape = (4, 1)
>>> mean = Tensor(np.array([5.0, 10.0]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(4, 2)
>>> # case 2: It can not be broadcast. It is recommended to use the same shape.
>>> shape = (2, 2)
>>> mean = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.poisson(shape, mean, seed=5)
>>> result = output.shape
>>> print(result)
(2, 2)
tinyms.primitives.polar(abs, angle)[source]

Converts polar coordinates to Cartesian coordinates.

Returns a complex tensor whose elements are the Cartesian coordinates corresponding to the polar coordinates given by the radial distance abs and the polar angle angle.

\[y_{i} = abs_{i} \cdot \cos(angle_{i}) + abs_{i} \cdot \sin(angle_{i}) \cdot j\]
Parameters:
  • abs (Tensor) – Radial distance. The shape of tensor is \((N,*)\) where \(N\) is the batch size of the input tensor and \(*\) means any number of additional dimensions. Must be one of the following types: float32, float64.

  • angle (Tensor) – Polar angle. It has the same shape and dtype as abs.

Returns:

Tensor, has the same shape as abs.

  • If the inputs are float32, the output dtype is complex64.

  • If the inputs are float64, the output dtype is complex128.

Raises:
  • TypeError – If abs or angle is not a Tensor.

  • TypeError – If the dtype of input is not one of: float32, float64.

  • TypeError – If the dtypes of abs and angle are not the same.

  • ValueError – If the shape of abs is not the same as that of angle.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> abs = Tensor(np.array([1, 2]), mindspore.float64)
>>> angle = Tensor(np.array([np.pi / 2, 5 * np.pi / 4]), mindspore.float64)
>>> output = ops.polar(abs, angle)
>>> print(output)
[ 6.12323400e-17+1.j         -1.41421356e+00-1.41421356j]
tinyms.primitives.polygamma(n, input)[source]

Computes the \(n\)-th order polygamma function on input, defined as the \(n\)-th derivative of the digamma function:

\[\psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x)\]

where \(\psi(x)\) is the digamma function.

Parameters:
  • n (Tensor) – The order of the polygamma function. Supported dtypes: int32, int64. The shape of n is \(()\).

  • input (Tensor) – The tensor at which the \(n\)-th order polygamma function is evaluated.

Returns:

Tensor, has the same dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of: float16, float32, float64.

  • TypeError – If dtype of n is not one of: int32, int64.

  • TypeError – If shape of n is not \(()\).

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([3.14, -2.71]), mindspore.float64)
>>> a = Tensor(np.array(1), mindspore.int64)
>>> output = ops.polygamma(a, x)
>>> print(output)
[ 0.37446456 15.49884838]
tinyms.primitives.population_count(input_x)[source]

Computes the element-wise population count (a.k.a. bitsum, bitcount). For each entry in input_x, calculates the number of 1 bits in the binary representation of that entry.

Parameters:

input_x (Tensor) – Tensor of any dimension. The data type must be int16 or uint16 (Ascend). The data type must be int8, int16, int32, int64, uint8, uint16, uint32, uint64 (CPU and GPU).

Returns:

Tensor, with the same shape as the input, and the data type is uint8.

Raises:
  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is not int16, uint16 (Ascend).

  • TypeError – If dtype of input_x is not int8, int16, int32, int64, uint8, uint16, uint32, uint64 (CPU and GPU).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor([0, 1, 3], mindspore.int16)
>>> output = ops.population_count(input_x)
>>> print(output)
[0 1 2]
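
As a further check, a value whose binary representation is all ones counts every bit; a small sketch reusing the imports above:

>>> input_x = Tensor([64, 128, 255], mindspore.int16)
>>> print(ops.population_count(input_x))
[1 1 8]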
tinyms.primitives.positive(input)[source]

Returns the input tensor itself.

Parameters:

input (Tensor) – Input Tensor.

Returns:

Tensor, the input itself.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor(np.array([-5.0, 1.5, 3.0, 100.0]), mstype.float32)
>>> print(ops.positive(x))
[ -5.    1.5   3.  100. ]
tinyms.primitives.pow(input, exponent)[source]

Calculates the exponent power of each element in input.

\[out_{i} = input_{i} ^{ exponent_{i}}\]

Note

  • Inputs of input and exponent comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • exponent (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input or exponent is not a Tensor, number.Number or bool.

  • ValueError – If the shapes of input and exponent cannot be broadcast against each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = 3.0
>>> output = ops.pow(x, y)
>>> print(output)
[ 1.  8. 64.]
>>>
>>> x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> output = ops.pow(x, y)
>>> print(output)
[ 1. 16. 64.]
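
As noted above, the first input may also be a scalar when the second is a Tensor; a minimal sketch:

>>> x = 2.0
>>> y = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = ops.pow(x, y)
>>> print(output)
[2. 4. 8.]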
tinyms.primitives.prelu(x, weight)[source]

Parametric Rectified Linear Unit activation function.

PReLU is described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Defined as follows:

\[prelu(x_i)= \max(0, x_i) + \min(0, w * x_i),\]

where \(x_i\) is an element of a channel of the input, w is the weight of the channel.

Note

Scalar or 1-D Tensor is not supported on Ascend.

Parameters:
  • x (Tensor) – The input Tensor of the activation function. The data type is float16 or float32. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • weight (Tensor) – Weight Tensor. The data type is float16 or float32. The weight can only be a Tensor, and its length is the same as the number of channels C of the input x. On GPU devices, when the input is a scalar, the shape is (1,).

Returns:

Tensor, with the same shape and dtype as x.

For detailed information, please refer to mindspore.nn.PReLU.

Raises:
  • TypeError – If dtype of x or weight is neither float16 nor float32.

  • TypeError – If the x or the weight is not a Tensor.

  • ValueError – If the x is a 0-D or 1-D Tensor on Ascend.

  • ValueError – If the weight is not a 1-D Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(-6, 6).reshape((2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> output = ops.prelu(x, weight)
>>> print(output)
[[[-0.60 -0.50]
  [-2.40 -1.80]
  [ 0.60  0.30]]
 [[ 0.00  1.00]
  [ 2.00  3.00]
  [ 4.00  5.00]]]
tinyms.primitives.print_(*input_x)[source]

Outputs the inputs to stdout. The outputs are printed to screen by default. It can also be saved in a file by setting the parameter print_file_path in context. Once set, the output will be saved in the file specified by print_file_path. mindspore.parse_print() can be employed to reload the data. For more information, please refer to mindspore.set_context() and mindspore.parse_print().

Note

In pynative mode, please use the python print function. On the Ascend platform in graph mode, bool, int and float inputs are converted into Tensors to be printed, while str remains unchanged. This function is used for debugging. When too much data is printed at the same time, the framework may discard some data in order not to affect the main process. If you need to record the data completely, it is recommended to use the Summary function; see Summary for details.

Parameters:

input_x (Union[Tensor, bool, int, float, str, tuple, list]) – The inputs of print_. Supports multiple inputs which are separated by ‘,’.

Returns:

An invalid placeholder value that should be ignored.

Raises:

TypeError – If input_x is not one of the following: Tensor, bool, int, float, str, tuple or list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones([2, 1]).astype(np.int32))
>>> y = Tensor(np.ones([2, 2]).astype(np.int32))
>>> result = ops.print_('Print Tensor x and Tensor y:', x, y)
Print Tensor x and Tensor y:
Tensor(shape=[2, 1], dtype=Int32, value=
[[1],
 [1]])
Tensor(shape=[2, 2], dtype=Int32, value=
[[1, 1],
 [1, 1]])
tinyms.primitives.prod(input, axis=None, keep_dims=False)[source]

Reduces a dimension of a tensor by multiplying all elements in the dimension; by default, all dimensions are reduced. A particular dimension of input can be reduced along axis. keep_dims determines whether the dimensions of the output and input are the same.

Parameters:
  • input (Tensor[Number]) – The input tensor. The dtype of the tensor to be reduced is number. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: None, reduce all dimensions. Only constant value is allowed. Assume the rank of input is r, and the value range is [-r,r).

  • keep_dims (bool) – If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, has the same data type as input tensor.

  • If axis is None, and keep_dims is False, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 1, and keep_dims is False, the shape of output is \((input_0, input_2, ..., input_R)\).

  • If axis is tuple(int), set as (1, 2), and keep_dims is False, the shape of output is \((input_0, input_3, ..., input_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: int, tuple or list.

  • TypeError – If keep_dims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> output = ops.prod(x, 1, keep_dims=True)
>>> result = output.shape
>>> print(result)
(3, 1, 5, 6)
>>> # case 1: Reduces a dimension by multiplying all elements in the dimension.
>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mindspore.float32)
>>> output = ops.prod(x)
>>> print(output)
2.2833798e+33
>>> print(output.shape)
()
>>> # case 2: Reduces a dimension along axis 0.
>>> output = ops.prod(x, 0, True)
>>> print(output)
[[[ 28.  28.  28.  28.  28.  28.]
  [ 80.  80.  80.  80.  80.  80.]
  [162. 162. 162. 162. 162. 162.]]]
>>> # case 3: Reduces a dimension along axis 1.
>>> output = ops.prod(x, 1, True)
>>> print(output)
[[[  6.   6.   6.   6.   6.   6.]]
 [[120. 120. 120. 120. 120. 120.]]
 [[504. 504. 504. 504. 504. 504.]]]
>>> # case 4: Reduces a dimension along axis 2.
>>> output = ops.prod(x, 2, True)
>>> print(output)
[[[1.00000e+00]
  [6.40000e+01]
  [7.29000e+02]]
 [[4.09600e+03]
  [1.56250e+04]
  [4.66560e+04]]
 [[1.17649e+05]
  [2.62144e+05]
  [5.31441e+05]]]
tinyms.primitives.qr(input, mode='reduced')[source]

Returns the QR decomposition of one or more matrices. If mode is ‘reduced’ (the default), compute the p columns of Q where p is the minimum of the 2 innermost dimensions of input. If mode is ‘complete’, compute full-sized Q and R.

Parameters:
  • input (Tensor) – A matrix to be calculated. The matrix must be at least two dimensions, the supported dtypes are float16, float32, float64, complex64 and complex128. Define the shape of input as \((..., m, n)\), and p as the minimum of m and n.

  • mode (Union['reduced', 'complete'], optional) – If mode is ‘reduced’, compute the reduced-size QR decomposition; otherwise, compute the full-size QR decomposition. Default: ‘reduced’.

Returns:

  • Q (Tensor) - The orthonormal matrices of input. If mode is ‘complete’, the shape is \((m, m)\), else the shape is \((m, p)\). The dtype of Q is the same as that of input.

  • R (Tensor) - The upper triangular matrices of input. If mode is ‘complete’, the shape is \((m, n)\), else the shape is \((p, n)\). The dtype of R is the same as that of input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If mode is neither ‘reduced’ nor ‘complete’.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> input = Tensor(np.array([[20., -31, 7], [4, 270, -90], [-8, 17, -32]]), mstype.float32)
>>> Q, R = ops.qr(input)
>>> print(Q)
[[-0.912871    0.16366126  0.37400758]
 [-0.18257418 -0.9830709  -0.01544376]
 [ 0.36514837 -0.08238228  0.92729706]]
>>> print(R)
[[ -21.908903  -14.788506  -1.6431675]
 [   0.       -271.9031    92.25824  ]
 [   0.          0.       -25.665514 ]]
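
For a non-square matrix, the two modes differ only in the returned shapes, per the rules above; a shape-only sketch (values are random):

>>> input = Tensor(np.random.randn(4, 3), mstype.float32)
>>> Q, R = ops.qr(input, mode='complete')
>>> print(Q.shape, R.shape)
(4, 4) (4, 3)
>>> Q, R = ops.qr(input, mode='reduced')
>>> print(Q.shape, R.shape)
(4, 3) (3, 3)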
tinyms.primitives.quantile(input, q, axis=None, keepdims=False)[source]

Computes the q-th quantiles of all elements in input. When a quantile lies between two data points, linear interpolation is used between them.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). Supported dtypes: float32, float64.

  • q (Union[float, Tensor]) – A scalar or 1D tensor of quantile values in the range [0, 1]. Supported dtypes: float32, float64.

  • axis (int, optional) – The dimension to reduce. By default, axis is None resulting in the input tensor being flattened before computation. Default: None.

  • keepdims (bool, optional) – Whether the output tensor has dim retained or not. Default: False.

Returns:

Tensor, has the same dtype as the input.

Suppose the shape of input is \((x_0, x_1, ..., x_i, ..., x_R)\), axis = \(i\), and \(m\) is the number of elements in the input q.

  • If q is scalar and keepdims is True, the shape of output is \((x_0, x_1, ..., 1, ..., x_R)\).

  • If q is scalar and keepdims is False, the shape of output is \((x_0, x_1, ..., x_R)\).

  • If q is 1D Tensor and keepdims is True, the shape of output is \((m, x_0, x_1, ..., 1, ..., x_R)\).

  • If q is 1D Tensor and keepdims is False, the shape of output is \((m, x_0, x_1, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If q is not a Tensor or float.

  • TypeError – If dtype of input is not float32 or float64.

  • TypeError – If dtype of q is not float32 or float64.

  • TypeError – If the dtypes of input and q are different.

  • ValueError – If any value of q is not in the range [0, 1].

  • ValueError – If the axis value is out of range.

Supported Platforms:

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.0700, -0.5446,  0.9214]), mindspore.float32)
>>> q = Tensor(np.array([0, 0.5, 1]), mindspore.float32)
>>> output = ops.quantile(x, q)
>>> print(output.asnumpy())
[-0.5446  0.07  0.9214]
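
With a 1-D q and axis and keepdims set, the output shape follows the rules above; a shape-only sketch (values are random):

>>> x = Tensor(np.random.randn(2, 3), mindspore.float32)
>>> q = Tensor(np.array([0.25, 0.75]), mindspore.float32)
>>> output = ops.quantile(x, q, axis=1, keepdims=True)
>>> print(output.shape)
(2, 2, 1)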
tinyms.primitives.rad2deg(x)[source]

Converts angles in radians to angles in degrees element-wise.

Parameters:

x (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x isn’t float16, float32 or float64.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.ops as ops
>>> x = Tensor([[6.283, -3.142],[1.570, -6.283],[3.142, -1.570]], mindspore.float32)
>>> output = ops.rad2deg(x)
>>> print(output)
[[ 359.98935 -180.02333]
 [  89.95438 -359.98935]
 [ 180.02333  -89.95438]]
tinyms.primitives.rand(*size, dtype=None, seed=None)[source]

Returns a new tensor of the given shape and dtype, filled with numbers drawn from the uniform distribution on the interval \([0, 1)\).

Parameters:

size (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g. \((2, 3)\) or \(2\).

Keyword Arguments:
  • dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, mindspore.float32 will be applied. Default: None.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Returns:

Tensor, with the designated shape and dtype, filled with random numbers from the uniform distribution on the interval \([0, 1)\).

Raises:
  • TypeError – If seed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> print(ops.rand((2,3)))
[[4.1702199e-01 9.9718481e-01 7.2032452e-01]
 [9.3255734e-01 1.1438108e-04 1.2812445e-01]]
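
size may also be passed as separate integers, per the signature above; a shape-only sketch (values are random):

>>> output = ops.rand(2, 3)
>>> print(output.shape, output.dtype)
(2, 3) Float32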
tinyms.primitives.rand_like(input, seed=None, *, dtype=None)[source]

Returns a new tensor filled with numbers drawn from the uniform distribution on the interval \([0, 1)\), with the same shape as input and the given dtype.

Parameters:
  • input (Tensor) – Input Tensor to specify the output shape and its default dtype.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, the same dtype of input will be applied. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with random numbers from the uniform distribution on the interval \([0, 1)\).

Raises:
  • TypeError – If seed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> a = Tensor([[2, 3, 4], [1, 2, 3]])
>>> print(ops.rand_like(a, dtype=ms.float32))
[[4.1702199e-01 9.9718481e-01 7.2032452e-01]
 [9.3255734e-01 1.1438108e-04 1.2812445e-01]]
tinyms.primitives.randint(low, high, size, seed=None, *, dtype=None)[source]

Returns a Tensor whose elements are random integers in the range of [low, high).

Parameters:
  • low (int) – Start value of interval.

  • high (int) – End value of interval.

  • size (tuple) – Shape of the new tensor.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be int type. If None, mindspore.int64 will be used. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with random integers from low (inclusive) to high (exclusive).

Raises:
  • TypeError – If seed is not a non-negative integer.

  • TypeError – If low or high is not an integer.

  • ValueError – If dtype is not a mstype.int_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> print(ops.randint(1, 10, (2,3)))
[[4 9 7]
 [9 1 2]]
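
The dtype keyword selects the integer type of the result; a sketch printing only the dtype (values are random):

>>> import mindspore as ms
>>> output = ops.randint(1, 10, (2, 3), dtype=ms.int32)
>>> print(output.dtype)
Int32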
tinyms.primitives.randint_like(input, low, high, seed=None, *, dtype=None)[source]

Returns a tensor with the same shape as Tensor input whose elements are random integers in the range of [low, high).

Parameters:
  • input (Tensor) – Input Tensor to specify the output shape and its default dtype.

  • low (int) – Start value of interval.

  • high (int) – End value of interval.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be int type. If None, mindspore.int64 will be used. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with random integers from low (inclusive) to high (exclusive).

Raises:
  • TypeError – If seed is not a non-negative integer.

  • TypeError – If low or high is not an integer.

  • ValueError – If dtype is not a mstype.int_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> a = Tensor([[1, 2, 3], [3, 2, 1]])
>>> print(ops.randint_like(a, 1, 10))
[[4 9 7]
 [9 1 2]]
tinyms.primitives.randn(*size, dtype=None, seed=None)[source]

Returns a new Tensor with given shape and dtype, filled with a sample (or samples) from the standard normal distribution.

Parameters:

size (Union[int, tuple(int), list(int)]) – Shape of the new tensor, e.g., \((2, 3)\) or \(2\).

Keyword Arguments:
  • dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, mindspore.float32 will be used. Default: None.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Returns:

Tensor, with the designated shape and dtype, filled with a sample (or samples) from the “standard normal” distribution.

Raises:
  • TypeError – If seed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type.

  • ValueError – If size contains an invalid number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> print(ops.randn((2, 2)))
[[ 0.30639967 -0.42438635]
 [-0.4287376   1.3054721 ]]
tinyms.primitives.randn_like(input, seed=None, *, dtype=None)[source]

Returns a new Tensor with given shape and dtype, filled with a sample (or samples) from the standard normal distribution.

Parameters:
  • input (Tensor) – Input Tensor to specify the output shape and its default dtype.

  • seed (int, optional) – Random seed, must be greater or equal to 0. Default: None, and 0 will be used.

Keyword Arguments:

dtype (mindspore.dtype, optional) – Designated tensor dtype, it must be float type. If None, mindspore.float32 will be used. Default: None.

Returns:

Tensor, with the designated shape and dtype, filled with a sample (or samples) from the “standard normal” distribution.

Raises:
  • TypeError – If seed is not a non-negative integer.

  • ValueError – If dtype is not a mstype.float_type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> a = Tensor([[1, 2, 3], [4, 5, 6]])
>>> print(ops.randn_like(a, dtype=ms.float32))
[[ 0.30639967 -0.42438635 -0.20454668]
 [-0.4287376   1.3054721   0.64747655]]
tinyms.primitives.random_categorical(logits, num_sample, seed=0, dtype=mindspore.int64)[source]

Generates random samples from a given categorical distribution tensor.

Parameters:
  • logits (Tensor) – The input tensor. 2-D Tensor with shape \((batch\_size, num\_classes)\).

  • num_sample (int) – Number of samples to be drawn. Only a constant value is allowed.

  • seed (int) – Random seed. Only a constant value is allowed. Default: 0.

  • dtype (mindspore.dtype) – The type of output. Its value must be one of mindspore.int16, mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Returns:

Tensor, the output Tensor with shape \((batch\_size, num\_samples)\).

Raises:
  • TypeError – If dtype is not one of the following: mindspore.int16, mindspore.int32, mindspore.int64.

  • TypeError – If logits is not a Tensor.

  • TypeError – If num_sample or seed is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> import numpy as np
>>> logits = Tensor(np.random.random((10, 5)).astype(np.float32), mstype.float32)
>>> net = ops.random_categorical(logits, 8)
>>> result = net.shape
>>> print(result)
(10, 8)
tinyms.primitives.random_gamma(shape, alpha, seed=None)[source]

Outputs random values from the Gamma distribution(s) described by alpha.

Parameters:
  • shape (Tensor) – The shape of random tensor to be generated. Must be one of the following types: int32, int64. 1-D integer tensor.

  • alpha (Tensor) – The \(\alpha\) distribution parameter. A Tensor. Must be one of the following types: half, float32, float64.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. Its shape is the concatenation of the value of shape and the broadcast shape of alpha. The dtype is the same type as alpha.

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If alpha is not a Tensor.

  • TypeError – If seed is not an int.

  • TypeError – If dtype of alpha is not half, float32 or float64.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> shape = Tensor(np.array([7, 5]), mindspore.int32)
>>> alpha = Tensor(np.array([0.5, 1.5]), mindspore.float32)
>>> output = ops.random_gamma(shape, alpha, seed=5)
>>> result = output.shape
>>> print(result)
(7, 5, 2)
tinyms.primitives.random_poisson(shape, rate, seed=None, dtype=mindspore.float32)[source]

Generates random number Tensor with shape shape according to a Poisson distribution with mean rate.

\[\text{P}(i|μ) = \frac{\exp(-μ)μ^{i}}{i!}\]
Parameters:
  • shape (Tensor) – The shape of random tensor to be sampled from each poisson distribution, 1-D Tensor whose dtype is mindspore.dtype.int32 or mindspore.dtype.int64.

  • rate (Tensor) – The \(μ\) parameter the distribution is constructed with. It represents the mean of the distribution and also the variance of the distribution. It should be a Tensor whose dtype is mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16.

  • seed (int, optional) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers and must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – The data type of output: mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16. Default: mindspore.dtype.float32.

Returns:

A Tensor whose shape is the concatenation of the value of shape and the shape of rate, and whose data type is equal to the argument dtype.

Raises:
  • TypeError – If shape is not a Tensor.

  • TypeError – If datatype of shape is neither mindspore.dtype.int64 nor mindspore.dtype.int32.

  • ValueError – If shape of shape is not 1-D.

  • TypeError – If rate is not a Tensor nor a scalar.

  • TypeError – If datatype of rate is not one of mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16.

  • TypeError – If seed is not a non-negative int.

  • TypeError – If dtype is not one of mindspore.dtype.int64, mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16.

  • ValueError – If any element of input shape tensor is not positive.

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: 1-D shape, 2-D rate, float64 output
>>> shape = Tensor(np.array([2, 2]), mindspore.int64)
>>> rate = Tensor(np.array([[5.0, 10.0], [5.0, 1.0]]), mindspore.float32)
>>> output = ops.random_poisson(shape, rate, seed=5, dtype=mindspore.float64)
>>> print(output.shape, output.dtype)
(2, 2, 2, 2) Float64
>>> # case 2: 1-D shape, scalar rate, int64 output
>>> shape = Tensor(np.array([2, 2]), mindspore.int64)
>>> rate = Tensor(5.0, mindspore.float64)
>>> output = ops.random_poisson(shape, rate, seed=5, dtype=mindspore.int64)
>>> print(output.shape, output.dtype)
(2, 2) Int64
tinyms.primitives.randperm(n, seed=0, offset=0, dtype=mindspore.int64)[source]

Generates a random permutation of the integers from 0 to n-1.

Returns a tensor whose shape is determined by n, with its values drawn from the data range that the given dtype can represent.

Parameters:
  • n (Union[Tensor, int]) – The input n. If it is a Tensor, its shape must be \(()\) or \((1,)\) with data type int64. The value of n must be greater than zero.

  • seed (int, optional) – Random seed. Default: 0. When seed is -1 (the only allowed negative value) and offset is 0, the seed is determined by the current time.

  • offset (int, optional) – Offset to generate random numbers. Priority is higher than random seed. Default: 0. It must be non-negative.

  • dtype (mindspore.dtype, optional) – The type of output. Its value must be one of the following types: int32, int16, int8, uint8, int64, float64, float32, float16. Default: int64.

Returns:

Tensor. Its shape is specified by the required argument n, and its type is specified by dtype.

Raises:
  • TypeError – If dtype is not allowed.

  • ValueError – If n is a negative or 0 element.

  • ValueError – If seed is a negative element.

  • ValueError – If n is larger than the maximal data of the set dtype.

Supported Platforms:

CPU

Examples

>>> from mindspore import ops
>>> from mindspore import dtype as mstype
>>> n = 4
>>> seed = 0
>>> offset = 0
>>> output = ops.randperm(n, seed, offset, dtype=mstype.int64)
>>> print(output)
[1 0 2 3]
tinyms.primitives.range(start, end, step)[source]

Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end.

The types of all 3 inputs must be the same. The type of the resulting tensor is the same as the type of the inputs.

Parameters:
  • start (Tensor) – A scalar Tensor. The first number in the sequence. Must have type: int32, int64, float32 or float64.

  • end (Tensor) – A scalar Tensor. Upper limit of the sequence, exclusive. Must have type: int32, int64, float32 or float64.

  • step (Tensor) – A scalar Tensor. Number that increments start. Must have type: int32, int64, float32 or float64.

Returns:

A 1-D Tensor, with the same type as the inputs.

Raises:
  • TypeError – If start, end or step is not a scalar Tensor.

  • TypeError – If the datatypes of start, end and step are not the same.

  • TypeError – If datatype of start, end or step is not supported.

  • ValueError – If step = 0.

  • ValueError – If start >= end when step > 0.

  • ValueError – If start <= end when step < 0.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> start = Tensor(0, mstype.int32)
>>> end = Tensor(10, mstype.int32)
>>> step = Tensor(4, mstype.int32)
>>> output = ops.range(start, end, step)
>>> print(output)
[0 4 8]
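
Floating-point inputs follow the same semantics; a minimal sketch:

>>> start = Tensor(0.0, mstype.float32)
>>> end = Tensor(1.0, mstype.float32)
>>> step = Tensor(0.25, mstype.float32)
>>> output = ops.range(start, end, step)
>>> print(output)
[0.   0.25 0.5  0.75]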
tinyms.primitives.rank(input_x)[source]

Returns the rank of a tensor.

Returns a 0-D int32 Tensor representing the rank of input; the rank of a tensor is the number of indices required to uniquely select each element of the tensor.

Parameters:

input_x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Returns:

Tensor. 0-D int32 Tensor representing the rank of input, i.e., \(R\). The data type is an int.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.rank(input_tensor)
>>> print(output)
2
>>> print(type(output))
<class 'int'>
tinyms.primitives.ravel(input)[source]

Flattens a multidimensional Tensor into a 1-D Tensor along dimension 0.

Parameters:

input (Tensor) – A tensor to be flattened.

Returns:

Tensor, a 1-D tensor, containing the same elements of the input.

Raises:

TypeError – If argument input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = ops.ravel(x)
>>> print(output)
[0. 1. 2. 1.]
>>> print(output.shape)
(4,)
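
ravel is equivalent to a reshape with a single inferred dimension; a minimal sketch continuing the example above:

>>> y = ops.reshape(x, (-1,))
>>> print(y.shape)
(4,)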
tinyms.primitives.real(input)[source]

Returns a Tensor that is the real part of the input. If input is real, it is returned unchanged.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, the shape is the same as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.asarray(1.3 + 0.4j), ms.complex64)
>>> output = ops.real(input)
>>> print(output)
1.3
tinyms.primitives.reciprocal(input)[source]

Returns reciprocal of a tensor element-wise.

\[out_{i} = \frac{1}{input_{i}}\]
Parameters:

input (Tensor) – The input tensor. \((N, *)\) where \(*\) means any number of additional dimensions.

Returns:

Tensor, has the same shape as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> import numpy as np
>>> input = ms.Tensor(np.array([1.0, 2.0, 4.0]), ms.float32)
>>> output = ops.reciprocal(input)
>>> print(output)
[1.   0.5  0.25]
tinyms.primitives.relu(input)[source]

Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise.

It returns \(\max(input,\ 0)\) element-wise. In particular, negative outputs are suppressed to zero while positive activations stay the same.

\[ReLU(input) = (input)^+ = max(0, input)\]

Note

In general, this operator is more commonly used than ReLuV2. The difference from ReLuV2 is that ReLuV2 additionally outputs a mask.

Parameters:

input (Tensor) –

Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions, data type is number.

Returns:

Tensor of shape \((N, *)\), with the same dtype and shape as the input.

Raises:
  • TypeError – If dtype of input is not a number.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.relu(input_x)
>>> print(output)
[[0. 4. 0.]
 [2. 0. 9.]]
tinyms.primitives.relu6(x)[source]

Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]

It returns \(\min(\max(0,x), 6)\) element-wise.

Parameters:

x (Tensor) – Tensor of shape \((N, *)\) with float16 or float32 data type.

Returns:

Tensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> result = ops.relu6(input_x)
>>> print(result)
[[0. 4. 0.]
 [2. 0. 6.]]
tinyms.primitives.remainder(input, other)[source]

Computes the remainder of dividing the first input tensor by the second input tensor element-wise.

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, both dtypes cannot be bool, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

\[remainder(input, other) = input - input.div(other, rounding\_mode="floor") * other\]

Warning

  • When the elements of input exceed 2048, there might be accuracy problems.

  • The calculation results of this operator on Ascend and CPU might be inconsistent.

  • If shape is expressed as \((D1, D2, ..., Dn)\), then \(D1*D2*...*Dn \le 1000000\) and \(n \le 8\).

Parameters:
  • input (Union[Tensor, numbers.Number, bool]) – The first input is a number, a bool or a tensor whose data type is number.

  • other (Union[Tensor, numbers.Number, bool]) – When the first input is a tensor, The second input could be a number, a bool or a tensor whose data type is number.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision.

Raises:
  • TypeError – If input or other is not one of the following: Tensor, number, bool.

  • ValueError – If the shapes of input and other cannot be broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([-4.0, 5.0, 6.0]).astype(np.float16))
>>> y = Tensor(np.array([3.0, 2.0, 3.0]).astype(np.float16))
>>> output = ops.remainder(x, y)
>>> print(output)
[2.  1.  0.]
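
Because the division uses floor rounding, the sign of a nonzero result follows the divisor, as the formula above implies; a minimal sketch:

>>> x = Tensor(np.array([4.0, -5.0]).astype(np.float16))
>>> y = Tensor(np.array([-3.0, -3.0]).astype(np.float16))
>>> print(ops.remainder(x, y))
[-2. -2.]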
tinyms.primitives.renorm(input, p, axis, maxnorm)[source]

Renormalizes the sub-tensors along dimension axis so that each sub-tensor’s p-norm does not exceed maxnorm. A sub-tensor is left unchanged if its p-norm is less than maxnorm; otherwise each of its values is divided by the sub-tensor’s p-norm and multiplied by maxnorm.

Parameters:
  • input (Tensor) – A Tensor, types: float32 or float16.

  • p (int) – Power of norm calculation.

  • axis (int) – The dimension that expected to get the slice-tensor.

  • maxnorm (float32) – Max norm.

Returns:

Tensor, has the same dtype and shape as input.

Raises:
Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), mindspore.float32)
>>> y = ops.renorm(x, p=1, axis=0, maxnorm=5.)
>>> print(y)
[[1.        1.        1.       ]
 [1.6666666 1.6666666 1.6666666]
 [1.6666667 1.6666667 1.6666667]]
tinyms.primitives.repeat_elements(x, rep, axis=0)[source]

Repeat elements of a tensor along an axis, like np.repeat.

Parameters:
  • x (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • rep (int) – The number of times to repeat, must be positive.

  • axis (int) – The axis along which to repeat, default 0.

Returns:

One tensor with values repeated along the specified axis. If x has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * rep, ..., sn)\). The output type will be the same as the type of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1 : repeat on axis 0
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
>>> # case 2 : repeat on axis 1
>>> x = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_elements(x, rep = 2, axis = 1)
>>> print(output)
[[0 0 1 1 2 2]
 [3 3 4 4 5 5]]
tinyms.primitives.repeat_interleave(input, repeats, axis=None)[source]

Repeat elements of a tensor along an axis, like numpy.repeat.

Parameters:
  • input (Tensor) – The tensor to repeat values for. Must be of type: float16, float32, int8, uint8, int16, int32, or int64.

  • repeats (int) – The number of times to repeat, must be positive.

  • axis (int, optional) – The axis along which to repeat. Default: None. If axis is None, the input Tensor will be flattened and the output will also be flattened.

Returns:

One tensor with values repeated along the specified axis. If input has shape \((s1, s2, ..., sn)\) and axis is i, the output will have shape \((s1, s2, ..., si * repeats, ..., sn)\). The output type will be the same as the type of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[0, 1, 2], [3, 4, 5]]), mindspore.int32)
>>> output = ops.repeat_interleave(input, repeats=2, axis=0)
>>> print(output)
[[0 1 2]
 [0 1 2]
 [3 4 5]
 [3 4 5]]
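
With axis=None the input is flattened first, as noted above; a minimal sketch continuing the example above:

>>> output = ops.repeat_interleave(input, repeats=2, axis=None)
>>> print(output)
[0 0 1 1 2 2 3 3 4 4 5 5]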
tinyms.primitives.reshape(input, shape)[source]

Rearranges the input Tensor based on the given shape.

The ‘shape’ can contain at most one -1, in which case that dimension is inferred from the remaining dimensions and the number of elements in the input.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • shape (Union[tuple[int], Tensor[int]]) – Constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). Only constant value is allowed.

Returns:

Tensor, the shape of tensor is \((y_1, y_2, ..., y_S)\).

Raises:

ValueError – Given a shape tuple: if it contains more than one -1; or if the product of its elements is less than or equal to 0 or does not evenly divide the number of elements of the input; or if the resulting shape does not match the input’s number of elements.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> output = ops.reshape(input, (3, 2))
>>> print(output)
[[-0.1  0.3]
 [ 3.6  0.4]
 [ 0.5 -3.2]]
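
A single -1 in shape is inferred from the remaining dimensions; a minimal sketch continuing the example above:

>>> output = ops.reshape(input, (-1, 2))
>>> print(output.shape)
(3, 2)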
tinyms.primitives.reverse(x, axis)[source]

Reverses specific dimensions of a tensor.

Warning

The value range of axis is [-dims, dims - 1], where dims is the rank of x.

Parameters:
  • x (Tensor) – The target tensor. The data type is Number except float64. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

  • axis (Union[tuple(int), list(int)]) – The indices of the dimensions to reverse.

Returns:

Tensor, has the same shape and type as x.

Raises:
  • TypeError – If axis is neither list nor tuple.

  • TypeError – If element of axis is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> output = ops.reverse(input_x, axis=[1])
>>> print(output)
[[4 3 2 1]
 [8 7 6 5]]
>>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
>>> output = ops.reverse(input_x, axis=[1, 0])
>>> print(output)
[[8 7 6 5]
 [4 3 2 1]]
tinyms.primitives.reverse_sequence(x, seq_lengths, seq_dim, batch_dim=0)[source]

Reverses variable length slices.

Parameters:
  • x (Tensor) – The input to reverse, supporting all number types including bool.

  • seq_lengths (Tensor) – Specified reversing length, must be a 1-D vector with int32 or int64 types.

  • seq_dim (int) – The dimension where reversal is performed. Required.

  • batch_dim (int) – The input is sliced in this dimension. Default: 0.

Returns:

Tensor, with the same shape and data type as x.

Raises:
  • TypeError – If seq_dim or batch_dim is not an int.

  • ValueError – If batch_dim is greater than or equal to the rank of x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[1. 2. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([1, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=0, batch_dim=1)
>>> print(output)
[[1. 5. 9.]
 [4. 2. 6.]
 [7. 8. 3.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([2, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[2. 1. 3.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([3, 2, 3]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[3. 2. 1.]
 [5. 4. 6.]
 [9. 8. 7.]]
>>> x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.float32)
>>> seq_lengths = Tensor(np.array([4, 4]))
>>> output = ops.reverse_sequence(x, seq_lengths, seq_dim=1)
>>> print(output)
[[4. 3. 2. 1.]
 [8. 7. 6. 5.]]
tinyms.primitives.roll(input, shifts, dims=None)[source]

Rolls the elements of a tensor along an axis.

Parameters:
  • input (Tensor) – Input tensor.

  • shifts (Union[list(int), tuple(int), int]) – Specifies the number of places by which elements are shifted positively (towards larger indices) along the specified dimension. Negative shifts will roll the elements in the opposite direction.

  • dims (Union[list(int), tuple(int), int], optional) – Specifies the dimension indexes of shape to be rolled. Default: None. If dims is None, the Tensor will be flattened before rolling and then restored to the original shape.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If shifts is not an int, a tuple or a list.

  • TypeError – If dims is not an int, a tuple or a list.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([0, 1, 2, 3, 4]).astype(np.float32))
>>> output = ops.roll(input_x, shifts=2, dims=0)
>>> print(output)
[3. 4. 0. 1. 2.]
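
When dims is None the tensor is flattened, rolled, and restored, as described above; a minimal sketch of that behavior:

>>> x2 = Tensor(np.arange(6).reshape(2, 3).astype(np.float32))
>>> print(ops.roll(x2, shifts=1))
[[5. 0. 1.]
 [2. 3. 4.]]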
tinyms.primitives.rot90(input, k, dims)[source]

Rotates an n-D tensor by 90 degrees in the plane specified by dims. The rotation direction is from the first towards the second axis if k > 0, and from the second towards the first if k < 0.

Parameters:
  • input (Tensor) – Input tensor.

  • k (int) – Number of times to rotate.

  • dims (Union[list(int), tuple(int)]) – Axis to rotate.

Returns:

Tensor.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If k is not an integer.

  • TypeError – If dims is not a tuple or list of integers.

  • ValueError – If the length of dims is not 2.

  • ValueError – If any dims is out of Tensor’s range [-input.ndim, input.ndim).

  • RuntimeError – If rotation dims are not different.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))
>>> k = 1
>>> dims = [0, 1]
>>> output = ops.rot90(x, k, dims)
>>> print(output)
[[1. 3.]
 [0. 2.]]
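
Two quarter turns compose into a half turn; a minimal sketch continuing the example above:

>>> output = ops.rot90(x, 2, dims)
>>> print(output)
[[3. 2.]
 [1. 0.]]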
tinyms.primitives.round(input)[source]

Rounds a tensor element-wise to the nearest integer, with halfway values rounded to the nearest even integer.

\[out_i \approx input_i\]
Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, has the same shape and type as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> output = ops.round(input)
>>> print(output)
[ 1.  2.  2.  2. -4.]
tinyms.primitives.rrelu(input, lower=0.125, upper=0.3333333333333333)[source]

Randomized Leaky ReLU activation function.

The activation function is defined as:

\[\text{rrelu}(input_{ji}) = \begin{cases}input_{ji}, &\text{if } input_{ji} \geq 0; \cr {\alpha_{ji}} * input_{ji}, &\text{otherwise.}\end{cases}\]

where \(\alpha_{ji}\) ~ \(U(l, u)\), \(l \le u\).

Applies the rrelu function element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolution Network .

Parameters:
  • input (Tensor) – The input of rrelu is a Tensor of any dimension.

  • lower (Union[int, float]) – Lower bound of the uniform distribution from which the negative slope is sampled. Default: 1.0/8.

  • upper (Union[int, float]) – Upper bound of the uniform distribution from which the negative slope is sampled. Default: 1.0/3.

Returns:

Tensor, after rrelu, has the same type and shape as the input.

Raises:
  • TypeError – If lower is not a float or an int.

  • TypeError – If upper is not a float or an int.

  • TypeError – If input is not a Tensor.

  • TypeError – If input is not a Tensor of mindspore.float16 or mindspore.float32.

  • ValueError – If lower is greater than upper.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[-1.0, 4.0], [2.0, 0]]), mindspore.float32)
>>> output = ops.rrelu(x)
>>> print(output)
[[-0.31465699  4.        ]
 [ 2.          0.        ]]
tinyms.primitives.rsqrt(input)[source]

Computes reciprocal of square root of input tensor element-wise.

\[out_{i} = \frac{1}{\sqrt{input_{i}}}\]
Parameters:

input (Tensor) – The input of rsqrt. Each element must be a non-negative number; if an element is negative, the corresponding result is nan.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([-0.0370,  0.2970,  1.5420, -0.9105])
>>> output = ops.rsqrt(input)
>>> print(output)
[       nan 1.8349396  0.80530024        nan]
tinyms.primitives.scalar_cast(input_x, input_y)[source]

Casts the input scalar to another type.

Parameters:
  • input_x (scalar) – The input scalar. Only constant value is allowed.

  • input_y (mindspore.dtype) – The type to be cast. Only constant value is allowed.

Returns:

Scalar. The type is the same as the python type corresponding to input_y.

Raises:

TypeError – If input_x or input_y is not a constant value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> output = ops.scalar_cast(255.0, mindspore.int32)
>>> print(output)
255
tinyms.primitives.scalar_to_array(input_x)[source]

The interface is deprecated. Please use the mindspore.ops.scalar_to_tensor() instead.

tinyms.primitives.scalar_to_tensor(input_x, dtype=mindspore.float32)[source]

Converts a scalar to a Tensor, and converts the data type to the specified type.

Parameters:
  • input_x (Union[bool, int, float]) – The input is a scalar. Only constant value is allowed.

  • dtype (mindspore.dtype) – The target data type. Default: mindspore.float32. Only constant value is allowed.

Returns:

Tensor. 0-D Tensor and the content is the input.

Raises:

TypeError – If input_x is neither bool nor int nor float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> data = 1
>>> output = ops.scalar_to_tensor(data, mindspore.float32)
>>> print(output)
1.0
tinyms.primitives.scatter(input, axis, index, src)[source]

Updates the values in input with values from src at the positions specified by index. Refer to mindspore.ops.tensor_scatter_elements() for more details.

Parameters:
  • input (Tensor) – The target tensor. The rank of input must be at least 1.

  • axis (int) – Which axis to scatter. Accepted range is [-r, r) where r = rank(input).

  • index (Tensor) – The index to do update operation whose data type must be mindspore.int32 or mindspore.int64. It has the same rank as input, and the accepted value range is [-s, s) where s is the size along axis.

  • src (Tensor) – The tensor doing the update operation with input; it has the same type as input, and the shape of src should be equal to the shape of index.

Returns:

Tensor, has the same shape and type as input.

Raises:
  • TypeError – If index is neither int32 nor int64.

  • ValueError – If the rank of any of input, index or src is less than 1.

  • ValueError – If the shape of src is not equal to the shape of index.

  • ValueError – If the rank of src is not equal to the rank of input.

  • RuntimeError – If data type conversion between input and src is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([[1, 2, 3, 4, 5]]), dtype=ms.float32)
>>> src = Tensor(np.array([[8, 8]]), dtype=ms.float32)
>>> index = Tensor(np.array([[2, 4]]), dtype=ms.int64)
>>> out = ops.scatter(input=input, axis=1, index=index, src=src)
>>> print(out)
[[1. 2. 8. 4. 8.]]
>>> input = Tensor(np.zeros((5, 5)), dtype=ms.float32)
>>> src = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), dtype=ms.float32)
>>> index = Tensor(np.array([[0, 0, 0], [2, 2, 2], [4, 4, 4]]), dtype=ms.int64)
>>> out = ops.scatter(input=input, axis=0, index=index, src=src)
>>> print(out)
[[1. 2. 3. 0. 0.]
 [0. 0. 0. 0. 0.]
 [4. 5. 6. 0. 0.]
 [0. 0. 0. 0. 0.]
 [7. 8. 9. 0. 0.]]
>>> input = Tensor(np.zeros((5, 5)), dtype=ms.float32)
>>> src = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), dtype=ms.float32)
>>> index = Tensor(np.array([[0, 2, 4], [0, 2, 4], [0, 2, 4]]), dtype=ms.int64)
>>> out = ops.scatter(input=input, axis=1, index=index, src=src)
>>> print(out)
[[1. 0. 2. 0. 3.]
 [4. 0. 5. 0. 6.]
 [7. 0. 8. 0. 9.]
 [0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0.]]
tinyms.primitives.scatter_add(input_x, indices, updates)[source]

Updates input_x through the add operation, applying the given updates at the input indices. The operation returns input_x after the update, which makes the updated value convenient to use.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The index to do add operation whose data type must be int32 or int64.

  • updates (Tensor) – The tensor doing the add operation with input_x, the data type is the same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_add(input_x, indices, updates)
>>> print(output)
[[ 1.  1.  1.]
 [19. 19. 19.]]
tinyms.primitives.scatter_div(input_x, indices, updates)[source]

Updates input_x through the division operation, applying the given updates at the input indices. The operation returns input_x after the update, which makes the updated value convenient to use.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do divide operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor doing the divide operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same type and shape as input_x.

Raises:
  • TypeError – If the type of indices is not one of the following dtype: int32, int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

  • RuntimeError – On the Ascend platform, the input data dimension of input_x , indices and updates is greater than 8 dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter, ops
>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
>>> # input_x is updated in place by the operation above, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
 [  3.   3.   3.]]
>>> # input_x is updated in place by the operation above, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0 63.0 63.0]
>>> # input_x[1] = [63.0 63.0 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
 [ 9.  9.  9.]]
tinyms.primitives.scatter_max(input_x, indices, updates)[source]

Using given values to update tensor value through the max operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = max(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates follow the implicit type conversion rules to keep the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do max operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor doing the max operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same type and shape as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

  • RuntimeError – If, on the Ascend platform, the rank of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> output = ops.scatter_max(input_x, indices, updates)
>>> print(output)
[[88. 88. 88.]
 [88. 88. 88.]]
tinyms.primitives.scatter_min(input_x, indices, updates)[source]

Using given values to update tensor value through the min operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = min(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when updates does not support conversion to the data type required by input_x.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do min operation whose data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor doing the min operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

  • RuntimeError – If, on the Ascend platform, the rank of input_x, indices or updates is greater than 8.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> input_x = Parameter(Tensor(np.zeros((2, 3)), mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([1, 0]), mindspore.int32)
>>> update = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> output = ops.scatter_min(input_x, indices, update)
>>> print(output)
[[0. 0. 0.]
 [0. 0. 0.]]
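Since the input above is all zeros, the minimum never changes it. The following supplementary example (an illustration assuming only the documented behavior, not part of the original reference) starts from a larger input so the update is visible:

>>> input_x = Parameter(Tensor(np.ones((2, 3)) * 5, mindspore.float32), name="input_x")
>>> indices = Tensor(np.array([1, 0]), mindspore.int32)
>>> update = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> output = ops.scatter_min(input_x, indices, update)
>>> print(output)
[[3. 4. 5.]
 [0. 1. 2.]]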
tinyms.primitives.scatter_mul(input_x, indices, updates)[source]

Using given values to update tensor value through the mul operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{*}= \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type. A RuntimeError will be reported when the data types of parameters need to be converted.

Parameters:
  • input_x (Parameter) – The target tensor to be updated, with data type of Parameter. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The index to do mul operation whose data type must be int32 or int64.

  • updates (Tensor) – The tensor doing the mul operation with input_x, the data type is same as input_x, the shape is indices.shape + input_x.shape[1:].

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
 [4. 4. 4.]]
>>> # Since input_x is updated in place, it needs to be re-initialized before the next example.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [7.0, 7.0, 7.0] = [42.0, 42.0, 42.0]
>>> # input_x[1] = [42.0, 42.0, 42.0] * [9.0, 9.0, 9.0] = [378.0, 378.0, 378.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[  1.   1.   1.]
 [378. 378. 378.]]
>>> # Since input_x is updated in place, it needs to be re-initialized before the next example.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [1.0, 1.0, 1.0] = [2.0, 2.0, 2.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [7.0, 7.0, 7.0] = [14.0, 14.0, 14.0]
>>> # input_x[1] = [14.0, 14.0, 14.0] * [9.0, 9.0, 9.0] = [126.0, 126.0, 126.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[  3.   3.   3.]
 [126. 126. 126.]]
>>> # Since input_x is updated in place, it needs to be re-initialized before the next example.
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [0, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
>>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
>>> # step 2: [0, 1]
>>> # input_x[0] = [1.0, 1.0, 1.0] * [7.0, 7.0, 7.0] = [7.0, 7.0, 7.0]
>>> # input_x[1] = [6.0, 6.0, 6.0] * [9.0, 9.0, 9.0] = [54.0, 54.0, 54.0]
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
>>> output = ops.scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 7.  7.  7.]
 [54. 54. 54.]]
tinyms.primitives.scatter_nd(indices, updates, shape)[source]

Scatters a tensor into a new tensor depending on the specified indices.

Creates an empty tensor with the given shape, and sets values by scattering the update tensor depending on indices. The empty tensor has rank \(P\) and indices has rank \(Q\).

The shape is \((s_0, s_1, ..., s_{P-1})\), where \(P \ge 1\).

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\), where \(Q \ge 2\) and \(N \le P\).

The last dimension of indices (with length \(N\)) indicates slices along the \(N\)th dimension of the empty tensor.

updates is a tensor of rank \(Q-1+P-N\), and its shape is \((i_0, i_1, ..., i_{Q-2}, s_N, s_{N+1}, ..., s_{P-1})\).

If indices contains duplicates, the duplicate updates are summed.

The following figure shows the calculation process of inserting two new value matrices into the first dimension of a rank-3 tensor:

[Figure: ScatterNd]
Parameters:
  • indices (Tensor) – Define the index of scattering in the new tensor with int32 or int64 data type. The rank of indices must be at least 2 and indices.shape[-1] <= len(shape).

  • updates (Tensor) – Define the source Tensor to be updated. It has shape indices.shape[:-1] + shape[indices.shape[-1]:].

  • shape (tuple[int]) – Define the shape of the output tensor, with the same data type as indices. shape cannot be empty, and the elements in shape must be greater than or equal to 1.

Returns:

Tensor, the new tensor, has the same type as updates and the same shape as shape.

Raises:
  • TypeError – If shape is not a tuple.

  • ValueError – If any element of shape is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[1, 1, 1, 1], [2, 2, 2, 2],
...                             [3, 3, 3, 3], [4, 4, 4, 4]]]), mindspore.float32)
>>> shape = (4, 4, 4)
>>> output = ops.scatter_nd(indices, updates, shape)
>>> print(output)
[[[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]
 [[1. 1. 1. 1.]
  [2. 2. 2. 2.]
  [3. 3. 3. 3.]
  [4. 4. 4. 4.]]
 [[0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = ops.scatter_nd(indices, updates, shape)
>>> # In order to facilitate understanding, explain the operator pseudo-operation process step by step:
>>> # Step 1: Generate an empty Tensor of the specified shape according to the shape
>>> # [
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> #     [0. 0. 0.]
>>> # ]
>>> # Step 2: Modify the data at the specified locations according to indices
>>> # The 0th row of indices is [0, 1] and the 0th element of updates is 3.2,
>>> # which sets the element at row 0, column 1 to 3.2:
>>> # [
>>> #     [0. 3.2 0.]
>>> #     [0. 0.  0.]
>>> #     [0. 0.  0.]
>>> # ]
>>> # The 1st row of indices is [1, 1] and the 1st element of updates is 1.1,
>>> # which sets the element at row 1, column 1 to 1.1:
>>> # [
>>> #     [0. 3.2 0.]
>>> #     [0. 1.1 0.]
>>> #     [0. 0.  0.]
>>> # ]
>>> # The final result is as follows:
>>> print(output)
[[0. 3.2 0.]
 [0. 1.1 0.]
 [0. 0.  0.]]
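As stated above, duplicate entries in indices are summed. A minimal supplementary sketch (not part of the original reference) with both index rows pointing at the same position:

>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.0]), mindspore.float32)
>>> output = ops.scatter_nd(indices, updates, (3, 3))
>>> print(output)
[[3. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]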
tinyms.primitives.scatter_nd_add(input_x, indices, updates, use_locking=False)[source]

Applies sparse addition to individual values or slices in a tensor.

Using given values to update tensor value through the add operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do add operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(input_x.shape).

  • updates (Tensor) – The tensor doing the addition operation with input_x, the data type is same as input_x, the shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_add(input_x, indices, updates, False)
>>> print(output)
[ 1. 10.  9.  4. 12.  6.  7. 17.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_add(input_x, indices, updates, False)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]]
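As a reading aid for the rank formula above (not additional API): in the second example, input_x has rank P = 3 and indices has shape (2, 1), so Q = 2 and N = 1, and updates must have rank Q - 1 + P - N = 3 with shape indices.shape[:-1] + input_x.shape[1:] = (2, 4, 4):

>>> print(indices.shape, updates.shape)
(2, 1) (2, 4, 4)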
tinyms.primitives.scatter_nd_div(input_x, indices, updates, use_locking=False)[source]

Applies sparse division to individual values or slices in a tensor.

Using given values to update tensor value through the div operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q, where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do div operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(input_x.shape).

  • updates (Tensor) – The tensor to do the div operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_div(input_x, indices, updates, False)
>>> print(output)
[1.         0.25       0.5        4.         0.71428573 6.
 7.         0.8888889 ]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.float32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.float32)
>>> output = ops.scatter_nd_div(input_x, indices, updates, False)
>>> print(output)
[[[1.         1.         1.         1.        ]
  [0.5        0.5        0.5        0.5       ]
  [0.33333334 0.33333334 0.33333334 0.33333334]
  [0.25       0.25       0.25       0.25      ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]
 [[0.2        0.2        0.2        0.2       ]
  [0.16666667 0.16666667 0.16666667 0.16666667]
  [0.14285715 0.14285715 0.14285715 0.14285715]
  [0.125      0.125      0.125      0.125     ]]
 [[1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]
  [1.         1.         1.         1.        ]]]
tinyms.primitives.scatter_nd_max(input_x, indices, updates, use_locking=False)[source]

Applies sparse maximum to individual values or slices in a tensor.

Using given values to update parameter value through the max operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do maximum operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(input_x.shape).

  • updates (Tensor) – The tensor to do the max operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_max(input_x, indices, updates, False)
>>> print(output)
[1. 8. 6. 4. 7. 6. 7. 9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_max(input_x, indices, updates, False)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
tinyms.primitives.scatter_nd_min(input_x, indices, updates, use_locking=False)[source]

Applies sparse minimum to individual values or slices in a tensor.

Using given values to update tensor value through the min operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter. The shape is \((N,*)\), where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The index to do min operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(input_x.shape).

  • updates (Tensor) – The tensor to do the min operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.ones(8) * 10, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_min(input_x, indices, updates, False)
>>> print(output)
[10.  8.  6. 10.  7. 10. 10.  9.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)) * 10, mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_min(input_x, indices, updates, False)
>>> print(output)
[[[ 1  1  1  1]
  [ 2  2  2  2]
  [ 3  3  3  3]
  [ 4  4  4  4]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]
 [[ 5  5  5  5]
  [ 6  6  6  6]
  [ 7  7  7  7]
  [ 8  8  8  8]]
 [[10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]
  [10 10 10 10]]]
tinyms.primitives.scatter_nd_mul(input_x, indices, updates, use_locking=False)[source]

Applies sparse multiplication to individual values or slices in a tensor.

Using given values to update parameter value through the multiplication operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q, where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index to do multiplication operation whose data type must be mindspore.int32 or mindspore.int64. The rank of indices must be at least 2 and indices.shape[-1] <= len(input_x.shape).

  • updates (Tensor) – The tensor to do the multiplication operation with input_x. The data type is same as input_x, and the shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, the updated input_x, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[ 1. 16. 18.  4. 35.  6.  7. 72.]
>>> input_x = Parameter(Tensor(np.ones((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_mul(input_x, indices, updates)
>>> print(output)
[[[1 1 1 1]
  [2 2 2 2]
  [3 3 3 3]
  [4 4 4 4]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]
tinyms.primitives.scatter_nd_sub(input_x, indices, updates, use_locking=False)[source]

Applies sparse subtraction to individual values or slices in a tensor.

Using given values to update tensor value through the subtraction operation, along with the input indices. This operation outputs the input_x after the update is done, which makes it convenient to use the updated value.

input_x has rank P and indices has rank Q where Q >= 2.

indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.

The last dimension of indices (with length N) indicates slices along the Nth dimension of input_x.

updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index of input tensor, with int32 or int64 data type. The rank of indices must be at least 2 and indices.shape[-1] <= len(input_x.shape).

  • updates (Tensor) – The tensor doing the subtraction operation with input_x, has the same type as input_x. The shape is indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • use_locking (bool) – Whether to protect the assignment by a lock. Default: False.

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of use_locking is not bool.

  • TypeError – If the dtype of indices is not int32 or int64.

  • TypeError – If dtype of input_x and updates are not the same.

  • ValueError – If the shape of updates is not equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
>>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
>>> output = ops.scatter_nd_sub(input_x, indices, updates, False)
>>> print(output)
[ 1. -6. -3.  4. -2.  6.  7. -1.]
>>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
>>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
>>> output = ops.scatter_nd_sub(input_x, indices, updates, False)
>>> print(output)
[[[-1 -1 -1 -1]
  [-2 -2 -2 -2]
  [-3 -3 -3 -3]
  [-4 -4 -4 -4]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]
 [[-5 -5 -5 -5]
  [-6 -6 -6 -6]
  [-7 -7 -7 -7]
  [-8 -8 -8 -8]]
 [[ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]
  [ 0  0  0  0]]]
tinyms.primitives.scatter_update(input_x, indices, updates)[source]

Updates tensor values by using input indices and value.

Using given values to update tensor value, along with the input indices.

for each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] = \text{updates}[i, ..., j, :]\]

Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the lower priority data type will be converted to the relatively highest priority data type.

Parameters:
  • input_x (Parameter) – The target tensor, with data type of Parameter.

  • indices (Tensor) – The index of input tensor. With int32 or int64 data type. If there are duplicates in indices, the order for updating is undefined.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape = indices.shape + input_x.shape[1:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If the dtype of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If a data type conversion between input_x and updates is required, since data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
>>> updates = Tensor(np_updates, mindspore.float32)
>>> output = ops.scatter_update(input_x, indices, updates)
>>> print(output)
[[2. 1.2  1.]
 [3. 1.2  1.]]
tinyms.primitives.searchsorted(sorted_sequence, values, *, out_int32=False, right=False)[source]

Return the position indices such that, after inserting the values into sorted_sequence, the order of the innermost dimension of sorted_sequence remains unchanged.

Parameters:
  • sorted_sequence (Tensor) – The input tensor. It must contain a monotonically increasing sequence on the innermost dimension.

  • values (Tensor) – The value that should be inserted.

Keyword Arguments:
  • out_int32 (bool, optional) – Output datatype. If True, the output datatype will be int32; if False, the output datatype will be int64. Default: False.

  • right (bool, optional) – Search Strategy. If True, return the last suitable index found; if False, return the first such index. Default: False.

Returns:

Tensor containing, for each element of values, the index into the innermost dimension of sorted_sequence at which it can be inserted while preserving the order. The datatype is int32 if out_int32 is True, otherwise int64, and the shape is the same as the shape of values.

Raises:

ValueError – If sorted_sequence is not one-dimensional and the dimensions of sorted_sequence and values, except for the last one, are different.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sorted_sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
>>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
>>> output = ops.searchsorted(sorted_sequence, values)
>>> print(output)
[[2 4 5]
 [1 2 4]]
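A supplementary call (assuming only the documented keyword behavior) showing right=True, where values equal to an existing element are placed after its last occurrence instead of before its first:

>>> output = ops.searchsorted(sorted_sequence, values, right=True)
>>> print(output)
[[3 4 5]
 [1 3 4]]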
tinyms.primitives.select(cond, x, y)[source]

Each element of the condition tensor cond determines whether the corresponding element in the output is selected from x (if True) or from y (if False).

It can be defined as:

\[\begin{split}out_i = \begin{cases} x_i, & \text{if } cond_i \\ y_i, & \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • cond (Tensor[bool]) – The condition tensor, decides which element is chosen. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

  • x (Union[Tensor, int, float]) – The first Tensor or number to be selected. If x is a Tensor, the shape is or can be broadcast to \((x_1, x_2, ..., x_N, ..., x_R)\). If x is an int or a float, it will be cast to the type of int32 or float32, and broadcast to the same shape as y. One of x and y must be a Tensor.

  • y (Union[Tensor, int, float]) – The second Tensor or number to be selected. If y is a Tensor, the shape is or can be broadcast to \((x_1, x_2, ..., x_N, ..., x_R)\). If y is an int or a float, it will be cast to the type of int32 or float32, and broadcast to the same shape as x. One of x and y must be a Tensor.

Returns:

Tensor, has the same shape as cond.

Raises:
  • TypeError – If x or y is not a Tensor, int or float.

  • ValueError – If the shapes of the inputs cannot be broadcast.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # 1) Both inputs are Tensor
>>> cond = Tensor([True, False])
>>> x = Tensor([2,3], mindspore.float32)
>>> y = Tensor([1,2], mindspore.float32)
>>> output = ops.select(cond, x, y)
>>> print(output)
[2. 2.]
>>> # 2) y is a float
>>> cond = Tensor([True, False])
>>> x = Tensor([2,3], mindspore.float32)
>>> y = 2.0
>>> output = ops.select(cond, x, y)
>>> print(output)
[2. 2.]
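A third case (a supplementary sketch, not from the original reference) where x is a float; the scalar is cast to float32 and broadcast to y's shape:

>>> # 3) x is a float
>>> cond = Tensor([True, False])
>>> x = 3.0
>>> y = Tensor([1, 2], mindspore.float32)
>>> output = ops.select(cond, x, y)
>>> print(output)
[3. 2.]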
tinyms.primitives.selu(input_x)[source]

Activation function SeLU (Scaled exponential Linear Unit).

The activation function is defined as:

\[E_{i} = scale * \begin{cases} x_{i}, &\text{if } x_{i} \geq 0; \cr \text{alpha} * (\exp(x_i) - 1), &\text{otherwise.} \end{cases}\]

where \(alpha\) and \(scale\) are pre-defined constants (\(alpha=1.67326324\) and \(scale=1.05070098\)).

See more details in Self-Normalizing Neural Networks.

Parameters:

input_x (Tensor) – Tensor of any dimension, the data type is float16 or float32.

Returns:

Tensor, with the same type and shape as the input_x.

Raises:

TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> output = ops.selu(input_x)
>>> print(output)
[[-1.1113307 4.202804 -1.7575096]
 [ 2.101402 -1.7462534 9.456309 ]]
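The first output entry can be cross-checked by hand against the formula, using only the constants stated above (plain NumPy, not part of the API):

>>> scale, alpha = 1.05070098, 1.67326324
>>> print(np.float32(scale * alpha * (np.exp(-1.0) - 1)))
-1.1113307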
tinyms.primitives.sequence_mask(lengths, maxlen=None)[source]

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape \((d_1, d_2, ..., d_n)\), then the resulting tensor mask has type and shape \((d_1, d_2, ..., d_n, maxlen)\), with mask \([i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n])\).

Parameters:
  • lengths (Tensor) – Tensor to calculate the mask for. All values in this tensor should be less than or equal to maxlen. Values greater than maxlen will be treated as maxlen.

  • maxlen (int) – The size of the last dimension of the returned tensor. Must be positive and of the same type as the elements of lengths. Default: None.

Returns:

One mask tensor of shape lengths.shape + (maxlen,).

Raises:
  • TypeError – If lengths is not a Tensor.

  • TypeError – If maxlen is not an int.

  • TypeError – If dtype of lengths is neither int32 nor int64.

Supported Platforms:

GPU CPU

Examples

>>> # case 1: When maxlen is assigned
>>> x = Tensor(np.array([1, 2, 3, 4]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[ True False False False False]
 [ True  True False False False]
 [ True  True  True False False]
 [ True  True  True  True False]]
>>> # case 2: When there is 0 in x
>>> x = Tensor(np.array([[1, 3], [2, 0]]))
>>> output = ops.sequence_mask(x, 5)
>>> print(output)
[[[ True False False False False]
  [ True  True  True False False]]
 [[ True  True False False False]
  [False False False False False]]]
>>> # case 3: when the maxlen is not assigned
>>> x = Tensor(np.array([[1, 3], [2, 4]]))
>>> output = ops.sequence_mask(x)
>>> print(output)
[[[ True False False False]
  [ True  True  True False]]
 [[ True  True False False]
  [ True  True  True  True]]]
tinyms.primitives.sgn(input)[source]

Extension of mindspore.ops.sign() in complex domain. For real number input, this function is the same as mindspore.ops.sign(). For complex input, this function is calculated according to the following formula.

\[\begin{split}\text{out}_{i} = \begin{cases} 0 & |\text{input}_i| == 0 \\ \frac{{\text{input}_i}}{|{\text{input}_i}|} & \text{otherwise} \end{cases}\end{split}\]
Parameters:

input (Tensor) – The input value.

Returns:

Tensor, the sgn of input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([[3 + 4j, 7 - 24j, 0, 6 + 8j, 8], [15 + 20j, 7 - 24j, 0, 3 + 4j, 20]], dtype=ms.complex64)
>>> output = ops.sgn(input)
>>> print(output)
[[0.6 +0.8j  0.28-0.96j 0.  +0.j   0.6 +0.8j  1.  +0.j  ]
 [0.6 +0.8j  0.28-0.96j 0.  +0.j   0.6 +0.8j  1.  +0.j  ]]
tinyms.primitives.shape(input_x)[source]

Returns the shape of the input tensor.

Parameters:

input_x (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

Returns:

tuple[int], the output tuple is constructed by multiple integers, \((x_1, x_2, ..., x_R)\).

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> output = ops.shape(input_x)
>>> print(output)
(3, 2, 1)
tinyms.primitives.shuffle(x, seed=None)[source]

Randomly shuffles a Tensor along its first dimension.

Parameters:
  • x (Tensor) – The Tensor to be shuffled.

  • seed (int, optional) – Random seed used for random number generation, must be non-negative. If seed is 0, it will be replaced with a randomly generated value. Default: None, which will be treated as 0.

Returns:

Tensor. The shape and type are the same as the input x.

Raises:

TypeError – If seed is neither None nor a non-negative int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.float32)
>>> output = ops.shuffle(x, seed=1)
>>> print(output)
[3. 4. 2. 1.]
tinyms.primitives.sigmoid(input)[source]

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{sigmoid}(input_i) = \frac{1}{1 + \exp(-input_i)}\]

where \(input_i\) is an element of the input.

Parameters:

input (Tensor) – Tensor of any dimension, the data type is float16, float32, float64, complex64 or complex128.

Returns:

Tensor, with the same type and shape as the input.

Raises:
  • TypeError – If dtype of input is not float16, float32, float64, complex64 or complex128.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.sigmoid(input)
>>> print(output)
[0.7310586  0.880797   0.95257413 0.98201376 0.9933072 ]
tinyms.primitives.sign(input)[source]

Returns an element-wise indication of the sign of a number.

\[\begin{split}\text{out}_{i} = \begin{cases} -1 & \text{input}_{i} < 0 \\ 0 & \text{input}_{i} = 0 \\ 1 & \text{input}_{i} > 0 \end{cases}\end{split}\]
Parameters:

input (Tensor) – Input Tensor.

Returns:

Tensor, the sign of input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([[-1, 0, 2, 4, 6], [2, 3, 5, -6, 0]])
>>> output = ops.sign(input)
>>> print(output)
[[-1  0  1  1  1]
 [ 1  1  1 -1  0]]
>>> ms.set_context(device_target="CPU")
>>> x = ms.Tensor([[-1, 0, float('inf'), 4, float('nan')], [2, 3, float('-inf'), -6, 0]])
>>> output = ops.sign(x)
>>> print(output)
[[-1.  0.  1.  1.  0.]
 [ 1.  1. -1. -1.  0.]]
tinyms.primitives.signbit(input)[source]

Determines whether each element is negative: if the element value is less than 0, the corresponding output position is True; otherwise, it is False.

Parameters:

input (Tensor) – The input value.

Returns:

Tensor, the signbit of input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> input = ms.Tensor([0.3, 1.2, 0., -2.5])
>>> output = ops.signbit(input)
>>> print(output)
[False False False  True]
tinyms.primitives.silu(x)[source]

Computes Sigmoid Linear Unit of input element-wise. The SiLU function is defined as:

\[\text{SiLU}(x) = x * \sigma(x),\]

where the Logistic Sigmoid function is defined as:

\[\sigma(x_i) = \frac{1}{1 + \exp(-x_i)},\]

where \(x_i\) is an element of x.

For more details, please refer to mindspore.nn.SiLU.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([-1, 2, -3, 2, -1]), mindspore.float16)
>>> output = ops.silu(x)
>>> print(output)
[-0.269  1.762  -0.1423  1.762  -0.269]
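As a hand check of the definition (a reading aid, not part of the original reference):

>>> # The first entry is -1 * sigmoid(-1) = -1 / (1 + e) ≈ -0.2689,
>>> # which matches -0.269 at float16 precision.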
tinyms.primitives.sin(input)[source]

Computes sine of the input element-wise.

\[out_i = \sin(input_i)\]
Parameters:

input (Tensor) – The shape of tensor is \((N,*)\) where \(*\) means, any number of additional dimensions.

Returns:

Tensor, has the same shape and dtype as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not one of: float16, float32, float64, complex64, complex128.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = ops.sin(input)
>>> print(output)
[0.5810352 0.27635565 0.41687083 0.5810352]
tinyms.primitives.sinc(input)[source]

Computes the normalized sinc of input.

\[\begin{split}out_i = \begin{cases} \frac{\sin(\pi input_i)}{\pi input_i} & input_i\neq 0\\ 1 & input_i=0 \end{cases}\end{split}\]
Parameters:

input (Tensor) – The input Tensor.

Returns:

Tensor, has the same shape as the input. The dtype of output is float32 when dtype of input is in [int, bool]. Otherwise output has the same dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = ops.sinc(input)
>>> print(output)
[0.47735003 0.8759357  0.7224278  0.47735003]
tinyms.primitives.sinh(input)[source]

Computes hyperbolic sine of the input element-wise.

\[out_i = \sinh(input_i)\]
Parameters:

input (Tensor) – The input tensor of hyperbolic sine function.

Returns:

Tensor, has the same shape as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = ops.sinh(input)
>>> print(output)
[0.6604918  0.28367308 0.44337422 0.6604918 ]
tinyms.primitives.size(input_x)[source]

Returns a Scalar of type int that represents the size of the input Tensor, i.e., the total number of elements in the Tensor.

Parameters:

input_x (Tensor) –

The input Tensor, with shape \((x_1, x_2, ..., x_R)\). The data type is number.

Returns:

int. A scalar representing the number of elements in input_x; that is, \(size = x_1 * x_2 * ... * x_R\). The data type is int.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.size(input_x)
>>> print(output)
4
tinyms.primitives.slice(input_x, begin, size)[source]

Slices a tensor in the specified shape.

Slice the tensor input_x in shape of size and starting at the location specified by begin. The slice begin represents the offset in each dimension of input_x. The slice size represents the size of the output tensor.

Note

begin is zero-based and size is one-based.

If size[i] is -1, all remaining elements in dimension i are included in the slice. This is equivalent to setting \(size[i] = input\_x.shape[i] - begin[i]\).

Parameters:
  • input_x (Tensor) – The target tensor.

  • begin (Union[tuple, list]) – The beginning of the slice. Only constant values (>= 0) are allowed.

  • size (Union[tuple, list]) – The size of the slice. Only constant values are allowed.

Returns:

Tensor, with the shape given by size and the same data type as input_x.

Raises:

TypeError – If begin or size is neither tuple nor list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
...                         [[3, 3, 3], [4, 4, 4]],
...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> output = ops.slice(data, (1, 0, 0), (1, 1, 3))
>>> print(output)
[[[3 3 3]]]
>>> output = ops.slice(data, (1, 0, 0), (1, 1, 2))
>>> print(output)
[[[3 3]]]
>>> output = ops.slice(data, (1, 0, 0), (1, 1, 1))
>>> print(output)
[[[3]]]
>>> output = ops.slice(data, (1, 1, 0), (1, 1, 3))
>>> print(output)
[[[4 4 4]]]
>>> output = ops.slice(data, (1, 0, 1), (1, 1, 2))
>>> print(output)
[[[3 3]]]
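The Note above allows size[i] = -1; a supplementary call (assuming only the documented behavior) taking all remaining elements of the last dimension:

>>> output = ops.slice(data, (1, 0, 0), (1, 1, -1))
>>> print(output)
[[[3 3 3]]]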
tinyms.primitives.slogdet(input)[source]

Computes the sign and the log of the absolute value of the determinant of one or more square matrices.

Parameters:

input (Tensor) – A matrix to be calculated, its shape is \([..., M, M]\). The matrix must be at least two dimensions, and the last two dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

Returns:

Tensor. The signs of the determinants. The shape is \(input.shape[:-2]\), and the dtype is the same as input.

Tensor. The natural logarithms of the absolute values of the determinants. The shape is \(input.shape[:-2]\). The dtype is always real-valued, even if input is complex.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float32, float64, complex64 or complex128.

  • ValueError – If the last two dimensions of input are not the same size.

  • ValueError – If the dimension of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]], [[2.5, 0.5], [3.0, 9.0]]]), mindspore.float32)
>>> sign, output = ops.slogdet(input_x)
>>> print(sign)
[-1.   1.]
>>> print(output)
[2.80336046e+00 3.04452229e+00]
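The outputs can be verified by hand (a cross-check, not part of the API):

>>> # det([[-4.5, -1.5], [7.0, 6.0]]) = -4.5*6.0 - (-1.5)*7.0 = -16.5,
>>> # so sign = -1 and log|det| = ln(16.5) ≈ 2.8034.
>>> # det([[2.5, 0.5], [3.0, 9.0]]) = 2.5*9.0 - 0.5*3.0 = 21.0,
>>> # so sign = 1 and log|det| = ln(21.0) ≈ 3.0445.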
tinyms.primitives.smooth_l1_loss(input, target, beta=1.0, reduction='none')[source]

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a loss similar to MSELoss, but less sensitive to outliers, as described in Fast R-CNN by Ross Girshick.

Given two input \(x,\ y\) of length \(N\), the unreduced SmoothL1Loss can be described as follows:

\[\begin{split}L_{i} = \begin{cases} \frac{0.5 (x_i - y_i)^{2}}{\beta}, & \text{if } |x_i - y_i| < \beta \\ |x_i - y_i| - 0.5 * \beta, & \text{otherwise. } \end{cases}\end{split}\]

If reduction is not none, then:

\[\begin{split}L = \begin{cases} \operatorname{mean}(L_{i}), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L_{i}), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Here \(\text{beta}\) controls the point where the loss function changes from quadratic to linear. \(\text{beta} > 0\), and its default value is 1.0. \(N\) is the batch size.

Parameters:
  • input (Tensor) – Tensor of shape \((N, *)\) where \(*\) means, any number of additional dimensions.

  • target (Tensor) – Ground truth data, tensor of shape \((N, *)\), same shape and dtype as the input.

  • beta (float) – A parameter used to control the point where the function changes from a quadratic to a linear penalty. The value should be greater than zero. Default: 1.0.

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’ or ‘sum’. Default: ‘none’.

Returns:

Tensor, if reduction is ‘none’, then output is a tensor with the same shape as input. Otherwise, the shape of output tensor is (1,).

Raises:
  • TypeError – If beta is not a float.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • TypeError – If dtype of input or target is not one of float16, float32, float64.

  • ValueError – If beta is less than or equal to 0.

  • ValueError – If shape of input is not the same as target.

Supported Platforms:

Ascend GPU CPU

Examples

>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = ops.smooth_l1_loss(logits, labels)
>>> print(output)
[0.  0.  0.5]
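Element-wise check of this output against the formula with beta = 1.0 (a reading aid):

>>> # |1-1| = 0 and |2-2| = 0 fall in the quadratic branch: 0.5 * 0**2 / 1.0 = 0.
>>> # |3-2| = 1 is not < beta, so the linear branch gives 1 - 0.5 * 1.0 = 0.5.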
tinyms.primitives.soft_shrink(input, lambd=0.5)[source]

soft_shrink is deprecated, please use softshrink instead.

tinyms.primitives.softmax(x, axis=-1, *, dtype=None)[source]

Applies the Softmax operation to the input tensor on the specified axis. Given a slice \(x\) along the specified axis, the Softmax function for each element \(x_i\) is shown as follows:

\[\text{output}(x_i) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)},\]

where \(N\) is the length of the tensor.

Parameters:
  • x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

  • axis (Union[int, tuple[int]], optional) – The axis to perform the Softmax operation. Default: -1.

Keyword Arguments:

dtype (mindspore.dtype, optional) – When set, x will be converted to the specified type, dtype, before execution, and dtype of returned Tensor will also be dtype. Default: None.

Returns:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If axis is not an int or a tuple.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in range [-len(x.shape), len(x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.softmax(x)
>>> print(output)
[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
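A supplementary call (assuming only the documented keyword behavior) showing dtype, which casts x before the computation and determines the output dtype:

>>> output = ops.softmax(x, dtype=mindspore.float16)
>>> print(output.dtype)
Float16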
tinyms.primitives.softmin(x, axis=-1, *, dtype=None)[source]

Applies the Softmin operation to the input tensor on the specified axis. Given a slice \(x\) along the specified axis, the Softmin function for each element \(x_i\) is shown as follows:

\[\text{output}(x_i) = \frac{exp(-x_i)}{\sum_{j = 0}^{N-1}\exp(-x_j)},\]

where \(N\) is the length of the tensor.

Parameters:
  • x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

  • axis (Union[int, tuple[int]], optional) – The axis to perform the Softmin operation. Default: -1.

Keyword Arguments:

dtype (mindspore.dtype, optional) – When set, x will be converted to the specified type, dtype, before execution, and dtype of returned Tensor will also be dtype. Default: None.

Returns:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If axis is not an int or a tuple.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in range [-len(x.shape), len(x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> output = ops.softmin(x)
>>> print(output)
[0.2341  0.636  0.0862  0.01165  0.03168 ]
tinyms.primitives.softshrink(x, lambd=0.5)[source]

Applies the Softshrink function element-wise.

\[\begin{split}\text{SoftShrink}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • x (Tensor) – The input of soft shrink with data type of float16 or float32.

  • lambd (float) – The \(\lambda\) must be no less than zero. Default: 0.5.

Returns:

Tensor, has the same shape and data type as x.

Raises:
  • TypeError – If lambd is not a float.

  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If lambd is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[ 0.5297,  0.7871,  1.1754], [ 0.7836,  0.6218, -1.1542]]), mindspore.float16)
>>> output = ops.softshrink(x)
>>> print(output)
[[ 0.02979  0.287    0.676  ]
 [ 0.2837   0.1216  -0.6543 ]]
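Hand check with lambd = 0.5 (a reading aid; float16 arithmetic explains the small deviations):

>>> # 0.5297 > 0.5, so the first entry is 0.5297 - 0.5 = 0.0297 (printed 0.02979).
>>> # -1.1542 < -0.5, so the last entry is -1.1542 + 0.5 = -0.6542 (printed -0.6543).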
tinyms.primitives.softsign(x)[source]

Softsign activation function.

The function is shown as follows:

\[\text{SoftSign}(x) = \frac{x}{1 + |x|}\]
Parameters:

x (Tensor) – Tensor of shape \((N, *)\), where \(*\) means, any number of additional dimensions, with float16 or float32 data type.

Returns:

Tensor, with the same type and shape as the x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> output = ops.softsign(x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
tinyms.primitives.sort(input_x, axis=-1, descending=False)[source]

Sorts the elements of the input tensor along the given dimension in the specified order.

Parameters:
  • input_x (Tensor) – The input tensor to sort. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

  • axis (int, optional) – The dimension to sort along. Default: -1.

  • descending (bool, optional) – Controls the sort order. If descending is True, the elements are sorted in descending order; otherwise, in ascending order. Default: False.

Warning

Currently, the data types of Float16, UInt8, Int8, Int16, Int32 and Int64 are well supported. If Float32 is used, it may cause a loss of accuracy.

Returns:

  • y1, a tensor whose values are the sorted values, with the same shape and data type as input_x.

  • y2, a tensor that consists of the indices of the elements in the original input tensor. Data type is int32.

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If descending is not a bool.

  • TypeError – If dtype of input_x is not one of: float16, float32, uint8, int8, int16, int32, int64.

  • ValueError – If axis is not in range of [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
>>> output = ops.sort(x)
>>> # The output below is based on the Ascend platform.
>>> print(output)
(Tensor(shape=[3, 3], dtype=Float16, value=
[[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
[ 3.0000e+00,  5.0000e+00,  9.0000e+00],
[ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
[[2, 1, 0],
[2, 0, 1],
[0, 1, 2]]))
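A supplementary call (assuming only the documented parameter) with descending=True; only the sorted values are printed:

>>> values, indices = ops.sort(x, descending=True)
>>> print(values)
[[8. 2. 1.]
 [9. 5. 3.]
 [7. 6. 4.]]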
tinyms.primitives.space_to_batch_nd(input_x, block_size, paddings)[source]

Divides a tensor’s spatial dimensions into blocks and combines the block sizes with the original batch.

This operation will divide spatial dimensions into blocks with block_size, and after division, the output tensor’s spatial dimension is the corresponding number of blocks. The output tensor’s batch dimension is the product of the original batch and the product of block_size. Before division, the spatial dimensions of the input are zero padded according to paddings if necessary. Assume input shape is \((n, c_1, ... c_k, w_1, ..., w_M)\), then the shape of the output tensor will be \((n', c_1, ... c_k, w'_1, ..., w'_M)\), where

\[\begin{split}\begin{array}{ll} \\ n' = n*(block\_size[0] * ... * block\_size[M-1]) \\ w'_i = (w_i + paddings[i][0] + paddings[i][1])//block\_size[i] \end{array}\end{split}\]
Parameters:
  • input_x (Tensor) – The input tensor. It must be a 4-D tensor on Ascend.

  • block_size (Union[list(int), tuple(int), int]) – The block size used for division, with all values greater than 1. If block_size is a tuple or list, its length M corresponds to the number of spatial dimensions. If block_size is an int, the block size of all M dimensions is the same and equal to block_size. M must be 2 on Ascend.

  • paddings (Union[tuple, list]) – The padding values for the spatial dimensions, a list of M sub-lists, each containing 2 integer values. All values must be greater than or equal to 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i + offset. It is required that input_shape[i+offset]+paddings[i][0]+paddings[i][1] is divisible by block_size[i]. M must be 2 on Ascend.

Returns:

Tensor, the output tensor with the same data type as input.

Raises:
  • ValueError – If block_size is not one dimensional when block_size is a list or tuple.

  • ValueError – If the length of block_size is not 2 on Ascend.

  • ValueError – If the element of block_size is not an integer larger than 1.

  • ValueError – If shape of paddings is not (M, 2), where M is the length of block_size.

  • ValueError – If the element of paddings is not an integer greater than or equal to 0.

  • TypeError – If block_size is not one of list, tuple, int.

  • TypeError – If paddings is neither list nor tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> block_size = [2, 2]
>>> paddings = [[0, 0], [0, 0]]
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> output = ops.space_to_batch_nd(input_x, block_size, paddings)
>>> print(output)
[[[[1.]]]
 [[[2.]]]
 [[[3.]]]
 [[[4.]]]]
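The shape arithmetic above can be checked by hand. A minimal sketch in plain Python for this example (assuming the zero paddings shown; not part of the API):

>>> import math
>>> in_shape, block, pads = (1, 1, 2, 2), [2, 2], [[0, 0], [0, 0]]
>>> n_out = in_shape[0] * math.prod(block)  # batch grows by prod(block_size)
>>> spatial = [(w + p[0] + p[1]) // b for w, p, b in zip(in_shape[2:], pads, block)]
>>> print((n_out, in_shape[1], *spatial))
(4, 1, 1, 1)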
tinyms.primitives.sparse_segment_mean(x, indices, segment_ids)[source]

Computes a Tensor such that \(output_i = \frac{\sum_j x_{indices[j]}}{N}\) where mean is over \(j\) such that \(segment\_ids[j] == i\) and \(N\) is the total number of values summed. If the mean is empty for a given segment ID \(i\), \(output[i] = 0\).

Note

  • On CPU, values in segment_ids are always validated to be sorted, and an error is thrown for segment_ids that are not increasing. Moreover, values in indices are validated to be bounded, and an error is thrown when indices are out of the range [0, x.shape[0]).

  • On GPU, this does not throw an error for unsorted segment_ids and out-of-bound indices. Out-of-order segment_ids result in safe but unspecified behavior, while out-of-range indices will be ignored.

Parameters:
  • x (Tensor) – A Tensor, and its rank must be greater than or equal to 1.

  • indices (Tensor) – A 1-D Tensor, with int32 or int64 data type.

  • segment_ids (Tensor) – A 1-D Tensor, must have the same dtype as indices. Values should be sorted and can be repeated.

Returns:

Tensor, whose dtype and rank is the same as x. The first dimension is equal to the value of the last element of segment_ids plus one, and the other dimensions are the same as those of x.

Raises:
  • TypeError – If x, indices or segment_ids is not a Tensor.

  • TypeError – If the dtype of x is not one of the following dtype: float16, float32, float64.

  • TypeError – If the dtype of indices and segment_ids are not one of the following dtype: int32, int64.

  • TypeError – If the dtype of indices and segment_ids are not the same.

  • ValueError – If the shapes of x, indices or segment_ids don’t meet the parameter description.

  • ValueError – If the sizes of indices and segment_ids are not the same.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor([[0, 1, 2], [1, 2, 3], [3, 6, 7]], dtype=mindspore.float32)
>>> indices = Tensor([0, 1, 2], dtype=mindspore.int32)
>>> segment_ids = Tensor([1,2,2], dtype=mindspore.int32)
>>> out = ops.sparse_segment_mean(x, indices, segment_ids)
>>> print(out)
[[0. 0. 0.]
 [0. 1. 2.]
 [2. 4. 5.]]
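As a cross-check of the semantics above (the empty segment 0 yields zeros; segment 2 averages rows 1 and 2), here is a NumPy sketch that mirrors the example; it is not part of the API:

>>> import numpy as np
>>> x_np = np.array([[0, 1, 2], [1, 2, 3], [3, 6, 7]], dtype=np.float32)
>>> idx, seg = np.array([0, 1, 2]), np.array([1, 2, 2])
>>> out_np = np.zeros((seg[-1] + 1, x_np.shape[1]), dtype=np.float32)
>>> for i in range(seg[-1] + 1):
...     rows = idx[seg == i]  # gather the indices belonging to segment i
...     if rows.size:
...         out_np[i] = x_np[rows].mean(axis=0)
...
>>> print(out_np)
[[0. 0. 0.]
 [0. 1. 2.]
 [2. 4. 5.]]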
tinyms.primitives.split(tensor, split_size_or_sections, axis=0)[source]

Splits the Tensor into chunks along the given axis.

Parameters:
  • tensor (Tensor) – A Tensor to be divided.

  • split_size_or_sections (Union[int, tuple(int), list(int)]) – If split_size_or_sections is an int type, tensor will be split into equally sized chunks, each chunk with size split_size_or_sections. Last chunk will be smaller than split_size_or_sections if tensor.shape[axis] is not divisible by split_size_or_sections. If split_size_or_sections is a list type, then tensor will be split into len(split_size_or_sections) chunks with sizes split_size_or_sections along the given axis.

  • axis (int) – The axis along which to split. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument tensor is not Tensor.

  • TypeError – If argument axis is not an int.

  • ValueError – If argument axis is out of range of \([-tensor.ndim, tensor.ndim)\) .

  • TypeError – If each element in split_size_or_sections is not an integer.

  • TypeError – If argument split_size_or_sections is not int, tuple(int) or list(int).

  • ValueError – If the sum of split_size_or_sections is not equal to tensor.shape[axis].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(9).astype("float32")
>>> output = ops.split(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
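When split_size_or_sections is a list, the chunk sizes must sum to tensor.shape[axis]. A sketch continuing the example above with an assumed [4, 5] split; the printed form should resemble:

>>> output = ops.split(Tensor(input_x), [4, 5])
>>> print(output)
(Tensor(shape=[4], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00,  3.00000000e+00]),
 Tensor(shape=[5], dtype=Float32, value= [ 4.00000000e+00,  5.00000000e+00,  6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))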
tinyms.primitives.sqrt(x)[source]

Returns sqrt of a tensor element-wise.

\[out_{i} = \sqrt{x_{i}}\]
Parameters:

x (Tensor) – The input tensor with a dtype of number.Number.

Returns:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> output = ops.sqrt(x)
>>> print(output)
[1. 2. 3.]
tinyms.primitives.square(input)[source]

Returns square of a tensor element-wise.

\[y_i = input_i ^ 2\]
Parameters:

input (Tensor) – The input tensor with a dtype of Number.

Returns:

Tensor, has the same shape and dtype as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = ops.square(input)
>>> print(output)
[1. 4. 9.]
tinyms.primitives.squeeze(input, axis=None)[source]

Returns the Tensor after deleting the dimensions of size 1 in the specified axis.

If \(axis=None\), it will remove all the dimensions of size 1. If axis is specified, it will remove the dimensions of size 1 in the given axis. For example, if the dimension is not specified \(axis=None\), input shape is (A, 1, B, C, 1, D), then the shape of the output Tensor is (A, B, C, D). If the dimension is specified, the squeeze operation is only performed in the specified dimension. If input shape is (A, 1, B), input Tensor will not be changed when \(axis=0\) , but when \(axis=1\) , the shape of the input Tensor will be changed to (A, B).

Note

  • Please note that in dynamic graph mode, the output Tensor will share data with the input Tensor, and there is no Tensor data copy process.

  • The dimension index starts at 0 and must be in the range [-input.ndim, input.ndim].

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (Union[int, tuple(int)]) – Specifies the dimension indexes of shape to be removed, which will remove all the dimensions of size 1 in the given axis parameter. If specified, it must be int32 or int64. Default: None, an empty tuple will be used.

Returns:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_S)\).

Raises:
  • TypeError – If input is not a tensor.

  • TypeError – If axis is neither an int nor tuple.

  • TypeError – If axis is a tuple whose elements are not all int.

  • ValueError – If the corresponding dimension of the specified axis isn’t equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> output = ops.squeeze(input)
>>> print(output)
[[1. 1.]
 [1. 1.]
 [1. 1.]]
tinyms.primitives.stack(tensors, axis=0)[source]

Stacks a list of tensors in specified axis.

Stacks the list of input tensors with the same rank R, output is a tensor of rank (R+1).

Given input tensors of shape \((x_1, x_2, ..., x_R)\). Set the number of input tensors as N. If \(axis \ge 0\), the shape of the output tensor is \((x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)\).

Parameters:
  • tensors (Union[tuple, list]) – A Tuple or list of Tensor objects with the same shape and type.

  • axis (int) – Dimension to stack. Default: 0. Negative values wrap around. The range is [-(R+1), R+1).

Returns:

Tensor. A stacked Tensor with the same type as tensors.

Raises:
  • TypeError – If the data types of elements in tensors are not the same.

  • ValueError – If the length of tensors is not greater than 0; or if axis is out of the range [-(R+1), R+1); or if the shapes of elements in tensors are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> input_x2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> output = ops.stack((input_x1, input_x2), 0)
>>> print(output)
[[0. 1.]
 [2. 3.]]
tinyms.primitives.standard_laplace(shape, seed=None)[source]

Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). It is defined as:

\[\text{f}(x) = \frac{1}{2}\exp(-|x|)\]
Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated. Only constant value is allowed when the input type is tuple. And the operator supports dynamic shape only when the input type is Tensor.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If shape is a tuple containing non-positive items.

  • ValueError – If shape is a Tensor, and the rank of the Tensor is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> shape = (4, 4)
>>> output = ops.standard_laplace(shape)
>>> result = output.shape
>>> print(result)
(4, 4)
tinyms.primitives.standard_normal(shape, seed=None)[source]

Generates random numbers according to the standard Normal (or Gaussian) random number distribution.

Returns the tensor with the given shape, the random numbers in it drawn from normal distributions whose mean is 0 and standard deviation is 1.

\[f(x)=\frac{1}{\sqrt{2 \pi}} e^{\left(-\frac{x^{2}}{2}\right)}\]
Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated. Only constant value is allowed when the input type is tuple. And the operator supports dynamic shape only when the input type is Tensor.

  • seed (int, optional) – Seed is used as entropy source for Random number engines generating pseudo-random numbers. Default: None, which will be treated as 0.

Returns:

Tensor. The shape that the input ‘shape’ denotes. The dtype is float32.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • ValueError – If shape is a tuple containing non-positive items.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import ops
>>> shape = (4, 4)
>>> output = ops.standard_normal(shape)
>>> result = output.shape
>>> print(result)
(4, 4)
tinyms.primitives.std(input, axis=None, ddof=0, keepdims=False)[source]

Returns the standard deviation of the input Tensor, computed over all elements by default, or along the specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the unbiased estimate with Bessel's correction is used; if ddof is False, the biased estimate is used to calculate the standard deviation. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, the standard deviation. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns a 0-D Tensor, indicating the standard deviation of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensor has shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2) and keepdims is set to False, then the returned Tensor has shape \((x_0, x_2, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output = ms.ops.std(input, 1, 2, True)
>>> print(output)
[[1.5811388]
 [7.3824115]]
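With ddof=2 the divisor here is N - 2 = 2, which matches NumPy's ddof argument; a cross-check sketch, not part of the API:

>>> import numpy as np
>>> a = np.array([[1, 2, 3, 4], [-1, 1, 4, -10]], dtype=np.float32)
>>> print(np.std(a, axis=1, ddof=2, keepdims=True))
[[1.5811388]
 [7.3824115]]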
tinyms.primitives.std_mean(input, axis=None, ddof=0, keepdims=False)[source]

Returns the standard deviation and mean of the input Tensor, computed over all elements by default, or along the specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – Specifies the dimensions from which to calculate the standard deviation and mean. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the unbiased estimate with Bessel's correction is used; if ddof is False, the biased estimate is used to calculate the standard deviation. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

A tuple containing the standard deviation and mean. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns two 0-D Tensors, indicating the standard deviation and mean of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensors have shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2) and keepdims is set to False, then the returned Tensors have shape \((x_0, x_2, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output_std, output_mean = ms.ops.std_mean(input, 1, 2, True)
>>> print(output_std)
[[1.5811388]
 [7.3824115]]
>>> print(output_mean)
[[ 2.5]
 [-1.5]]
tinyms.primitives.stft(x, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='REFLECT', normalized=False, onesided=None, return_complex=None)[source]

STFT segments the signal into narrow time intervals and takes the Fourier transform of each segment to quantify the change of a nonstationary signal’s frequency and phase content over time.

Ignoring the optional batch dimension, this operation computes the following expression:

\[X[\omega, m]=\sum_{k=0}^{\text{win\_length}-1} \text{window}[k] \ \text{input}[m \times \text{hop\_length} + k] \exp\left(-j \frac{2 \pi \omega k}{\text{win\_length}}\right)\]

where \(m\) is the index of the sliding window, and \(\omega\) is the frequency in the range \(0 \leq \omega < \text{n\_fft}\).

Parameters:
  • x (Tensor) – Time sequences of stft, must be either a 1-D time tensor or a 2-D tensor.

  • n_fft (int) – The size of Fourier transform.

  • hop_length (int, optional) – The distance between neighboring sliding window frames. Default: None (treated as equal to \(floor(n\_fft / 4)\)).

  • win_length (int, optional) – the size of window frame and STFT filter. Default: None (treated as equal to n_fft).

  • window (Tensor, optional) – the optional window function, a 1-D tensor of size win_length. Default: None (treated as a window of all ones). If win_length < n_fft, window will be padded on both sides with ones to length n_fft before it takes effect.

  • center (bool, optional) – whether to pad x on both sides. Default: True.

  • pad_mode (str, optional) – controls the padding method used when center is True. Default: ‘REFLECT’.

  • normalized (bool, optional) – controls whether to return the normalized STFT results. Default: False.

  • onesided (bool, optional) – controls whether to return half of results to avoid redundancy for real inputs. Default: None. True for real x and window, False otherwise.

  • return_complex (bool, optional) – whether to return a complex tensor, or a real tensor with an extra last dimension for the real and imaginary components. Default: None. True for complex x or window, False otherwise.

Returns:

  • output (Tensor) - A tensor containing the STFT result.

    If return_complex is True, it returns a complex Tensor with shape \((*, N, T)\). If return_complex is False, it returns a real Tensor with shape \((*, N, T, 2)\).

    N is the size of the Fourier transform, which depends on the parameter onesided:

      • If onesided is False, \(N = n\_fft\).

      • If onesided is True, \(N = n\_fft // 2 + 1\).

    T is the total number of frames used, calculated by this formula: \(T = 1 + (len - n\_fft) / hop\_length\), where len depends on the parameter center:

      • If center is False, \(len = signal\_length\).

      • If center is True, \(len = signal\_length + (n\_fft // 2) * 2\).

    Here \(signal\_length\) is the signal length, equal to \(x.shape[-1]\).

Raises:
  • TypeError – If x is not a 1-D or 2-D tensor.

  • TypeError – If window is not a 1-D tensor.

  • TypeError – If any one of center , normalized , onesided and return_complex is assigned a nonboolean value.

  • TypeError – If pad_mode is assigned a value that is not a string.

  • TypeError – If n_fft , hop_length or win_length is not an int.

Supported Platforms:

Ascend CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> import numpy as np
>>> x = ms.Tensor(np.random.rand(2,7192), ms.float32)
>>> output = ops.stft(n_fft=64, x=x)
>>> print(output.shape)
(2, 33, 450, 2)
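The shape printed above follows from the formulas in Returns. A sketch of the arithmetic (assuming the defaults hop_length = n_fft // 4, center=True, and onesided=True for the real input):

>>> n_fft, hop = 64, 64 // 4            # hop_length defaults to n_fft // 4
>>> length = 7192 + (n_fft // 2) * 2    # center=True pads both sides
>>> N = n_fft // 2 + 1                  # onesided spectrum size
>>> T = 1 + (length - n_fft) // hop     # number of frames
>>> print((2, N, T, 2))                 # trailing dim of 2 since return_complex is False
(2, 33, 450, 2)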
tinyms.primitives.stop_gradient(value)[source]

StopGradient is used for eliminating the effect of a value on the gradient, such as truncating the gradient propagation from an output of a function. For more details, please refer to Stop Gradient.

Parameters:

value (Any) – The value whose effect on the gradient to be eliminated.

Returns:

The same as value.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> from mindspore import dtype as mstype
>>> def net(x, y):
...     out1 = ops.MatMul()(x, y)
...     out2 = ops.MatMul()(x, y)
...     out2 = ops.stop_gradient(out2)
...     return out1, out2
...
>>> x = Tensor([[0.5, 0.6, 0.4], [1.2, 1.3, 1.1]], dtype=mstype.float32)
>>> y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.float32)
>>> grad_fn = ops.grad(net)
>>> output = grad_fn(x, y)
>>> print(output)
[[1.4100001 1.6       6.5999994]
 [1.4100001 1.6       6.5999994]]
tinyms.primitives.strided_slice(input_x, begin, end, strides, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0)[source]

Extracts a strided slice of a Tensor based on begin/end index and strides.

This operation extracts a fragment of size (end-begin)/strides from the given input_x. Starting from the begin position, strides are repeatedly added to the index until the index is not less than the end position in every dimension.

Warning

  • begin , end and strides must have the same shape.

  • begin , end and strides are all 1-D, and their lengths must not be greater than the dim of input_x.

During the slicing process, the fragment (end-begin)/strides are extracted from each dimension.

Example: For Tensor input_x with shape \((5, 6, 7)\), set begin, end and strides to (1, 3, 2), (3, 5, 6), (1, 1, 2) respectively, then elements from index 1 to 3 are extracted for dim 0, index 3 to 5 are extracted for dim 1, and index 2 to 6 with a stride of 2 are extracted for dim 2; this process is equivalent to the pythonic slice input_x[1:3, 3:5, 2:6:2].

If the lengths of begin, end and strides are smaller than the dim of input_x, then all elements in the missing dims are extracted; it behaves as if the missing dims were sliced over their full range, i.e. begin filled with 0, end with the size of that dim, and strides with 1.

Example: For Tensor input_x with shape \((5, 6, 7)\), set begin, end and strides to (1, 3), (3, 5), (1, 1) respectively, then elements from index 1 to 3 are extracted for dim 0, index 3 to 5 are extracted for dim 1, and all elements from index 0 to 7 are extracted for dim 2; this process is equivalent to the pythonic slice input_x[1:3, 3:5, 0:7].

Here’s how a mask works: each mask value is converted to its binary representation internally, and the binary digits are then reversed before the calculation starts. For Tensor input_x with shape \((5, 6, 7)\), a mask value of 3 is represented as 0b011; reversed, it becomes 0b110, which means the first and second dims of the original Tensor will be affected by this mask. In the examples below, for simplicity, all masks are given in their reversed binary form:

  • begin_mask and end_mask

    If the ith bit of begin_mask is 1, begin[i] is ignored and the fullest possible range in that dimension is used instead. end_mask is analogous, except with the end range. For Tensor input_x with shape \((5, 6, 7, 8)\), if begin_mask is 0b110, end_mask is 0b011, the slice input_x[0:3, 0:6, 2:7:2] is produced.

  • ellipsis_mask

    If the ith bit of ellipsis_mask is 1, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in ellipsis_mask. For Tensor input_x with shape \((5, 6, 7, 8)\), input_x[2:,…,:6] is equivalent to input_x[2:5,:,:,0:6] , input_x[2:,…] is equivalent to input_x[2:5,:,:,:].

  • new_axis_mask

    If the ith bit of new_axis_mask is 1, begin, end and strides are ignored and a new length 1 dimension is added at the specified position in the output Tensor. For Tensor input_x with shape \((5, 6, 7)\), if new_axis_mask is 0b110, a new dim is added to the second dim, which will produce a Tensor with shape \((5, 1, 6, 7)\).

  • shrink_axis_mask

    If the ith bit of shrink_axis_mask is 1, begin, end and strides are ignored and dimension i will be shrunk to 0. For Tensor input_x with shape \((5, 6, 7)\), if shrink_axis_mask is 0b010, it is equivalent to slice x[:, 5, :] and results in an output shape of \((5, 7)\).

Note

new_axis_mask and shrink_axis_mask are not recommended to use at the same time, it might incur unexpected result.

Parameters:
  • input_x (Tensor) – The input Tensor to be extracted from.

  • begin (tuple[int]) – A tuple which represents the location where to start. Only non-negative int is allowed.

  • end (tuple[int]) – A tuple which represents the maximum location where to end. Only non-negative int is allowed.

  • strides (tuple[int]) – A tuple which represents the strides is continuously added before reaching the maximum location. Only int is allowed, it can be negative which results in reversed slicing.

  • begin_mask (int, optional) – Starting index of the slice. Default: 0.

  • end_mask (int, optional) – Ending index of the slice. Default: 0.

  • ellipsis_mask (int, optional) – An int mask, ignore slicing operation when set to 1. Default: 0.

  • new_axis_mask (int, optional) – An int mask for adding new dims. Default: 0.

  • shrink_axis_mask (int, optional) – An int mask for shrinking dims. Default: 0.

Returns:

Tensor, the strided slice extracted from input_x based on the begin/end indices and strides.

Raises:
  • TypeError – If begin_mask, end_mask, ellipsis_mask, new_axis_mask or shrink_axis_mask is not an int.

  • TypeError – If begin, end or strides is not tuple[int].

  • ValueError – If begin_mask, end_mask, ellipsis_mask, new_axis_mask or shrink_axis_mask is less than 0.

  • ValueError – If begin, end and strides have different shapes.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> output = ops.strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
>>> # Take this " output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1)) " as an example,
>>> # start = [1, 0, 2] , end = [3, 1, 3], strides = [1, 1, 1], Find a segment of (start, end),
>>> # note that end is an open interval
>>> # To facilitate understanding, this operator can be divided into three steps:
>>> # Step 1: Calculation of the first dimension:
>>> # start = 1, end = 3, strides = 1, so the 1st and 2nd rows can be taken, which gives the output at this step.
>>> # output_1st =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #         [4,4,4]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #         [6,6,6]
>>> #     ]
>>> # ]
>>> # Step 2: Calculation of the second dimension
>>> # 2nd dimension, start = 0, end = 1, strides = 1. So only the 0th row
>>> # can be taken, which gives the output at this step.
>>> # output_2nd =
>>> # [
>>> #     [
>>> #         [3,3,3]
>>> #     ]
>>> #     [
>>> #         [5,5,5]
>>> #     ]
>>> # ]
>>> # Step 3: Calculation of the third dimension
>>> # 3rd dimension, start = 2, end = 3, strides = 1, so the 2nd column can be taken,
>>> # which gives the final output.
>>> # output_3rd =
>>> # [
>>> #     [
>>> #         [3]
>>> #     ]
>>> #     [
>>> #         [5]
>>> #     ]
>>> # ]
>>> # The final output after finishing is:
>>> print(output)
[[[3.]]
 [[5.]]]
>>> # another example:
>>> output = ops.strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> print(output)
[[[3. 3. 3.]]]
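With all masks left at 0, the result agrees with ordinary Python slicing; a quick cross-check of the last call above (a sketch, not part of the API):

>>> import numpy as np
>>> print(np.array_equal(output.asnumpy(), input_x.asnumpy()[1:2, 0:1, 0:3]))
True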
tinyms.primitives.sub(input, other)[source]

Subtracts the second input tensor from the first input tensor element-wise.

\[out_{i} = input_{i} - other_{i}\]

Note

  • Inputs of input and other comply with the implicit type conversion rules to make the data types consistent.

  • The inputs must be two tensors or one tensor and one scalar.

  • When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them can be broadcast.

  • When the inputs are one tensor and one scalar, the scalar could only be a constant.

Parameters:
  • input (Union[Tensor, number.Number, bool]) –

    The first input is a number.Number or a bool or a tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input, when the first input is a Tensor, the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool_. When the first input is Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If input and other are not number.Number or bool or Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> other = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> output = ops.sub(input, other)
>>> print(output)
[-3 -3 -3]
tinyms.primitives.subtract(input, other, *, alpha=1)[source]

Performs the element-wise subtraction of input tensors.

\[output[i] = input[i] - alpha * other[i]\]
Parameters:
  • input (Union[Tensor, number.Number]) – Tensor or Number involved in subtraction.

  • other (Union[Tensor, number.Number]) – Tensor or Number involved in subtraction.

Keyword Arguments:

alpha (Number) – The multiplier for other. Default: 1.

Returns:

Tensor, has the same shape and dtype as input tensors.

Raises:

TypeError – If input or other is neither Tensor nor number.Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> y = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> z = ops.subtract(input, y, alpha=1)
>>> print(z)
[3. 3. 3.]
tinyms.primitives.sum(input, dim=None, keepdim=False, *, dtype=None)[source]

Calculate sum of Tensor elements over a given dim.

Parameters:
  • input (Tensor) – The input tensor.

  • dim (Union[None, int, tuple(int), list(int)]) – Dimensions along which a sum is performed. If None, sum all the elements of the input tensor. If the dim is a tuple or list of ints, a sum is performed on all the dimensions specified in the tuple. Must be in the range \([-input.ndim, input.ndim)\) . Default: None.

  • keepdim (bool) – Whether the output tensor has dim retained or not. If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default: False.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The desired data type of returned Tensor. Default: None.

Returns:

A Tensor, sum of elements over a given dim in input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dim is not an int, tuple(int), list(int) or None.

  • ValueError – If dim is not in the range \([-input.ndim, input.ndim)\) .

  • TypeError – If keepdim is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3]],
...                      [[4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6]],
...                      [[7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9]]]), mstype.float32)
>>> out = ops.sum(x)
>>> print(out)
270.0
>>> out = ops.sum(x, dim=2)
>>> print(out)
[[ 6. 12. 18.]
 [24. 30. 36.]
 [42. 48. 54.]]
>>> out = ops.sum(x, dim=2, keepdim=True)
>>> print(out)
[[[ 6.]
 [12.]
 [18.]]
[[24.]
 [30.]
 [36.]]
[[42.]
 [48.]
 [54.]]]
tinyms.primitives.svd(input, full_matrices=False, compute_uv=True)[source]

Computes the singular value decompositions of one or more matrices.

If \(A\) is a matrix, the svd returns the singular values \(S\), the left singular vectors \(U\) and the right singular vectors \(V\). It meets:

\[A=U*diag(S)*V^{T}\]
Parameters:
  • input (Tensor) – Tensor of the matrices to be decomposed. The shape should be \((*, M, N)\), the supported dtype are float32 and float64.

  • full_matrices (bool, optional) – If true, compute full-sized \(U\) and \(V\). If false, compute only the leading P singular vectors, where P is the minimum of M and N. Default: False.

  • compute_uv (bool, optional) – If true, compute the left and right singular vectors. If false, compute only the singular values. Default: True.

Returns:

  • s (Tensor) - Singular values. The shape is \((*, P)\).

  • u (Tensor) - Left singular vectors. If compute_uv is False, u will not be returned. The shape is \((*, M, P)\). If full_matrices is True, the shape will be \((*, M, M)\).

  • v (Tensor) - Right singular vectors. If compute_uv is False, v will not be returned. The shape is \((*, N, P)\). If full_matrices is True, the shape will be \((*, N, N)\).

Raises:
  • TypeError – If full_matrices or compute_uv is not the type of bool.

  • TypeError – If the rank of input is less than 2.

  • TypeError – If the type of input is not one of the following dtype: float32, float64.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, set_context
>>> from mindspore import ops
>>> set_context(device_target="CPU")
>>> input = Tensor(np.array([[1, 2], [-4, -5], [2, 1]]).astype(np.float32))
>>> s, u, v = ops.svd(input, full_matrices=True, compute_uv=True)
>>> print(s)
[7.0652843 1.040081 ]
>>> print(u)
[[ 0.30821905 -0.48819482 0.81649697]
 [-0.90613353  0.11070572 0.40824813]
 [ 0.2896955   0.8656849  0.4082479 ]]
>>> print(v)
[[ 0.63863593 0.769509  ]
 [ 0.769509  -0.63863593]]
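The factors can be sanity-checked against \(A=U*diag(S)*V^{T}\); with full_matrices=True only the leading P columns of u carry singular values. A NumPy sketch, not part of the API:

>>> import numpy as np
>>> a = np.array([[1, 2], [-4, -5], [2, 1]], dtype=np.float32)
>>> rec = u.asnumpy()[:, :2] @ np.diag(s.asnumpy()) @ v.asnumpy().T
>>> print(np.allclose(rec, a, atol=1e-4))
True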
tinyms.primitives.swapaxes(input, axis0, axis1)[source]

Interchange two axes of a tensor.

Parameters:
  • input (Tensor) – Input tensor.

  • axis0 (int) – First axis.

  • axis1 (int) – Second axis.

Returns:

Transposed tensor, has the same data type as input.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If axis0 or axis1 is not integer.

  • ValueError – If axis0 or axis1 is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = ops.swapaxes(input, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.primitives.swapdims(input, dim0, dim1)[source]

Interchange two dims of a tensor. This function is equivalent to mindspore.ops.swapaxes() function.

Parameters:
  • input (Tensor) – Input tensor.

  • dim0 (int) – First dim.

  • dim1 (int) – Second dim.

Returns:

Transposed tensor, has the same data type as input.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If dim0 or dim1 is not integer.

  • ValueError – If dim0 or dim1 is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input = Tensor(np.ones((2,3,4), dtype=np.float32))
>>> output = ops.swapdims(input, 0, 2)
>>> print(output.shape)
(4, 3, 2)
tinyms.primitives.t(input)[source]

Transposes a 2-D Tensor. A 1-D Tensor is returned as it is.

Parameters:

input (Tensor) – The input Tensor.

Returns:

Tensor, the transpose of input .

Raises:

ValueError – If the dimension of input is larger than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[1, 2, 3], [2, 3, 4]], mstype.float32)
>>> output = ops.t(x)
>>> print(output)
[[1. 2.]
 [2. 3.]
 [3. 4.]]
tinyms.primitives.tan(input)[source]

Computes tangent of input element-wise.

\[out_i = tan(input_i)\]
Parameters:

input (Tensor) – The input Tensor, valid for any dimensions.

Returns:

Tensor, has the same shape as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)
>>> output = ops.tan(input)
>>> print(output)
[-1.5574081 0. 1.5574081]
tinyms.primitives.tanh(input)[source]

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Parameters:

input (Tensor) – Input of Tanh, with float16 or float32 data type.

Returns:

Tensor, with the same type and shape as the input.

Raises:
  • TypeError – If dtype of input is neither float16 nor float32.

  • TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> output = ops.tanh(input)
>>> print(output)
[0.7615941 0.9640276 0.9950547 0.9993293 0.9999092]
tinyms.primitives.tanhshrink(input)[source]

Tanhshrink Activation, \(Tanhshrink(x)=x-Tanh(x)\) , where \(x\) corresponds to input . See mindspore.nn.Tanhshrink for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> input = Tensor(np.array([1, 2, 3, 2, 1]), ms.float16)
>>> output = ops.tanhshrink(input)
>>> print(output)
[0.2383 1.036  2.004  1.036  0.2383]
tinyms.primitives.tensor_scatter_add(input_x, indices, updates)[source]

Creates a new tensor by adding the values from the positions in input_x indicated by indices, with values from updates. When multiple values are given for the same index, the updated result will be the sum of all values. This operation is almost equivalent to using ScatterNdAdd, except that the updates are applied on output Tensor instead of input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

On GPU, if some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be applied to the output tensor. On CPU, if some values of the indices are out of bound, an index error is raised. On Ascend, out-of-bound checking is not supported; if some values of the indices are out of bound, unknown errors may be caused.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input_x, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> output = ops.tensor_scatter_add(input_x, indices, updates)
>>> print(output)
[[ 3.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
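Because duplicate indices are summed (here both index vectors hit position (0, 0), so -0.1 + 1.0 + 2.2 = 3.1), the call above is equivalent to NumPy's np.add.at on a copy of the input; a sketch, not part of the API:

>>> import numpy as np
>>> out_np = input_x.asnumpy().copy()
>>> np.add.at(out_np, tuple(indices.asnumpy().T), updates.asnumpy())  # unbuffered accumulation
>>> print(out_np)
[[ 3.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]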
tinyms.primitives.tensor_scatter_div(input_x, indices, updates)[source]

Creates a new tensor by dividing the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the update divides by these values successively. The updates are applied on the output Tensor instead of the input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

  • If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

  • The operator can’t handle division by 0 exceptions, so the user needs to make sure there is no 0 value in updates.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn, ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.0]), mindspore.float32)
>>> output = ops.tensor_scatter_div(input_x, indices, updates)
>>> print(output)
[[-0.05  0.3  3.6  ]
 [ 0.4   0.5  -3.2 ]]
tinyms.primitives.tensor_scatter_elements(input_x, indices, updates, axis=0, reduction='none')[source]

Updates the value of the input tensor through the reduction operation.

tensor_scatter_elements takes three inputs input_x, updates, and indices of the same rank r >= 1, an optional attribute axis that identifies an axis of input_x (default is 0), and another optional attribute reduction that identifies the reduction operation. When reduction is set to “none”, the update value will be assigned to the output value according to the indices. When reduction is set to “add”, the update value will be added to the output value according to the indices.

For a 3-D tensor, the output is:

output[indices[i][j][k]][j][k] = updates[i][j][k]  # if axis == 0, reduction == "none"

output[i][indices[i][j][k]][k] += updates[i][j][k]  # if axis == 1, reduction == "add"

output[i][j][indices[i][j][k]] = updates[i][j][k]  # if axis == 2, reduction == "none"

Warning

  • The order in which updates are applied is nondeterministic, meaning that if there are multiple index vectors in indices that correspond to the same position, the value of that position in the output will be nondeterministic.

  • On Ascend, the reduction only support set to “none” for now.

  • On Ascend, the data type of input_x must be float16 or float32.

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Parameters:
  • input_x (Tensor) – The target tensor. The rank of input must be at least 1.

  • indices (Tensor) – The index to do add operation whose data type must be mindspore.int32 or mindspore.int64. Same rank as input_x. And accepted range is [-s, s) where s is the size along axis.

  • updates (Tensor) – The tensor doing the add operation with input_x, has the same type as input_x, and updates.shape should be equal to indices.shape.

  • axis (int) – Which axis to scatter, default is 0. Accepted range is [-r, r) where r = rank(input_x).

  • reduction (str) – Which reduction operation to scatter, default is “none”. Other option: “add”.

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If indices is neither int32 nor int64.

  • ValueError – If anyone of the rank among input_x, indices and updates less than 1.

  • ValueError – If the shape of updates is not equal to the shape of indices.

  • ValueError – If the rank of updates is not equal to the rank of input_x.

  • RuntimeError – If a data type conversion between input_x and updates is required but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, Parameter
>>> from mindspore import ops
>>> input_x = Parameter(Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
>>> updates = Tensor(np.array([[8, 8]]), mindspore.float32)
>>> axis = 1
>>> reduction = "none"
>>> output = ops.tensor_scatter_elements(input_x, indices, updates, axis, reduction)
>>> print(output)
[[1. 2. 8. 4. 8.]]
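For reduction="none" the semantics match NumPy's np.put_along_axis on a copy of the input; a cross-check sketch, not part of the API:

>>> import numpy as np
>>> out_np = input_x.asnumpy().copy()
>>> np.put_along_axis(out_np, indices.asnumpy(), updates.asnumpy(), axis=1)
>>> print(out_np)
[[1. 2. 8. 4. 8.]]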
tinyms.primitives.tensor_scatter_max(input_x, indices, updates)[source]

Creates a new tensor by comparing, at each position indicated by indices, the value in input_x with the value in updates, and keeping the larger of the two at that index.

The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices].

Note

If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> output = ops.tensor_scatter_max(input_x, indices, updates)
>>> # 5, Perform the max operation for the first time:
>>> #      first_input_x = Max(input_x[0][0], updates[0]) = [[1.0, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the max operation for the second time:
>>> #      second_input_x = Max(input_x[0][0], updates[1]) = [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> print(output)
[[ 2.2  0.3  3.6]
 [ 0.4  0.5 -3.2]]
tinyms.primitives.tensor_scatter_min(input_x, indices, updates)[source]

Creates a new tensor by comparing, at each position indicated by indices, the value in input_x with the value in updates, and keeping the smaller of the two at that index.

The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see case below.

Note

If some values of the indices are out of range, instead of raising an index error, the corresponding updates will not be written to input_x.

Parameters:
  • input_x (Tensor) – The input tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> output = ops.tensor_scatter_min(input_x, indices, updates)
>>> print(output)
[[-0.1  0.3  3.6]
 [ 0.4  0.5 -3.2]]
tinyms.primitives.tensor_scatter_mul(input_x, indices, updates)[source]

Creates a new tensor by multiplying the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the update multiplies by these values successively. The updates are applied on the output Tensor instead of the input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

  • If some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be updated to input_x.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input, and updates shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

GPU CPU

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> # Next, demonstrate the approximate operation process of this operator:
>>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
>>> # 2, And input_x[0, 0] = -0.1
>>> # 3, So input_x[indices] = [-0.1, -0.1]
>>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
>>> # 5, Perform the multiply operation for the first time:
>>> #      first_input_x = input_x[0][0] * updates[0] = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> # 6, Perform the multiply operation for the second time:
>>> #      second_input_x = input_x[0][0] * updates[1] = [[-0.22, 0.3, 3.6], [0.4, 0.5, -3.2]]
>>> output = ops.tensor_scatter_mul(input_x, indices, updates)
>>> print(output)
[[-0.22  0.3   3.6  ]
 [ 0.4   0.5   -3.2 ]]
tinyms.primitives.tensor_scatter_sub(input_x, indices, updates)[source]

Creates a new tensor by subtracting the values from the positions in input_x indicated by indices, with values from updates. When multiple values are provided for the same index, the result of the update will be to subtract these values respectively. This operation is almost equivalent to using mindspore.ops.ScatterNdSub , except that the updates are applied on output Tensor instead of input Parameter.

The last axis of indices is the depth of each index vectors. For each index vector, there must be a corresponding value in updates. The shape of updates should be equal to the shape of input_x[indices]. For more details, see use cases.

Note

On GPU, if some values of the indices are out of bound, instead of raising an index error, the corresponding updates will not be applied to the output tensor. On CPU, if some values of the indices are out of bound, an index error is raised. On Ascend, out-of-bound checking is not supported; if some values of the indices are out of bound, unknown errors may be caused.

Parameters:
  • input_x (Tensor) – The target tensor. The dimension of input_x must be no less than indices.shape[-1].

  • indices (Tensor) – The index of input tensor whose data type is int32 or int64. The rank must be at least 2.

  • updates (Tensor) – The tensor to update the input tensor, has the same type as input, and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

Returns:

Tensor, has the same shape and type as input_x.

Raises:
  • TypeError – If dtype of indices is neither int32 nor int64.

  • ValueError – If length of shape of input_x is less than the last dimension of shape of indices.

  • RuntimeError – If a value of indices is not in input_x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> output = ops.tensor_scatter_sub(input_x, indices, updates)
>>> print(output)
[[-3.3000002  0.3        3.6      ]
 [ 0.4        0.5       -3.2      ]]
tinyms.primitives.tensor_split(input, indices_or_sections, axis=0)[source]

Splits a tensor into multiple sub-tensors along the given axis.

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) –

    • If indices_or_sections is an integer n, input tensor will be split into n sections.

      • If \(input.size(axis)\) is divisible by n, sub-sections will have equal size \(input.size(axis) / n\) .

      • If \(input.size(axis)\) is not divisible by n, the first \(input.size(axis) % n\) sections will have size \(input.size(axis) // n + 1\) , and the rest will have size \(input.size(axis) // n\) .

    • If indices_or_sections is of type tuple(int) or list(int), the input tensor will be split at the indices in the list or tuple. For example, given parameters \(indices\_or\_sections=[1, 4]\) and \(axis=0\) , the input tensor will be split into sections \(input[:1]\) , \(input[1:4]\) , and \(input[4:]\) .

  • axis (int) – The axis along which to split. Default: 0.

Returns:

A tuple of sub-tensors.

Raises:
  • TypeError – If argument input is not Tensor.

  • TypeError – If argument axis is not int.

  • ValueError – If argument axis is out of range of \([-input.ndim, input.ndim)\) .

  • TypeError – If each element in indices_or_sections is not an integer.

  • TypeError – If argument indices_or_sections is not int, tuple(int) or list(int).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(9).astype("float32")
>>> output = ops.tensor_split(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[3], dtype=Float32, value= [ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]),
Tensor(shape=[3], dtype=Float32, value= [ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]),
Tensor(shape=[3], dtype=Float32, value= [ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))
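Passing a list of indices splits at those positions, as described above. A sketch continuing the same input with an assumed [1, 4] split; the printed form should resemble:

>>> output = ops.tensor_split(Tensor(input_x), [1, 4])
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 0.00000000e+00]),
 Tensor(shape=[3], dtype=Float32, value= [ 1.00000000e+00,  2.00000000e+00,  3.00000000e+00]),
 Tensor(shape=[5], dtype=Float32, value= [ 4.00000000e+00,  5.00000000e+00,  6.00000000e+00,  7.00000000e+00,  8.00000000e+00]))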
tinyms.primitives.threshold(input, thr, value)[source]

Returns each element of input after thresholding by thr as a Tensor.

The formula is defined as follows:

\[\begin{split}y = \begin{cases} input, &\text{ if } input > \text{thr} \\ \text{value}, &\text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • input (Tensor) – The input of threshold with data type of float16 or float32.

  • thr (Union[int, float]) – The value of the threshold.

  • value (Union[int, float]) – The value to replace with when the element is less than or equal to thr.

Returns:

Tensor, the same shape and data type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If thr is not a float or an int.

  • TypeError – If value is not a float or an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> inputs = mindspore.Tensor([0.0, 2, 3], mindspore.float32)
>>> outputs = ops.threshold(inputs, 1, 100)
>>> print(outputs)
[100.   2.   3.]
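The formula is an element-wise select, so a NumPy equivalent is a one-liner; a sketch, not part of the API:

>>> import numpy as np
>>> x_np = np.array([0.0, 2, 3], dtype=np.float32)
>>> print(np.where(x_np > 1, x_np, 100).astype(np.float32))  # keep x where x > thr, else value
[100.   2.   3.]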
tinyms.primitives.tile(input, multiples)[source]

Replicates an input tensor a given number of times along each dimension.

Creates a new tensor by replicating input multiples times. The i’th dimension of the output tensor has input.shape[i] * multiples[i] elements, and the values of input are replicated multiples[i] times along the i’th dimension.

Note

The length of multiples must be greater or equal to the length of dimension in input.

Parameters:
  • input (Tensor) – 1-D or higher dimensional Tensor. Set the shape of input tensor as \((x_1, x_2, ..., x_S)\) .

  • multiples (tuple[int]) – The parameter that specifies the number of replications, the parameter type is tuple, and the data type is int, i.e., \((y_1, y_2, ..., y_S)\). The length of multiples cannot be smaller than the length of the shape of input. Only constant value is allowed.

Returns:

Tensor, has the same data type as the input. Suppose the length of multiples is d, the dimension of input is input.dim, and the shape of input is \((x_1, x_2, ..., x_S)\).

  • If input.dim = d, the shapes at corresponding positions are multiplied, and the shape of the output is \((x_1*y_1, x_2*y_2, ..., x_S*y_S)\).

  • If input.dim < d, the shape of input is prepended with 1s until the lengths are consistent, e.g. \((1, ..., x_1, x_2, ..., x_S)\); the shapes at corresponding positions are then multiplied, and the shape of the output is \((1*y_1, ..., x_1*y_{d-S+1}, ..., x_S*y_d)\).

Raises:
  • TypeError – If multiples is not a tuple or its elements are not all int.

  • ValueError – If the elements of multiples are not all greater than 0.

  • ValueError – If the length of multiples is smaller than the number of dimensions of input.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> output = ops.tile(input_x, multiples)
>>> print(output)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
>>> multiples = (2, 3, 2)
>>> output = ops.tile(input_x, multiples)
>>> print(output)
[[[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]
 [[1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]
  [1. 2. 1. 2.]
  [3. 4. 3. 4.]]]
tinyms.primitives.top_k(input_x, k, sorted=True)[source]

top_k is deprecated, please use ops.topk instead.

tinyms.primitives.topk(input, k, dim=None, largest=True, sorted=True)[source]

Finds values and indices of the k largest or smallest entries along a given dimension.

Warning

  • If sorted is set to False, the aicpu operator will be used, and performance may be reduced. In addition, due to different memory layouts and traversal methods on different platforms, the display order of calculation results may be inconsistent when sorted is False.

If the input is a one-dimensional Tensor, finds the k largest or smallest entries in the Tensor, and outputs their values and indices as Tensors. values[k] is the k-th largest item in input, and its index is indices[k].

For a multi-dimensional matrix, calculates the first or last k entries in a given dimension, therefore:

\[values.shape = indices.shape\]

If the two compared elements are the same, the one with the smaller index value is returned first.

Parameters:
  • input (Tensor) – Input to be computed, data type must be float16, float32 or int32.

  • k (int) – The number of top or bottom elements to be computed along the last dimension, constant input is needed.

  • dim (int, optional) – The dimension to sort along. Default: None.

  • largest (bool, optional) – If largest is False then the k smallest elements are returned. Default: True.

  • sorted (bool, optional) – If True, the obtained elements will be sorted by the values in descending order. If False, the obtained elements will not be sorted. Default: True.

Returns:

A tuple consisting of values and indices.

  • values (Tensor): The k largest or smallest elements in each slice of the given dimension.

  • indices (Tensor): The indices of values within the last dimension of input.

Raises:
  • TypeError – If sorted is not a bool.

  • TypeError – If input is not a Tensor.

  • TypeError – If k is not an int.

  • TypeError – If dtype of input is not one of the following: float16, float32 or int32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> x = ms.Tensor([[0.5368, 0.2447, 0.4302, 0.9673],
...                [0.4388, 0.6525, 0.4685, 0.1868],
...                [0.3563, 0.5152, 0.9675, 0.8230]], dtype=ms.float32)
>>> output = ops.topk(x, 2, dim=1)
>>> print(output)
(Tensor(shape=[3, 2], dtype=Float32, value=
[[ 9.67299998e-01,  5.36800027e-01],
 [ 6.52499974e-01,  4.68499988e-01],
 [ 9.67499971e-01,  8.23000014e-01]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[3, 0],
 [1, 2],
 [2, 3]]))
>>> output2 = ops.topk(x, 2, dim=1, largest=False)
>>> print(output2)
(Tensor(shape=[3, 2], dtype=Float32, value=
[[ 2.44700000e-01,  4.30200011e-01],
 [ 1.86800003e-01,  4.38800007e-01],
 [ 3.56299996e-01,  5.15200019e-01]]), Tensor(shape=[3, 2], dtype=Int32, value=
[[1, 2],
 [3, 0],
 [0, 1]]))
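
For the one-dimensional case described above, a minimal sketch (hypothetical values; exact print formatting may differ by version):

>>> y = ms.Tensor([1., 3., 2.], ms.float32)
>>> values, indices = ops.topk(y, 2)  # two largest entries, sorted descending
>>> print(values)
[3. 2.]
>>> print(indices)
[1 2]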
tinyms.primitives.trace(input)[source]

Returns a new Tensor that is the sum of the elements along the main diagonal of the input matrix.

Note

Input must be matrix, and complex number is not supported at present.

Parameters:

input (Tensor) – A matrix to be calculated. The matrix must be two dimensional.

Returns:

Tensor, with the same data type as input, and size equal to 1.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the dimension of input is not equal to 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[10, 11, 12], [13, 14, 15], [16, 17, 18]]), mindspore.float32)
>>> output = ops.trace(input)
>>> print(output)
42.0
>>> input = Tensor(np.arange(1, 13).reshape(3, 4), mindspore.float32)
>>> output = ops.trace(input)
>>> print(output)
18.0
>>> input = Tensor(np.arange(12, 0, -1).reshape(4, 3), mindspore.float32)
>>> output = ops.trace(input)
>>> print(output)
24.0
tinyms.primitives.transpose(input, input_perm)[source]

Permutes the dimensions of the input tensor according to input permutation.

For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, please refer to mindspore.ops.ExpandDims. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape is \((i[0], i[1], ... i[n-2], i[n-1])\), then a.transpose().shape is \((i[n-1], i[n-2], ... i[1], i[0])\).

Note

On GPU and CPU, if the value of input_perm is negative, its actual value is input_perm[i] + rank(input). Negative value of input_perm is not supported on Ascend.

Parameters:
  • input (Tensor) – The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_perm (tuple[int]) – The permutation to be converted. The elements in input_perm are composed of the indexes of each dimension of input. The length of input_perm and the shape of input must be the same. Only constant value is allowed. Must be in the range [-rank(input), rank(input)).

Returns:

Tensor, the type of output tensor is the same as input and the shape of output tensor is decided by the shape of input and the value of input_perm.

Raises:
  • TypeError – If input_perm is not a tuple.

  • ValueError – If length of shape of input is not equal to length of shape of input_perm.

  • ValueError – If the same element exists in input_perm.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> input_perm = (0, 2, 1)
>>> output = ops.transpose(input, input_perm)
>>> print(output)
[[[ 1.  4.]
  [ 2.  5.]
  [ 3.  6.]]
 [[ 7. 10.]
  [ 8. 11.]
  [ 9. 12.]]]
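
Per the Note above, negative entries in input_perm wrap around on GPU and CPU; a hedged sketch reusing input from the example above (the permutation (-3, -1, -2) is equivalent to (0, 2, 1)):

>>> output_neg = ops.transpose(input, (-3, -1, -2))  # wraps to (0, 2, 1) on GPU/CPU
>>> print(output_neg.shape)
(2, 3, 2)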
tinyms.primitives.trapz(y, x=None, *, dx=1.0, dim=-1)[source]

Integrates y(x) along given dim using trapezoidal rule. By default x-dim distances between points will be 1.0, alternatively they can be provided with x array or with dx scalar.

\[\int y(x) \, dx\]
Parameters:
  • y (Tensor) – Input tensor to integrate.

  • x (Tensor, optional) – The sample points corresponding to the y values. If x is None, the sample points are assumed to be evenly spaced dx apart. Default: None. If x is not None, after subtracting 1 from the axis specified by dim, the shape of x should be the same as y or broadcastable to y.

Keyword Arguments:
  • dx (float, optional) – The spacing between sample points when x is None. If x is specified, dx does not take effect. Default: 1.0.

  • dim (int, optional) – The dim along which to integrate. Default: -1.

Returns:

Tensor of float, definite integral as approximated by trapezoidal rule. If y is a one-dimensional array, the result is a floating-point number. If y is an n-dimensional array, the result is an N-1 dimensional array.

Raises:
  • RuntimeError – If dim of x is 1, and x.shape[0] is not equal to y.shape[dim].

  • ValueError – If dim is out of range of \([-y.ndim, y.ndim)\).

  • TypeError – If y is not a Tensor.

  • TypeError – If x is not None and is not a Tensor.

  • TypeError – If dx is not a float number.

  • TypeError – If dim is not a Integer.

Supported Platforms:

Ascend GPU CPU

Examples

>>> y = Tensor(np.array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]).astype(np.float32))
>>> x = Tensor(np.array([[1, 2, 3], [1, 3, 5], [1, 4, 7]]).astype(np.float32))
>>> output = ops.trapz(y, x)
>>> print(output)
[2. 4. 6.]
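
When x is omitted, the keyword-only dx sets a uniform spacing between sample points; a hedged sketch (not from the original docs) where each row of ones spans two intervals of width 2.0, so each row integrates to 2 * ((1+1)/2) * 2.0 = 4:

>>> y = Tensor(np.ones((3, 3)).astype(np.float32))
>>> output = ops.trapz(y, dx=2.0)  # two intervals of width 2.0 per row
>>> print(output)
[4. 4. 4.]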
tinyms.primitives.tril(input, diagonal=0)[source]

Returns the lower-triangular part of input (the elements on and below the diagonal), and sets the other elements to zero.

Parameters:
  • input (Tensor) – A Tensor with shape \((x_1, x_2, ..., x_R)\). The rank must be at least 2. Supporting all number types including bool.

  • diagonal (int, optional) – An optional attribute indicates the diagonal to consider, default: 0, indicating the main diagonal.

Returns:

Tensor, the same shape and data type as input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If diagonal is not an int.

  • TypeError – If the type of input is neither number nor bool.

  • ValueError – If the rank of input is less than 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.tril(x)
>>> print(result)
[[ 1  0  0  0]
 [ 5  6  0  0]
 [10 11 12  0]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.tril(x, diagonal=1)
>>> print(result)
[[ 1  2  0  0]
 [ 5  6  7  0]
 [10 11 12 13]
 [14 15 16 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.tril(x, diagonal=-1)
>>> print(result)
[[ 0  0  0  0]
 [ 5  0  0  0]
 [10 11  0  0]
 [14 15 16  0]]
tinyms.primitives.tril_indices(row, col, offset=0, *, dtype=mindspore.int64)[source]

Calculates the indices of the lower triangular elements in a row * col matrix and returns them as a 2-by-N Tensor. The first row of the Tensor contains row coordinates, and the second row contains column coordinates. The coordinates are sorted by row and then by column.

The lower triangular part of the matrix consists of all elements on and below the diagonal.

Note

When running on CUDA, row * col must be less than 2^59 to prevent overflow during calculation.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Returns:

  • y (Tensor) - indices of the elements in lower triangular part of matrix. The type is specified by dtype. The shape of output is \((2, tril\_size)\), where \(tril\_size\) is the number of elements in the lower triangular matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.tril_indices(4, 3, -1, dtype=mindspore.int64)
>>> print(output)
[[1 2 2 3 3 3]
 [0 0 1 0 1 2]]
>>> print(output.dtype)
Int64
tinyms.primitives.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, reduction='mean')[source]

TripletMarginLoss operation. See mindspore.nn.TripletMarginLoss for details.

Parameters:
  • anchor (Tensor) – A sample randomly selected from the training set. Data type must be BasicType.

  • positive (Tensor) – A sample belonging to the same category as anchor, with the same type and shape as anchor.

  • negative (Tensor) – A sample belonging to the different class from anchor, with the same type and shape as anchor.

  • margin (float, optional) – Make a margin between the positive pair and the negative pair. Default: 1.0.

  • p (int, optional) – The degree of norm for pairwise distance. Default: 2.

  • eps (float, optional) – Add small value to avoid division by zero. Default: 1e-06.

  • swap (bool, optional) – The distance swap changes the negative distance to the distance between the positive sample and the negative sample. Default: False.

  • reduction (str, optional) – Apply specific reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.

Returns:

Tensor. If reduction is “none”, its shape is \((N)\). Otherwise, a scalar value will be returned.

Raises:
  • TypeError – If anchor, positive or negative is not a Tensor.

  • TypeError – If dtype of anchor, positive and negative is not the same.

  • TypeError – If margin is not a float.

  • TypeError – If p is not an int.

  • TypeError – If eps is not a float.

  • TypeError – If swap is not a bool.

  • ValueError – If dimensions of input anchor, positive and negative are less than or equal to 1 at the same time.

  • ValueError – If the dimension of input anchor or positive or negative is bigger than or equal to 8.

  • ValueError – If shape of anchor, positive and negative cannot broadcast.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

Supported Platforms:

GPU

Examples

>>> anchor = Tensor(np.array([[0.3, 0.7], [0.5, 0.5]]), mindspore.float32)
>>> positive = Tensor(np.array([[0.4, 0.6], [0.4, 0.6]]), mindspore.float32)
>>> negative = Tensor(np.array([[0.2, 0.9], [0.3, 0.7]]), mindspore.float32)
>>> output = ops.triplet_margin_loss(anchor, positive, negative)
>>> print(output)
0.8881968
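
With reduction='none', the Returns section above implies a per-sample loss of shape \((N)\); a hedged sketch reusing the tensors from the example above:

>>> output_none = ops.triplet_margin_loss(anchor, positive, negative, reduction='none')
>>> print(output_none.shape)  # one loss value per sample
(2,)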
tinyms.primitives.triu(input, diagonal=0)[source]

Returns the upper-triangular part of input (the elements on and above the diagonal), and sets the other elements to zero.

Parameters:
  • input (Tensor) – The input tensor with shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • diagonal (int, optional) – An optional attribute indicates the diagonal to consider, default: 0, indicating the main diagonal.

Returns:

Tensor, a tensor has the same shape and data type as input.

Raises:
  • TypeError – If diagonal is not an int.

  • TypeError – If input is not a Tensor.

  • ValueError – If length of shape of input is less than 1.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.triu(x)
>>> print(result)
[[ 1  2  3  4]
 [ 0  6  7  8]
 [ 0  0 12 13]
 [ 0  0  0 17]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.triu(x, diagonal=1)
>>> print(result)
[[ 0  2  3  4]
 [ 0  0  7  8]
 [ 0  0  0 13]
 [ 0  0  0  0]]
>>> x = Tensor(np.array([[ 1,  2,  3,  4],
...                      [ 5,  6,  7,  8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> result = ops.triu(x, diagonal=-1)
>>> print(result)
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 0 11 12 13]
 [ 0  0 16 17]]
tinyms.primitives.triu_indices(row, col, offset=0, *, dtype=mindspore.int64)[source]

Calculates the indices of the upper triangular elements in a row * col matrix and returns them as a 2-by-N Tensor. The first row of the Tensor contains row coordinates, and the second row contains column coordinates. The coordinates are sorted by row and then by column.

The upper triangular part of the matrix consists of all elements on and above the diagonal.

Note

When running on CUDA, row * col must be less than 2^59 to prevent overflow during calculation.

Parameters:
  • row (int) – number of rows in the 2-D matrix.

  • col (int) – number of columns in the 2-D matrix.

  • offset (int, optional) – diagonal offset from the main diagonal. Default: 0.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified type of output tensor. An optional data type of mindspore.int32 and mindspore.int64. Default: mindspore.int64.

Returns:

  • y (Tensor) - indices of the elements in upper triangular part of matrix. The type is specified by dtype. The shape of output is \((2, triu\_size)\), where \(triu\_size\) is the number of elements in the upper triangular matrix.

Supported Platforms:

Ascend GPU CPU

Examples

>>> output = ops.triu_indices(4, 4, 2, dtype=mindspore.int64)
>>> print(output)
[[0 0 1]
 [2 3 3]]
>>> print(output.dtype)
Int64
tinyms.primitives.true_divide(dividend, divisor)[source]

Alias for mindspore.ops.div() with \(rounding\_mode=None\).

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.trunc(input)[source]

Returns a new tensor with the truncated integer values of the elements of the input tensor.

Parameters:

input (Tensor) – The input tensor.

Returns:

Tensor, the same shape and data type as the input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]),mindspore.float32)
>>> output = ops.trunc(x)
>>> print(output)
[3. 0. 0. -3.]
tinyms.primitives.truncate_div(x, y)[source]

Divides the first input tensor by the second input tensor element-wise and rounds the results of division towards zero. Equivalent to C-style integer division.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

Broadcasting is supported.

Parameters:
  • x (Union[Tensor, Number, bool]) – The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, Number, bool]) – The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:

TypeError – If x and y are not one of the following: Tensor, Number, bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.truncate_div(x, y)
>>> print(output)
[0 1 0]
tinyms.primitives.truncate_mod(x, y)[source]

Returns the remainder of division element-wise.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • The input data must not contain 0.

  • When an element of the input exceeds 2048, the operator's accuracy cannot be guaranteed to within two thousandths.

  • Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.

  • If the shape is expressed as \((D1, D2, ..., Dn)\), then \(D1*D2*...*Dn <= 1000000\) and \(n <= 8\).

Parameters:
  • x (Union[Tensor, numbers.Number, bool]) – The first input is a number, or a bool, or a tensor whose data type is number or bool.

  • y (Union[Tensor, numbers.Number, bool]) – The second input is a number, or a bool when the first input is a tensor, or a tensor whose data type is number or bool.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision among the two inputs.

Raises:
  • TypeError – If neither x nor y is one of the following: Tensor, number, bool.

  • TypeError – If neither x nor y is a Tensor.

  • ValueError – If the shape x and y cannot be broadcasted to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> output = ops.truncate_mod(x, y)
>>> print(output)
[ 2  1 -1]
tinyms.primitives.tuple_to_array(input_x)[source]

Converts a tuple to a tensor.

If the type of the first number in the tuple is integer, the data type of the output tensor is int. Otherwise, the data type of the output tensor is float.

Parameters:

input_x (tuple) – A tuple of numbers. These numbers have the same type. Only constant value is allowed. The shape is \((N, *)\) where \(*\) means any number of additional dimensions.

Returns:

Tensor, if the input tuple contains N numbers, then the shape of the output tensor is (N,).

Raises:
  • TypeError – If input_x is not a tuple.

  • ValueError – If length of input_x is less than or equal to 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = (1,2,3)
>>> print(type(input_x))
<class 'tuple'>
>>> output = ops.tuple_to_array(input_x)
>>> print(type(output))
<class 'mindspore.common.tensor.Tensor'>
>>> print(output)
[1 2 3]
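
Following the dtype rule above, a tuple whose first number is a float should yield a float tensor; a hedged sketch (assuming the default float type maps to Float32):

>>> output = ops.tuple_to_array((1.0, 2.5, 3.0))
>>> print(output.dtype)  # first element is a float, so a float tensor is produced
Float32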
tinyms.primitives.unbind(input, dim=0)[source]

Removes a tensor dimension at the given dim.

Unstacks a tensor of rank R along the dim dimension; the output tensors will have rank (R-1).

Given a tensor of shape \((n_1, n_2, ..., n_R)\) and a specified dim, shape of the output tensors is \((n_1, n_2, ..., n_{dim}, n_{dim+2}, ..., n_R)\).

Parameters:
  • input (Tensor) – The shape is \((n_1, n_2, ..., n_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

  • dim (int) – Dimension along which to unpack. Negative values wrap around. The range is [-R, R). Default: 0.

Returns:

A tuple of tensors, the shape of each object is the same.

Raises:

ValueError – If dim is out of the range [-R, R).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
>>> output = ops.unbind(x, dim=0)
>>> print(output)
(Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]),
Tensor(shape=[3], dtype=Int64, value=[7, 8, 9]))
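
A hedged sketch of the dim=1 case, reusing x from the example above to unbind columns instead of rows:

>>> cols = ops.unbind(x, dim=1)
>>> print(cols[0])  # the first column of x
[1 4 7]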
tinyms.primitives.unfold(input, kernel_size, dilation=1, padding=0, stride=1)[source]

Reshapes a tensor of format (N, C, H, W) by extracting sliding local blocks from the input Tensor and concatenating them along a new dimension.

Warning

  • Currently, only 4-D input tensors (batched image-like tensors) are supported.

Parameters:
  • input (Tensor) – 4-D Tensor. Support all real number data type.

  • kernel_size (Union[int, tuple[int], list[int]]) – The size of the kernel, should be two ints for height and width. If type is int, height equals width. Must be specified.

  • dilation (Union[int, tuple[int], list[int]], optional) – The dilation of the window, should be two ints for height and width. If type is int, height equals width. Default: 1.

  • padding (Union[int, tuple[int], list[int]], optional) – The pad of the window, must be one or two ints for height and width. If one int, pad_height = pad_width. If two ints, pad_height = padding[0], pad_width = padding[1]. Default: 0.

  • stride (Union[int, tuple[int], list[int]], optional) – The stride of the window, should be two ints for height and width. If type is int, height equals width. Default: 1.

Returns:

A Tensor, with same type as input.

Raises:
  • TypeError – If any data type of kernel_size, stride, dilation or padding is not int, tuple or list.

  • ValueError – If any value of kernel_size, dilation or stride is not greater than zero, or their number of elements is more than 2.

  • ValueError – If padding value is less than zero.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.rand(4, 4, 32, 32), mindspore.float64)
>>> output = ops.unfold(x, kernel_size=3, dilation=1, stride=1)
>>> print(output.shape)
(4, 4, 9, 900)
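
A note on the shape above (assuming the standard sliding-window arithmetic): with height = width = 32, kernel_size = 3, padding = 0, dilation = 1 and stride = 1, each spatial dimension yields \((32 + 2*0 - 1*(3-1) - 1)/1 + 1 = 30\) window positions, so the number of blocks is \(30 * 30 = 900\), and each block contributes \(3 * 3 = 9\) elements per channel.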
tinyms.primitives.uniform(shape, minval, maxval, seed=None, dtype=mindspore.float32)[source]

Generates random numbers according to the Uniform random number distribution.

Note

The number in tensor minval should be strictly less than maxval at any position after broadcasting.

Parameters:
  • shape (Union[tuple, Tensor]) – The shape of random tensor to be generated.

  • minval (Tensor) – The distribution parameter a. It defines the minimum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • maxval (Tensor) – The distribution parameter b. It defines the maximum possible generated value, with int32 or float32 data type. If dtype is int32, only one number is allowed.

  • seed (int) – Seed is used as entropy source for the random number engines to generate pseudo-random numbers, must be non-negative. Default: None, which will be treated as 0.

  • dtype (mindspore.dtype) – Type of the Uniform distribution. If it is int32, it generates numbers from discrete uniform distribution; if it is float32, it generates numbers from continuous uniform distribution. It only supports these two data types. Default: mindspore.float32.

Returns:

Tensor. The shape should be equal to the broadcasted shape between the input shape and the shapes of minval and maxval. The dtype is the one specified by the dtype argument.

Raises:
  • TypeError – If shape is neither a tuple nor a Tensor.

  • TypeError – If ‘minval’ or ‘maxval’ is neither int32 nor float32 and dtype of ‘minval’ is not the same as ‘maxval’.

  • TypeError – If seed is not an int.

  • TypeError – If ‘dtype’ is neither int32 nor float32.

Supported Platforms:

GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> import mindspore
>>> import numpy as np
>>> # For discrete uniform distribution, only one number is allowed for both minval and maxval:
>>> shape = (4, 2)
>>> minval = Tensor(1, mindspore.int32)
>>> maxval = Tensor(2, mindspore.int32)
>>> output = ops.uniform(shape, minval, maxval, seed=5, dtype=mindspore.int32)
>>>
>>> # For continuous uniform distribution, minval and maxval can be multi-dimensional:
>>> shape = (3, 1, 2)
>>> minval = Tensor(np.array([[3, 4], [5, 6]]), mindspore.float32)
>>> maxval = Tensor([8.0, 10.0], mindspore.float32)
>>> output = ops.uniform(shape, minval, maxval, seed=5)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
tinyms.primitives.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=0, remove_accidental_hits=False)[source]

Uniform candidate sampler.

This function samples a set of classes (sampled_candidates) from [0, range_max-1] based on a uniform distribution. If unique=True, candidates are drawn without replacement; otherwise, they are drawn with replacement.

Parameters:
  • true_classes (Tensor) – A Tensor. The target classes with a Tensor shape of \((batch_size, num_true)\) .

  • num_true (int) – The number of target classes in each training example.

  • num_sampled (int) – The number of classes to randomly sample. The sampled_candidates will have a shape of num_sampled. If unique=True, num_sampled must be less than or equal to range_max.

  • unique (bool) – Whether all sampled classes in a batch are unique.

  • range_max (int) – The number of possible classes, must be positive.

  • seed (int) – Used for random number generation, must be non-negative. If seed has a value of 0, the seed will be replaced with a randomly generated value. Default: 0.

  • remove_accidental_hits (bool) – Whether accidental hit is removed. Default: False.

Returns:

  • sampled_candidates (Tensor) - The sampled_candidates is independent of the true classes. Shape: \((num_sampled, )\) .

  • true_expected_count (Tensor) - The expected counts under the sampling distribution of each of true_classes. Shape: \((batch_size, num_true)\) .

  • sampled_expected_count (Tensor) - The expected counts under the sampling distribution of each of sampled_candidates. Shape: \((num_sampled, )\) .

Raises:
  • TypeError – If neither num_true nor num_sampled is an int.

  • TypeError – If neither unique nor remove_accidental_hits is a bool.

  • TypeError – If neither range_max nor seed is an int.

  • TypeError – If true_classes is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> data = Tensor(np.array([[1], [3], [4], [6], [3]], dtype=np.int64))
>>> output1, output2, output3 = ops.uniform_candidate_sampler(data, 1, 3, False, 4, 1)
>>> print(output1.shape)
(3,)
>>> print(output2.shape)
(5, 1)
>>> print(output3.shape)
(3,)
tinyms.primitives.unique(input)[source]

Returns the unique elements of the input tensor, along with a tensor containing the index in the output unique tensor for each value of the input tensor.

The output is a tuple of Tensors (y, idx). The shapes of y and idx usually differ, because y is deduplicated while idx keeps the same shape as the input.

To get the same shape between idx and y, please refer to the mindspore.ops.UniqueWithPad operator.

Parameters:

input (Tensor) – The input tensor. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Warning

This is an experimental API that is subject to change or deletion.

Returns:

Tuple, containing Tensor objects (y, idx). y is a tensor with the same type as input and contains the unique elements in input. idx is a tensor containing the indices of elements in the input corresponding to the output tensor, and has the same shape as input.

Raises:

TypeError – If input is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
>>> output = ops.unique(x)
>>> print(output)
(Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
>>> y = output[0]
>>> print(y)
[1 2 5]
>>> idx = output[1]
>>> print(idx)
[0 1 2 1]
tinyms.primitives.unique_consecutive(input, return_idx=False, return_counts=False, axis=None)[source]

Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.

Parameters:
  • input (Tensor) – The input tensor.

  • return_idx (bool, optional) – Whether to return the index of where the element in the original input maps to the position in the output. Default: False.

  • return_counts (bool, optional) – Whether to return the counts of each unique element. Default: False.

  • axis (int, optional) – The dimension to apply unique. If None, the unique of the flattened input is returned. If specified, it must be int32 or int64. Default: None.

Returns:

A tensor or a tuple of tensors containing tensor objects (output, idx, counts). output has the same type as input and is used to represent the output list of unique scalar elements. If return_idx is True, there will be an additional returned tensor, idx, which has the same shape as input and represents the index of where the element in the original input maps to the position in the output. If return_counts is True, there will be an additional returned tensor, counts, which represents the number of occurrences for each unique value or tensor.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not supported.

  • TypeError – If return_idx is not a bool.

  • TypeError – If return_counts is not a bool.

  • TypeError – If axis is not an int.

  • ValueError – If axis is not in the range of \([-ndim, ndim-1]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 1, 2, 2, 3, 1, 1, 2]), mstype.int32)
>>> output, idx, counts = ops.unique_consecutive(x, True, True, None)
>>> print(output)
[1 2 3 1 2]
>>> print(idx)
[0 0 1 1 2 3 3 4]
>>> print(counts)
[2 2 1 2 1]
tinyms.primitives.unique_with_pad(x, pad_num)[source]

Returns unique elements and relative indexes in 1-D tensor, filled with padding num.

The basic function is the same as the Unique operator, but the UniqueWithPad operator adds a pad function. After the input Tensor x is processed by the unique operator, the returned tuple (y, idx) usually has y and idx of unequal shapes. To resolve this, the UniqueWithPad operator fills the Tensor y with the user-specified pad_num so that it has the same shape as the Tensor idx.

Parameters:
  • x (Tensor) – The tensor need to be unique. Must be 1-D vector with types: int32, int64.

  • pad_num (int) – Pad num. The data type is an int.

Returns:

tuple(Tensor), tuple of 2 tensors, y and idx.

  • y (Tensor) - The unique elements filled with pad_num, the shape and data type same as x.

  • idx (Tensor) - The index of each value of x in the unique output y, the shape and data type same as x.

Raises:
  • TypeError – If dtype of x is neither int32 nor int64.

  • ValueError – If length of shape of x is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> from mindspore import ops
>>> x = Tensor(np.array([1, 2, 2, 3, 5, 5]), mindspore.int32)
>>> output = ops.unique_with_pad(x, 0)
>>> print(output)
(Tensor(shape=[6], dtype=Int32, value= [1, 2, 3, 5, 0, 0]),
 Tensor(shape=[6], dtype=Int32, value= [0, 1, 1, 2, 3, 3]))
>>> y = output[0]
>>> print(y)
[1 2 3 5 0 0]
>>> idx = output[1]
>>> print(idx)
[0 1 1 2 3 3]
tinyms.primitives.unsorted_segment_max(x, segment_ids, num_segments)[source]

Computes the maximum along segments of a tensor.

\[\text{output}_i = \max_{j \ldots} \text{data}[j \ldots]\]

where the \(\max\) is over tuples \(j...\) such that \(segment\_ids[j...] == i\).

Note

  • If the segment_id i is absent in the segment_ids, then output[i] will be filled with the minimum value of x's type.

  • segment_ids must be a non-negative tensor.

Parameters:
  • x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) – A 1-D tensor whose shape is \((x_1)\); its values must be non-negative. The data type must be int32.

  • num_segments (int) – The value specifies the number of distinct segment_ids.

Returns:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> output = ops.unsorted_segment_max(x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 5. 6.]]
tinyms.primitives.unsorted_segment_min(x, segment_ids, num_segments)[source]

Computes the minimum of a tensor along segments.

\[\text{output}_i = \min_{j \ldots} \text{data}[j \ldots]\]

where the \(\min\) is over tuples \(j...\) such that \(segment\_ids[j...] == i\).

Note

  • If the segment_id i is absent in the segment_ids, then output[i] will be filled with the maximum value of x's type.

  • segment_ids must be a non-negative tensor.

Parameters:
  • x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) – A 1-D tensor whose shape is \((x_1)\); its values must be non-negative. The data type must be int32.

  • num_segments (int) – The value specifies the number of distinct segment_ids.

Returns:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> output = ops.unsorted_segment_min(x, segment_ids, num_segments)
>>> print(output)
[[1. 2. 3.]
 [4. 2. 1.]]
tinyms.primitives.unsorted_segment_prod(x, segment_ids, num_segments)[source]

Computes the product of a tensor along segments.


Note

  • If the segment_id i is absent in the segment_ids, then output[i] will be filled with 1.

  • segment_ids must be a non-negative tensor.

Parameters:
  • x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). With float16, float32 or int32 data type.

  • segment_ids (Tensor) – A 1-D tensor whose shape is \((x_1)\); its values must be non-negative. The data type must be int32.

  • num_segments (int) – The value specifies the number of distinct segment_ids.

Returns:

Tensor, set the number of num_segments as N, the shape is \((N, x_2, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int.

  • ValueError – If length of shape of segment_ids is not equal to 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import numpy as np
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
>>> num_segments = 2
>>> output = ops.unsorted_segment_prod(x, segment_ids, num_segments)
>>> print(output)
[[4. 4. 3.]
 [4. 5. 6.]]
tinyms.primitives.unsorted_segment_sum(input_x, segment_ids, num_segments)[source]

Computes the sum of a tensor along segments.

Calculates a tensor such that \(\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]\), where \(j,...\) is a tuple describing the index of element in data. segment_ids selects which elements in data to sum up. Segment_ids does not need to be sorted, and it does not need to cover all values in the entire valid value range.


Note

  • If the segment_id i is absent in the segment_ids, then output[i] will be filled with 0.

  • On Ascend, if the value of segment_id is less than 0 or greater than the length of the input data shape, an execution error will occur.

If a given segment id \(i\) has no corresponding elements, then \(\text{output}[i] = 0\). Negative segment ids are ignored. num_segments must be equal to the number of distinct segment ids.

Parameters:
  • input_x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\).

  • segment_ids (Tensor) – Set the shape as \((x_1, x_2, ..., x_N)\), where 0 < N <= R.

  • num_segments (Union[int, Tensor], optional) – Set \(z\) as num_segments.

Returns:

Tensor, the shape is \((z, x_{N+1}, ..., x_R)\).

Raises:
  • TypeError – If num_segments is not an int or 0-D Tensor.

  • ValueError – If length of shape of segment_ids is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> from mindspore import ops
>>> import mindspore
>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> output = ops.unsorted_segment_sum(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 0.]
>>> input_x = Tensor([1, 2, 3, 4, 2, 5], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2, 3, 4], mindspore.int32)
>>> num_segments = 6
>>> output = ops.unsorted_segment_sum(input_x, segment_ids, num_segments)
>>> print(output)
[3. 3. 4. 2. 5. 0.]
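
A hedged sketch of the negative-id behavior described above (on GPU and CPU; per the Note, negative ids cause an execution error on Ascend):

>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, -1, 1, 2], mindspore.int32)
>>> output = ops.unsorted_segment_sum(input_x, segment_ids, 4)
>>> print(output)  # the value 2 (id -1) is dropped; segment 3 stays 0
[1. 3. 4. 0.]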
tinyms.primitives.unsqueeze(input, dim)[source]

Adds an additional dimension to input at the given dim.

Parameters:
  • input (Tensor) – The shape of tensor is \((n_1, n_2, ..., n_R)\).

  • dim (int) – Specifies the dimension index at which to expand the shape of input. The value of dim must be in the range [-input.ndim-1, input.ndim]. Only constant value is allowed.

Returns:

Tensor, the shape of tensor is \((1, n_1, n_2, ..., n_R)\) if the value of dim is 0. It has the same data type as input.

Raises:
  • TypeError – If dim is not an int.

  • ValueError – If dim is not in the valid range \([-input.ndim-1, input.ndim]\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> output = ops.unsqueeze(input_tensor, dim=0)
>>> print(output)
[[[2. 2.]
  [2. 2.]]]
tinyms.primitives.unstack(input_x, axis=0)[source]

Unstacks the tensor along the specified axis.

Unstacks a tensor of rank R along axis dimension, output tensors will have rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\). If \(0 \le axis\), the shape of tensor in output is \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of pack.

Parameters:
  • input_x (Tensor) – The shape is \((x_1, x_2, ..., x_R)\). A tensor to be unstacked and the rank of the tensor must be greater than 0.

  • axis (int) – Dimension along which to unpack. Default: 0. Negative values wrap around. The range is [-R, R).

Returns:

A tuple of tensors, the shape of each object is the same.

Raises:

ValueError – If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = ops.unstack(input_x, 0)
>>> print(output)
(Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))
tinyms.primitives.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)[source]

Alias for mindspore.ops.interpolate() .

Supported Platforms:

Ascend GPU CPU

tinyms.primitives.value_and_grad(fn, grad_position=0, weights=None, has_aux=False)[source]

A wrapper function to generate the function to calculate forward output and gradient for the input function.

As for gradient, three typical cases are included:

  1. gradient with respect to inputs. In this case, grad_position is not None while weights is None.

  2. gradient with respect to weights. In this case, grad_position is None while weights is not None.

  3. gradient with respect to inputs and weights. In this case, grad_position and weights are not None.

Parameters:
  • fn (Union[Cell, Function]) – Function to do GradOperation.

  • grad_position (Union[NoneType, int, tuple[int]]) – Index to specify which inputs to be differentiated. If int, get the gradient with respect to a single input. If tuple, get the gradients with respect to the selected inputs. grad_position begins with 0. If None, no derivative of any input will be computed, and in this case, weights is required. Default: 0.

  • weights (Union[ParameterTuple, Parameter, list[Parameter]]) – The parameters of the training network that need to calculate the gradient. weights can be obtained through weights = net.trainable_params() . Default: None.

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs are returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

Function, returns the gradient function to calculate forward output and gradient for the input function or cell. For example, as for out1, out2 = fn(*args) , gradient function will return outputs like ((out1, out2), gradient) . When has_aux is set True, only out1 contributes to the differentiation.

Raises:
  • ValueError – If both grad_position and weights are None.

  • TypeError – If the type of the arguments does not belong to the required ones.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops, nn
>>> from mindspore import value_and_grad
>>>
>>> # Cell object to be differentiated
>>> class Net(nn.Cell):
...     def construct(self, x, y, z):
...         return x * y * z
>>> x = Tensor([1, 2], mindspore.float32)
>>> y = Tensor([-2, 3], mindspore.float32)
>>> z = Tensor([0, 3], mindspore.float32)
>>> net = Net()
>>> grad_fn = value_and_grad(net, grad_position=1)
>>> output, inputs_gradient = grad_fn(x, y, z)
>>> print(output)
[-0. 18.]
>>> print(inputs_gradient)
[0. 6.]
>>>
>>> # Function object to be differentiated
>>> def fn(x, y, z):
...     res = x * ops.exp(y) * ops.pow(z, 2)
...     return res, z
>>> x = Tensor(np.array([3, 3]).astype(np.float32))
>>> y = Tensor(np.array([0, 0]).astype(np.float32))
>>> z = Tensor(np.array([5, 5]).astype(np.float32))
>>> output, inputs_gradient = value_and_grad(fn, grad_position=(1, 2), weights=None, has_aux=True)(x, y, z)
>>> print(output)
(Tensor(shape=[2], dtype=Float32, value= [ 7.50000000e+01,  7.50000000e+01]),
 Tensor(shape=[2], dtype=Float32, value= [ 5.00000000e+00,  5.00000000e+00]))
>>> print(inputs_gradient)
(Tensor(shape=[2], dtype=Float32, value= [ 7.50000000e+01,  7.50000000e+01]),
 Tensor(shape=[2], dtype=Float32, value= [ 3.00000000e+01,  3.00000000e+01]))
>>>
>>> # For given network to be differentiated with both inputs and weights, there are 3 cases.
>>> net = nn.Dense(10, 1)
>>> loss_fn = nn.MSELoss()
>>> def forward(inputs, labels):
...     logits = net(inputs)
...     loss = loss_fn(logits, labels)
...     return loss, logits
>>> inputs = Tensor(np.random.randn(16, 10).astype(np.float32))
>>> labels = Tensor(np.random.randn(16, 1).astype(np.float32))
>>> weights = net.trainable_params()
>>>
>>> # Case 1: gradient with respect to inputs.
>>> # For has_aux is set True, only loss contributes to the gradient.
>>> grad_fn = value_and_grad(forward, grad_position=0, weights=None, has_aux=True)
>>> (loss, logits), inputs_gradient = grad_fn(inputs, labels)
>>> print(logits.shape)
(16, 1)
>>> print(inputs.shape, inputs_gradient.shape)
(16, 10) (16, 10)
>>>
>>> # Case 2: gradient with respect to weights.
>>> # For has_aux is set True, only loss contributes to the gradient.
>>> grad_fn = value_and_grad(forward, grad_position=None, weights=weights, has_aux=True)
>>> (loss, logits), params_gradient = grad_fn(inputs, labels)
>>> print(logits.shape)
(16, 1)
>>> print(len(weights), len(params_gradient))
2 2
>>>
>>> # Case 3: gradient with respect to inputs and weights.
>>> # For has_aux is set False, both loss and logits contribute to the gradient.
>>> grad_fn = value_and_grad(forward, grad_position=0, weights=weights, has_aux=False)
>>> (loss, logits), (inputs_gradient, params_gradient) = grad_fn(inputs, labels)
>>> print(logits.shape)
(16, 1)
>>> print(inputs.shape, inputs_gradient.shape)
(16, 10) (16, 10)
>>> print(len(weights), len(params_gradient))
2 2
tinyms.primitives.var(input, axis=None, ddof=0, keepdims=False)[source]

Returns the variance of each row of the input Tensor by default, or it can calculate them in specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the Bessel correction unbiased estimation will be used. If ddof is False, the biased estimation will be used to calculate the variance. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

Tensor, the variance. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns a 0-D Tensor, indicating the variance of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensor has shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2), and keepdims is set to False, then the returned Tensor has shape \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output = ms.ops.var(input, 1, 2, True)
>>> print(output)
[[ 2.5]
 [54.5]]
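
With the defaults (axis=None, ddof=0), all elements are reduced; a hedged sketch reusing input from the example above (overall mean 0.5, sum of squared deviations 146, divided by N = 8):

>>> output_all = ms.ops.var(input)  # reduce over all elements with ddof=0
>>> print(output_all)
18.25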
tinyms.primitives.var_mean(input, axis=None, ddof=0, keepdims=False)[source]

Returns the variance and mean of each row of the input Tensor by default, or it can calculate them in specified dimension axis. If axis is a list of dimensions, reduce over all of them.

Note

If ddof is 0, 1, True or False, the supported device is only Ascend and CPU. In other cases, the supported device is Ascend, GPU and CPU.

Parameters:
  • input (Tensor[Number]) – Input Tensor with a dtype of number.Number, its shape should be \((N, *)\) where \(*\) means any number of additional dims.

  • axis (Union[int, tuple(int)], optional) – The dimensions to reduce. Only constant value is allowed. Must be in the range [-rank(input), rank(input)). Default: None, reduce all dimensions.

  • ddof (Union[int, bool], optional) – Means Delta Degrees of Freedom. If ddof is an integer, the divisor used in calculations is \(N - ddof\), where \(N\) represents the number of elements. If ddof is True, the Bessel correction unbiased estimation will be used. If ddof is False, the biased estimation will be used to calculate the variance. Default: 0.

  • keepdims (bool, optional) – Whether the output Tensor has dim retained or not. If true, keep these reduced dimensions and the length is 1. If false, don’t keep these dimensions. Default: False.

Returns:

A tuple containing the variance and mean. Suppose the shape of input is \((x_0, x_1, ..., x_R)\):

  • If axis is () and keepdims is set to False, returns 0-D Tensors, indicating the variance and mean of all elements in input.

  • If axis is int 1 and keepdims is set to False, then the returned Tensors have shape \((x_0, x_2, ..., x_R)\).

  • If axis is tuple(int) or list(int), e.g. (1, 2), and keepdims is set to False, then the returned Tensors have shape \((x_0, x_3, ..., x_R)\).

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If axis is not one of the following: None, int, tuple.

  • TypeError – If keepdims is not a bool.

  • ValueError – If axis is out of range.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> input = ms.Tensor([[1, 2, 3, 4], [-1, 1, 4, -10]], ms.float32)
>>> output_var, output_mean = ms.ops.var_mean(input, 1, 2, True)
>>> print(output_var)
[[ 2.5]
 [54.5]]
>>> print(output_mean)
[[ 2.5]
 [-1.5]]
tinyms.primitives.view_as_real(input)[source]

View a complex Tensor as a real Tensor. The size of last dimension of the returned real Tensor is 2, and the last dimension is composed of the real and imaginary components of complex numbers.

Parameters:

input (Tensor) – the input must be a complex Tensor.

Returns:

A real Tensor.

Raises:

TypeError – If the input Tensor is not a complex Tensor.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor([2+1j,2+3j,2-1j,2], mstype.complex64)
>>> print(ops.view_as_real(x))
[[ 2.  1.]
 [ 2.  3.]
 [ 2. -1.]
 [ 2.  0.]]
tinyms.primitives.vjp(fn, *inputs, has_aux=False)[source]

Compute the vector-jacobian-product of the given network. vjp matches reverse-mode differentiation.

Parameters:
  • fn (Union[Function, Cell]) – The function or net that takes Tensor inputs and returns single Tensor or tuple of Tensors.

  • inputs (Union[Tensor, tuple[Tensor], list[Tensor]]) – The inputs to fn .

  • has_aux (bool) – If True, only the first output of fn contributes to the gradient of fn, while the other outputs are returned directly. This means fn must return more than one output in this case. Default: False.

Returns:

Forward outputs and function to calculate vjp.

  • net_output (Union[Tensor, tuple[Tensor]]) - The output of fn(inputs). In particular, when has_aux is set True, net_output is the first output of fn(inputs).

  • vjp_fn (Function) - The function to calculate the vector-jacobian-product. Its inputs are the vectors whose shape and type should be the same as net_output .

  • aux_value (Union[Tensor, tuple[Tensor]], optional) - When has_aux is True, aux_value will be returned, meaning the outputs of fn(inputs) other than the first. In particular, aux_value does not contribute to the gradient.

Raises:

TypeError – If inputs or v does not belong to the required types.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import vjp
>>> from mindspore import Tensor
>>> class Net(nn.Cell):
...     def construct(self, x, y):
...         return x**3 + y
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> v = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> outputs, vjp_fn = vjp(Net(), x, y)
>>> print(outputs)
[[ 2. 10.]
 [30. 68.]]
>>> gradient = vjp_fn(v)
>>> print(gradient)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 3.00000000e+00,  1.20000000e+01],
 [ 2.70000000e+01,  4.80000000e+01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
>>> def fn(x, y):
...     return 2 * x + y, y ** 3
>>> outputs, vjp_fn, aux = vjp(fn, x, y, has_aux=True)
>>> gradient = vjp_fn(v)
>>> print(outputs)
[[ 3.  6.]
 [ 9. 12.]]
>>> print(aux)
[[ 1.  8.]
 [27. 64.]]
>>> print(gradient)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.00000000e+00,  2.00000000e+00],
 [ 2.00000000e+00,  2.00000000e+00]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.00000000e+00,  1.00000000e+00],
 [ 1.00000000e+00,  1.00000000e+00]]))
tinyms.primitives.vmap(fn, in_axes=0, out_axes=0)[source]

Vectorizing map (vmap) is a kind of higher-order function to map fn along the parameter axes.

Vmap was pioneered by JAX; it removes the restriction of the batch dimension on the operator and provides a more convenient and unified operator expression. Moreover, it allows users to composite with other functional modules such as mindspore.grad(), to improve the development efficiency. In addition, the vectorizing map does not execute loops outside the function, but sinks loops into the primitive operations of the function for better performance. When combined with Graph Kernel Fusion, operational efficiency would be further improved.

Warning

This is an experimental API that is subject to change or deletion.

Note

  1. The power of vmap comes from the implementation of VmapRules of primitives. Although we have designed a generalized rule for user custom operators, we can not guarantee that it works well for all operators; please be aware of the risk of use. If you want to achieve a better performance, please refer to the tutorial to implement the specific VmapRule for the custom operator, which won't take too much time.

  2. When calling the random number generation methods within the scope of vmap, the same random number is generated among vector functions each time. If you expect each vector branch to use different random numbers, you need to generate batch random numbers externally in advance and then transfer them to vmap.

Parameters:
  • fn (Union[Cell, Function, CellList]) – Function to be mapped along the parameter axes, which takes at least one argument and returns one or more Tensors or the type of data supported by the MindSpore Tensor. When it is a CellList (the model ensembling scenario), please make sure that the structure of each cell is the same and that the number of cells is consistent with the sizes of the mapped axes (axis_size).

  • in_axes (Union[int, list, tuple]) – Specifies which dimensions (axes) of the inputs should be mapped over. If in_axes is an integer, all arguments of fn are mapped over according to this axis index. If in_axes is a tuple or list, it may only be composed of integers or Nones, its length should equal the number of positional arguments to fn, and it indicates which axis to map for each corresponding positional argument. Note that axis integers must be in range \([-ndim, ndim)\) for each argument, where ndim is the number of dimensions of the corresponding argument. None means not mapping along any axis. At least one positional parameter in in_axes must have a non-None mapping axis index. The sizes of the mapped axes (axis_size) for all arguments must be equal. Default: 0.

  • out_axes (Union[int, list, tuple]) – Specifies where the mapped dimensions (axes) should appear in the outputs. If out_axes is an integer, all outputs of fn are specified according to this axis. If out_axes is a tuple or list, it may only be composed of integers or Nones, and its length should equal the number of outputs of fn. Note that axis integers must be in range \([-ndim, ndim)\) for each output, where ndim is the dimension of the output of the vmap-mapped function. All outputs with a non-None mapped axis must specify a non-None out_axes, and if an output with a None mapped axis specifies a non-None out_axes, the result broadcasts across the mapped axis. Default: 0.

Returns:

Function, returns the Vectorized/Batched version function of fn. The arguments and outputs of this function correspond to those of fn, but it adds an extra batch dimension at positions specified by in_axes and out_axes.

Raises:

  • RuntimeError – If a base element in in_axes or out_axes is neither None nor an integer.

  • RuntimeError – If all base elements in in_axes or out_axes are None.

  • RuntimeError – If in_axes is not a single integer and its length is not equal to the number of arguments.

  • RuntimeError – If out_axes is not a single integer and its length is not equal to the number of outputs.

  • RuntimeError – If the axis_size of the arguments in the scope of vmap are not equal.

  • RuntimeError – If an axis in in_axes or out_axes is out of bounds.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import vmap
>>> def test_vmap(x, y, z):                                              # ([a],[a],[a]) -> [a]
...     return x + y + z
>>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]).astype(np.float32))    # [b, a]
>>> y = Tensor(np.array([[-3, -2, -1], [3, 2, 1]]).astype(np.float32))   # [a, b]
>>> z = Tensor(np.array([0, 3]).astype(np.float32))                      # [a]
>>> output = vmap(test_vmap, in_axes=(0, 1, None), out_axes=1)(x, y, z)  # ([b, a],[a, b],[a]) -> [a, b]
>>> print(output)
[[-2  1  4]
 [ 8  9 10]]
tinyms.primitives.vsplit(input, indices_or_sections)[source]

Splits a tensor with two or more dimensions into multiple sub-tensors vertically according to indices_or_sections.

It is equivalent to ops.tensor_split with \(axis=0\).

Parameters:
  • input (Tensor) – A Tensor to be divided.

  • indices_or_sections (Union[int, tuple(int), list(int)]) – See argument in mindspore.ops.tensor_split().

Returns:

A list of sub-tensors.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = np.arange(9).reshape((3, 3)).astype('float32')
>>> output = ops.vsplit(Tensor(input_x), 3)
>>> print(output)
(Tensor(shape=[1, 3], dtype=Float32, value=[[ 0.00000000e+00,  1.00000000e+00,  2.00000000e+00]]),
 Tensor(shape=[1, 3], dtype=Float32, value=[[ 3.00000000e+00,  4.00000000e+00,  5.00000000e+00]]),
 Tensor(shape=[1, 3], dtype=Float32, value=[[ 6.00000000e+00,  7.00000000e+00,  8.00000000e+00]]))
tinyms.primitives.vstack(inputs)[source]

Stacks tensors in sequence vertically.

This is equivalent to concatenation along the first axis. 1-D tensors of shape \((N,)\) must first be reshaped to \((1, N)\), and are then concatenated along the first axis.

Parameters:

inputs (Union[List[Tensor], Tuple[Tensor]]) – A sequence of 1-D or 2-D tensors. The tensors must have the same shape along all but the first axis. 1-D tensors must have the same shape.

Returns:

Tensor, formed by stacking the given tensors, will be at least 2-D. The output shape is similar to the output of the numpy.vstack() function.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> from mindspore import ops
>>> x1 = np.array([3, 1, 4])
>>> x2 = np.array([1, 5, 9])
>>> out = ops.vstack([x1, x2])
>>> print(out)
[[3 1 4]
 [1 5 9]]
tinyms.primitives.where(condition, x, y)[source]

Selects elements from x or y based on condition and returns a tensor.

\[\begin{split}output_i = \begin{cases} x_i,\quad &if\ condition_i \\ y_i,\quad &otherwise \end{cases}\end{split}\]
Parameters:
  • condition (Tensor[bool]) – If True, yield x, otherwise yield y.

  • x (Union[Tensor, Scalar]) – When condition is True, values to select from.

  • y (Union[Tensor, Scalar]) – When condition is False, values to select from.

Returns:

Tensor, elements are selected from x and y.

Raises:
  • TypeError – If condition is not a Tensor.

  • TypeError – If both x and y are scalars.

  • ValueError – If condition, x and y can not broadcast to each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> a = Tensor(np.arange(4).reshape((2, 2)), mstype.float32)
>>> b = Tensor(np.ones((2, 2)), mstype.float32)
>>> condition = a < 3
>>> output = ops.where(condition, a, b)
>>> print(output)
[[0. 1.]
 [2. 1.]]
tinyms.primitives.xdivy(x, y)[source]

Divides the first input tensor by the second input tensor element-wise. Returns zero when x is zero.

Inputs of x and y comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Note

When x and y are both of datatype complex, they should be both complex64 or complex128 at the same time.

Parameters:
  • x (Union[Tensor, Number, bool]) – Tensor of datatype number.Number or bool, or it can be a bool or number.

  • y (Union[Tensor, Number, bool]) – Tensor of datatype number.Number or bool, or it can be a bool or number. x and y cannot both be bool at the same time.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If x and y are not one of the following: Tensor, Number, bool.

  • TypeError – If dtype of x and y is not in [float16, float32, float64, complex64, complex128, bool].

  • ValueError – If x could not be broadcast to a tensor with shape of y.

  • RuntimeError – If data type conversion between the Parameters x and y is required, but data type conversion of Parameter is not supported.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([2, 4, -1]), mindspore.float32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> output = ops.xdivy(x, y)
>>> print(output)
[ 1.   2.  -0.5]
tinyms.primitives.xlogy(input, other)[source]

Computes the first input tensor multiplied by the logarithm of second input tensor element-wise. Returns zero when input is zero.

\[out_i = input_{i}\ln{other_{i}}\]

Inputs of input and other comply with the implicit type conversion rules to make the data types consistent. The inputs must be two tensors or one tensor and one scalar. When the inputs are two tensors, the shapes of them could be broadcast. When the inputs are one tensor and one scalar, the scalar could only be a constant.

Warning

  • On Ascend, the data type of input and other must be float16 or float32.

Parameters:
  • input (Union[Tensor, number.Number, bool]) – The first input is a number.Number, a bool, or a Tensor whose data type is number or bool_.

  • other (Union[Tensor, number.Number, bool]) – The second input. When the first input is a Tensor, the second input can be a number.Number, a bool, or a Tensor whose data type is number or bool_. When the first input is a Scalar, the second input must be a Tensor whose data type is number or bool_.

Returns:

Tensor, the shape is the same as the one after broadcasting, and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
  • TypeError – If input and other are not a number.Number, a bool, or a Tensor.

  • TypeError – If dtype of input and other is not in [float16, float32, float64, complex64, complex128].

  • ValueError – If input could not be broadcast to a tensor with shape of other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> input = Tensor(np.array([-5, 0, 4]), mindspore.float32)
>>> other = Tensor(np.array([2, 2, 2]), mindspore.float32)
>>> output = ops.xlogy(input, other)
>>> print(output)
[-3.465736   0.        2.7725887]
tinyms.primitives.zeros(size, dtype=None)[source]

Creates a tensor filled with value 0, whose shape is described by size and whose data type is dtype.

Parameters:
  • size (Union[tuple[int], int]) – The specified shape of output tensor. Only constant positive int is allowed.

  • dtype (mindspore.dtype, optional) – The specified type of output tensor. If dtype is None, mindspore.float32 will be used. Default: None.

Returns:

Tensor, with the specified size and dtype, filled with 0.

Raises:

TypeError – If size is neither a tuple of int nor an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import ops
>>> output = ops.zeros((2, 2), mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]
tinyms.primitives.zeros_like(input, *, dtype=None)[source]

Creates a tensor filled with 0, with the same size as input, and the given dtype.

If dtype = None, the tensor will have the same dtype as input.

Parameters:

input (Tensor) – Tensor of any dimension.

Keyword Arguments:

dtype (mindspore.dtype, optional) – The specified dtype of the output tensor. If dtype is None, the dtype of the input tensor will be used. Default: None.

Returns:

Tensor, filled with 0.

Raises:

TypeError – If dtype is not a MindSpore dtype.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(4).reshape(2, 2))
>>> output = ops.zeros_like(x, dtype=mindspore.float32)
>>> print(output)
[[0. 0.]
 [0. 0.]]

tinyms.layers

Layer module contains pre-defined building blocks or computing units to construct neural networks.

The high-level components (Layers) used to construct the neural network.

class tinyms.layers.Layer(auto_prefix=True, flags=None)[source]

Base class for all neural networks.

A ‘Layer’ could be a single neural network layer, such as conv2d, relu, batch_norm, etc., or a composition of cells used to construct a network.

Note

In general, the autograd algorithm will automatically generate the implementation of the gradient function, but if the back-propagation (bprop) method is implemented, the gradient function will be replaced by the bprop. The bprop implementation will receive a Tensor dout containing the gradient of the loss w.r.t. the output, and a Tensor out containing the forward result. The bprop needs to compute the gradient of the loss w.r.t. the inputs; computing the gradient of the loss w.r.t. the Parameter variables is not supported currently. The bprop method must contain the self parameter.

Parameters:

auto_prefix (bool) – Recursively generate namespaces. Default: True.

Examples

>>> from tinyms import layers, primitives as P
>>>
>>> class MyNet(layers.Layer):
...    def __init__(self):
...        super(MyNet, self).__init__()
...        self.relu = P.ReLU()
...
...    def construct(self, x):
...        return self.relu(x)
add_flags(**flags)

Add customized attributes for cell.

This method is also called when the cell class is instantiated and the class parameter ‘flags’ is set to True.

Parameters:

flags (dict) – Network configuration information, currently it is used for the binding of network and dataset. Users can also customize network attributes by this parameter. Default: None.

add_flags_recursive(**flags)

If a cell contains child cells, this method can recursively customize attributes of all cells.

Parameters:

flags (dict) – Network configuration information, currently it is used for the binding of network and dataset. Users can also customize network attributes by this parameter. Default: None.
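
Examples

A minimal sketch of combining add_flags with get_flags; the flag name custom_flag is purely illustrative, not a flag predefined by the framework.

>>> import mindspore.nn as nn
>>> net = nn.Dense(3, 4)
>>> _ = net.add_flags(custom_flag=True)  # add_flags returns the cell itself
>>> print(net.get_flags().get('custom_flag'))  # the flag is now a self-defined attribute
True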

apply(fn)

Applies fn recursively to every subcell (as returned by .cells()) as well as self. Typical use includes initializing the parameters of a model.

Parameters:

fn (function) – function to be applied to each subcell.

Returns:

Cell, self.

Examples

>>> import mindspore.nn as nn
>>> from mindspore.common.initializer import initializer, One
>>> net = nn.SequentialCell(nn.Dense(2, 2), nn.Dense(2, 2))
>>> def func(cell):
...     if isinstance(cell, nn.Dense):
...         cell.weight.set_data(initializer(One(), cell.weight.shape, cell.weight.dtype))
>>> net.apply(func)
SequentialCell<
  (0): Dense<input_channels=2, output_channels=2, has_bias=True>
  (1): Dense<input_channels=2, output_channels=2, has_bias=True>
  >
>>> print(net[0].weight.asnumpy())
[[1. 1.]
 [1. 1.]]
auto_cast_inputs(inputs)

Auto cast inputs in mixed precision scenarios.

Parameters:

inputs (tuple) – the inputs of construct.

Returns:

Tuple, the inputs after data type cast.

auto_parallel_compile_and_run()

Whether or not to execute compile and run in ‘AUTO_PARALLEL’ or ‘SEMI_AUTO_PARALLEL’ mode.

Note

This interface is deprecated.

property bprop_debug

Get whether cell custom bprop debug is enabled.

cast_inputs(inputs, dst_type)

Cast inputs to specified type.

Parameters:
  • inputs (tuple[Tensor]) – The cell inputs.

  • dst_type (mindspore.dtype) – The specified data type.

Returns:

tuple[Tensor], the result with destination data type.

cast_param(param)

Cast parameter according to auto mix precision level in pynative mode.

This interface is currently used in the case of auto mixed precision and usually does not need to be used explicitly.

Parameters:

param (Parameter) – Parameters, the type of which should be cast.

Returns:

Parameter, the input parameter with type automatically cast.

cells()

Returns an iterator over immediate cells.

Returns:

Iteration, the immediate cells in the cell.
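
Examples

A minimal sketch of iterating over the immediate child cells of a container cell.

>>> import mindspore.nn as nn
>>> net = nn.SequentialCell(nn.Dense(2, 2), nn.ReLU())
>>> for cell in net.cells():  # yields only the immediate children, not nested cells
...     print(type(cell).__name__)
Dense
ReLU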

cells_and_names(cells=None, name_prefix='')

Returns an iterator over all cells in the network, including the cell’s name and itself.

Parameters:
  • cells (str) – Cells to iterate over. Default: None.

  • name_prefix (str) – Namespace. Default: ‘’.

Returns:

Iteration, all the child cells and corresponding names in the cell.

Examples

>>> from mindspore import nn
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv = nn.Conv2d(3, 64, 3)
...     def construct(self, x):
...         out = self.conv(x)
...         return out
>>> names = []
>>> n = Net()
>>> for m in n.cells_and_names():
...     if m[0]:
...         names.append(m[0])
check_names()

Check the names of cell parameters.

compile(*args, **kwargs)

Compile Cell as a computation graph, the input must be consistent with the input defined in construct.

Parameters:
  • args (tuple) – Args of the Cell object.

  • kwargs (dict) – Kwargs of the Cell object.

compile_and_run(*args, **kwargs)

Compile and run Cell, the input must be consistent with the input defined in construct.

Note

It is not recommended to call directly.

Parameters:
  • args (tuple) – Args of the Cell object.

  • kwargs (dict) – Kwargs of the Cell object.

Returns:

Object, the result of executing.

construct(*args, **kwargs)

Defines the computation to be performed. This method must be overridden by all subclasses.

Note

It is not supported currently that inputs contain both tuple and non-tuple types at same time.

Parameters:
  • args (tuple) – Tuple of variable parameters.

  • kwargs (dict) – Dictionary of variable keyword parameters.

Returns:

Tensor, returns the computed result.

exec_checkpoint_graph()

Executes saving checkpoint graph operation.

extend_repr()

Expand the description of Cell.

To print customized extended information, re-implement this method in your own cells.

flatten_weights(fusion_size=0)

Reset data for weight parameters so that they are using contiguous memory chunks grouped by data type.

Note

By default, parameters with the same data type will use a single contiguous memory chunk. But for some models with a huge number of parameters, splitting a large memory chunk into several smaller ones has the potential for performance gains; if this is the case, ‘fusion_size’ can be used to limit the maximum memory chunk size.

Parameters:

fusion_size (int) – Maximum memory chunk size in bytes, 0 for unlimited. Default: 0.

generate_scope()

Generate the scope for each cell object in the network.

get_flags()

Get the self_defined attributes of the cell, which can be added by add_flags method.

get_func_graph_proto()

Return graph binary proto.

get_inputs()

Returns the dynamic_inputs of a cell object in one network.

Returns:

inputs (tuple), Inputs of the Cell object.

Warning

This is an experimental API that is subject to change or deletion.

get_mixed_precision_type(self: mindspore._c_expression.Cell_) → mindspore._c_expression.MixedPrecisionType

Get mixed precision type.

get_parameters(expand=True)

Returns an iterator over cell parameters.

Yields parameters of this cell. If expand is true, yield parameters of this cell and all subcells.

Parameters:

expand (bool) – If true, yields parameters of this cell and all subcells. Otherwise, only yield parameters that are direct members of this cell. Default: True.

Returns:

Iteration, all parameters at the cell.

Examples

>>> from mindspore import nn
>>> net = nn.Dense(3, 4)
>>> parameters = []
>>> for item in net.get_parameters():
...     parameters.append(item)
get_scope()

Returns the scope of a cell object in one network.

Returns:

String, scope of the cell.

infer_param_pipeline_stage()

Infer pipeline stages of all parameters in the cell.

Note

  • If a parameter does not belong to any cell which has been set pipeline_stage, the parameter should use add_pipeline_stage to add its pipeline_stage information.

  • If a parameter P has been used by two operators in different stages “stageA” and “stageB”, the parameter P should use P.add_pipeline_stage(stageA) and P.add_pipeline_stage(stageB) to add its stage information before using infer_param_pipeline_stage.

Returns:

The params belonging to the current stage in pipeline parallel.

Raises:

RuntimeError – If there is a parameter that does not belong to any stage.

init_parameters_data(auto_parallel_mode=False)

Initialize all parameters and replace the original saved parameters in cell.

Note

trainable_params() and other similar interfaces may return different parameter instance after init_parameters_data, do not save these results.

Parameters:

auto_parallel_mode (bool) – If running in auto_parallel_mode. Default: False.

Returns:

Dict[Parameter, Parameter], returns a dict of original parameter and replaced parameter.

insert_child_to_cell(child_name, child_cell)

Adds a child cell to the current cell with a given name.

Parameters:
  • child_name (str) – Name of the child cell.

  • child_cell (Cell) – The child cell to be inserted.

Raises:
  • KeyError – Child Cell’s name is incorrect or duplicated with the other child name.

  • TypeError – If type of child_name is not str.

  • TypeError – Child Cell’s type is incorrect.
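
Examples

A minimal sketch; the child name 'relu' is illustrative.

>>> import mindspore.nn as nn
>>> net = nn.Cell()
>>> net.insert_child_to_cell('relu', nn.ReLU())  # registers the child under the given name
>>> print('relu' in net.name_cells())
True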

insert_param_to_cell(param_name, param, check_name_contain_dot=True)

Adds a parameter to the current cell.

Inserts a parameter with given name to the cell. The method is currently used in mindspore.nn.Cell.__setattr__.

Parameters:
  • param_name (str) – Name of the parameter.

  • param (Parameter) – Parameter to be inserted to the cell.

  • check_name_contain_dot (bool) – Determines whether the name input is compatible. Default: True.

Raises:
  • KeyError – If the name of parameter is null or contains dot.

  • TypeError – If the type of parameter is not Parameter.

load_parameter_slice(params)

Replace parameters with sliced tensors by parallel strategies.

Note

This interface is deprecated.

name_cells()

Returns an iterator over all immediate cells in the network.

Include name of the cell and cell itself.

Returns:

Dict, all the child cells and corresponding names in the cell.
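
Examples

A minimal sketch; a SequentialCell registers its immediate children under their index names.

>>> import mindspore.nn as nn
>>> net = nn.SequentialCell(nn.Dense(2, 2), nn.ReLU())
>>> print(list(net.name_cells().keys()))
['0', '1']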

property param_prefix

Param prefix is the prefix of current cell’s direct child parameter.

property parameter_layout_dict

parameter_layout_dict represents the tensor layout of a parameter, which is inferred by shard strategy and distributed operator information.

parameters_and_names(name_prefix='', expand=True)

Returns an iterator over cell parameters.

Includes the parameter’s name and itself.

Parameters:
  • name_prefix (str) – Namespace. Default: ‘’.

  • expand (bool) – If true, yields parameters of this cell and all subcells. Otherwise, only yield parameters that are direct members of this cell. Default: True.

Returns:

Iteration, all the names and corresponding parameters in the cell.

Examples

>>> from mindspore import nn
>>> n = nn.Dense(3, 4)
>>> names = []
>>> for m in n.parameters_and_names():
...     if m[0]:
...         names.append(m[0])
parameters_broadcast_dict(recurse=True)

Gets the parameters broadcast dictionary of this cell.

Parameters:

recurse (bool) – Whether contains the parameters of subcells. Default: True.

Returns:

OrderedDict, return parameters broadcast dictionary.

parameters_dict(recurse=True)

Gets the parameters dictionary of this cell.

Parameters:

recurse (bool) – Whether contains the parameters of subcells. Default: True.

Returns:

OrderedDict, return parameters dictionary.
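
Examples

A minimal sketch; 'weight' and 'bias' are the default parameter names created by nn.Dense.

>>> import mindspore.nn as nn
>>> net = nn.Dense(3, 4)
>>> print(list(net.parameters_dict().keys()))
['weight', 'bias']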

place(role, rank_id)

Set the label for all operators in this cell. This label tells the MindSpore compiler on which process this cell should be launched. Each process's label consists of the input role and rank_id, so by setting different labels for different cells, which will then be launched on different processes, users can launch a distributed training or prediction job.

Note

  • This method is effective only after mindspore.communication.init() is called for dynamic cluster building.

Parameters:
  • role (str) – The role of the process on which this cell will be launched. Only ‘MS_WORKER’ is supported for now.

  • rank_id (int) – The rank id of the process on which this cell will be launched. The rank is unique in processes with the same role.

Examples

>>> from mindspore import context
>>> import mindspore.nn as nn
>>> context.set_context(mode=context.GRAPH_MODE)
>>> fc = nn.Dense(2, 3)
>>> fc.place('MS_WORKER', 0)
recompute(**kwargs)

Set the cell to be recomputed. All the primitives in the cell, except the outputs, will be set to be recomputed. If a primitive set to be recomputed feeds into some backward nodes for computing gradients, then rather than storing the intermediate activation computed in the forward pass, we recompute it in the backward pass.

Note

  • If the computation involves something like randomization or global variable, the equivalence is not guaranteed currently.

  • If the recompute api of a primitive in this cell is also called, the recompute mode of this primitive is subject to the recompute api of the primitive.

  • The interface can be configured only once. Therefore, when the parent cell is configured, the child cell should not be configured.

  • The outputs of the cell are excluded from recomputation by default, which is based on our configuration experience to reduce memory footprint. If a cell has only one primitive and you want that primitive to be recomputed, use the recompute api of the primitive.

  • When memory remains after applying recomputation, configure ‘mp_comm_recompute=False’ to improve performance if necessary.

  • When memory is still insufficient after applying recomputation, configure ‘parallel_optimizer_comm_recompute=True’ to save more memory if necessary. Cells in the same fusion group should have the same parallel_optimizer_comm_recompute configuration.

Parameters:
  • mp_comm_recompute (bool) – Specifies whether the model parallel communication operators in the cell are recomputed in auto parallel or semi auto parallel mode. Default: True.

  • parallel_optimizer_comm_recompute (bool) – Specifies whether the communication operator allgathers introduced by optimizer shard are recomputed in auto parallel or semi auto parallel mode. Default: False.
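
Examples

A minimal sketch of marking a whole sub-cell for recomputation; Block is a hypothetical cell defined only for illustration.

>>> import mindspore.nn as nn
>>> class Block(nn.Cell):
...     def __init__(self):
...         super(Block, self).__init__()
...         self.dense = nn.Dense(16, 16)
...         self.relu = nn.ReLU()
...     def construct(self, x):
...         return self.relu(self.dense(x))
>>> block = Block()
>>> block.recompute()  # all primitives inside block, except its outputs, will be recomputed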

register_backward_hook(hook_fn)

Register the backward hook function.

Note

  • The register_backward_hook(hook_fn) does not work in graph mode or functions decorated with ‘jit’.

  • The ‘hook_fn’ must be defined as the following code. cell_id is the information of registered Cell object, including name and ID. grad_input is the gradient passed to the Cell. grad_output is the gradient computed and passed to the next Cell or primitive, which may be modified by returning a new output gradient.

  • The ‘hook_fn’ should have the following signature: hook_fn(cell_id, grad_input, grad_output) -> New output gradient or none.

  • The ‘hook_fn’ is executed in the python environment. To prevent failures when switching to graph mode, it is not recommended to write it in the construct function of the Cell object. In pynative mode, if the register_backward_hook function is called in the construct function of the Cell object, a hook function will be added each time the Cell object is run.

Parameters:

hook_fn (function) – Python function. Backward hook function.

Returns:

Handle, it is an instance of mindspore.common.hook_handle.HookHandle and corresponding to the hook_fn . The handle can be used to remove the added hook_fn by calling handle.remove() .

Raises:

TypeError – If the hook_fn is not a function of python.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> from mindspore.ops import GradOperation
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> def backward_hook_fn(cell_id, grad_input, grad_output):
...     print("backward input: ", grad_input)
...     print("backward output: ", grad_output)
...
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.relu = nn.ReLU()
...         self.handle = self.relu.register_backward_hook(backward_hook_fn)
...
...     def construct(self, x):
...         x = x + x
...         x = self.relu(x)
...         return x
>>> grad = GradOperation(get_all=True)
>>> net = Net()
>>> output = grad(net)(Tensor(np.ones([1]).astype(np.float32)))
backward input: (Tensor(shape=[1], dtype=Float32, value= [ 1.00000000e+00]),)
backward output: (Tensor(shape=[1], dtype=Float32, value= [ 1.00000000e+00]),)
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.00000000e+00]),)
register_forward_hook(hook_fn)

Set the Cell forward hook function.

Note

  • The register_forward_hook(hook_fn) does not work in graph mode or functions decorated with ‘jit’.

  • ‘hook_fn’ must be defined as the following code. cell_id is the information of registered Cell object, including name and ID. inputs is the forward input objects passed to the Cell. output is the forward output object of the Cell. The ‘hook_fn’ can modify the forward output object by returning new forward output object.

  • It should have the following signature: hook_fn(cell_id, inputs, output) -> new output object or none.

  • To prevent failures when switching to graph mode, it is not recommended to write it in the construct function of the Cell object. In pynative mode, if the register_forward_hook function is called in the construct function of the Cell object, a hook function will be added each time the Cell object is run.

Parameters:

hook_fn (function) – Python function. Forward hook function.

Returns:

Handle, it is an instance of mindspore.common.hook_handle.HookHandle and corresponding to the hook_fn . The handle can be used to remove the added hook_fn by calling handle.remove() .

Raises:

TypeError – If the hook_fn is not a function of python.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> from mindspore.ops import GradOperation
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> def forward_hook_fn(cell_id, inputs, output):
...     print("forward inputs: ", inputs)
...     print("forward output: ", output)
...
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.mul = nn.MatMul()
...         self.handle = self.mul.register_forward_hook(forward_hook_fn)
...
...     def construct(self, x, y):
...         x = x + x
...         x = self.mul(x, y)
...         return x
>>> grad = GradOperation(get_all=True)
>>> net = Net()
>>> output = grad(net)(Tensor(np.ones([1]).astype(np.float32)), Tensor(np.ones([1]).astype(np.float32)))
forward inputs: (Tensor(shape=[1], dtype=Float32, value= [ 2.00000000e+00]), Tensor(shape=[1],
                dtype=Float32, value= [ 1.00000000e+00]))
forward output: 2.0
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.00000000e+00]), Tensor(shape=[1], dtype=Float32,
value= [ 2.00000000e+00]))
register_forward_pre_hook(hook_fn)

Register forward pre hook function for Cell object.

Note

  • The register_forward_pre_hook(hook_fn) does not work in graph mode or functions decorated with ‘jit’.

  • ‘hook_fn’ must be defined as the following code. cell_id is the information of registered Cell object, including name and ID. inputs is the forward input objects passed to the Cell. The ‘hook_fn’ can modify the forward input objects by returning new forward input objects.

  • It should have the following signature: hook_fn(cell_id, inputs) -> new input objects or none.

  • To prevent failures when switching to graph mode, it is not recommended to write it in the construct function of the Cell object. In pynative mode, if the register_forward_pre_hook function is called in the construct function of the Cell object, a hook function will be added each time the Cell object is run.

Parameters:

hook_fn (function) – Python function. Forward pre hook function.

Returns:

Handle, it is an instance of mindspore.common.hook_handle.HookHandle and corresponding to the hook_fn . The handle can be used to remove the added hook_fn by calling handle.remove() .

Raises:

TypeError – If the hook_fn is not a function of python.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> from mindspore.ops import GradOperation
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> def forward_pre_hook_fn(cell_id, inputs):
...     print("forward inputs: ", inputs)
...
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.mul = nn.MatMul()
...         self.handle = self.mul.register_forward_pre_hook(forward_pre_hook_fn)
...
...     def construct(self, x, y):
...         x = x + x
...         x = self.mul(x, y)
...         return x
>>> grad = GradOperation(get_all=True)
>>> net = Net()
>>> output = grad(net)(Tensor(np.ones([1]).astype(np.float32)), Tensor(np.ones([1]).astype(np.float32)))
forward inputs: (Tensor(shape=[1], dtype=Float32, value= [ 2.00000000e+00]), Tensor(shape=[1],
                dtype=Float32, value= [ 1.00000000e+00]))
>>> print(output)
(Tensor(shape=[1], dtype=Float32, value= [ 2.00000000e+00]), Tensor(shape=[1], dtype=Float32,
value= [ 2.00000000e+00]))
remove_redundant_parameters()

Remove the redundant parameters.

This interface usually does not need to be used explicitly.

run_construct(cast_inputs, kwargs)

Run the construct function.

Note

This function will be removed in a future version. It is not recommended to call this function.

Parameters:
  • cast_inputs (tuple) – The input objects of Cell.

  • kwargs (dict) – Provide keyword arguments.

Returns:

output, the output object of Cell.

set_auto_parallel()

Set the cell to auto parallel mode.

Note

This interface is deprecated.

set_boost(boost_type)

In order to improve the network performance, the network can be configured to automatically enable an acceleration algorithm from the boost algorithm library.

If boost_type is not in the algorithm library, please check which algorithms the library supports.

Note

Some acceleration algorithms may affect the accuracy of the network, please choose carefully.

Parameters:

boost_type (str) – The acceleration algorithm to enable.

Returns:

Cell, the cell itself.

Raises:

ValueError – If boost_type is not in the algorithm library.

set_broadcast_flag(mode=True)

Set parameter broadcast mode for this cell.

Parameters:

mode (bool) – Specifies whether the mode is parameter broadcast. Default: True.

set_comm_fusion(fusion_type, recurse=True)

Set comm_fusion for all the parameters in this cell. Please refer to the description of mindspore.Parameter.comm_fusion.

Note

The value of the attribute will be overwritten when the function is called multiple times.

Parameters:
  • fusion_type (int) – The value of comm_fusion.

  • recurse (bool) – Whether sets the trainable parameters of subcells. Default: True.
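
Examples

A minimal sketch, assuming each Parameter exposes the comm_fusion attribute referred to above; the fusion type 2 is illustrative.

>>> import mindspore.nn as nn
>>> net = nn.Dense(3, 4)
>>> net.set_comm_fusion(2)  # sets comm_fusion on every parameter in the cell
>>> print(net.weight.comm_fusion)
2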

set_data_parallel()

For all primitive ops in this cell (including ops of cells that are wrapped by this cell), if no parallel strategy is specified, then instead of auto-searching, a data parallel strategy will be generated for those primitive ops.

Note

Only effective while using auto_parallel_context = ParallelMode.AUTO_PARALLEL under graph mode.

Examples

>>> import mindspore.nn as nn
>>> net = nn.Dense(3, 4)
>>> net.set_data_parallel()
set_grad(requires_grad=True)

Sets the cell flag for gradient. In pynative mode, this parameter specifies whether the network requires gradients. If true, the backward network needed to compute the gradients will be generated when the forward network is executed.

Parameters:

requires_grad (bool) – Specifies whether the net needs gradients. If true, the cell will construct the backward network in pynative mode. Default: True.

Returns:

Cell, the cell itself.
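
Examples

A minimal sketch of enabling gradient construction for a cell in pynative mode.

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> net = nn.Dense(3, 4)
>>> net = net.set_grad()  # returns the cell itself, so the call can be chained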

set_inputs(*inputs)

Save the inputs for the computation graph. The number of inputs should be the same as that of the dataset. When using Model for dynamic shape, please make sure that all networks and loss functions passed to the Model are configured with set_inputs. The inputs can be Tensors of either dynamic or static shape.

Parameters:

inputs (tuple) – Inputs of the Cell object.

Warning

This is an experimental API that is subject to change or deletion.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import nn, Tensor, context
>>>
>>> class reluNet(nn.Cell):
...     def __init__(self):
...         super(reluNet, self).__init__()
...         self.relu = nn.ReLU()
...     def construct(self, x):
...         return self.relu(x)
>>>
>>> net = reluNet()
>>> input_dyn = Tensor(shape=[3, None], dtype=ms.float32)
>>> net.set_inputs(input_dyn)
>>> input1 = Tensor(np.random.random([3, 10]), dtype=ms.float32)
>>> output = net(input1)
set_jit_config(jit_config)

Set jit config for cell.

Parameters:

jit_config (JitConfig) – Jit config for compile. For details, please refer to mindspore.JitConfig.

set_mixed_precision_type(self: mindspore._c_expression.Cell_, arg0: mindspore._c_expression.MixedPrecisionType) → None

Set mixed precision type.

set_parallel_input_with_inputs(*inputs)

Slice inputs tensors by parallel strategies.

Note

This interface is deprecated.

set_param_fl(push_to_server=False, pull_from_server=False, requires_aggr=True)

Set the way of parameter and server interaction.

Parameters:
  • push_to_server (bool) – Whether the parameter should be pushed to server. Default: False.

  • pull_from_server (bool) – Whether the parameter should be pulled from server. Default: False.

  • requires_aggr (bool) – Whether the parameter should be aggregated in the server. Default: True.

set_param_ps(recurse=True, init_in_server=False)

Set whether the trainable parameters are updated by parameter server and whether the trainable parameters are initialized on server.

Note

It only works when a running task is in the parameter server mode. It is only supported in graph mode.

Parameters:
  • recurse (bool) – Whether sets the trainable parameters of subcells. Default: True.

  • init_in_server (bool) – Whether trainable parameters updated by parameter server are initialized on server. Default: False.

set_train(mode=True)

Sets the cell to training mode.

The cell itself and all child cells will be set to training mode. Layers that have different constructions for training and predicting, such as BatchNorm, will distinguish between the branches by this attribute. If set to true, the training branch will be executed; otherwise, the inference branch will be.

Note

When the function Model.train() is executed, the framework will call Cell.set_train(True). When the function Model.eval() is executed, the framework will call Cell.set_train(False).

Parameters:

mode (bool) – Specifies whether the model is training. Default: True.

Returns:

Cell, the cell itself.
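
Examples

A minimal sketch showing how set_train toggles the training attribute that layers such as BatchNorm consult.

>>> import mindspore.nn as nn
>>> net = nn.BatchNorm2d(3)
>>> _ = net.set_train()  # set_train returns the cell itself
>>> print(net.training)
True
>>> _ = net.set_train(False)
>>> print(net.training)
False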

shard(in_strategy, out_strategy=None, parameter_plan=None, device='Ascend', level=0)

Defines the input and output layouts of this cell; the parallel strategies of the remaining ops will be generated by sharding propagation. In PyNative mode, use this method to specify a Cell for distributed execution in graph mode. in_strategy and out_strategy define the input and output layouts respectively. in_strategy/out_strategy should be a tuple, each element of which corresponds to the desired layout of the corresponding input/output, and None represents data parallel, as described in mindspore.ops.Primitive.shard. The parallel strategies of the remaining operators are derived from the strategies specified for the inputs and outputs.

Note

Only effective in PYNATIVE_MODE, in ParallelMode.AUTO_PARALLEL with search_mode in auto_parallel_context set as sharding_propagation. If the input contains a Parameter, its strategy should be set in in_strategy.

Parameters:
  • in_strategy (tuple) – Define the layout of inputs, each element of the tuple should be a tuple or None. Tuple defines the layout of the corresponding input and None represents a data parallel strategy.

  • out_strategy (Union[None, tuple]) – Define the layout of outputs similar with in_strategy. It is not in use right now. Default: None.

  • parameter_plan (Union[dict, None]) – Define the layout for the specified parameters. Each element in dict defines the layout of the parameter like “param_name: layout”. The key is a parameter name of type ‘str’. The value is a 1-D integer tuple, indicating the corresponding layout. If the parameter name is incorrect or the corresponding parameter has been set, the parameter setting will be ignored. Default: None.

  • device (string) – Select a certain device target. It is not in use right now. Support [“CPU”, “GPU”, “Ascend”]. Default: “Ascend”.

  • level (int) – Option for parallel strategy infer algorithm, namely the object function, maximize computation over communication ratio, maximize speed performance, minimize memory usage etc. It is not in use right now. Support [“0”, “1”, “2”]. Default: “0”.

Returns:

Cell, the cell itself.

Examples

>>> import mindspore.nn as nn
>>>
>>> class Block(nn.Cell):
...   def __init__(self):
...     super(Block, self).__init__()
...     self.dense1 = nn.Dense(10, 10)
...     self.relu = nn.ReLU()
...     self.dense2 = nn.Dense(10, 10)
...   def construct(self, x):
...     x = self.relu(self.dense2(self.relu(self.dense1(x))))
...     return x
>>>
>>> class example(nn.Cell):
...   def __init__(self):
...     super(example, self).__init__()
...     self.block1 = Block()
...     self.block2 = Block()
...     self.block2.shard(in_strategy=((2, 1),), out_strategy=(None,),
...                       parameter_plan={'self.block2.shard.dense1.weight': (4, 1)})
...   def construct(self, x):
...     x = self.block1(x)
...     x = self.block2(x)
...     return x
to_float(dst_type)

Add cast on all inputs of cell and child cells to run with certain float type.

If dst_type is mindspore.dtype.float16, all the inputs of Cell, including input, Parameter and Tensor, will be cast to float16. Please refer to the usage in source code of mindspore.amp.build_train_network().

Note

Multiple calls will overwrite.

Parameters:

dst_type (mindspore.dtype) – Transfer cell to run with dst_type. dst_type can be mstype.float16 or mstype.float32.

Returns:

Cell, the cell itself.

Raises:

ValueError – If dst_type is not mstype.float32 or mstype.float16.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> from mindspore import dtype as mstype
>>>
>>> net = nn.Conv2d(120, 240, 4, has_bias=False, weight_init='normal')
>>> net.to_float(mstype.float16)
Conv2d<input_channels=120, output_channels=240, kernel_size=(4, 4), stride=(1, 1), pad_mode=same,
padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>
trainable_params(recurse=True)

Returns all trainable parameters.

Returns a list of all trainable parameters.

Parameters:

recurse (bool) – Whether contains the trainable parameters of subcells. Default: True.

Returns:

List, the list of trainable parameters.
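
Examples

A minimal sketch; the names printed are the default parameter names created by nn.Dense.

>>> import mindspore.nn as nn
>>> net = nn.Dense(3, 4)
>>> for param in net.trainable_params():
...     print(param.name)
weight
bias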

untrainable_params(recurse=True)

Returns all untrainable parameters.

Returns a list of all untrainable parameters.

Parameters:

recurse (bool) – Whether contains the untrainable parameters of subcells. Default: True.

Returns:

List, the list of untrainable parameters.

update_cell_prefix()

Update the param_prefix of all child cells.

After being invoked, the name prefix of each child cell can be obtained through ‘_param_prefix’.

update_cell_type(cell_type)

The current cell type is updated when a quantization aware training network is encountered.

After being invoked, it can set the cell type to ‘cell_type’.

Parameters:

cell_type (str) – The type of cell to be updated, cell_type can be “quant” or “second-order”.

update_parameters_name(prefix='', recurse=True)

Adds the prefix string to the names of parameters.

Parameters:
  • prefix (str) – The prefix string. Default: ‘’.

  • recurse (bool) – Whether contains the parameters of subcells. Default: True.
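
Examples

A minimal sketch; the prefix 'backbone.' is illustrative.

>>> import mindspore.nn as nn
>>> net = nn.Dense(3, 4)
>>> net.update_parameters_name(prefix='backbone.')
>>> print([param.name for param in net.get_parameters()])
['backbone.weight', 'backbone.bias']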


class tinyms.layers.SequentialLayer(*args)[source]

Sequential layer container.

A list of Layers will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of cells can also be passed in.

Parameters:

args (Union[list, OrderedDict]) – List of subclass of Layer.

Raises:

TypeError – If the type of the argument is not list or OrderedDict.

Inputs:
  • input (Tensor) - Tensor with shape according to the first Cell in the sequence.

Outputs:

Tensor, the output Tensor with shape depending on the input and defined sequence of Layers.

Examples

>>> import tinyms as ts
>>> from tinyms.layers import SequentialLayer, Conv2d, ReLU
>>>
>>> seq_layer = SequentialLayer([Conv2d(3, 2, 3, pad_mode='valid', weight_init="ones"), ReLU()])
>>> x = ts.ones([1, 3, 4, 4])
>>> print(seq_layer(x))
[[[[27. 27.]
   [27. 27.]]
  [[27. 27.]
   [27. 27.]]]]
class tinyms.layers.LayerList(*args, **kwargs)[source]

Holds Layers in a list.

LayerList can be used like a regular Python list, support ‘__getitem__’, ‘__setitem__’, ‘__delitem__’, ‘__len__’, ‘__iter__’ and ‘__iadd__’, but layers it contains are properly registered, and will be visible by all Layer methods.

Parameters:

args (list, optional) – List of subclass of Layer.

Examples

>>> from tinyms.layers import LayerList, Conv2d, BatchNorm2d, ReLU
>>>
>>> conv = Conv2d(100, 20, 3)
>>> layers = LayerList([BatchNorm2d(20)])
>>> layers.insert(0, Conv2d(100, 20, 3))
>>> layers.append(ReLU())
>>> layers
LayerList<
  (0): Conv2d<input_channels=100, ..., bias_init=None>
  (1): BatchNorm2d<num_features=20, ..., moving_variance=Parameter (name=variance)>
  (2): ReLU<>
  >
class tinyms.layers.TimeDistributed(layer, time_axis, reshape_with_axis=None)[source]

The time distributed layer.

Time distributed is a wrapper which allows applying a layer to every temporal slice of an input. The input x should be at least 3-D. There are two cases in the implementation: when reshape_with_axis is provided, the reshape method will be chosen, which is more efficient; otherwise, the method of dividing the inputs along the time axis will be used, which is more general. For example, reshape_with_axis cannot be provided when dealing with Batch Normalization.

Parameters:
  • layer (Union[Cell, Primitive]) – The Cell or Primitive which will be wrapped.

  • time_axis (int) – The axis of time_step.

  • reshape_with_axis (int) – The axis which will be reshaped with time_axis. Default: None.

Inputs:
  • x (Tensor) - Tensor of shape \((N, T, *)\), where \(*\) means any number of additional dimensions.

Outputs:

Tensor of shape \((N, T, *)\)

Raises:

TypeError – If layer is not a Cell or Primitive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> x = Tensor(np.random.random([32, 10, 3]), mindspore.float32)
>>> dense = nn.Dense(3, 6)
>>> net = nn.TimeDistributed(dense, time_axis=1, reshape_with_axis=0)
>>> output = net(x)
>>> print(output.shape)
(32, 10, 6)
class tinyms.layers.ForwardValueAndGrad(network, weights=None, get_all=False, get_by_list=False, sens_param=False)[source]

Encapsulate training network.

Including the network and a gradient function. The resulting Cell is trained with input ‘*inputs’. The backward graph will be created in the gradient function to calculate the gradient.

Parameters:
  • network (Cell) – The training network.

  • weights (ParameterTuple) – The parameters of the training network that need to calculate the gradient. Default: None.

  • get_all (bool) – If True, get all the gradients with respect to inputs. Default: False.

  • get_by_list (bool) – If True, get all the gradients with respect to Parameter variables. If get_all and get_by_list are both False, get the gradient with respect to first input. If get_all and get_by_list are both True, get the gradients with respect to inputs and Parameter variables at the same time in the form of ((gradients with respect to inputs), (gradients with respect to parameters)). Default: False.

  • sens_param (bool) – Whether to append sensitivity (gradient with respect to output) as input. If sens_param is False, a ‘ones_like(outputs)’ sensitivity will be attached automatically. Default: False. If the sens_param is True, a sensitivity (gradient with respect to output) needs to be transferred through the input parameter.

Inputs:
  • *inputs (Tuple(Tensor…)) - Tuple of inputs with shape \((N, \ldots)\).

  • sens - A sensitivity (gradient with respect to output) as the input of backpropagation. If the network has a single output, sens is a tensor. If the network has multiple outputs, sens is a tuple of tensors.

Outputs:
  • forward value - The result of network forward running.

  • gradients (tuple(tensor)) - The gradients of network parameters and inputs.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, nn, common, ops, ParameterTuple, Parameter
>>>
>>> class Net(nn.Cell):
...    def __init__(self):
...        super(Net, self).__init__()
...        self.weight = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="weight")
...        self.matmul = ops.MatMul()
...
...    def construct(self, x):
...        out = self.matmul(x, self.weight)
...        return out
...
>>> net = Net()
>>> criterion = nn.SoftmaxCrossEntropyWithLogits()
>>> net_with_criterion = nn.WithLossCell(net, criterion)
>>> weight = ParameterTuple(net.trainable_params())
>>> train_network = nn.ForwardValueAndGrad(net_with_criterion, weights=weight, get_all=True, get_by_list=True)
>>> inputs = Tensor(np.ones([1, 2]).astype(np.float32))
>>> labels = Tensor(np.ones([1, 2]).astype(np.float32))
>>> result = train_network(inputs, labels)
>>> print(result)
 (Tensor(shape=[1], dtype=Float32, value= [ 1.38629436e+00]), ((Tensor(shape=[1, 2], dtype=Float32, value=
[[ -1.00000000e+00,  -1.00000000e+00]]), Tensor(shape=[1, 2], dtype=Float32, value=
[[ 0.00000000e+00,  0.00000000e+00]])), (Tensor(shape=[2, 2], dtype=Float32, value=
[[ -5.00000000e-01,  -5.00000000e-01],
 [ -5.00000000e-01,  -5.00000000e-01]]),)))
class tinyms.layers.TrainOneStepCell(network, optimizer, sens=1.0)[source]

Network training package class.

Wraps the network with the optimizer. The resulting Cell is trained with input ‘*inputs’. The backward graph will be created in the construct function to update the parameter. Different parallel modes are available for training.

Parameters:
  • network (Cell) – The training network. The network only supports single output.

  • optimizer (Union[Cell]) – Optimizer for updating the network parameters.

  • sens (numbers.Number) – The scaling number to be filled as the input of backpropagation. Default value is 1.0.

Inputs:
  • *inputs (Tuple(Tensor)) - Tuple of input tensors with shape \((N, \ldots)\).

Outputs:

Tensor, a tensor means the loss value, the shape of which is usually \(()\).

Raises:

TypeError – If sens is not a numbers.Number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits()
>>> optim = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> #1) Using the WithLossCell provided by MindSpore
>>> loss_net = nn.WithLossCell(net, loss_fn)
>>> train_net = nn.TrainOneStepCell(loss_net, optim)
>>>
>>> #2) Using user-defined WithLossCell
>>> class MyWithLossCell(Cell):
...    def __init__(self, backbone, loss_fn):
...        super(MyWithLossCell, self).__init__(auto_prefix=False)
...        self._backbone = backbone
...        self._loss_fn = loss_fn
...
...    def construct(self, x, y, label):
...        out = self._backbone(x, y)
...        return self._loss_fn(out, label)
...
...    @property
...    def backbone_network(self):
...        return self._backbone
...
>>> loss_net = MyWithLossCell(net, loss_fn)
>>> train_net = nn.TrainOneStepCell(loss_net, optim)
class tinyms.layers.WithLossCell(backbone, loss_fn)[source]

Cell with loss function.

Wraps the network with loss function. This Cell accepts data and label as inputs and the computed loss will be returned.

Parameters:
  • backbone (Cell) – The backbone network to wrap.

  • loss_fn (Cell) – The loss function used to compute loss.

Inputs:
  • data (Tensor) - Tensor of shape \((N, \ldots)\).

  • label (Tensor) - Tensor of shape \((N, \ldots)\).

Outputs:

Tensor, a tensor means the loss value, the shape of which is usually \(()\).

Raises:

TypeError – If dtype of data or label is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = Net()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> net_with_criterion = nn.WithLossCell(net, loss_fn)
>>>
>>> batch_size = 2
>>> data = Tensor(np.ones([batch_size, 1, 32, 32]).astype(np.float32) * 0.01)
>>> label = Tensor(np.ones([batch_size, 10]).astype(np.float32))
>>>
>>> output_data = net_with_criterion(data, label)
property backbone_network

Get the backbone network.

Returns:

Cell, the backbone network.

class tinyms.layers.WithGradCell(network, loss_fn=None, sens=None)[source]

Cell that returns the gradients.

Wraps the network with a backward cell to compute gradients. A network with a loss function is necessary as an argument. If the loss function is None, the network must be a wrapper of a network and a loss function. This Cell accepts ‘*inputs’ as inputs and returns gradients for each trainable parameter.

Note

Run in PyNative mode.

Parameters:
  • network (Cell) – The target network to wrap. The network only supports single output.

  • loss_fn (Cell) – Primitive loss function used to compute gradients. Default: None.

  • sens (Union[None, Tensor, Scalar, Tuple ...]) – The sensitivity for backpropagation; the type and shape must be the same as the network output. If None, ones with the same type and shape as the output value will be filled in. Default: None.

Inputs:
  • *inputs (Tuple(Tensor)) - Tuple of input tensors with shape \((N, \ldots)\).

Outputs:

list, a list of Tensors with identical shapes as trainable weights.

Raises:

TypeError – If sens is not one of None, Tensor, Scalar or Tuple.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # For a defined network Net without loss function
>>> net = Net()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits()
>>> grad_net = nn.WithGradCell(net, loss_fn)
>>>
>>> # For a network wrapped with loss function
>>> net = Net()
>>> net_with_criterion = nn.WithLossCell(net, loss_fn)
>>> grad_net = nn.WithGradCell(net_with_criterion)
class tinyms.layers.MicroBatchInterleaved(network, interleave_num=2)[source]

This function splits the input at the 0th dimension into interleave_num pieces and then performs the computation of the wrapped cell. Application scenario: when there is model parallelism in semi-automatic mode in the network, while the first slice of data is computing forward, the second slice of data can execute the communication operators at the same time, achieving performance acceleration through the concurrency of communication and computation.

Note

The output of the input network must be a single tensor.

Parameters:
  • network (Cell) – The target network to wrap.

  • interleave_num (int, optional) – The number of pieces to split the batch into. Default: 2.

Inputs:

tuple[Tensor]. The same as the inputs of the network.

Outputs:

Tensor. The output of the wrapped network.

Supported Platforms:

Ascend GPU

Examples

>>> net = Net()
>>> net = MicroBatchInterleaved(net, 2)
class tinyms.layers.PipelineCell(network, micro_size)[source]

Wrap the network with Micro Batch.

Note

micro_size must be greater than or equal to the number of pipeline stages.

Parameters:
  • network (Cell) – The target network to wrap.

  • micro_size (int) – MicroBatch size.

Supported Platforms:

Ascend GPU

Examples

>>> net = Net()
>>> net = PipelineCell(net, 4)
class tinyms.layers.WithEvalCell(network, loss_fn, add_cast_fp32=False)[source]

Wraps the forward network with the loss function.

It returns loss, forward output and label to calculate the metrics.

Parameters:
  • network (Cell) – The forward network.

  • loss_fn (Cell) – The loss function.

  • add_cast_fp32 (bool) – Whether to adjust the data type to float32. Default: False.

Inputs:
  • data (Tensor) - Tensor of shape \((N, \ldots)\).

  • label (Tensor) - Tensor of shape \((N, \ldots)\).

Outputs:

Tuple(Tensor), containing a scalar loss Tensor, a network output Tensor of shape \((N, \ldots)\) and a label Tensor of shape \((N, \ldots)\).

Raises:

TypeError – If add_cast_fp32 is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # Forward network without loss function
>>> net = Net()
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits()
>>> eval_net = nn.WithEvalCell(net, loss_fn)
class tinyms.layers.GetNextSingleOp(dataset_types, dataset_shapes, queue_name)[source]

Cell to run for getting the next data from the dataset queue.

For detailed information, refer to mindspore.ops.GetNext.

Parameters:
  • dataset_types (list[mindspore.dtype]) – The types of dataset.

  • dataset_shapes (list[tuple[int]]) – The shapes of dataset.

  • queue_name (str) – Queue name to fetch the data.

Outputs:

tuple[Tensor], the data get from Dataset.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> from mindspore import ops, nn
>>> from mindspore import dataset as ds
>>> from mindspore.common import dtype as mstype
>>>
>>> data_path =  "/path/to/MNIST_Data/train/"
>>> train_dataset = ds.MnistDataset(data_path, num_samples=10)
>>> dataset_helper = mindspore.DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> dataset = dataset_helper.iter.dataset
>>> dataset_types, dataset_shapes = dataset_helper.types_shapes()
>>> queue_name = dataset.__transfer_dataset__.queue_name
>>> get_next_single_op_net = nn.GetNextSingleOp(dataset_types, dataset_shapes, queue_name)
>>> data, label = get_next_single_op_net()
>>> relu = ops.ReLU()
>>> result = relu(data.astype(mstype.float32))
>>> print(result.shape)
(28, 28, 1)
class tinyms.layers.TrainOneStepWithLossScaleCell(network, optimizer, scale_sense)[source]

Network training with loss scaling.

This is a training step with loss scaling. It takes a network, an optimizer and a scale-update Cell (or a Tensor) as arguments. The loss scale value can be updated on either the host side or the device side: to update it on the host side, pass a value of Tensor type as scale_sense; otherwise, pass a Cell instance that updates the loss scale as scale_sense.

Parameters:
  • network (Cell) – The training network. The network only supports single output.

  • optimizer (Cell) – Optimizer for updating the network parameters.

  • scale_sense (Union[Tensor, Cell]) – If this value is a Cell, it will be called by TrainOneStepWithLossScaleCell to update loss scale. If this value is a Tensor, the loss scale can be modified by set_sense_scale, the shape should be \(()\) or \((1,)\).

Inputs:
  • *inputs (Tuple(Tensor)) - Tuple of input tensors with shape \((N, \ldots)\).

Outputs:

Tuple of 3 Tensor, the loss, overflow flag and current loss scale value.

  • loss (Tensor) - A scalar, the loss value.

  • overflow (Tensor) - A scalar, whether an overflow occurred; the type is bool.

  • loss scale (Tensor) - The loss scale value, the shape is \(()\) or \((1,)\).

Raises:
  • TypeError – If scale_sense is neither Cell nor Tensor.

  • ValueError – If shape of scale_sense is neither \((1,)\) nor \(()\).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter, nn, ops
>>> from mindspore import dtype as mstype
>>>
>>> class Net(nn.Cell):
...     def __init__(self, in_features, out_features):
...         super(Net, self).__init__()
...         self.weight = Parameter(Tensor(np.ones([in_features, out_features]).astype(np.float32)),
...                                 name='weight')
...         self.matmul = ops.MatMul()
...
...     def construct(self, x):
...         output = self.matmul(x, self.weight)
...         return output
...
>>> size, in_features, out_features = 16, 16, 10
>>> #1) when the type of scale_sense is Cell:
>>> net = Net(in_features, out_features)
>>> loss = nn.MSELoss()
>>> optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = nn.WithLossCell(net, loss)
>>> manager = nn.DynamicLossScaleUpdateCell(loss_scale_value=2**12, scale_factor=2, scale_window=1000)
>>> train_network = nn.TrainOneStepWithLossScaleCell(net_with_loss, optimizer, scale_sense=manager)
>>> input = Tensor(np.ones([out_features, in_features]), mindspore.float32)
>>> labels = Tensor(np.ones([out_features,]), mindspore.float32)
>>> output = train_network(input, labels)
>>>
>>> #2) when the type of scale_sense is Tensor:
>>> net = Net(in_features, out_features)
>>> loss = nn.MSELoss()
>>> optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = nn.WithLossCell(net, loss)
>>> inputs = Tensor(np.ones([size, in_features]).astype(np.float32))
>>> label = Tensor(np.zeros([size, out_features]).astype(np.float32))
>>> scaling_sens = Tensor([1024], dtype=mstype.float32)
>>> train_network = nn.TrainOneStepWithLossScaleCell(net_with_loss, optimizer, scale_sense=scaling_sens)
>>> output = train_network(inputs, label)
>>>
>>> # update scaling sens and train the network
>>> scaling_sens = Tensor([1], dtype=mstype.float32)
>>> train_network.set_sense_scale(scaling_sens)
>>> output = train_network(inputs, label)
get_overflow_status(status, compute_output)[source]

Get floating-point overflow status.

Gets the overflow result after the target computation has executed. A user-defined training network based on this class can also call this interface to handle overflow.

Parameters:
  • status (object) – To control the execution sequence with start_overflow_check, it should be set as the first output of start_overflow_check.

  • compute_output – Overflow detection should be performed in a certain computation process. Set compute_output as the output of the computation process.

Returns:

bool, whether the overflow occurs or not.

process_loss_scale(overflow)[source]

Calculates the loss scale according to the overflow status.

A user-defined training network based on this class can also call this interface to handle overflow.

Parameters:

overflow (bool) – Whether the overflow occurs or not.

Returns:

bool, the input overflow value.

set_sense_scale(sens)[source]

If scale_sense was provided as a Tensor, this function can be called to reassign its value.

Parameters:

sens (Tensor) – The new sense value, whose shape and type must be the same as the original scale_sense.

start_overflow_check(pre_cond, compute_input)[source]

Start floating-point overflow detection. Create and clear the overflow detection state.

Specify the arguments pre_cond and compute_input to make sure that the overflow status is cleared at the right time. For example, if state clearing must happen after the loss calculation and overflow must then be detected during gradient calculation, pre_cond should be the output of the loss function and compute_input should be the input of the gradient-computing function. A user-defined training network based on this class can also call this interface to handle overflow.

Parameters:
  • pre_cond (Tensor) – A precondition for starting overflow detection. It determines the execution order of overflow state clearing and the preceding operations, ensuring that start_overflow_check clears the status only after the precondition has finished.

  • compute_input (object) – The input of subsequent process. Overflow detection should be performed on a certain computation. Set compute_input as the input of the computation, to ensure overflow status is cleared before executing the computation.

Returns:

Tuple[object, object]. The first output is used to control the execution sequence: it ensures that start_overflow_check executes before get_overflow_status even after compilation optimization, and it should be passed as the first input of get_overflow_status. The second output is the same as compute_input; it is likewise used to control the execution sequence, ensuring that the overflow flag has been cleared when the function returns.
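The helpers above are typically wired together inside a custom construct. The following is a minimal sketch under stated assumptions, not the library's exact implementation: gradient unscaling and the distributed gradient reducer are omitted, and self.network, self.weights, self.grad, self.optimizer and self.scale_sense are attributes inherited from TrainOneStepWithLossScaleCell:

>>> from mindspore import nn, ops
>>>
>>> class CustomTrainOneStepCell(nn.TrainOneStepWithLossScaleCell):
...     def construct(self, *inputs):
...         loss = self.network(*inputs)
...         scaling_sens = self.scale_sense
...         # clear the overflow state once the loss has been computed
...         status, scaling_sens = self.start_overflow_check(loss, scaling_sens)
...         sens = ops.ones_like(loss) * ops.Cast()(scaling_sens, ops.DType()(loss))
...         grads = self.grad(self.network, self.weights)(*inputs, sens)
...         # detect any overflow produced while computing the gradients
...         cond = self.get_overflow_status(status, grads)
...         overflow = self.process_loss_scale(cond)
...         # apply the update only when no overflow occurred
...         if not overflow:
...             self.optimizer(grads)
...         return loss, cond, scaling_sens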

class tinyms.layers.DistributedGradReducer(parameters, mean=None, degree=None, fusion_type=1, group='hccl_world_group')[source]

A distributed gradient reducer.

It aggregates the gradients across all devices by using AllReduce in data parallel mode.

Parameters:
  • parameters (list) – the parameters to be updated.

  • mean (bool) – When mean is True, the mean coefficient (degree) is applied to the gradients. When not specified, the gradients_mean setting in auto_parallel_context is used. Default: None.

  • degree (int) – The mean coefficient. Usually it equals the number of devices. Default: None.

  • fusion_type (int) – The type of all reduce fusion. Default: 1.

  • group (str) – The communication group to work on. Normally, the group should be created by create_group, otherwise, using the default group. Default: GlobalComm.WORLD_COMM_GROUP.

Raises:

ValueError – If degree is not an int or less than 0.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi. Please see the GPU tutorial for more details.

This example should be run with multiple devices.

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.communication import init
>>> from mindspore import ops
>>> from mindspore import Parameter, Tensor
>>> from mindspore import nn
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> ms.reset_auto_parallel_context()
>>> ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL)
>>>
>>> class TrainingWrapper(nn.Cell):
...     def __init__(self, network, optimizer, sens=1.0):
...         super(TrainingWrapper, self).__init__(auto_prefix=False)
...         self.network = network
...         self.network.add_flags(defer_inline=True)
...         self.weights = optimizer.parameters
...         self.optimizer = optimizer
...         self.grad = ops.GradOperation(get_by_list=True, sens_param=True)
...         self.sens = sens
...         self.reducer_flag = False
...         self.grad_reducer = None
...         self.parallel_mode = ms.get_auto_parallel_context("parallel_mode")
...         self.depend = ops.Depend()
...         if self.parallel_mode in [ms.ParallelMode.DATA_PARALLEL, ms.ParallelMode.HYBRID_PARALLEL]:
...             self.reducer_flag = True
...         if self.reducer_flag:
...             mean = ms.get_auto_parallel_context("gradients_mean")
...             degree = ms.get_auto_parallel_context("device_num")
...             self.grad_reducer = nn.DistributedGradReducer(optimizer.parameters, mean, degree)
...
...     def construct(self, *args):
...         weights = self.weights
...         loss = self.network(*args)
...         sens = ops.Fill()(ops.DType()(loss), ops.Shape()(loss), self.sens)
...         grads = self.grad(self.network, weights)(*args, sens)
...         if self.reducer_flag:
...             # apply grad reducer on grads
...             grads = self.grad_reducer(grads)
...         return self.depend(loss, self.optimizer(grads))
>>>
>>> class Net(nn.Cell):
...     def __init__(self, in_features, out_features):
...         super(Net, self).__init__()
...         self.weight = Parameter(Tensor(np.ones([in_features, out_features]).astype(np.float32)),
...                                 name='weight')
...         self.matmul = ops.MatMul()
...
...     def construct(self, x):
...         output = self.matmul(x, self.weight)
...         return output
>>>
>>> size, in_features, out_features = 16, 16, 10
>>> network = Net(in_features, out_features)
>>> loss = nn.MSELoss()
>>> net_with_loss = nn.WithLossCell(network, loss)
>>> optimizer = nn.Momentum(net_with_loss.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> train_cell = TrainingWrapper(net_with_loss, optimizer)
>>> inputs = Tensor(np.ones([size, in_features]).astype(np.float32))
>>> label = Tensor(np.zeros([size, out_features]).astype(np.float32))
>>> grads = train_cell(inputs, label)
>>> print(grads)
256.0
construct(grads)[source]

Under certain circumstances, the data precision of grads could be mixed with float16 and float32. Thus, the result of AllReduce is unreliable. To solve the problem, grads must be cast to float32 before AllReduce, and cast back after the operation.

Parameters:

grads (Union[Tensor, tuple[Tensor]]) – The gradient tensor or tuple before operation.

Returns:

new_grads (Union[Tensor, tuple[Tensor]]), the gradient tensor or tuple after operation.

class tinyms.layers.ParameterUpdate(param)[source]

Cell that updates a parameter.

With this Cell, one can manually update param with the input Tensor.

Parameters:

param (Parameter) – The parameter to be updated manually.

Inputs:
  • x (Tensor) - A tensor whose shape and type are the same with param.

Outputs:

Tensor, the updated value.

Raises:

KeyError – If parameter with the specified name does not exist.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>> network = nn.Dense(3, 4)
>>> param = network.parameters_dict()['weight']
>>> update = nn.ParameterUpdate(param)
>>> update.phase = "update_param"
>>> weight = Tensor(np.arange(12).reshape((4, 3)), mindspore.float32)
>>> output = update(weight)
>>> print(output)
[[ 0.  1.  2.]
 [ 3.  4.  5.]
 [ 6.  7.  8.]
 [ 9. 10. 11.]]
class tinyms.layers.DynamicLossScaleUpdateCell(loss_scale_value, scale_factor, scale_window)[source]

Dynamic Loss scale update cell.

For loss-scaling training, the initial loss scaling value is set to loss_scale_value. In each training step, the loss scale is updated to loss_scale / scale_factor when an overflow occurs, and increased to loss_scale * scale_factor when no overflow has occurred for scale_window consecutive steps.

get_update_cell method of mindspore.amp.DynamicLossScaleManager will return this class. It will be called by mindspore.nn.TrainOneStepWithLossScaleCell during training to update loss scale.
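The update rule above can be simulated in plain Python. The following is an illustrative sketch only, not the library's implementation; in particular, clamping the scale to a floor of 1 after an overflow is an assumption of this sketch:

>>> # hypothetical simulation of the dynamic loss-scale update rule
>>> scale_factor, scale_window = 2, 1000
>>> def step(loss_scale, good_steps, overflow):
...     if overflow:
...         # assumed floor of 1.0; see the note above
...         return max(loss_scale / scale_factor, 1.0), 0
...     good_steps += 1
...     if good_steps == scale_window:
...         return loss_scale * scale_factor, 0
...     return loss_scale, good_steps
...
>>> step(4096.0, 0, True)      # an overflow halves the scale
(2048.0, 0)
>>> step(4096.0, 999, False)   # the 1000th clean step doubles the scale
(8192.0, 0)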

Parameters:
  • loss_scale_value (float) – Initializes loss scale.

  • scale_factor (int) – Coefficient of increase and decrease.

  • scale_window (int) – Maximum continuous training steps that do not have overflow to increase the loss scale.

Inputs:
  • loss_scale (Tensor) - The loss scale value during training with shape \(()\).

  • overflow (bool) - Whether the overflow occurs or not.

Outputs:

bool, the input overflow.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter, nn
>>> import mindspore.ops as ops
>>>
>>> class Net(nn.Cell):
...     def __init__(self, in_features, out_features):
...         super(Net, self).__init__()
...         self.weight = Parameter(Tensor(np.ones([in_features, out_features]).astype(np.float32)),
...                                 name='weight')
...         self.matmul = ops.MatMul()
...
...     def construct(self, x):
...         output = self.matmul(x, self.weight)
...         return output
...
>>> in_features, out_features = 16, 10
>>> net = Net(in_features, out_features)
>>> loss = nn.MSELoss()
>>> optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = nn.WithLossCell(net, loss)
>>> manager = nn.DynamicLossScaleUpdateCell(loss_scale_value=2**12, scale_factor=2, scale_window=1000)
>>> train_network = nn.TrainOneStepWithLossScaleCell(net_with_loss, optimizer, scale_sense=manager)
>>> input = Tensor(np.ones([out_features, in_features]), mindspore.float32)
>>> labels = Tensor(np.ones([out_features,]), mindspore.float32)
>>> output = train_network(input, labels)
get_loss_scale()[source]

Get Loss Scale value.

Returns:

float, the loss scale value.

class tinyms.layers.FixedLossScaleUpdateCell(loss_scale_value)[source]

Update cell with fixed loss scaling value.

get_update_cell method of mindspore.amp.FixedLossScaleManager will return this class. It will be called by mindspore.nn.TrainOneStepWithLossScaleCell during training.

Parameters:

loss_scale_value (float) – Initializes loss scale.

Inputs:
  • loss_scale (Tensor) - The loss scale value during training with shape \(()\), it is ignored in this class.

  • overflow (bool) - Whether the overflow occurs or not.

Outputs:

bool, the input overflow.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, Parameter, nn, ops
>>>
>>> class Net(nn.Cell):
...     def __init__(self, in_features, out_features):
...         super(Net, self).__init__()
...         self.weight = Parameter(Tensor(np.ones([in_features, out_features]).astype(np.float32)),
...                                 name='weight')
...         self.matmul = ops.MatMul()
...
...     def construct(self, x):
...         output = self.matmul(x, self.weight)
...         return output
...
>>> in_features, out_features = 16, 10
>>> net = Net(in_features, out_features)
>>> loss = nn.MSELoss()
>>> optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> net_with_loss = nn.WithLossCell(net, loss)
>>> manager = nn.FixedLossScaleUpdateCell(loss_scale_value=2**12)
>>> train_network = nn.TrainOneStepWithLossScaleCell(net_with_loss, optimizer, scale_sense=manager)
>>> input = Tensor(np.ones([out_features, in_features]), mindspore.float32)
>>> labels = Tensor(np.ones([out_features,]), mindspore.float32)
>>> output = train_network(input, labels)
get_loss_scale()[source]

Get Loss Scale value.

Returns:

float, the loss scale value.

class tinyms.layers.VirtualDatasetCellTriple(backbone)[source]

Wrap the network with virtual dataset to convert data parallel layout to model parallel layout.

VirtualDatasetCellTriple is a virtual primitive; it does not exist in the final executed graph. The inputs and outputs of VirtualDatasetCellTriple are distributed in a data parallel pattern, and tensor redistribution primitives are inserted dynamically during graph compilation.

Note

Only used in semi-auto parallel and auto parallel mode. It takes three inputs, in contrast to the two inputs of _VirtualDatasetCell.

Parameters:

backbone (Cell) – The target network to wrap.

Examples

>>> net = Net()
>>> net = VirtualDatasetCellTriple(net)
class tinyms.layers.Softmin(axis=-1)[source]

Softmin activation function. It is the generalization of the binary-classification function mindspore.nn.Sigmoid to multiple classes, and its purpose is to present multi-class results in the form of probabilities.

Calculates the exponential of the negated input elements along the given axis, then normalizes the results to lie in the range [0, 1] and sum to 1.

Softmin is defined as:

\[\text{softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_{j=0}^{n-1}\exp(-x_j)},\]

where \(x_{i}\) is the \(i\)-th slice in the given dimension of the input Tensor.

Parameters:

axis (Union[int, tuple[int]]) – The axis to apply Softmin operation, if the dimension of input x is x.ndim, the range of axis is [-x.ndim, x.ndim). -1 means the last dimension. Default: -1.

Inputs:
  • x (Tensor) - Tensor for computing Softmin functions with data type of float16 or float32.

Outputs:

Tensor, which has the same type and shape as x with values in the range [0,1].

Raises:
  • TypeError – If axis is neither an int nor a tuple.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in the range [-x.ndim, x.ndim).

Supported Platforms:

Ascend GPU CPU

Examples

>>> # axis = -1(default), and the sum of return value is 1.0.
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> softmin = nn.Softmin()
>>> output = softmin(x)
>>> print(output)
[0.2341  0.636  0.0862  0.01165  0.03168 ]
>>> assert(1.0 == output.sum())
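Softmin is equivalent to Softmax applied to the negated input, which gives a quick illustrative cross-check (reusing x and output from above; nn.Softmax is documented next on this page, and numpy as np is assumed to be imported as in the other examples):

>>> softmax = nn.Softmax()
>>> # softmin(x) and softmax(-x) agree element-wise
>>> print(np.allclose(output.asnumpy(), softmax(-x).asnumpy()))
True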
class tinyms.layers.Softmax(axis=-1)[source]

Softmax activation function. It is the generalization of the binary-classification function mindspore.nn.Sigmoid to multiple classes, and its purpose is to present multi-class results in the form of probabilities.

Calculates the exponential of the input elements along the given axis, then normalizes the results to lie in the range [0, 1] and sum to 1.

Softmax is defined as:

\[\text{softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_{j=0}^{n-1}\exp(x_j)},\]

where \(x_{i}\) is the \(i\)-th slice in the given dimension of the input Tensor.

Parameters:

axis (Union[int, tuple[int]]) – The axis to apply Softmax operation, if the dimension of input x is x.ndim, the range of axis is [-x.ndim, x.ndim), -1 means the last dimension. Default: -1.

Inputs:
  • x (Tensor) - The input of Softmax with data type of float16 or float32.

Outputs:

Tensor, which has the same type and shape as x, with values in the range [0, 1].

Raises:
  • TypeError – If axis is neither an int nor a tuple.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is a tuple whose length is less than 1.

  • ValueError – If axis is a tuple whose elements are not all in the range [-x.ndim, x.ndim).

Supported Platforms:

Ascend GPU CPU

Examples

>>> # axis = -1(default), and the sum of return value is 1.0.
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> softmax = nn.Softmax()
>>> output = softmax(x)
>>> print(output)
[0.03168 0.01166 0.0861  0.636   0.2341 ]
>>> assert(1.0 == output.sum())
class tinyms.layers.Softmax2d[source]

Softmax function applied to 2D feature data.

Applies Softmax to each location \((c, h, w)\) with an input Tensor of shape \((C, H, W)\) .

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\) or \((C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, which has the same type and shape as x, with values in the range [0, 1].

Raises:
  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If data_format is neither ‘NCHW’ nor ‘CHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[0.1, 0.2]], [[0.3, 0.4]], [[0.6, 0.5]]]]), mindspore.float32)
>>> softmax2d = nn.Softmax2d()
>>> output = softmax2d(x)
>>> print(output)
[[[[0.258, 0.28]], [[0.316, 0.342]], [[0.426, 0.378]]]]
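For an \((N, C, H, W)\) input, Softmax2d is equivalent to applying Softmax along the channel axis; an illustrative cross-check reusing x and output from the example above (numpy as np assumed imported as in the other examples):

>>> softmax_channel = nn.Softmax(axis=-3)
>>> print(np.allclose(output.asnumpy(), softmax_channel(x).asnumpy()))
True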
class tinyms.layers.LogSoftmax(axis=-1)[source]

Applies the LogSoftmax function to n-dimensional input tensor.

The input is transformed by the Softmax function and then by the log function to lie in the range [-inf, 0).

Logsoftmax is defined as:

\[\text{logsoftmax}(x_i) = \log \left(\frac{\exp(x_i)}{\sum_{j=0}^{n-1} \exp(x_j)}\right),\]
Parameters:

axis (int) – The axis to apply LogSoftmax operation, -1 means the last dimension. Default: -1.

Inputs:
  • x (Tensor) - The input of LogSoftmax, with float16 or float32 data type.

Outputs:

Tensor, which has the same type and shape as x, with output values in the range [-inf, 0).

Raises:
  • TypeError – If axis is not an int.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If axis is not in range [-len(x), len(x)).

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> log_softmax = nn.LogSoftmax()
>>> output = log_softmax(x)
>>> print(output)
[[-5.00672150e+00 -6.72150636e-03 -1.20067215e+01]
 [-7.00091219e+00 -1.40009127e+01 -9.12250078e-04]]
class tinyms.layers.ReLU[source]

Rectified Linear Unit activation function.

\[\text{ReLU}(x) = (x)^+ = \max(0, x),\]

It returns element-wise \(\max(0, x)\). In other words, negative values are suppressed to zero, while the active (positive) values remain unchanged.

A plot of the ReLU function is available in the MindSpore documentation.

Inputs:
  • x (Tensor) - The input of ReLU is a Tensor of any dimension. The data type is number .

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is not a number.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 2, -3, 2, -1]), mindspore.float16)
>>> relu = nn.ReLU()
>>> output = relu(x)
>>> print(output)
[0. 2. 0. 2. 0.]
class tinyms.layers.ReLU6[source]

Compute ReLU6 activation function.

ReLU6 is similar to ReLU but with an upper limit of 6: inputs greater than 6 are clipped to 6. It computes element-wise as

\[Y = \min(\max(0, x), 6).\]

The input is a Tensor of any valid shape.

Inputs:
  • x (Tensor) - The input of ReLU6 with data type of float16 or float32.

Outputs:

Tensor, which has the same type as x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> relu6 = nn.ReLU6()
>>> output = relu6(x)
>>> print(output)
[0. 0. 0. 2. 1.]
class tinyms.layers.RReLU(lower=0.125, upper=0.3333333333333333)[source]

Randomized Leaky ReLU activation function.

The activation function is defined as:

\[\text{RReLU}(x_{ji}) = \begin{cases}x_{ji}, &\text{if } x_{ji} \geq 0; \cr {\alpha_{ji}} * x_{ji}, &\text{otherwise.}\end{cases}\]

where \(\alpha_{ji}\) ~ \(U(l, u)\), \(l \le u\).

Applies the RReLU function element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network.

Parameters:
  • lower (Union[int, float]) – Lower bound of the uniform distribution from which the slope for x < 0 is sampled. Default: 1/8.

  • upper (Union[int, float]) – Upper bound of the uniform distribution from which the slope for x < 0 is sampled. Default: 1/3.

Inputs:
  • x (Tensor) - The input of RReLU is a Tensor of any dimension.

Outputs:

Tensor, after RReLU, has the same type and shape as the x.

Raises:
  • TypeError – If lower is not a float or an int.

  • TypeError – If upper is not a float or an int.

  • TypeError – If x is not a Tensor.

  • TypeError – If x is not a Tensor of mindspore.float16 or mindspore.float32.

  • ValueError – If lower is greater than upper.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.array([[-1.0, 4.0], [2.0, 0]]), mindspore.float32)
>>> r_relu = nn.RReLU()
>>> output = r_relu(x)
>>> print(output)
[[-0.31465699  4.        ]
 [ 2.          0.        ]]
class tinyms.layers.SeLU[source]

Activation function SeLU (Scaled exponential Linear Unit).

Refer to mindspore.ops.selu() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> selu = nn.SeLU()
>>> output = selu(input_x)
>>> print(output)
[[-1.1113307 4.202804 -1.7575096]
[ 2.101402 -1.7462534 9.456309 ]]
class tinyms.layers.SiLU[source]

Sigmoid Linear Unit activation function.

Applies the sigmoid linear unit function element-wise.

\[\text{SiLU}(x) = x * \sigma(x),\]

where \(x_i\) is an element of the input and \(\sigma(x)\) is the Sigmoid function.

\[\text{sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)},\]

A plot of the SiLU function is available in the MindSpore documentation.

Inputs:
  • x (Tensor) - Input with the data type float16 or float32.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, 2, -3, 2, -1]), mindspore.float16)
>>> silu = nn.SiLU()
>>> output = silu(x)
>>> print(output)
[-0.269  1.762  -0.1423  1.762  -0.269]
class tinyms.layers.Tanh[source]

Applies the Tanh function element-wise, returning a new tensor with the hyperbolic tangent of the elements of the input. The input is a Tensor with any valid shape.

Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • x (Tensor) - Tensor of any dimension, input with data type of float16 or float32.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 2, 1]), mindspore.float16)
>>> tanh = nn.Tanh()
>>> output = tanh(x)
>>> print(output)
[0.7617 0.964  0.995  0.964  0.7617]
class tinyms.layers.Tanhshrink[source]

Tanhshrink activation function.

The Tanhshrink function is applied element-wise and returns a new tensor.

Tanhshrink is defined as:

\[tanhshrink(x_i) =x_i- \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = x_i-\frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • x (Tensor) - Tensor of any dimension.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> import numpy as np
>>> x = Tensor(np.array([1, 2, 3, 2, 1]), ms.float16)
>>> tanhshrink = nn.Tanhshrink()
>>> output = tanhshrink(x)
>>> print(output)
[0.2383 1.036  2.004  1.036  0.2383]
class tinyms.layers.Hardtanh(min_val=-1.0, max_val=1.0)[source]

Applies the Hardtanh function element-wise. The activation function is defined as:

\[\begin{split}\text{Hardtanh}(x) = \begin{cases} 1, & \text{ if } x > 1; \\ -1, & \text{ if } x < -1; \\ x, & \text{ otherwise. } \end{cases}\end{split}\]

Linear region range \([-1, 1]\) can be adjusted using min_val and max_val.

Note

On Ascend, the float16 data type may cause accuracy problems.

Parameters:
  • min_val (Union[int, float]) – Minimum value of the linear region range. Default: -1.0.

  • max_val (Union[int, float]) – Maximum value of the linear region range. Default: 1.0.

Inputs:
  • x (Tensor) - Input Tensor with data type of float16 or float32. On CPU and Ascend support dimension 0-7D. On GPU support dimension 0-4D.

Outputs:

Tensor, with the same dtype and shape as x.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

  • TypeError – If dtype of min_val is neither float nor int.

  • TypeError – If dtype of max_val is neither float nor int.

  • ValueError – If min_val is not less than max_val.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> hardtanh = nn.Hardtanh(min_val=-1.0, max_val=1.0)
>>> output = hardtanh(x)
>>> print(output)
[-1. -1.  0.  1.  1.]
class tinyms.layers.GELU(approximate=True)[source]

Gaussian error linear unit activation function.

Applies GELU function to each element of the input. The input is a Tensor with any valid shape.

GELU is defined as:

\[GELU(x_i) = x_i*P(X < x_i),\]

where \(P\) is the cumulative distribution function of standard Gaussian distribution and \(x_i\) is the element of the input.

A plot of the GELU function is available in the MindSpore documentation.

Parameters:

approximate (bool) –

Whether to enable approximation. Default: True.

If approximate is True, the gaussian error linear activation is:

\(0.5 * x * (1 + \tanh(\sqrt{2 / \pi} * (x + 0.044715 * x^3)))\)

else, it is:

\(x * P(X <= x) = 0.5 * x * (1 + \operatorname{erf}(x / \sqrt{2}))\), where \(X \sim N(0, 1)\).

Inputs:
  • x (Tensor) - The input of GELU with data type of float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> gelu = nn.GELU()
>>> output = gelu(x)
>>> print(output)
[[-1.5880802e-01  3.9999299e+00 -3.1077917e-21]
 [ 1.9545976e+00 -2.2918017e-07  9.0000000e+00]]
>>> gelu = nn.GELU(approximate=False)
>>> # The CPU backend does not support "approximate=False" and uses "approximate=True" instead
>>> output = gelu(x)
>>> print(output)
[[-1.5865526e-01  3.9998732e+00 -0.0000000e+00]
 [ 1.9544997e+00 -1.4901161e-06  9.0000000e+00]]
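The approximate mode can be cross-checked against the tanh formula above with NumPy. This is an illustrative sketch that reuses x from the example and assumes the kernel agrees with the formula to float32 tolerance:

>>> gelu = nn.GELU()  # approximate=True
>>> xa = x.asnumpy()
>>> ref = 0.5 * xa * (1 + np.tanh(np.sqrt(2 / np.pi) * (xa + 0.044715 * xa ** 3)))
>>> print(np.allclose(gelu(x).asnumpy(), ref, atol=1e-4))
True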
class tinyms.layers.FastGelu[source]

Fast Gaussian error linear unit activation function.

Applies FastGelu function to each element of the input. The input is a Tensor with any valid shape.

FastGelu is defined as:

\[FastGelu(x_i) = \frac {x_i} {1 + \exp(-1.702 * \left| x_i \right|)} * \exp(0.851 * (x_i - \left| x_i \right|))\]

where \(x_i\) is the element of the input.

Inputs:
  • x (Tensor) - The input of FastGelu with data type of float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> fast_gelu = nn.FastGelu()
>>> output = fast_gelu(x)
>>> print(output)
[[-1.5418735e-01  3.9921875e+00 -9.7473649e-06]
 [ 1.9375000e+00 -1.0052517e-03  8.9824219e+00]]
class tinyms.layers.Sigmoid[source]

Sigmoid activation function.

Applies sigmoid-type activation element-wise.

Sigmoid function is defined as:

\[\text{sigmoid}(x_i) = \frac{1}{1 + \exp(-x_i)},\]

where \(x_i\) is the element of the input.

A plot of the Sigmoid function is available in the MindSpore documentation.

Inputs:
  • input_x (Tensor) - The input of Sigmoid with data type of float16 or float32. Tensor of any dimension.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises:

TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> sigmoid = nn.Sigmoid()
>>> output = sigmoid(x)
>>> print(output)
[0.2688  0.11914 0.5     0.881   0.7305 ]
class tinyms.layers.Softsign[source]

Softsign activation function.

Refer to mindspore.ops.softsign() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
>>> softsign = nn.Softsign()
>>> output = softsign(x)
>>> print(output)
[ 0.        -0.5         0.6666667  0.9677419 -0.9677419]
class tinyms.layers.PReLU(channel=1, w=0.25)[source]

PReLU activation function.

Applies the PReLU function element-wise.

PReLU is defined as:

\[PReLU(x_i)= \max(0, x_i) + w * \min(0, x_i),\]

where \(x_i\) is an element of a channel of the input.

Here \(w\) is a learnable parameter with a default initial value 0.25. Parameter \(w\) has dimensionality of the argument channel. If called without argument channel, a single parameter \(w\) will be shared across all channels.

A plot of the PReLU function is available in the MindSpore documentation.

Parameters:
  • channel (int) – The elements number of parameter w. It could be an int, and the value is 1 or the channels number of input tensor x. Default: 1.

  • w (Union[float, list, Tensor]) – The initial value of the parameter. It can be a float, a list of floats, or a Tensor with the same dtype as the input tensor x. Default: 0.25.

Inputs:
  • x (Tensor) - The input of PReLU with data type of float16 or float32. The shape is \((N, *)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same dtype and shape as the x.

Raises:
  • TypeError – If channel is not an int.

  • TypeError – If w is not one of a float, a float list, a float Tensor.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If the x is a 0-D or 1-D Tensor on Ascend.

  • ValueError – If channel is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[[0.1, 0.6], [0.9, 0.9]]]]), mindspore.float32)
>>> prelu = nn.PReLU()
>>> output = prelu(x)
>>> print(output)
[[[[0.1 0.6]
   [0.9 0.9]]]]
tinyms.layers.get_activation(name, prim_name=None)[source]

Gets the activation function.

Parameters:
  • name (str) – The name of the activation function.

  • prim_name (Union[str, None]) – The name of primitive. Default: None.

Returns:

Function, the activation function.

Supported Platforms:

Ascend GPU CPU

Examples

>>> sigmoid = nn.get_activation('sigmoid')
>>> print(sigmoid)
Sigmoid<>
class tinyms.layers.LeakyReLU(alpha=0.2)[source]

Leaky ReLU activation function.

The activation function is defined as:

\[\text{leaky_relu}(x) = \begin{cases}x, &\text{if } x \geq 0; \cr {\alpha} * x, &\text{otherwise.}\end{cases}\]

where \(\alpha\) represents the alpha parameter.

For more details, see Rectifier Nonlinearities Improve Neural Network Acoustic Models.

Parameters:

alpha (Union[int, float]) – Slope of the activation function at x < 0. Default: 0.2.

Inputs:
  • x (Tensor) - The input of LeakyReLU is a Tensor of any dimension.

Outputs:

Tensor, has the same type and shape as the x.

Raises:

TypeError – If alpha is not a float or an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> leaky_relu = nn.LeakyReLU()
>>> output = leaky_relu(x)
>>> print(output)
[[-0.2  4.  -1.6]
 [ 2.  -1.   9. ]]
class tinyms.layers.HSigmoid[source]

Hard sigmoid activation function. Calculates the output according to the input elements.

Hard sigmoid is defined as:

\[\text{hsigmoid}(x_{i}) = max(0, min(1, \frac{x_{i} + 3}{6})),\]
Inputs:
  • input_x (Tensor) - The input of HSigmoid. Tensor of any dimension.

Outputs:

Tensor, with the same type and shape as the input_x.

Raises:

TypeError – If input_x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> hsigmoid = nn.HSigmoid()
>>> result = hsigmoid(x)
>>> print(result)
[0.3333 0.1666 0.5    0.8335 0.6665]
class tinyms.layers.HSwish[source]

Applies hswish-type activation element-wise. The input is a Tensor with any valid shape.

Hard swish is defined as:

\[\text{hswish}(x_{i}) = x_{i} * \frac{ReLU6(x_{i} + 3)}{6},\]
Inputs:
  • x (Tensor) - The input of HSwish, data type must be float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> hswish = nn.HSwish()
>>> result = hswish(x)
>>> print(result)
[-0.3333 -0.3333  0.      1.667   0.6665]
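The formula can be cross-checked with nn.ReLU6 from this page; an illustrative check reusing x and result from the example above (numpy as np assumed imported as in the other examples):

>>> relu6 = nn.ReLU6()
>>> ref = x * relu6(x + 3.0) / 6.0
>>> print(np.allclose(result.asnumpy(), ref.asnumpy()))
True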
class tinyms.layers.ELU(alpha=1.0)[source]

Exponential Linear Unit activation function.

Applies the exponential linear unit function element-wise. The activation function is defined as:

\[E_{i} = \begin{cases} x_i, &\text{if } x_i \geq 0; \cr \alpha * (\exp(x_i) - 1), &\text{otherwise.} \end{cases}\]

where \(x_i\) represents the element of the input and \(\alpha\) represents the alpha parameter.

A plot of the ELU function is available in the MindSpore documentation.

Parameters:

alpha (float) – The alpha value of ELU, the data type is float. Default: 1.0.

Inputs:
  • x (Tensor) - The input of ELU is a Tensor of any dimension with data type of float16 or float32.

Outputs:

Tensor, with the same type and shape as the x.

Raises:
  • TypeError – If alpha is not a float.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If alpha is not equal to 1.0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float32)
>>> elu = nn.ELU()
>>> result = elu(x)
>>> print(result)
[-0.63212055  -0.86466473  0.  2.  1.]
class tinyms.layers.LogSigmoid[source]

Applies logsigmoid activation element-wise. The input is a Tensor with any valid shape.

Logsigmoid is defined as:

\[\text{logsigmoid}(x_{i}) = log(\frac{1}{1 + \exp(-x_i)}),\]

where \(x_{i}\) is the element of the input.

Inputs:
  • x (Tensor) - The input of LogSigmoid with data type of float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same type and shape as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> net = nn.LogSigmoid()
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = net(x)
>>> print(output)
[-0.31326166 -0.12692806 -0.04858734]
class tinyms.layers.LRN(depth_radius=5, bias=1.0, alpha=1.0, beta=0.5, norm_region='ACROSS_CHANNELS')[source]

Local Response Normalization.

Refer to mindspore.ops.lrn() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[[[0.1], [0.2]],
...                       [[0.3], [0.4]]]]), mindspore.float32)
>>> output = nn.LRN()(input_x)
>>> print(output)
[[[[0.09534626]
   [0.1825742 ]]
  [[0.2860388 ]
   [0.3651484 ]]]]
class tinyms.layers.SoftShrink(lambd=0.5)[source]

Applies the SoftShrink function element-wise.

\[\begin{split}\text{SoftShrink}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters:

lambd (float) – The \(\lambda\) value for the SoftShrink formulation, which must be no less than zero. Default: 0.5.

Inputs:
  • input_x (Tensor) - The input of SoftShrink with data type of float16 or float32. Any number of additional dimensions.

Outputs:

Tensor, has the same shape and data type as input_x.

Raises:
  • TypeError – If lambd is not a float.

  • TypeError – If input_x is not a Tensor.

  • TypeError – If dtype of input_x is neither float16 nor float32.

  • ValueError – If lambd is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = Tensor(np.array([[ 0.5297,  0.7871,  1.1754], [ 0.7836,  0.6218, -1.1542]]), mstype.float16)
>>> softshrink = nn.SoftShrink()
>>> output = softshrink(input_x)
>>> print(output)
[[ 0.02979  0.287    0.676  ]
 [ 0.2837   0.1216  -0.6543 ]]
class tinyms.layers.HShrink(lambd=0.5)[source]

Hard Shrink activation function. Calculates the output according to the input elements.

The formula is defined as follows:

\[\begin{split}\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]
Parameters:

lambd (float) – The threshold \(\lambda\) defined by the Hard Shrink formula. Default: 0.5.

Inputs:
  • input_x (Tensor) - The input of Hard Shrink with data type of float16 or float32.

Outputs:

Tensor, the same shape and data type as the input.

Raises:
  • TypeError – If lambd is not a float.

  • TypeError – If dtype of input_x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> input_x = Tensor(np.array([[ 0.5,  1,  2.0], [0.0533,0.0776,-2.1233]]), mindspore.float32)
>>> hshrink = nn.HShrink()
>>> output = hshrink(input_x)
>>> print(output)
[[ 0.      1.      2.    ]
[ 0.      0.     -2.1233]]
class tinyms.layers.CELU(alpha=1.0)[source]

Continuously differentiable exponential linear units activation function.

Applies the continuously differentiable exponential linear units function element-wise.

\[\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1))\]

A plot of the CELU function is available in the MindSpore documentation.

Parameters:

alpha (float) – The \(\alpha\) value for the Celu formulation. Default: 1.0

Inputs:
  • x (Tensor) - The input of CELU. The required dtype is float16 or float32. The shape is \((N,*)\) where \(*\) means, any number of additional dimensions.

Outputs:

Tensor, with the same type and shape as the x.

Raises:
  • TypeError – If alpha is not a float.

  • ValueError – If alpha has the value of 0.

  • TypeError – If x is not a Tensor.

  • TypeError – If the dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
>>> celu = nn.CELU()
>>> output = celu(x)
>>> print(output)
[-0.86466473 -0.63212055  1.          2.        ]
class tinyms.layers.Threshold(threshold, value)[source]

Thresholds each element of the input Tensor.

The formula is defined as follows:

\[\begin{split}y = \begin{cases} x, &\text{ if } x > \text{threshold} \\ \text{value}, &\text{ otherwise } \end{cases}\end{split}\]
Parameters:
  • threshold (Union[int, float]) – The value to threshold at.

  • value (Union[int, float]) – The value to replace elements that do not exceed threshold.

Inputs:
  • input_x (Tensor) - The input of Threshold with data type of float16 or float32.

Outputs:

Tensor, the same shape and data type as the input.

Raises:
  • TypeError – If threshold is not a float or an int.

  • TypeError – If value is not a float or an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> m = nn.Threshold(0.1, 20)
>>> inputs = mindspore.Tensor([0.1, 0.2, 0.3], mindspore.float32)
>>> outputs = m(inputs)
>>> print(outputs)
[ 20.0     0.2      0.3]
class tinyms.layers.Mish[source]

Computes MISH (A Self-Regularized Non-Monotonic Neural Activation Function) of the input tensor element-wise.

Refer to mindspore.ops.mish() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> mish = nn.Mish()
>>> output = mish(x)
>>> print(output)
[[-0.3034014  3.9974129 -0.0026832]
 [ 1.9439590  -0.0033576 9.0000000]]
class tinyms.layers.GLU(axis=-1)[source]

The gated linear unit function.

\[{GLU}(a, b)= a \otimes \sigma(b)\]

where \(a\) is the first half of the input matrices and \(b\) is the second half.

Here \(\sigma\) is the sigmoid function, and \(\otimes\) is the Hadamard product.

Parameters:

axis (int) – the axis to split the input. Default: -1, the last axis in x.

Inputs:
  • x (Tensor) - \((\ast_1, N, \ast_2)\) where * means, any number of additional dimensions.

Outputs:

Tensor, the same dtype as the x, with the shape \((\ast_1, M, \ast_2)\) where \(M=N/2\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> m = nn.GLU()
>>> input = Tensor([[0.1,0.2,0.3,0.4],[0.5,0.6,0.7,0.8]])
>>> output = m(input)
>>> print(output)
[[0.05744425 0.11973753]
 [0.33409387 0.41398472]]
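The output can be reproduced from the definition \(a \otimes \sigma(b)\) with NumPy; an illustrative check against the example above:

>>> import numpy as np
>>> a, b = np.split(input.asnumpy(), 2, axis=-1)
>>> print(np.allclose(output.asnumpy(), a * (1.0 / (1.0 + np.exp(-b))), atol=1e-6))
True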
class tinyms.layers.BatchNorm1d(num_features, eps=1e-05, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW')[source]

This layer applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D or 2D inputs) to reduce internal covariate shift. Batch Normalization is widely used in convolutional networks. For detailed information, refer to Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the feature using a mini-batch of data and the learned parameters, which can be described by the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

Note

The implementation of BatchNorm differs between graph mode and pynative mode; therefore it is not recommended to change the mode after the network has been initialized.

Parameters:
  • num_features (int) – number of features or channels C of the input x .

  • eps (float) – \(\epsilon\) added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.9.

  • affine (bool) – A bool value. When set to True, \(\gamma\) and \(\beta\) can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\gamma\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\beta\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘zeros’.

  • moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving mean. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘zeros’.

  • moving_var_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving variance. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘ones’.

  • use_batch_statistics (bool) – If true, use the mean and variance of the current batch. If false, use the specified moving mean and variance. If None, the training process uses the mean and variance of the current batch while tracking the running statistics, and the evaluation process uses the running mean and variance. Default: None.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\) or \((N, C, L)\) , where N is the batch size, C is the number of features or channels, and L is the sequence length.

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C)\) or \((N, C, L)\) .

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm1d(num_features=4)
>>> x = Tensor(np.array([[0.7, 0.5, 0.5, 0.6],
...                      [0.5, 0.4, 0.6, 0.9]]).astype(np.float32))
>>> output = net(x)
>>> print(output)
[[ 0.6999965   0.4999975  0.4999975  0.59999704 ]
 [ 0.4999975   0.399998   0.59999704 0.89999545 ]]
class tinyms.layers.BatchNorm2d(num_features, eps=1e-05, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW')[source]

Batch Normalization is widely used in convolutional networks. This layer applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

Note

The implementation of BatchNorm differs between graph mode and pynative mode; therefore the mode cannot be changed after the network has been initialized. Note that the formula for updating \(moving\_mean\) and \(moving\_var\) is

\[\begin{split}\text{moving_mean}=\text{moving_mean*momentum}+μ_β\text{*(1−momentum)}\\ \text{moving_var}=\text{moving_var*momentum}+σ^2_β\text{*(1−momentum)}\end{split}\]

where \(moving\_mean\) is the updated mean, \(moving\_var\) is the updated variance, \(μ_β, σ^2_β\) are the observed value (mean and variance) of each batch of data.
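As an illustrative NumPy rendering of this update rule (the momentum and the per-batch statistics below are made-up values for the sketch, not results from a real run):

>>> import numpy as np
>>> momentum = 0.9
>>> moving_mean, moving_var = np.zeros(3), np.ones(3)
>>> # hypothetical batch statistics mu_b and var_b for a 3-channel input
>>> mu_b, var_b = np.array([0.1, 0.2, 0.3]), np.array([1.1, 0.9, 1.0])
>>> moving_mean = moving_mean * momentum + mu_b * (1 - momentum)
>>> moving_var = moving_var * momentum + var_b * (1 - momentum)
>>> print(moving_mean)
[0.01 0.02 0.03]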

Parameters:
  • num_features (int) – The number of channels of the input tensor. Expected input size is \((N, C, H, W)\), C represents the number of channels.

  • eps (float) – \(\epsilon\) added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.9.

  • affine (bool) – A bool value. When set to True, \(\gamma\) and \(\beta\) can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\gamma\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\beta\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘zeros’.

  • moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving mean. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘zeros’.

  • moving_var_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving variance. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘ones’.

  • use_batch_statistics (bool) –

    • If true, use the mean value and variance value of current batch data and track running mean and running variance.

    • If false, use the specified moving mean and variance values, and do not track the statistics.

    • If None, the use_batch_statistics is automatically set to true or false according to the training and evaluation mode. During training, the parameter is set to true, and during evaluation, the parameter is set to false. Default: None.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, H, W)\).

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C, H, W)\).

Raises:
  • TypeError – If num_features is not an int.

  • TypeError – If eps is not a float.

  • ValueError – If num_features is less than 1.

  • ValueError – If momentum is not in range [0, 1].

  • ValueError – If data_format is neither ‘NHWC’ nor ‘NCHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm2d(num_features=3)
>>> x = Tensor(np.ones([1, 3, 2, 2]).astype(np.float32))
>>> output = net(x)
>>> print(output)
[[[[ 0.999995 0.999995 ]
   [ 0.999995 0.999995 ]]
  [[ 0.999995 0.999995 ]
   [ 0.999995 0.999995 ]]
  [[ 0.999995 0.999995 ]
   [ 0.999995 0.999995 ]]]]
class tinyms.layers.BatchNorm3d(num_features, eps=1e-05, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None)[source]

Batch Normalization is widely used in convolutional networks. This layer applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) to avoid internal covariate shift.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

Note

The implementation of BatchNorm differs between graph mode and pynative mode; therefore the mode cannot be changed after the network has been initialized. Note that the formula for updating the running_mean and running_var is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times x_t + \text{momentum} \times \hat{x}\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.

Parameters:
  • num_features (int) – C from an expected input of size \((N, C, D, H, W)\) .

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.9.

  • affine (bool) – A bool value. When set to True, gamma and beta can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the gamma weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the beta weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘zeros’.

  • moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving mean. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘zeros’.

  • moving_var_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving variance. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. Default: ‘ones’.

  • use_batch_statistics (bool) – If true, use the mean and variance of the current batch. If false, use the specified moving mean and variance. If None, the training process uses the mean and variance of the current batch while tracking the running statistics, and the evaluation process uses the running mean and variance. Default: None.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C_{out}, D_{out},H_{out}, W_{out})\).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm3d(num_features=3)
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]).astype(np.float32))
>>> output = net(x)
>>> print(output.shape)
(16, 3, 10, 32, 32)
class tinyms.layers.LayerNorm(normalized_shape, begin_norm_axis=-1, begin_params_axis=-1, gamma_init='ones', beta_init='zeros', epsilon=1e-07)[source]

Applies Layer Normalization over a mini-batch of inputs.

Layer Normalization is widely used in recurrent neural networks. It applies normalization on a mini-batch of inputs for each single training case as described in the paper Layer Normalization. Unlike Batch Normalization, Layer Normalization performs exactly the same computation at training and testing time. It is applied across all channels and pixels of each sample, rather than across the batch. It can be described using the following formula:

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
Parameters:
  • normalized_shape (Union(tuple[int], list[int])) – The normalization is performed over axis begin_norm_axis … R - 1.

  • begin_norm_axis (int) – The first normalization dimension: normalization will be performed along dimensions begin_norm_axis: rank(inputs), the value should be in [-1, rank(input)). Default: -1.

  • begin_params_axis (int) – The first parameter(beta, gamma)dimension: scale and centering parameters will have dimensions begin_params_axis: rank(inputs) and will be broadcast with the normalized inputs accordingly, the value should be in [-1, rank(input)). Default: -1.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\gamma\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\beta\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘zeros’.

  • epsilon (float) – \(\epsilon\) added to the denominator for numerical stability. Default: 1e-7.

Inputs:
  • x (Tensor) - The shape of x is \((x_1, x_2, ..., x_R)\), and input_shape[begin_norm_axis:] is equal to normalized_shape.

Outputs:

Tensor, the normalized and scaled offset tensor, which has the same shape and data type as x.

Raises:
  • TypeError – If normalized_shape is neither a list nor a tuple.

  • TypeError – If begin_norm_axis or begin_params_axis is not an int.

  • TypeError – If epsilon is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> x = Tensor(np.ones([20, 5, 10, 10]), mindspore.float32)
>>> shape1 = x.shape[1:]
>>> m = nn.LayerNorm(shape1, begin_norm_axis=1, begin_params_axis=1)
>>> output = m(x).shape
>>> print(output)
(20, 5, 10, 10)
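As a cross-check of the formula above, here is a minimal NumPy-only sketch (illustrative, not part of the API); with the default gamma_init=‘ones’ and beta_init=‘zeros’ the affine step drops out:

>>> import numpy as np
>>> x_np = np.ones([20, 5, 10, 10], np.float32)
>>> # Statistics are taken over the normalized axes (here: all axes from begin_norm_axis=1 on).
>>> mean = x_np.mean(axis=(1, 2, 3), keepdims=True)
>>> var = x_np.var(axis=(1, 2, 3), keepdims=True)
>>> y = (x_np - mean) / np.sqrt(var + 1e-07)
>>> print(y.shape)
(20, 5, 10, 10)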
class tinyms.layers.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, gamma_init='ones', beta_init='zeros')[source]

Group Normalization over a mini-batch of inputs.

Group Normalization is widely used in vision tasks with small batch sizes, where batch statistics are unreliable. It applies normalization on a mini-batch of inputs for each single training case as described in the paper Group Normalization. Group Normalization divides the channels into groups and computes within each group the mean and variance for normalization, and its accuracy is stable over a wide range of batch sizes. It can be described using the following formula:

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
Parameters:
  • num_groups (int) – The number of groups to be divided along the channel dimension.

  • num_channels (int) – The number of input channels.

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • affine (bool) – A bool value. When set to True, this layer will have learnable affine parameters. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the gamma weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘ones’. If gamma_init is a Tensor, the shape must be \((num\_channels)\).

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the beta weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘zeros’. If beta_init is a Tensor, the shape must be \((num\_channels)\).

Inputs:
  • x (Tensor) - The input feature with shape \((N, C, H, W)\) .

Outputs:

Tensor, the normalized and scaled offset tensor, which has the same shape and data type as x.

Raises:
  • TypeError – If num_groups or num_channels is not an int.

  • TypeError – If eps is not a float.

  • TypeError – If affine is not a bool.

  • ValueError – If num_groups or num_channels is less than 1.

  • ValueError – If num_channels is not divisible by num_groups.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> group_norm_op = nn.GroupNorm(2, 2)
>>> x = Tensor(np.ones([1, 2, 4, 4], np.float32))
>>> output = group_norm_op(x)
>>> print(output)
[[[[0. 0. 0. 0.]
   [0. 0. 0. 0.]
   [0. 0. 0. 0.]
   [0. 0. 0. 0.]]
  [[0. 0. 0. 0.]
   [0. 0. 0. 0.]
   [0. 0. 0. 0.]
   [0. 0. 0. 0.]]]]
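To make the grouping concrete, the following NumPy sketch (illustrative only; the variable names are ours) reproduces the computation for the num_groups=2, num_channels=2 case above: the channels are split into groups and the mean and variance are computed within each group.

>>> import numpy as np
>>> n, c, h, w, g, eps = 1, 2, 4, 4, 2, 1e-05
>>> x_np = np.ones([n, c, h, w], np.float32).reshape(n, g, -1)  # split channels into g groups
>>> mean = x_np.mean(axis=2, keepdims=True)                     # per-group mean
>>> var = x_np.var(axis=2, keepdims=True)                       # per-group variance
>>> y = ((x_np - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
>>> print(y.sum())   # the all-ones input normalizes to all zeros, as printed above
0.0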
class tinyms.layers.SyncBatchNorm(num_features, eps=1e-05, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, process_groups=None)[source]

Sync Batch Normalization layer over a N-dimension input.

Sync Batch Normalization is cross device synchronized Batch Normalization. The implementation of Batch Normalization only normalizes the data within each device. Sync Batch Normalization will normalize the input within the group. It has been described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

Note

Currently, SyncBatchNorm only supports 2D and 4D inputs.

Parameters:
  • num_features (int) – C from an expected input of size \((N, C, H, W)\).

  • eps (float) – \(\epsilon\), a value added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.9.

  • affine (bool) – A bool value. When set to True, \(\gamma\) and \(\beta\) can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\gamma\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\beta\) weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘zeros’.

  • moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving mean. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘zeros’.

  • moving_var_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving variance. The values of str refer to the function initializer including ‘zeros’, ‘ones’, ‘xavier_uniform’, ‘he_uniform’, etc. Default: ‘ones’.

  • use_batch_statistics (bool) – If True, use the mean and variance of the current batch data. If False, use the specified moving mean and variance values. If None, the training process will use the mean and variance of the current batch data and track the running mean and variance, while the evaluation process will use the running mean and variance. Default: None.

  • process_groups (list) – A list to divide devices into different sync groups, containing N sublists. Each sublist contains int numbers identifying the rank ids which need to be synchronized in the same group. All int values must be in [0, rank_size) and different from each other. Default: None, indicating synchronization across all devices.
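For example, with four devices, process_groups=[[0, 1], [2, 3]] synchronizes statistics between devices 0 and 1 in one group and between devices 2 and 3 in another, as in the example below.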

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • TypeError – If num_features is not an int.

  • TypeError – If eps is not a float.

  • TypeError – If process_groups is not a list.

  • ValueError – If num_features is less than 1.

  • ValueError – If momentum is not in range [0, 1].

  • ValueError – If rank_id in process_groups is not in range [0, rank_size).

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id. Please see the Ascend tutorial for more details.

For the GPU devices, users need to prepare the host file and mpi. Please see the GPU tutorial for more details.

This example should be run with multiple devices.

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.communication import init
>>> from mindspore import Tensor
>>> from mindspore import nn
>>> from mindspore import dtype as mstype
>>>
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> init()
>>> ms.reset_auto_parallel_context()
>>> ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL)
>>> sync_bn_op = nn.SyncBatchNorm(num_features=3, process_groups=[[0, 1], [2, 3]])
>>> x = Tensor(np.ones([1, 3, 2, 2]), mstype.float32)
>>> output = sync_bn_op(x)
>>> print(output)
[[[[ 0.999995 0.999995 ]
   [ 0.999995 0.999995 ]]
  [[ 0.999995 0.999995 ]
   [ 0.999995 0.999995 ]]
  [[ 0.999995 0.999995 ]
   [ 0.999995 0.999995 ]]]]
class tinyms.layers.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros')[source]

This layer applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with additional channel dimension). Refer to the paper Instance Normalization: The Missing Ingredient for Fast Stylization. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

The size of \(\gamma\) and \(\beta\), the learnable parameter vectors, is num_features if affine is True. The standard deviation is calculated via the biased estimator.

This layer uses instance statistics computed from input data in both training and evaluation modes.

InstanceNorm1d and BatchNorm1d are very similar, but have some differences: InstanceNorm1d normalizes each channel of each sample independently, using statistics computed over that sample alone, whereas BatchNorm1d computes its statistics per channel across the whole batch.
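A minimal NumPy sketch of this difference (illustrative only): instance-style statistics are reduced over the length axis for each sample and channel separately, while batch-style statistics are additionally reduced over the batch axis.

>>> import numpy as np
>>> x_np = np.ones([2, 3, 5], np.float32)               # (N, C, L)
>>> inst_mean = x_np.mean(axis=2, keepdims=True)        # one mean per sample and channel
>>> batch_mean = x_np.mean(axis=(0, 2), keepdims=True)  # one mean per channel, over the whole batch
>>> print(inst_mean.shape, batch_mean.shape)
(2, 3, 1) (1, 3, 1)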

Note

The formula for updating the running_mean and running_var is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times x_t + \text{momentum} \times \hat{x}\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.
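For example, with momentum = 0.1, an estimated statistic \(\hat{x} = 0.0\) and a new observed value \(x_t = 1.0\), the update gives \(\hat{x}_\text{new} = (1 - 0.1) \times 1.0 + 0.1 \times 0.0 = 0.9\); note that the new observation is weighted by \(1 - \text{momentum}\).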

Parameters:
  • num_features (int) – C from an expected input of size \((N, C, L)\).

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.1.

  • affine (bool) – A bool value. When set to True, gamma and beta can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the gamma weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. When initialized with Tensor, the shape should be \((C)\). Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the beta weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. When initialized with Tensor, the shape should be \((C)\). Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, L)\). Data type: float16 or float32.

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C, L)\). Same type and shape as x.

Raises:
  • TypeError – If the type of num_features is not int.

  • TypeError – If the type of eps is not float.

  • TypeError – If the type of momentum is not float.

  • TypeError – If the type of affine is not bool.

  • TypeError – If the types of gamma_init and beta_init are not the same, or if the initialized element type is not float32.

  • ValueError – If num_features is less than 1.

  • ValueError – If momentum is not in range [0, 1].

  • ValueError – If the shape of gamma_init / beta_init is not \((C)\).

  • KeyError – If any of gamma_init/beta_init is str and no homonymous class inheriting from Initializer exists.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.InstanceNorm1d(3)
>>> x = Tensor(np.ones([2, 3, 5]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(2, 3, 5)
class tinyms.layers.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros')[source]

This layer applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension). Refer to the paper Instance Normalization: The Missing Ingredient for Fast Stylization. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

\(\gamma\) and \(\beta\) are learnable parameter vectors of size num_features if affine is True. The standard deviation is calculated via the biased estimator.

This layer uses instance statistics computed from input data in both training and evaluation modes.

InstanceNorm2d and BatchNorm2d are very similar, but have some differences: InstanceNorm2d normalizes each channel of each sample (e.g., each channel of an RGB image) independently, whereas BatchNorm2d computes its statistics per channel across the whole batch.

Note

The formula for updating the running_mean and running_var is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times x_t + \text{momentum} \times \hat{x}\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.

Parameters:
  • num_features (int) – C from an expected input of size \((N, C, H, W)\).

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.1.

  • affine (bool) – A bool value. When set to True, gamma and beta can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the gamma weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. When initialized with Tensor, the shape should be \((C)\). Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the beta weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. When initialized with Tensor, the shape should be \((C)\). Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, H, W)\). Data type: float16 or float32.

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C, H, W)\). Same type and shape as x.

Raises:
  • TypeError – If the type of num_features is not int.

  • TypeError – If the type of eps is not float.

  • TypeError – If the type of momentum is not float.

  • TypeError – If the type of affine is not bool.

  • TypeError – If the types of gamma_init and beta_init are not the same, or if the initialized element type is not float32.

  • ValueError – If num_features is less than 1.

  • ValueError – If momentum is not in range [0, 1].

  • ValueError – If the shape of gamma_init / beta_init is not \((C)\).

  • KeyError – If any of gamma_init/beta_init is str and no homonymous class inheriting from Initializer exists.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.InstanceNorm2d(3)
>>> x = Tensor(np.ones([2, 3, 2, 2]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(2, 3, 2, 2)
class tinyms.layers.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros')[source]

This layer applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension). Refer to the paper Instance Normalization: The Missing Ingredient for Fast Stylization. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

\(\gamma\) and \(\beta\) are learnable parameter vectors of size num_features if affine is True. The standard deviation is calculated via the biased estimator.

This layer uses instance statistics computed from input data in both training and evaluation modes.

InstanceNorm3d and BatchNorm3d are very similar, but have some differences: InstanceNorm3d normalizes each channel of each sample independently, whereas BatchNorm3d computes its statistics per channel across the whole batch.

Note

The formula for updating the running_mean and running_var is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times x_t + \text{momentum} \times \hat{x}\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.

Parameters:
  • num_features (int) – C from an expected input of size \((N, C, D, H, W)\).

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating hyperparameter of the momentum for the running_mean and running_var computation. Default: 0.1.

  • affine (bool) – A bool value. When set to True, gamma and beta can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the gamma weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. When initialized with Tensor, the shape should be \((C)\). Default: ‘ones’.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the beta weight. The values of str refer to the function initializer including ‘zeros’, ‘ones’, etc. When initialized with Tensor, the shape should be \((C)\). Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D, H, W)\). Data type: float16 or float32.

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C, D, H, W)\). Same type and shape as x.

Raises:
  • TypeError – If the type of num_features is not int.

  • TypeError – If the type of eps is not float.

  • TypeError – If the type of momentum is not float.

  • TypeError – If the type of affine is not bool.

  • TypeError – If the types of gamma_init and beta_init are not the same, or if the initialized element type is not float32.

  • ValueError – If num_features is less than 1.

  • ValueError – If momentum is not in range [0, 1].

  • ValueError – If the shape of gamma_init / beta_init is not \((C)\).

  • KeyError – If any of gamma_init/beta_init is str and no homonymous class inheriting from Initializer exists.

Supported Platforms:

GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.InstanceNorm3d(3)
>>> x = Tensor(np.ones([2, 3, 5, 2, 2]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(2, 3, 5, 2, 2)
class tinyms.layers.SequentialCell(*args)[source]

Sequential Cell container. For more details about Cell, please refer to Cell.

A list of Cells will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of cells can also be passed in.

Note

SequentialCell and torch.nn.ModuleList are different: ModuleList is simply a list for storing modules, whereas the layers in a SequentialCell are connected in a cascading way.

Parameters:

args (list, OrderedDict) – List or OrderedDict of subclass of Cell.

Inputs:
  • x (Tensor) - Tensor with shape according to the first Cell in the sequence.

Outputs:

Tensor, the output Tensor with shape depending on the input x and defined sequence of Cells.

Raises:

TypeError – If the type of the args is not list or OrderedDict.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>>
>>> conv = nn.Conv2d(3, 2, 3, pad_mode='valid', weight_init="ones")
>>> relu = nn.ReLU()
>>> seq = nn.SequentialCell([conv, relu])
>>> x = Tensor(np.ones([1, 3, 4, 4]), dtype = mindspore.float32)
>>> output = seq(x)
>>> print(output)
[[[[27. 27.]
   [27. 27.]]
  [[27. 27.]
   [27. 27.]]]]
>>> from collections import OrderedDict
>>> d = OrderedDict()
>>> d["conv"] = conv
>>> d["relu"] = relu
>>> seq = nn.SequentialCell(d)
>>> x = Tensor(np.ones([1, 3, 4, 4]), dtype=mindspore.float32)
>>> output = seq(x)
>>> print(output)
[[[[27. 27.]
   [27. 27.]]
  [[27. 27.]
   [27. 27.]]]]
append(cell)[source]

Appends a given Cell to the end of the list.

Parameters:

cell (Cell) – The Cell to be appended.

Examples

>>> from mindspore import Tensor
>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>>
>>> conv = nn.Conv2d(3, 2, 3, pad_mode='valid', weight_init="ones")
>>> bn = nn.BatchNorm2d(2)
>>> relu = nn.ReLU()
>>> seq = nn.SequentialCell([conv, bn])
>>> seq.append(relu)
>>> x = Tensor(np.ones([1, 3, 4, 4]), dtype=mindspore.float32)
>>> output = seq(x)
>>> print(output)
[[[[26.999863 26.999863]
   [26.999863 26.999863]]
  [[26.999863 26.999863]
   [26.999863 26.999863]]]]
class tinyms.layers.CellList(*args, **kwargs)[source]

Holds Cells in a list. For more details about Cell, please refer to Cell.

CellList can be used like a regular Python list; the Cells it contains must already be initialized. Unlike SequentialCell, the cells in a CellList are not connected.

Parameters:

args (list, optional) – List of subclass of Cell.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>> import mindspore as ms
>>> import numpy as np
>>>
>>> conv = nn.Conv2d(100, 20, 3)
>>> bn = nn.BatchNorm2d(20)
>>> relu = nn.ReLU()
>>> cell_ls = nn.CellList([bn])
>>> cell_ls.insert(0, conv)
>>> cell_ls.append(relu)
>>> cell_ls.extend([relu, relu])
>>> cell_ls_3 = cell_ls[3]
>>> input1 = ms.Tensor(np.ones([2, 3]), ms.float32)
>>> output = cell_ls_3(input1)
>>> print(output)
[[1. 1. 1.]
[1. 1. 1.]]
append(cell)[source]

Appends a given Cell to the end of the list.

Parameters:

cell (Cell) – The subcell to be appended.

extend(cells)[source]

Appends Cells from a Python iterable to the end of the list.

Parameters:

cells (list) – The Cells to be extended.

Raises:

TypeError – If the argument cells is not a list of Cells.

insert(index, cell)[source]

Inserts a given Cell before a given index in the list.

Parameters:
  • index (int) – The insert index in the CellList.

  • cell (Cell) – The Cell to be inserted.

class tinyms.layers.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCHW')[source]

Calculates the 2D convolution on the input tensor. The input is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is a number of channels, \(H_{in}, W_{in}\) are the height and width of the feature layer respectively. For the tensor of each batch, its shape is \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]

where \(ccor\) is the cross-correlation, \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the \(j\)-th channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{X}\) is the input tensor. In this case, data_format of the input tensor is ‘NCHW’ and the shape of the full convolution kernel is \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where group is the number of groups to split the input x in the channel dimension. If data_format of the input tensor is ‘NHWC’, the shape of the full convolution kernel will be \((C_{out}, \text{kernel_size[0]}, \text{kernel_size[1]}, C_{in} / \text{group})\).

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition.

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when group>1, condition in_channels = out_channels = group must be satisfied.

Parameters:
  • in_channels (int) – The channel number of the input tensor of the Conv2d layer.

  • out_channels (int) – The channel number of the output tensor of the Conv2d layer.

  • kernel_size (Union[int, tuple[int]]) – Specifies the height and width of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the height and width of the convolution kernel. A tuple of two integers represents the height and width of the convolution kernel respectively.

  • stride (Union[int, tuple[int]]) – The movement stride of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the movement step size in both height and width directions. A tuple of two integers represents the movement step size in the height and width directions respectively. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: The height and width of the output are the same as the input height and width divided by stride, rounded up. If this mode is set, the value of padding must be 0.

    • valid: Returns a valid calculated output without padding. Excess pixels that do not satisfy the calculation will be discarded. If this mode is set, the value of padding must be 0.

    • pad: Pads the input with padding zeros on both sides in the height and width directions. If this mode is set, the value of padding must be greater than or equal to 0.

  • padding (Union[int, tuple[int]]) – The number of padding on the height and width directions of the input. The data type is an integer or a tuple of four integers. If padding is an integer, then the top, bottom, left, and right padding are all equal to padding. If padding is a tuple of 4 integers, then the top, bottom, left, and right padding is equal to padding[0], padding[1], padding[2], and padding[3] respectively. The value should be greater than or equal to 0. Default: 0.

  • dilation (Union[int, tuple[int]]) – Dilation size of 2D convolution kernel. The data type is an integer or a tuple of two integers. If \(k > 1\), the kernel is sampled every k elements. The value of k on the height and width directions is in range of [1, H] and [1, W] respectively. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by group. If the group is equal to in_channels and out_channels, this 2D convolution layer also can be called 2D depthwise convolution layer. Default: 1.

  • has_bias (bool) – Whether the Conv2d layer has a bias parameter. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of bias parameter. Available initialization methods are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\) or \((N, H_{in}, W_{in}, C_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\) or \((N, H_{out}, W_{out}, C_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[0]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[1]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lceil{\frac{H_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) } {\text{stride[0]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) } {\text{stride[1]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[0] + padding[1] - (\text{kernel_size[0]} - 1) \times \text{dilation[0]} - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[2] + padding[3] - (\text{kernel_size[1]} - 1) \times \text{dilation[1]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
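The three cases above can be sketched as a small helper (illustrative only; the function name and signature are ours, not part of the API), checked here against the example below:

>>> import math
>>> def conv2d_out_len(size, kernel, stride=1, dilation=1, pad=(0, 0), mode='same'):
...     # Output length along one spatial dimension; pad is (before, after) and is only used for 'pad'.
...     if mode == 'same':
...         return math.ceil(size / stride)
...     if mode == 'valid':
...         return math.ceil((size - dilation * (kernel - 1)) / stride)
...     return (size + pad[0] + pad[1] - (kernel - 1) * dilation - 1) // stride + 1
...
>>> conv2d_out_len(1024, 4), conv2d_out_len(640, 4)   # matches the Conv2d example below
(1024, 640)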
Raises:
  • TypeError – If in_channels, out_channels or group is not an int.

  • TypeError – If kernel_size, stride, padding or dilation is neither an int nor a tuple.

  • ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Conv2d(120, 240, 4, has_bias=False, weight_init='normal')
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> output = net(x).shape
>>> print(output)
(1, 240, 1024, 640)
class tinyms.layers.Conv2dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, output_padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')[source]

Calculates a 2D transposed convolution, which can be regarded as Conv2d for the gradient of the input, also called deconvolution (although it is not an actual deconvolution).

The input is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is the number of channels, and \(H_{in}, W_{in}\) are the height and width of the feature layer respectively.

When Conv2d and Conv2dTranspose are initialized with the same parameters, and pad_mode is set to ‘pad’, \(dilation * (kernel\_size - 1) - padding\) zeros will be padded to the height and width directions of the input, and they are inverses of each other with regard to the input and output shapes in this case. However, when stride > 1, Conv2d maps multiple input shapes to the same output shape. For deconvolutional networks, refer to Deconvolutional Networks.
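A minimal sketch of this inverse relationship (the channel counts and padding are our own illustrative choices): with pad_mode=‘pad’, padding=1 and a 3x3 kernel at stride 1, Conv2dTranspose maps the convolution output back to the original spatial shape.

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> conv = nn.Conv2d(3, 8, 3, stride=1, pad_mode='pad', padding=1)
>>> deconv = nn.Conv2dTranspose(8, 3, 3, stride=1, pad_mode='pad', padding=1)
>>> x = Tensor(np.ones([1, 3, 16, 16]), mindspore.float32)
>>> print(deconv(conv(x)).shape)
(1, 3, 16, 16)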

Parameters:
  • in_channels (int) – The channel number of the input tensor of the Conv2dTranspose layer.

  • out_channels (int) – The channel number of the output tensor of the Conv2dTranspose layer.

  • kernel_size (Union[int, tuple[int]]) – Specifies the height and width of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the height and width of the convolution kernel. A tuple of two integers represents the height and width of the convolution kernel respectively.

  • stride (Union[int, tuple[int]]) – The movement stride of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the movement step size in both height and width directions. A tuple of two integers represents the movement step size in the height and width directions respectively. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: The height and width of the output equal the input height and width multiplied by stride. If this mode is set, the value of padding must be 0.

    • valid: Returns a valid calculated output without padding. If this mode is set, the value of padding must be 0.

    • pad: Pads the input with padding zeros on both sides in the height and width directions. If this mode is set, the value of padding must be greater than or equal to 0.

  • padding (Union[int, tuple[int]]) – The number of padding on the height and width directions of the input. The data type is an integer or a tuple of four integers. If padding is an integer, then the top, bottom, left, and right padding are all equal to padding. If padding is a tuple of 4 integers, then the top, bottom, left, and right padding is equal to padding[0], padding[1], padding[2], and padding[3] respectively. The value should be greater than or equal to 0. Default: 0.

  • output_padding (Union[int, tuple[int]]) – The number of padding on the height and width directions of the output. The data type is an integer or a tuple of two integers. If output_padding is an integer, then the bottom and right padding are both equal to output_padding. If output_padding is a tuple of 2 integers, then the bottom and right padding are equal to output_padding[0] and output_padding[1] respectively. If output_padding is not equal to 0, pad_mode must be ‘pad’. The value should be in range of [0, max(stride, dilation)). Default: 0.

  • dilation (Union[int, tuple[int]]) – Dilation size of 2D convolution kernel. The data type is an integer or a tuple of two integers. If \(k > 1\), the kernel is sampled every k elements. The value of k on the height and width directions is in range of [1, H] and [1, W] respectively. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by group. Default: 1.

  • has_bias (bool) – Whether the Conv2dTranspose layer has a bias parameter. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of bias parameter. Available initialization methods are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ H_{out} = \text H_{in}\times \text {stride[0]} \\ W_{out} = \text W_{in}\times \text {stride[1]} \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ H_{out} = \text H_{in}\times \text {stride[0]} + \max\{(\text{dilation[0]} - 1) \times (\text{kernel_size[0]} - 1) - \text {stride[0]}, 0 \} \\ W_{out} = \text W_{in}\times \text {stride[1]} + \max\{(\text{dilation[1]} - 1) \times (\text{kernel_size[1]} - 1) - \text {stride[1]}, 0 \} \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ H_{out} = \text H_{in}\times \text {stride[0]} - (padding[0] + padding[1]) + \text{kernel_size[0]} + (\text{dilation[0]} - 1) \times (\text{kernel_size[0]} - 1) - \text {stride[0]} + \text {output_padding[0]} \\ W_{out} = \text W_{in}\times \text {stride[1]} - (padding[2] + padding[3]) + \text{kernel_size[1]} + (\text{dilation[1]} - 1) \times (\text{kernel_size[1]} - 1) - \text {stride[1]} + \text {output_padding[1]} \\ \end{array}\end{split}\]
Raises:
  • TypeError – If in_channels, out_channels or group is not an int.

  • TypeError – If kernel_size, stride, padding or dilation is neither an int nor a tuple.

  • ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 4.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Conv2dTranspose(3, 64, 4, has_bias=False, weight_init='normal', pad_mode='pad')
>>> x = Tensor(np.ones([1, 3, 16, 50]), mindspore.float32)
>>> output = net(x).shape
>>> print(output)
(1, 64, 19, 53)
class tinyms.layers.Conv1d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')[source]

Calculates the 1D convolution on the input tensor. The input is typically of shape \((N, C_{in}, L_{in})\), where \(N\) is batch size, \(C_{in}\) is a number of channels and \(L_{in}\) is a length of sequence. For the tensor of each batch, its shape is \((C_{in}, L_{in})\), and the formula is defined as:

\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]

where \(ccor\) is the cross-correlation, \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the \(j\)-th channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \(\text{kernel_size}\), where \(\text{kernel_size}\) is the width of the convolution kernel. \(\text{bias}\) is the bias parameter, and \(\text{X}\) is the input tensor. The shape of full convolution kernel is \((C_{out}, C_{in} / \text{group}, \text{kernel_size})\), where group is the number of groups to split the input x in the channel dimension.

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition.

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when group>1, condition in_channels = out_channels = group must be satisfied.

Parameters:
  • in_channels (int) – The channel number of the input tensor of the Conv1d layer.

  • out_channels (int) – The channel number of the output tensor of the Conv1d layer.

  • kernel_size (int) – Specifies the width of the 1D convolution kernel.

  • stride (int) – The movement stride of the 1D convolution kernel. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: The length of the output is the same as the input length divided by stride, rounded up. If this mode is set, the value of padding must be 0.

    • valid: Returns a valid calculated output without padding. Excess pixels that do not satisfy the calculation will be discarded. If this mode is set, the value of padding must be 0.

    • pad: Pads the input with padding zeros on both sides. If this mode is set, the value of padding must be greater than or equal to 0.

  • padding (int) – The number of padding on both sides of input. The value should be greater than or equal to 0. Default: 0.

  • dilation (int) – Dilation size of 1D convolution kernel. If \(k > 1\), the kernel is sampled every k elements. The value of k is in range of [1, L]. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by group. Default: 1.

  • has_bias (bool) – Whether the Conv1d layer has a bias parameter. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of bias parameter. Available initialization methods are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, L_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, L_{out})\).

pad_mode is ‘same’:

\[L_{out} = \left \lceil{\frac{L_{in}}{\text{stride}}} \right \rceil\]

pad_mode is ‘valid’:

\[L_{out} = \left \lceil{\frac{L_{in} - \text{dilation} \times (\text{kernel_size} - 1) } {\text{stride}}} \right \rceil\]

pad_mode is ‘pad’:

\[L_{out} = \left \lfloor{\frac{L_{in} + 2 \times padding - (\text{kernel_size} - 1) \times \text{dilation} - 1 }{\text{stride}} + 1} \right \rfloor\]
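For instance, the example below uses the default pad_mode ‘same’ with stride 1, so \(L_{out} = \lceil 640 / 1 \rceil = 640\).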
Raises:
  • TypeError – If in_channels, out_channels, kernel_size, stride, padding or dilation is not an int.

  • ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Conv1d(120, 240, 4, has_bias=False, weight_init='normal')
>>> x = Tensor(np.ones([1, 120, 640]), mindspore.float32)
>>> output = net(x).shape
>>> print(output)
(1, 240, 640)
class tinyms.layers.Conv1dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')[source]

Calculates a 1D transposed convolution, which can be regarded as Conv1d for the gradient of the input, also called deconvolution (although it is not an actual deconvolution).

The input is typically of shape \((N, C_{in}, L_{in})\), where \(N\) is batch size, \(C_{in}\) is the number of channels and \(L_{in}\) is the length of the sequence.

When Conv1d and Conv1dTranspose are initialized with the same parameters, and pad_mode is set to ‘pad’, \(dilation * (kernel\_size - 1) - padding\) zeros will be padded to both sides of the input, and they are inverses of each other with regard to the input and output shapes in this case. However, when stride > 1, Conv1d maps multiple input shapes to the same output shape. For deconvolutional networks, refer to Deconvolutional Networks.

Parameters:
  • in_channels (int) – The channel number of the input tensor of the Conv1dTranspose layer.

  • out_channels (int) – The channel number of the output tensor of the Conv1dTranspose layer.

  • kernel_size (int) – Specifies the width of the 1D convolution kernel.

  • stride (int) – The movement stride of the 1D convolution kernel. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: The length of the output equals the input length multiplied by stride. If this mode is set, the value of padding must be 0.

    • valid: Returns a valid calculated output without padding. If this mode is set, the value of padding must be 0.

    • pad: Pads the input with padding zeros on both sides. If this mode is set, the value of padding must be greater than or equal to 0.

  • padding (int) – The number of padding on both sides of input. The value should be greater than or equal to 0. Default: 0.

  • dilation (int) – Dilation size of 1D convolution kernel. If \(k > 1\), the kernel is sampled every k elements. The value of k is in range of [1, L]. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by group. When group > 1, the Ascend platform is not supported yet. Default: 1.

  • has_bias (bool) – Whether the Conv1dTranspose layer has a bias parameter. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of bias parameter. Available initialization methods are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, L_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, L_{out})\).

pad_mode is ‘same’:

\[L_{out} = L_{in} \times \text{stride}\]

pad_mode is ‘valid’:

\[L_{out} = L_{in} \times \text{stride} + \max\{(\text{dilation} - 1) \times (\text{kernel_size} - 1) - \text{stride}, 0 \}\]

pad_mode is ‘pad’:

\[L_{out} = L_{in} \times \text{stride} - 2 \times padding + \text{kernel_size} + (\text{dilation} - 1) \times (\text{kernel_size} - 1) - \text{stride}\]
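For instance, the example below uses pad_mode ‘pad’ with \(L_{in} = 50\), kernel_size 4 and the default stride, padding and dilation, so \(L_{out} = 50 \times 1 - 2 \times 0 + 4 + 0 - 1 = 53\).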
Raises:
  • TypeError – If in_channels, out_channels, kernel_size, stride, padding or dilation is not an int.

  • ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Conv1dTranspose(3, 64, 4, has_bias=False, weight_init='normal', pad_mode='pad')
>>> x = Tensor(np.ones([1, 3, 50]), mindspore.float32)
>>> output = net(x).shape
>>> print(output)
(1, 64, 53)
class tinyms.layers.Conv3d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW')[source]

Calculates the 3D convolution on the input tensor. The input is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is a number of channels, \(D_{in}, H_{in}, W_{in}\) are the depth, height and width of the feature layer respectively. For the tensor of each batch, its shape is \((C_{in}, D_{in}, H_{in}, W_{in})\), the formula is defined as:

\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]

where \(ccor\) is the cross-correlation, \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the \(j\)-th channel of the output and \(j\) is in the range of \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{X}\) is the input tensor. The shape of full convolution kernel is \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where group is the number of groups to split the input x in the channel dimension.

For more details, please refer to the paper Gradient Based Learning Applied to Document Recognition.

Note

On Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when group>1, condition in_channels = out_channels = group must be satisfied.

Parameters:
  • in_channels (int) – The channel number of the input tensor of the Conv3d layer.

  • out_channels (int) – The channel number of the output tensor of the Conv3d layer.

  • kernel_size (Union[int, tuple[int]]) – Specifies the depth, height and width of the 3D convolution kernel. The data type is an integer or a tuple of three integers. An integer represents the depth, height and width of the convolution kernel. A tuple of three integers represents the depth, height and width of the convolution kernel respectively.

  • stride (Union[int, tuple[int]]) – The movement stride of the 3D convolution kernel. The data type is an integer or a tuple of three integers. An integer represents the movement step size in depth, height and width directions. A tuple of three integers represents the movement step size in the depth, height and width directions respectively. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: The depth, height and width of the output are the same as the input divided by stride, rounded up. If this mode is set, the value of padding must be 0.

    • valid: Returns a valid calculated output without padding. Excess pixels that do not satisfy the calculation will be discarded. If this mode is set, the value of padding must be 0.

    • pad: Pads the input with padding zeros on both sides in the depth, height and width directions. If this mode is set, the value of padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int])) – The number of padding on the depth, height and width directions of the input. The data type is an integer or a tuple of six integers. If padding is an integer, then the head, tail, top, bottom, left, and right padding are all equal to padding. If padding is a tuple of six integers, then the head, tail, top, bottom, left, and right padding is equal to padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] respectively. The value should be greater than or equal to 0. Default: 0.

  • dilation (Union[int, tuple[int]]) – Dilation size of 3D convolution kernel. The data type is an integer or a tuple of three integers. If \(k > 1\), the kernel is sampled every k elements. The value of k on the depth, height and width directions is in range of [1, D], [1, H] and [1, W] respectively. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by group. Default: 1. Only 1 is currently supported.

  • has_bias (bool) – Whether the Conv3d layer has a bias parameter. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of bias parameter. Available initialization methods are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

  • data_format (str) – The optional value for data format. Currently only ‘NCDHW’ is supported. Default: ‘NCDHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\). Currently, the input data type only supports float16 and float32.

Outputs:

Tensor of shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in}}{\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) } {\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) } {\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) } {\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} + padding[0] + padding[1] - (\text{kernel_size[0]} - 1) \times \text{dilation[0]} - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[2] + padding[3] - (\text{kernel_size[1]} - 1) \times \text{dilation[1]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[4] + padding[5] - (\text{kernel_size[2]} - 1) \times \text{dilation[2]} - 1 }{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
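For instance, the example below uses the default pad_mode ‘same’ with stride 1, so \(D_{out} = \lceil 10 / 1 \rceil = 10\) and \(H_{out} = W_{out} = \lceil 32 / 1 \rceil = 32\), leaving the spatial dimensions unchanged.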
Raises:
  • TypeError – If in_channels, out_channels or group is not an int.

  • TypeError – If kernel_size, stride, padding or dilation is neither an int nor a tuple.

  • ValueError – If out_channels, kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float32)
>>> conv3d = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=(4, 3, 3))
>>> output = conv3d(x)
>>> print(output.shape)
(16, 32, 10, 32, 32)
class tinyms.layers.Conv3dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, output_padding=0, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW')[source]

Calculates a 3D transposed convolution, which can be regarded as Conv3d for the gradient of the input. It is also called deconvolution (although it is not an actual deconvolution).

The input is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is the number of channels, and \(D_{in}, H_{in}, W_{in}\) are the depth, height and width of the feature layer respectively.

When Conv3d and Conv3dTranspose are initialized with the same parameters, and pad_mode is set to ‘pad’, \(dilation * (kernel\_size - 1) - padding\) zeros will be padded to the depth, height and width directions of the input, and they are inverses of each other with regard to the input and output shapes in this case. However, when stride > 1, Conv3d maps multiple input shapes to the same output shape. For deconvolutional networks, refer to Deconvolutional Networks.

Parameters:
  • in_channels (int) – The channel number of the input tensor of the Conv3dTranspose layer.

  • out_channels (int) – The channel number of the output tensor of the Conv3dTranspose layer.

  • kernel_size (Union[int, tuple[int]]) – Specifies the depth, height and width of the 3D convolution kernel. The data type is an integer or a tuple of three integers. An integer represents the depth, height and width of the convolution kernel. A tuple of three integers represents the depth, height and width of the convolution kernel respectively.

  • stride (Union[int, tuple[int]]) – The movement stride of the 3D convolution kernel. The data type is an integer or a tuple of three integers. An integer represents the movement step size in depth, height and width directions. A tuple of three integers represents the movement step size in the depth, height and width directions respectively. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: The depth, height and width of the output equal the input depth, height and width multiplied by stride. If this mode is set, the value of padding must be 0.

    • valid: Returns a valid calculated output without padding. If this mode is set, the value of padding must be 0.

    • pad: Pads the input with padding zeros on both sides in the depth, height and width directions. If this mode is set, the value of padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int])) – The number of padding on the depth, height and width directions of the input. The data type is an integer or a tuple of six integers. If padding is an integer, then the head, tail, top, bottom, left, and right padding are all equal to padding. If padding is a tuple of six integers, then the head, tail, top, bottom, left, and right padding is equal to padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] respectively. The value should be greater than or equal to 0. Default: 0.

  • dilation (Union[int, tuple[int]]) – Dilation size of 3D convolution kernel. The data type is an integer or a tuple of three integers. If \(k > 1\), the kernel is sampled every k elements. The value of k on the depth, height and width directions is in range of [1, D], [1, H] and [1, W] respectively. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by group. Default: 1. Only 1 is currently supported.

  • output_padding (Union(int, tuple[int])) – The number of padding on the depth, height and width directions of the output. The data type is an integer or a tuple of six integers. If output_padding is an integer, then the head, tail, top, bottom, left, and right padding are all equal to output_padding. If output_padding is a tuple of six integers, then the head, tail, top, bottom, left, and right padding is equal to output_padding[0], output_padding[1], output_padding[2], output_padding[3], output_padding[4] and output_padding[5] respectively. The value should be greater than or equal to 0. Default: 0.

  • has_bias (bool) – Whether the Conv3dTranspose layer has a bias parameter. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization method of bias parameter. Available initialization methods are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

  • data_format (str) – The data format. Currently only ‘NCDHW’ is supported.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\). Currently the input data type only supports float16 and float32.

Outputs:

Tensor, the shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = D_{in} \times \text{stride[0]} \\ H_{out} = H_{in} \times \text{stride[1]} \\ W_{out} = W_{in} \times \text{stride[2]} \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D_{in} - 1) \times \text{stride[0]} + \text{dilation[0]} \times (\text{kernel_size[0]} - 1) + 1 \\ H_{out} = (H_{in} - 1) \times \text{stride[1]} + \text{dilation[1]} \times (\text{kernel_size[1]} - 1) + 1 \\ W_{out} = (W_{in} - 1) \times \text{stride[2]} + \text{dilation[2]} \times (\text{kernel_size[2]} - 1) + 1 \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D_{in} - 1) \times \text{stride[0]} - (padding[0] + padding[1]) + \text{dilation[0]} \times (\text{kernel_size[0]} - 1) + (output\_padding[0] + output\_padding[1]) + 1 \\ H_{out} = (H_{in} - 1) \times \text{stride[1]} - (padding[2] + padding[3]) + \text{dilation[1]} \times (\text{kernel_size[1]} - 1) + (output\_padding[2] + output\_padding[3]) + 1 \\ W_{out} = (W_{in} - 1) \times \text{stride[2]} - (padding[4] + padding[5]) + \text{dilation[2]} \times (\text{kernel_size[2]} - 1) + (output\_padding[4] + output\_padding[5]) + 1 \\ \end{array}\end{split}\]
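As a quick sanity check of the ‘pad’ formula, the shapes printed in the Examples section below can be reproduced by hand (stride, dilation, padding and output_padding all take their defaults there):

>>> # D_in=10, H_in=32, W_in=32, kernel_size=(4, 6, 2), stride=1,
>>> # dilation=1, padding=(0, 0, 0, 0, 0, 0), output_padding=0
>>> def out_dim(size, k, s=1, d=1, pad=0, out_pad=0):
...     return (size - 1) * s - pad + d * (k - 1) + out_pad + 1
>>> print(out_dim(10, 4), out_dim(32, 6), out_dim(32, 2))
13 37 33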
Raises:
  • TypeError – If in_channels, out_channels or group is not an int.

  • TypeError – If kernel_size, stride, padding, dilation or output_padding is neither an int nor a tuple of three.

  • TypeError – If input data type is not float16 or float32.

  • ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float32)
>>> conv3d_transpose = nn.Conv3dTranspose(in_channels=16, out_channels=3, kernel_size=(4, 6, 2),
...                                       pad_mode='pad')
>>> output = conv3d_transpose(x)
>>> print(output.shape)
(32, 3, 13, 37, 33)
class tinyms.layers.BiDense(in1_channels, in2_channels, out_channels, weight_init=None, bias_init=None, has_bias=True)[source]

The bilinear dense connected layer.

Applies dense connected layer for two inputs. This layer implements the operation as:

\[y = x_1^T A x_2 + b,\]

where \(x_{1}\) is the first input tensor, \(x_{2}\) is the second input tensor, \(A\) is a weight matrix with the same data type as the \(x_{*}\) created by the layer, and \(b\) is a bias vector with the same data type as the \(x_{*}\) created by the layer (only if has_bias is True).
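The bilinear form is easy to state in NumPy terms; the following sketch (array names are illustrative) computes the same quantity with einsum:

>>> import numpy as np
>>> x1 = np.random.randn(128, 20).astype(np.float32)
>>> x2 = np.random.randn(128, 30).astype(np.float32)
>>> A = np.random.randn(40, 20, 30).astype(np.float32)   # (out, in1, in2)
>>> b = np.random.randn(40).astype(np.float32)
>>> # y[n, o] = sum over i, j of x1[n, i] * A[o, i, j] * x2[n, j], plus b[o]
>>> y = np.einsum('ni,oij,nj->no', x1, A, x2) + b
>>> print(y.shape)
(128, 40)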

Parameters:
  • in1_channels (int) – The number of channels in the input1 space.

  • in2_channels (int) – The number of channels in the input2 space.

  • out_channels (int) – The number of channels in the output space.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable weight_init parameter. The values of str refer to the function initializer. Default: None.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable bias_init parameter. The values of str refer to the function initializer. Default: None.

  • has_bias (bool) – Specifies whether the layer uses \(\text{bias}\) vector. Default: True.

Shape:
  • input1 - \((*, H_{in1})\) where \(H_{in1}=\text{in1_channels}\) and \(*\) means any number of additional dimensions including none. All but the last dimension of the inputs should be the same.

  • input2 - \((*, H_{in2})\) where \(H_{in2}=\text{in2_channels}\) and \(*\) means any number of additional dimensions including none. All but the last dimension of the inputs should be the same.

  • output - \((*, H_{out})\) where \(H_{out}=\text{out_channels}\) and \(*\) means any number of additional dimensions including none. All but the last dimension are the same shape as the inputs.

Dtype:
  • input1 (Tensor) - The dtype must be float16 or float32 and be the same as input2.

  • input2 (Tensor) - The dtype must be float16 or float32 and be the same as input1.

  • output (Tensor) - With the same dtype as the inputs.

Weights:
  • weight (Parameter) - The learnable weights with shape \((\text{out_channels}, \text{in1_channels}, \text{in2_channels})\). When weight_init is None, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in1_channels}}\).

  • bias (Parameter) - The learnable bias of shape \((\text{out_channels})\). If has_bias is True and bias_init is None, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in1_channels}}\).

Raises:
  • TypeError – If in1_channels, in2_channels or out_channels is not an int.

  • TypeError – If has_bias is not a bool.

  • ValueError – If length of shape of weight_init is not equal to 3 or shape[0] of weight_init is not equal to out_channels or shape[1] of weight_init is not equal to in1_channels or shape[2] of weight_init is not equal to in2_channels.

  • ValueError – If length of shape of bias_init is not equal to 1 or shape[0] of bias_init is not equal to out_channels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor(np.random.randn(128, 20), mindspore.float32)
>>> x2 = Tensor(np.random.randn(128, 30), mindspore.float32)
>>> net = nn.BiDense(20, 30, 40)
>>> output = net(x1, x2)
>>> print(output.shape)
(128, 40)
class tinyms.layers.LSTMCell(**kwargs)[source]

An LSTM (Long Short-Term Memory) cell.

\[\begin{split}\begin{array}{ll} \\ i_t = \sigma(W_{ix} x_t + b_{ix} + W_{ih} h_{(t-1)} + b_{ih}) \\ f_t = \sigma(W_{fx} x_t + b_{fx} + W_{fh} h_{(t-1)} + b_{fh}) \\ \tilde{c}_t = \tanh(W_{cx} x_t + b_{cx} + W_{ch} h_{(t-1)} + b_{ch}) \\ o_t = \sigma(W_{ox} x_t + b_{ox} + W_{oh} h_{(t-1)} + b_{oh}) \\ c_t = f_t * c_{(t-1)} + i_t * \tilde{c}_t \\ h_t = o_t * \tanh(c_t) \\ \end{array}\end{split}\]

Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ix}, b_{ix}\) are the weight and bias used to transform from input \(x\) to \(i\). Details can be found in the papers LONG SHORT-TERM MEMORY and Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling.

The encapsulated LSTMCell can be simplified to the following formula:

\[h^{'},c^{'} = LSTMCell(x, (h_0, c_0))\]
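For intuition, one cell step can be written out in plain NumPy. This is an illustrative sketch of the equations above (the i/f/g/o gate packing shown here is an assumption for the sketch, not the layer's internal layout):

>>> import numpy as np
>>> def sigmoid(v):
...     return 1.0 / (1.0 + np.exp(-v))
>>> def lstm_cell_step(x, h, c, W_ih, W_hh, b_ih, b_hh):
...     gates = x @ W_ih.T + b_ih + h @ W_hh.T + b_hh
...     i, f, g, o = np.split(gates, 4, axis=-1)
...     c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
...     h_new = sigmoid(o) * np.tanh(c_new)
...     return h_new, c_new
>>> h, c = lstm_cell_step(np.ones((3, 10)), np.zeros((3, 16)), np.zeros((3, 16)),
...                       np.zeros((64, 10)), np.zeros((64, 16)), np.zeros(64), np.zeros(64))
>>> print(h.shape, c.shape)
(3, 16) (3, 16)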
Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh. Default: True.

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, input\_size)\).

  • hx (tuple) - A tuple of two Tensors (h_0, c_0) both of data type mindspore.float32 and shape \((batch\_size, hidden\_size)\). The data type of hx must be the same as x.

Outputs:
  • hx’ (Tensor) - A tuple of two Tensors (h’, c’) both of shape \((batch\_size, hidden\_size)\).

Raises:
  • TypeError – If input_size or hidden_size is not an int.

  • TypeError – If has_bias is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.LSTMCell(10, 16)
>>> x = Tensor(np.ones([5, 3, 10]).astype(np.float32))
>>> h = Tensor(np.ones([3, 16]).astype(np.float32))
>>> c = Tensor(np.ones([3, 16]).astype(np.float32))
>>> output = []
>>> for i in range(5):
...     hx = net(x[i], (h, c))
...     output.append(hx)
>>> print(output[0][0].shape)
(3, 16)
class tinyms.layers.GRUCell(input_size: int, hidden_size: int, has_bias: bool = True)[source]

A GRU (Gated Recurrent Unit) cell.

\[\begin{split}\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}\end{split}\]

Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ir}, b_{ir}\) are the weight and bias used to transform from input \(x\) to \(r\). Details can be found in the paper Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation.
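The same equations in a short NumPy sketch (the r/z/n gate packing is an assumption made for illustration):

>>> import numpy as np
>>> def sigmoid(v):
...     return 1.0 / (1.0 + np.exp(-v))
>>> def gru_cell_step(x, h, W_ih, W_hh, b_ih, b_hh):
...     xr, xz, xn = np.split(x @ W_ih.T + b_ih, 3, axis=-1)
...     hr, hz, hn = np.split(h @ W_hh.T + b_hh, 3, axis=-1)
...     r = sigmoid(xr + hr)
...     z = sigmoid(xz + hz)
...     n = np.tanh(xn + r * hn)
...     return (1 - z) * n + z * h
>>> h_new = gru_cell_step(np.ones((3, 10)), np.zeros((3, 16)),
...                       np.zeros((48, 10)), np.zeros((48, 16)), np.zeros(48), np.zeros(48))
>>> print(h_new.shape)
(3, 16)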

Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • has_bias (bool) – Whether the cell has bias b_in and b_hn. Default: True.

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, input\_size)\).

  • hx (Tensor) - Tensor of data type mindspore.float32 and shape \((batch\_size, hidden\_size)\). Data type of hx must be the same as x.

Outputs:
  • hx’ (Tensor) - Tensor of shape \((batch\_size, hidden\_size)\).

Raises:
  • TypeError – If input_size or hidden_size is not an int.

  • TypeError – If has_bias is not a bool.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.GRUCell(10, 16)
>>> x = Tensor(np.ones([5, 3, 10]).astype(np.float32))
>>> hx = Tensor(np.ones([3, 16]).astype(np.float32))
>>> output = []
>>> for i in range(5):
...     hx = net(x[i], hx)
...     output.append(hx)
>>> print(output[0].shape)
(3, 16)
class tinyms.layers.RNNCell(input_size: int, hidden_size: int, has_bias: bool = True, nonlinearity: str = 'tanh')[source]

An Elman RNN cell with tanh or ReLU non-linearity.

\[h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})\]

Here \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, and \(h_{(t-1)}\) is the hidden state of the previous layer at time \(t-1\) or the initial hidden state at time 0. If nonlinearity is relu, then relu is used instead of tanh.
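Since the whole cell is a single affine map followed by a point-wise nonlinearity, a NumPy sketch is essentially one line (the weights here are illustrative placeholders):

>>> import numpy as np
>>> x, h = np.ones((3, 10)), np.zeros((3, 16))
>>> W_ih, W_hh = np.zeros((16, 10)), np.zeros((16, 16))
>>> b_ih, b_hh = np.zeros(16), np.zeros(16)
>>> h_new = np.tanh(x @ W_ih.T + b_ih + h @ W_hh.T + b_hh)
>>> print(h_new.shape)
(3, 16)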

Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh. Default: True.

  • nonlinearity (str) – The non-linearity to use. Can be either tanh or relu. Default: tanh.

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, input\_size)\) .

  • hx (Tensor) - Tensor of data type mindspore.float32 and shape \((batch\_size, hidden\_size)\) . Data type of hx must be the same as x.

Outputs:
  • hx’ (Tensor) - Tensor of shape \((batch\_size, hidden\_size)\) .

Raises:
  • TypeError – If input_size or hidden_size is not an int or not greater than 0.

  • TypeError – If has_bias is not a bool.

  • ValueError – If nonlinearity is not in [‘tanh’, ‘relu’].

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.RNNCell(10, 16)
>>> x = Tensor(np.ones([5, 3, 10]).astype(np.float32))
>>> hx = Tensor(np.ones([3, 16]).astype(np.float32))
>>> output = []
>>> for i in range(5):
...     hx = net(x[i], hx)
...     output.append(hx)
>>> print(output[0].shape)
(3, 16)
class tinyms.layers.LSTM(*args, **kwargs)[source]

Stacked LSTM (Long Short-Term Memory) layers.

Apply LSTM layer to the input.

There are two pipelines connecting two consecutive cells in an LSTM model; one is the cell state pipeline and the other is the hidden state pipeline. Denote two consecutive time nodes as \(t-1\) and \(t\). Given an input \(x_t\) at time \(t\), a hidden state \(h_{t-1}\) and a cell state \(c_{t-1}\) of the layer at time \(t-1\), the cell state and hidden state at time \(t\) are computed using a gating mechanism. The input gate \(i_t\) is designed to protect the cell from perturbation by irrelevant inputs. The forget gate \(f_t\) affords protection of the cell by forgetting some information in the past, which is stored in \(h_{t-1}\). The output gate \(o_t\) protects other units from perturbation by currently irrelevant memory contents. The candidate cell state \(\tilde{c}_t\) is calculated with the current input, on which the input gate will be applied. Finally, the current cell state \(c_{t}\) and hidden state \(h_{t}\) are computed with the calculated gates and cell states. The complete formulation is as follows.

\[\begin{split}\begin{array}{ll} \\ i_t = \sigma(W_{ix} x_t + b_{ix} + W_{ih} h_{(t-1)} + b_{ih}) \\ f_t = \sigma(W_{fx} x_t + b_{fx} + W_{fh} h_{(t-1)} + b_{fh}) \\ \tilde{c}_t = \tanh(W_{cx} x_t + b_{cx} + W_{ch} h_{(t-1)} + b_{ch}) \\ o_t = \sigma(W_{ox} x_t + b_{ox} + W_{oh} h_{(t-1)} + b_{oh}) \\ c_t = f_t * c_{(t-1)} + i_t * \tilde{c}_t \\ h_t = o_t * \tanh(c_t) \\ \end{array}\end{split}\]

Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ix}, b_{ix}\) are the weight and bias used to transform from input \(x\) to \(i\). Details can be found in the papers LONG SHORT-TERM MEMORY and Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling.

The LSTM layer encapsulates the recurrence over the time steps of the sequence: given the input sequence and an initial state, it returns the matrix of hidden states concatenated across all time steps, together with the hidden state of the last time step. The hidden state of the last time step is commonly used as the encoded feature of the input sequence and passed to the next layer.

\[h_{0:n},(h_{n}, c_{n}) = LSTM(x_{0:n},(h_{0},c_{0}))\]
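Concretely, the per-step hidden states and the final state can be pulled apart as below (a usage sketch; shapes follow the Inputs/Outputs sections):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> net = nn.LSTM(10, 16, num_layers=1, batch_first=True)
>>> x = Tensor(np.ones([3, 5, 10]).astype(np.float32))
>>> h0 = Tensor(np.zeros([1, 3, 16]).astype(np.float32))
>>> c0 = Tensor(np.zeros([1, 3, 16]).astype(np.float32))
>>> output, (hn, cn) = net(x, (h0, c0))
>>> # output holds every time step; hn is the last step's hidden state,
>>> # often used as the encoded feature of the sequence
>>> print(output.shape, hn.shape)
(3, 5, 16) (1, 3, 16)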
Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • num_layers (int) – Number of layers of stacked LSTM . Default: 1.

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh. Default: True.

  • batch_first (bool) – Specifies whether the first dimension of input x is batch_size. Default: False.

  • dropout (float, int) – If not 0, appends a Dropout layer to the outputs of each LSTM layer except the last layer. Default: 0. The value must be in the range [0.0, 1.0).

  • bidirectional (bool) – Specifies whether it is a bidirectional LSTM, num_directions=2 if bidirectional=True otherwise 1. Default: False.

Inputs:
  • x (Tensor) - Tensor of data type mindspore.float32 or mindspore.float16 and shape \((seq\_len, batch\_size, input\_size)\) or \((batch\_size, seq\_len, input\_size)\).

  • hx (tuple) - A tuple of two Tensors (h_0, c_0) both of data type mindspore.float32 or mindspore.float16 and shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\). The data type of hx must be the same as x.

  • seq_length (Tensor) - The length of each sequence in an input batch. Tensor of shape \((batch\_size)\). Default: None. This input indicates the real sequence length before padding, to prevent padded elements from being used to compute the hidden state and affecting the final output. It is recommended to use this input when x contains padding elements.

Outputs:

Tuple, a tuple contains (output, (h_n, c_n)).

  • output (Tensor) - Tensor of shape \((seq\_len, batch\_size, num\_directions * hidden\_size)\) or \((batch\_size, seq\_len, num\_directions * hidden\_size)\) when batch_first is True.

  • hx_n (tuple) - A tuple of two Tensor (h_n, c_n) both of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\) .

Raises:
  • TypeError – If input_size, hidden_size or num_layers is not an int.

  • TypeError – If has_bias, batch_first or bidirectional is not a bool.

  • TypeError – If dropout is not a float.

  • ValueError – If dropout is not in range [0.0, 1.0).

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.LSTM(10, 16, 2, has_bias=True, batch_first=True, bidirectional=False)
>>> x = Tensor(np.ones([3, 5, 10]).astype(np.float32))
>>> h0 = Tensor(np.ones([1 * 2, 3, 16]).astype(np.float32))
>>> c0 = Tensor(np.ones([1 * 2, 3, 16]).astype(np.float32))
>>> output, (hn, cn) = net(x, (h0, c0))
>>> print(output.shape)
(3, 5, 16)
class tinyms.layers.GRU(*args, **kwargs)[source]

Stacked GRU (Gated Recurrent Unit) layers.

Apply GRU layer to the input.

There are two gates in a GRU model: one is the update gate and the other is the reset gate. Denote two consecutive time nodes as \(t-1\) and \(t\). Given an input \(x_t\) at time \(t\) and a hidden state \(h_{t-1}\), the update and reset gates at time \(t\) are computed using a gating mechanism. The update gate \(z_t\) is designed to protect the cell from perturbation by irrelevant inputs and the past hidden state. The reset gate \(r_t\) determines how much information should be reset from the old hidden state. The new memory state \(n_t\) is calculated with the current input, on which the reset gate will be applied. Finally, the current hidden state \(h_{t}\) is computed with the calculated update gate and new memory state. The complete formulation is as follows:

\[\begin{split}\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array}\end{split}\]

Here \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product. \(W, b\) are learnable weights between the output and the input in the formula. For instance, \(W_{ir}, b_{ir}\) are the weight and bias used to transform from input \(x\) to \(r\). Details can be found in the paper Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation.

Note

When using GRU on Ascend, the hidden size only supports multiples of 16.

Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • num_layers (int) – Number of layers of stacked GRU. Default: 1.

  • has_bias (bool) – Whether the cell has bias b_in and b_hn. Default: True.

  • batch_first (bool) – Specifies whether the first dimension of input x is batch_size. Default: False.

  • dropout (float) – If not 0.0, appends a Dropout layer to the outputs of each GRU layer except the last layer. Default: 0.0. The value must be in the range [0.0, 1.0).

  • bidirectional (bool) – Specifies whether it is a bidirectional GRU, num_directions=2 if bidirectional=True otherwise 1. Default: False.

Inputs:
  • x (Tensor) - Tensor of data type mindspore.float32 or mindspore.float16 and shape (seq_len, batch_size, input_size) or (batch_size, seq_len, input_size).

  • hx (Tensor) - Tensor of data type mindspore.float32 or mindspore.float16 and shape (num_directions * num_layers, batch_size, hidden_size). The data type of hx must be the same as x.

  • seq_length (Tensor) - The length of each sequence in an input batch. Tensor of shape \((\text{batch_size})\). Default: None. This input indicates the real sequence length before padding, to prevent padded elements from being used to compute the hidden state and affecting the final output. It is recommended to use this input when x contains padding elements.

Outputs:

Tuple, a tuple contains (output, h_n).

  • output (Tensor) - Tensor of shape (seq_len, batch_size, num_directions * hidden_size) or (batch_size, seq_len, num_directions * hidden_size).

  • hx_n (Tensor) - Tensor of shape (num_directions * num_layers, batch_size, hidden_size).

Raises:
  • TypeError – If input_size, hidden_size or num_layers is not an int.

  • TypeError – If has_bias, batch_first or bidirectional is not a bool.

  • TypeError – If dropout is not a float.

  • ValueError – If dropout is not in range [0.0, 1.0).

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.GRU(10, 16, 2, has_bias=True, batch_first=True, bidirectional=False)
>>> x = Tensor(np.ones([3, 5, 10]).astype(np.float32))
>>> h0 = Tensor(np.ones([1 * 2, 3, 16]).astype(np.float32))
>>> output, hn = net(x, h0)
>>> print(output.shape)
(3, 5, 16)
class tinyms.layers.RNN(*args, **kwargs)[source]

Stacked Elman RNN layers.

Apply RNN layer with \(\tanh\) or \(\text{ReLU}\) non-linearity to the input.

For each element in the input sequence, each layer computes the following function:

\[h_t = activation(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})\]

Here \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, and \(h_{(t-1)}\) is the hidden state of the previous layer at time \(t-1\) or the initial hidden state at time 0. If nonlinearity is 'relu', then \(\text{ReLU}\) is used instead of \(\tanh\).

Parameters:
  • input_size (int) – Number of features of input.

  • hidden_size (int) – Number of features of hidden layer.

  • num_layers (int) – Number of layers of stacked RNN. Default: 1.

  • nonlinearity (str) – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'

  • has_bias (bool) – Whether the cell has bias b_ih and b_hh. Default: True.

  • batch_first (bool) – Specifies whether the first dimension of input x is batch_size. Default: False.

  • dropout (float) – If not 0.0, appends a Dropout layer to the outputs of each RNN layer except the last layer. Default: 0.0. The value must be in the range [0.0, 1.0).

  • bidirectional (bool) – Specifies whether it is a bidirectional RNN, num_directions=2 if bidirectional=True otherwise 1. Default: False.

Inputs:
  • x (Tensor) - Tensor of data type mindspore.float32 or mindspore.float16 and shape \((seq\_len, batch\_size, input\_size)\) or \((batch\_size, seq\_len, input\_size)\) .

  • hx (Tensor) - Tensor of data type mindspore.float32 or mindspore.float16 and shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\) . The data type of hx must be the same as x.

  • seq_length (Tensor) - The length of each sequence in an input batch. Tensor of shape \((batch\_size)\). Default: None. This input indicates the real sequence length before padding, to prevent padded elements from being used to compute the hidden state and affecting the final output. It is recommended to use this input when x contains padding elements.

Outputs:

Tuple, a tuple contains (output, hx_n).

  • output (Tensor) - Tensor of shape \((seq\_len, batch\_size, num\_directions * hidden\_size)\) or \((batch\_size, seq\_len, num\_directions * hidden\_size)\) .

  • hx_n (Tensor) - Tensor of shape \((num\_directions * num\_layers, batch\_size, hidden\_size)\) .

Raises:
  • TypeError – If input_size, hidden_size or num_layers is not an int.

  • TypeError – If has_bias, batch_first or bidirectional is not a bool.

  • TypeError – If dropout is not a float.

  • ValueError – If dropout is not in range [0.0, 1.0).

  • ValueError – If nonlinearity is not in [‘tanh’, ‘relu’].

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.RNN(10, 16, 2, has_bias=True, batch_first=True, bidirectional=False)
>>> x = Tensor(np.ones([3, 5, 10]).astype(np.float32))
>>> h0 = Tensor(np.ones([1 * 2, 3, 16]).astype(np.float32))
>>> output, hn = net(x, h0)
>>> print(output.shape)
(3, 5, 16)
class tinyms.layers.Dropout(keep_prob=0.5, p=None, dtype=mindspore.float32)[source]

Dropout layer for the input.

Dropout is a regularization method. The operator randomly sets some outputs of neurons to 0 according to the dropout probability. During inference, this layer returns the same Tensor as the input x.

This technique is proposed in the paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting and has been proven to be effective in reducing over-fitting and preventing co-adaptation of neurons. See more details in Improving neural networks by preventing co-adaptation of feature detectors.
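During training, the kept elements are scaled by \(\frac{1}{1-p}\) so that the expected value of the output matches the input; a quick sketch:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> net = nn.Dropout(p=0.5)
>>> net.set_train()
>>> x = Tensor(np.ones([2, 4]), mindspore.float32)
>>> # Surviving entries come out as 1 / (1 - 0.5) = 2.0, dropped ones as 0.0;
>>> # after set_train(False) the input passes through unchanged.
>>> output = net(x)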

Note

  • Each channel will be zeroed out independently on every construct call.

  • Parameter keep_prob will be removed in a future version; please use parameter p instead. Parameter p is the probability that an element of the input tensor is zeroed.

  • Parameter dtype will be removed in a future version. It is not recommended to define this parameter.

Parameters:
  • keep_prob (float) – Deprecated. The keep rate, greater than 0 and less than or equal to 1. E.g. keep_prob=0.9 drops out 10% of input neurons. Default: 0.5.

  • p (Union[float, int, None]) – The dropout rate, greater than or equal to 0 and less than 1. E.g. p=0.9 drops out 90% of input neurons. Default: None.

  • dtype (mindspore.dtype) – Data type of input. Default: mindspore.float32.

Inputs:
  • x (Tensor) - The input of Dropout with data type of float16 or float32.

Outputs:

Tensor, output tensor with the same shape as the x.

Raises:
  • TypeError – If keep_prob is not a float.

  • TypeError – If the dtype of p is not float or int.

  • TypeError – If dtype of x is neither float16 nor float32.

  • ValueError – If keep_prob is not in range (0, 1].

  • ValueError – If p is not in range [0, 1).

  • ValueError – If length of shape of x is less than 1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.ones([2, 2, 3]), mindspore.float32)
>>> net = nn.Dropout(p=0.2)
>>> net.set_train()
>>> output = net(x)
>>> print(output.shape)
(2, 2, 3)
class tinyms.layers.Flatten(start_dim=1, end_dim=-1)[source]

Flatten the input Tensor along dimensions from start_dim to end_dim.

Parameters:
  • start_dim (int, optional) – The first dimension to flatten. Default: 1.

  • end_dim (int, optional) – The last dimension to flatten. Default: -1.

Inputs:
  • x (Tensor) - The input Tensor to be flattened.

Outputs:

Tensor. If no dimensions are flattened, returns the original x; otherwise returns the flattened Tensor. If x is a 0-dimensional Tensor, a 1-dimensional Tensor will be returned.

Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If start_dim or end_dim is not int.

  • ValueError – If start_dim is greater than end_dim after canonicalization.

  • ValueError – If start_dim or end_dim is not in range of [-x.dim, x.dim-1].

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[[1.2, 1.2], [2.1, 2.1]], [[2.2, 2.2], [3.2, 3.2]]]), mindspore.float32)
>>> net = nn.Flatten()
>>> output = net(x)
>>> print(output)
[[1.2 1.2 2.1 2.1]
 [2.2 2.2 3.2 3.2]]
>>> print(f"before flatten the x shape is {x.shape}")
before flatten the x shape is (2, 2, 2)
>>> print(f"after flatten the output shape is {output.shape}")
after flatten the output shape is (2, 4)
class tinyms.layers.Dense(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, activation=None)[source]

The dense connected layer.

Applies dense connected layer for the input. This layer implements the operation as:

\[\text{outputs} = \text{activation}(\text{X} * \text{kernel} + \text{bias}),\]

where \(X\) is the input tensors, \(\text{activation}\) is the activation function passed as the activation argument (if passed in), \(\text{kernel}\) is a weight matrix with the same data type as the \(X\) created by the layer, and \(\text{bias}\) is a bias vector with the same data type as the \(X\) created by the layer (only if has_bias is True).

Parameters:
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable weight_init parameter. The dtype is same as x. The values of str refer to the function initializer. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable bias_init parameter. The dtype is same as x. The values of str refer to the function initializer. Default: ‘zeros’.

  • has_bias (bool) – Specifies whether the layer uses a bias vector \(\text{bias}\). Default: True.

  • activation (Union[str, Cell, Primitive, None]) – Activation function applied to the output of the fully connected layer. Both an activation name, e.g. ‘relu’, and a mindspore activation function, e.g. mindspore.ops.ReLU(), are supported. Default: None.

Inputs:
  • x (Tensor) - Tensor of shape \((*, in\_channels)\). The in_channels in Args should be equal to \(in\_channels\) in Inputs.

Outputs:

Tensor of shape \((*, out\_channels)\).

Raises:
  • TypeError – If in_channels or out_channels is not an int.

  • TypeError – If has_bias is not a bool.

  • TypeError – If activation is not one of str, Cell, Primitive, None.

  • ValueError – If length of shape of weight_init is not equal to 2 or shape[0] of weight_init is not equal to out_channels or shape[1] of weight_init is not equal to in_channels.

  • ValueError – If length of shape of bias_init is not equal to 1 or shape[0] of bias_init is not equal to out_channels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([[180, 234, 154], [244, 48, 247]]), mindspore.float32)
>>> net = nn.Dense(3, 4)
>>> output = net(x)
>>> print(output.shape)
(2, 4)
class tinyms.layers.ClipByNorm(axis=None)[source]

Clips tensor values to a maximum \(L_2\)-norm.

The output of this layer remains the same if the \(L_2\)-norm of the input tensor is not greater than the argument clip_norm. Otherwise the tensor will be normalized as:

\[\text{output}(X) = \frac{\text{clip_norm} * X}{L_2(X)},\]

where \(L_2(X)\) is the \(L_2\)-norm of \(X\).
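A hand computation of the clipping rule (plain NumPy; the values are illustrative):

>>> import numpy as np
>>> x = np.array([[3.0, 4.0]], dtype=np.float32)   # L2 norm is 5
>>> clip_norm = 2.0
>>> l2 = np.sqrt((x ** 2).sum())
>>> out = x if l2 <= clip_norm else clip_norm * x / l2
>>> print(out)
[[1.2 1.6]]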

Parameters:

axis (Union[None, int, tuple(int)]) – Computes the L2-norm along the specified dimensions. Default: None, which computes the norm over all dimensions.

Inputs:
  • x (Tensor) - Tensor of shape N-D. The type must be float32 or float16.

  • clip_norm (Tensor) - A scalar Tensor of shape \(()\) or \((1)\), or a tensor whose shape can be broadcast to the shape of x.

Outputs:

Tensor, clipped tensor with the same shape as the x, whose type is float32.

Raises:
  • TypeError – If axis is not one of None, int, tuple.

  • TypeError – If dtype of x is neither float32 nor float16.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.ClipByNorm()
>>> x = Tensor(np.random.randint(0, 10, [4, 16]), mindspore.float32)
>>> clip_norm = Tensor(np.array([100]).astype(np.float32))
>>> output = net(x, clip_norm)
>>> print(output.shape)
(4, 16)
class tinyms.layers.Norm(axis=(), keep_dims=False)[source]

‘nn.Norm’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.norm’ instead.

class tinyms.layers.OneHot(axis=-1, depth=1, on_value=1.0, off_value=0.0, dtype=mindspore.float32)[source]

‘nn.OneHot’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.one_hot’ instead.

class tinyms.layers.Pad(paddings, mode='CONSTANT')[source]

Pads the input tensor according to the paddings and mode.

Parameters:
  • paddings (tuple) –

    The shape of the parameter paddings is \((N, 2)\). N is the rank of the input data. All elements of paddings are of int type. For the D-th dimension of the x, paddings[D, 0] indicates how many values to pad ahead of the D-th dimension of the input tensor, and paddings[D, 1] indicates how many values to pad behind the D-th dimension of the input tensor. The padded size of each dimension D of the output is: \(paddings[D, 0] + input\_x.dim\_size(D) + paddings[D, 1]\), e.g.:

    mode = "CONSTANT".
    paddings = [[1,1], [2,2]].
    x = [[1,2,3], [4,5,6], [7,8,9]].
    # The above can be seen: 1st dimension of `x` is 3, 2nd dimension of `x` is 3.
    # Substitute into the formula to get:
    # 1st dimension of output is paddings[0][0] + 3 + paddings[0][1] = 1 + 3 + 1 = 5.
    # 2nd dimension of output is paddings[1][0] + 3 + paddings[1][1] = 2 + 3 + 2 = 7.
    # So the shape of output is (5, 7).
    

  • mode (str) – Specifies padding mode. The optional values are “CONSTANT”, “REFLECT”, “SYMMETRIC”. Default: “CONSTANT”.

Inputs:
  • x (Tensor) - The input tensor.

Outputs:

Tensor, the tensor after padding.

  • If mode is “CONSTANT”, it fills the edge with 0, regardless of the values of the x. If the x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[0,0,0,0,0,0,0], [0,0,1,2,3,0,0], [0,0,4,5,6,0,0], [0,0,7,8,9,0,0], [0,0,0,0,0,0,0]].

  • If mode is “REFLECT”, it uses a way of symmetrical copying through the axis of symmetry to fill in. If the x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]].

  • If mode is “SYMMETRIC”, the filling method is similar to the “REFLECT”. It is also copied according to the symmetry axis, except that it includes the symmetry axis. If the x is [[1,2,3], [4,5,6], [7,8,9]] and paddings is [[1,1], [2,2]], then the Outputs is [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]].

Raises:
  • TypeError – If paddings is not a tuple.

  • ValueError – If length of paddings is more than 4 or its shape is not \((N, 2)\) .

  • ValueError – If mode is not one of ‘CONSTANT’, ‘REFLECT’, ‘SYMMETRIC’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> import numpy as np
>>> # If `mode` is "CONSTANT"
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="CONSTANT")
...     def construct(self, x):
...         return self.pad(x)
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
>>> pad = Net()
>>> output = pad(x)
>>> print(output)
[[0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 1. 2. 3. 0. 0.]
 [0. 0. 4. 5. 6. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0.]]
>>> # Another way to call
>>> pad = ops.Pad(paddings=((1, 1), (2, 2)))
>>> # From the above code, we can see following:
>>> # "paddings=((1, 1), (2, 2))",
>>> # paddings[0][0] = 1, indicates a row of values is filled on top of the input data in the 1st dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0.]
>>> #  [1. 2. 3.]
>>> #  [4. 5. 6.]]
>>> # paddings[0][1] = 1, indicates a row of values is filled below the input data in the 1st dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0.]
>>> #  [1. 2. 3.]
>>> #  [4. 5. 6.]
>>> #  [0. 0. 0.]]
>>> # paddings[1][0] = 2, indicates 2 columns of values are filled in front of the input data in the 2nd dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0. 0. 0.]
>>> #  [0. 0. 1. 2. 3.]
>>> #  [0. 0. 4. 5. 6.]
>>> #  [0. 0. 0. 0. 0.]]
>>> # paddings[1][1] = 2, indicates 2 columns of values are filled behind the input data in the 2nd dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0. 0. 0. 0. 0.]
>>> #  [0. 0. 1. 2. 3. 0. 0.]
>>> #  [0. 0. 4. 5. 6. 0. 0.]
>>> #  [0. 0. 0. 0. 0. 0. 0.]]
>>> output = pad(x)
>>> print(output)
[[0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 1. 2. 3. 0. 0.]
 [0. 0. 4. 5. 6. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0.]]
>>> # if mode is "REFLECT"
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="REFLECT")
...     def construct(self, x):
...         return self.pad(x)
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
>>> pad = Net()
>>> output = pad(x)
>>> print(output)
[[6. 5. 4. 5. 6. 5. 4.]
 [3. 2. 1. 2. 3. 2. 1.]
 [6. 5. 4. 5. 6. 5. 4.]
 [3. 2. 1. 2. 3. 2. 1.]]
>>> # if mode is "SYMMETRIC"
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="SYMMETRIC")
...     def construct(self, x):
...         return self.pad(x)
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
>>> pad = Net()
>>> output = pad(x)
>>> print(output)
[[2. 1. 1. 2. 3. 3. 2.]
 [2. 1. 1. 2. 3. 3. 2.]
 [5. 4. 4. 5. 6. 6. 5.]
 [5. 4. 4. 5. 6. 6. 5.]]
class tinyms.layers.Unfold(ksizes, strides, rates, padding='valid')[source]

Extracts patches from images. The input tensor must be a 4-D tensor and the data format is NCHW.

Parameters:
  • ksizes (Union[tuple[int], list[int]]) – The size of sliding window, must be a tuple or a list of integers, and the format is [1, ksize_row, ksize_col, 1].

  • strides (Union[tuple[int], list[int]]) – Distance between the centers of the two consecutive patches, must be a tuple or list of int, and the format is [1, stride_row, stride_col, 1].

  • rates (Union[tuple[int], list[int]]) – In each extracted patch, the gap between the corresponding dimension pixel positions, must be a tuple or a list of integers, and the format is [1, rate_row, rate_col, 1].

  • padding (str) –

    The type of padding algorithm, is a string whose value is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Means that the patch can take the part beyond the original image, and this part is filled with 0.

    • valid: Means that the taken patch area must be completely covered in the original image.

Inputs:
  • x (Tensor) - A 4-D tensor whose shape is [in_batch, in_depth, in_row, in_col] and data type is number.

Outputs:

Tensor, a 4-D tensor whose data type is same as x, and the shape is [out_batch, out_depth, out_row, out_col] where out_batch is the same as the in_batch.

  • \(out\_depth = ksize\_row * ksize\_col * in\_depth\)

  • \(out\_row = (in\_row - (ksize\_row + (ksize\_row - 1) * (rate\_row - 1))) // stride\_row + 1\)

  • \(out\_col = (in\_col - (ksize\_col + (ksize\_col - 1) * (rate\_col - 1))) // stride\_col + 1\)

Raises:
  • TypeError – If ksizes, strides or rates is neither a tuple nor list.

  • ValueError – If shape of ksizes, strides or rates is not (1, x_row, x_col, 1).

  • ValueError – If the second and third element of ksizes, strides or rates is less than 1.

Supported Platforms:

Ascend GPU

Examples

>>> net = Unfold(ksizes=[1, 2, 2, 1], strides=[1, 2, 2, 1], rates=[1, 2, 2, 1])
>>> # As stated in the above code:
>>> # ksize_row = 2, ksize_col = 2, rate_row = 2, rate_col = 2, stride_row = 2, stride_col = 2.
>>> image = Tensor(np.ones([2, 3, 6, 6]), dtype=mstype.float16)
>>> # in_batch = 2, in_depth = 3, in_row = 6, in_col = 6.
>>> # Substituting the formula to get:
>>> # out_batch = in_batch = 2
>>> # out_depth = 2 * 2 * 3 = 12
>>> # out_row = (6 - (2 + (2 - 1) * (2 - 1))) // 2 + 1 = 2
>>> # out_col = (6 - (2 + (2 - 1) * (2 - 1))) // 2 + 1 = 2
>>> output = net(image)
>>> print(output.shape)
(2, 12, 2, 2)
class tinyms.layers.Tril[source]

‘nn.Tril’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.tril’ instead.

class tinyms.layers.Triu[source]

‘nn.Triu’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.triu’ instead.

class tinyms.layers.ResizeBilinear(half_pixel_centers=False)[source]

‘nn.ResizeBilinear’ is deprecated from version 2.0 and will be removed in a future version, use mindspore.ops.ResizeBilinearV2 or mindspore.ops.interpolate() instead.

Supported Platforms:

Deprecated

Examples

>>> x = Tensor([[[[1, 2, 3, 4], [5, 6, 7, 8]]]], mindspore.float32)
>>> resize_bilinear = nn.ResizeBilinear()
>>> result = resize_bilinear(x, size=(5,5))
>>> print(x)
[[[[1. 2. 3. 4.]
   [5. 6. 7. 8.]]]]
>>> print(result)
[[[[1.        1.8       2.6       3.4       4.       ]
   [2.6       3.4       4.2000003 5.        5.6000004]
   [4.2       5.0000005 5.8       6.6       7.2      ]
   [5.        5.8       6.6       7.4       8.       ]
   [5.        5.8       6.6       7.4000006 8.       ]]]]
>>> print(result.shape)
(1, 1, 5, 5)
class tinyms.layers.MatrixDiag[source]

‘nn.MatrixDiag’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.diag’ instead.

class tinyms.layers.MatrixDiagPart[source]

‘nn.MatrixDiagPart’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.diagonal’ instead.

class tinyms.layers.MatrixSetDiag[source]

Modifies the batched diagonal part of a batched tensor.

Assume x has \(k+1\) dimensions \([I, J, K, ..., M, N]\) and diagonal has \(k\) dimensions \([I, J, K, ..., min(M, N)]\), the output is a tensor of rank \(k+1\) with dimensions \([I, J, K, ..., M, N]\), where:

\[output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]\ for\ m == n\]
\[output[i, j, k, ..., m, n] = x[i, j, k, ..., m, n]\ for\ m != n\]
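For a single (non-batched) matrix, the rule amounts to overwriting the main diagonal; a NumPy illustration:

>>> import numpy as np
>>> x = np.array([[-1., 0.], [0., 1.]], dtype=np.float32)
>>> diagonal = np.array([-1., 2.], dtype=np.float32)
>>> out = x.copy()
>>> np.fill_diagonal(out, diagonal)   # keeps off-diagonal entries of x
>>> print(out)
[[-1.  0.]
 [ 0.  2.]]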
Inputs:
  • x (Tensor) - The batched tensor. Rank k+1, where k >= 1. It can be one of the following data types: float32, float16, int32, int8, and uint8.

  • diagonal (Tensor) - The diagonal values. Must have the same type as input x. Rank k, where k >= 1.

Outputs:

Tensor, has the same type and shape as input x.

Raises:
  • TypeError – If dtype of x or diagonal is not one of float32, float16, int32, int8 or uint8.

  • ValueError – If length of shape of x is less than 2.

  • ValueError – If x_shape[-2] < x_shape[-1] and x_shape[:-1] != diagonal_shape.

  • ValueError – If x_shape[-2] >= x_shape[-1] and x_shape[:-2] + x_shape[-1:] != diagonal_shape.

Supported Platforms:

Ascend

Examples

>>> x = Tensor([[[-1, 0], [0, 1]], [[-1, 0], [0, 1]], [[-1, 0], [0, 1]]], mindspore.float32)
>>> diagonal = Tensor([[-1., 2.], [-1., 1.], [-1., 1.]], mindspore.float32)
>>> matrix_set_diag = nn.MatrixSetDiag()
>>> output = matrix_set_diag(x, diagonal)
>>> print(output)
[[[-1.  0.]
  [ 0.  2.]]
 [[-1.  0.]
  [ 0.  1.]]
 [[-1.  0.]
  [ 0.  1.]]]
class tinyms.layers.L1Regularizer(scale)[source]

Applies l1 regularization to weights.

L1 regularization encourages sparsity in the weights.

\[\text{loss}=\lambda * \text{reduce_sum}(\text{abs}(\omega))\]

where \(\lambda\) is scale.
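A hand check of the formula using the values from the Examples section below:

>>> import numpy as np
>>> w = np.array([[1.0, -2.0], [-3.0, 4.0]], dtype=np.float32)
>>> print(0.5 * np.abs(w).sum())   # 0.5 * (1 + 2 + 3 + 4)
5.0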

Note

scale (the regularization factor) should be a number greater than 0.

Parameters:

scale (int, float) – The L1 regularization factor, which must be greater than 0.

Inputs:
  • weights (Tensor) - The input of L1Regularizer with data type of float16 or float32. The shape is \((N,*)\) where \(*\) means any number of additional dimensions.

Outputs:

Tensor whose dtype is the higher-precision data type between mindspore.float32 and the dtype of weights, and whose shape is \(()\).

Raises:
  • TypeError – If scale is neither an int nor float.

  • ValueError – If scale is not greater than 0.

  • ValueError – If scale is math.inf or math.nan.

Supported Platforms:

Ascend GPU CPU

Examples

>>> scale = 0.5
>>> net = nn.L1Regularizer(scale)
>>> weights = Tensor(np.array([[1.0, -2.0], [-3.0, 4.0]]).astype(np.float32))
>>> output = net(weights)
>>> print(output.asnumpy())
5.0
class tinyms.layers.Dropout1d(p=0.5)[source]

During training, randomly zeroes entire channels of the input tensor with probability p from a Bernoulli distribution (For a 3-dimensional tensor with a shape of \((N, C, L)\), the channel feature map refers to a 1-dimensional feature map with the shape of \(L\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 1D tensor input[i,j]. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

This technique is described in the paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting, and it has been proven to effectively reduce over-fitting and prevent co-adaptation of neurons. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors.

Dropout1d can improve the independence between channel feature maps.

Parameters:

p (float, optional) – The dropping probability of a channel, between 0 and 1, e.g. p = 0.8, which means an 80% chance of being set to 0. Default: 0.5.

Inputs:
  • x (Tensor) - A tensor with shape \((N, C, L)\) or \((C, L)\), where N is the batch size, C is the number of channels, L is the feature length. The data type must be int8, int16, int32, int64, float16, float32 or float64.

Outputs:

Tensor, output, with the same shape and data type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import nn, Tensor
>>> op = nn.Dropout1d(p=0.6)
>>> op.training = True
>>> a = Tensor(np.ones((3, 3)), ms.float32)
>>> output = op(a)
class tinyms.layers.Dropout2d(p=0.5)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (For a 4-dimensional tensor with a shape of \(NCHW\), the channel feature map refers to a 2-dimensional feature map with the shape of \(HW\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 2D tensor input[i,j]. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Dropout2d can improve the independence between channel feature maps.

Refer to mindspore.ops.dropout2d() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = nn.Dropout2d(p=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output = dropout(x)
>>> print(output.shape)
(2, 1, 2, 3)
class tinyms.layers.Dropout3d(p=0.5)[source]

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (For a 5-dimensional tensor with a shape of \(NCDHW\), the channel feature map refers to a 3-dimensional feature map with a shape of \(DHW\)).

For example, the \(j\_th\) channel of the \(i\_th\) sample in the batched input is a to-be-processed 3D tensor input[i,j]. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Dropout3d can improve the independence between channel feature maps.

Refer to mindspore.ops.dropout3d() for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> dropout = nn.Dropout3d(p=0.5)
>>> x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
>>> output = dropout(x)
>>> print(output.shape)
(2, 1, 2, 1, 2)
class tinyms.layers.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)[source]

For details, please refer to mindspore.ops.interpolate().

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor([[[[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]]])
>>> upsample = nn.Upsample(size=(5, 5))
>>> out = upsample(x)
>>> print(x.asnumpy())
[[[[1. 2. 3. 4.]
   [5. 6. 7. 8.]]]]
>>> print(out.asnumpy())
[[[[1. 1. 2. 3. 4.]
   [1. 1. 2. 3. 4.]
   [1. 1. 2. 3. 4.]
   [5. 5. 6. 7. 8.]
   [5. 5. 6. 7. 8.]]]]
>>> print(out.shape)
(1, 1, 5, 5)
class tinyms.layers.Roll(shift, axis)[source]

‘nn.Roll’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.roll’ instead.

class tinyms.layers.Identity[source]

Returns a Tensor with the same shape and contents as input.

Inputs:
  • x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type is Number.

Outputs:

Tensor, the shape of tensor and the data type are the same as x.

Raises:

TypeError – If x is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
>>> net = nn.Identity()
>>> output = net(x)
>>> print(output)
[1 2 3 4]
class tinyms.layers.Unflatten(axis, unflattened_size)[source]

Unflattens a Tensor dim according to axis and unflattened_size.

Parameters:
  • axis (int) – specifies the dimension of the input Tensor to be unflattened.

  • unflattened_size (Union(tuple[int], list[int])) – the new shape of the unflattened dimension of the Tensor; it can be a tuple of ints or a list of ints. The product of unflattened_size must equal input_shape[axis].

Inputs:
  • input (Tensor) - The input Tensor to be unflattened.

Outputs:

Tensor that has been unflattened.

Raises:
  • TypeError – If axis is not int.

  • TypeError – If unflattened_size is neither tuple of ints nor list of ints.

  • TypeError – If the product of unflattened_size does not equal input_shape[axis].

Supported Platforms:

Ascend GPU CPU

Examples

>>> input = Tensor(np.arange(0, 100).reshape(2, 10, 5), mindspore.float32)
>>> net = nn.Unflatten(1, (2, 5))
>>> output = net(input)
>>> print(f"before unflatten the input shape is {input.shape}")
before unflatten the input shape is (2, 10, 5)
>>> print(f"after unflatten the output shape is {output.shape}")
after unflatten the output shape is (2, 2, 5, 5)
class tinyms.layers.Embedding(vocab_size, embedding_size, use_one_hot=False, embedding_table='normal', dtype=mindspore.float32, padding_idx=None)[source]

A simple lookup table that stores embeddings of a fixed dictionary and size.

This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.

Note

When ‘use_one_hot’ is set to True, the type of the x must be mindspore.int32.

Parameters:
  • vocab_size (int) – Size of the dictionary of embeddings.

  • embedding_size (int) – The size of each embedding vector.

  • use_one_hot (bool) – Specifies whether to apply one_hot encoding form. Default: False.

  • embedding_table (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the embedding_table. Refer to class initializer for the values of string when a string is specified. Default: ‘normal’.

  • dtype (mindspore.dtype) – Data type of x. Default: mindspore.float32.

  • padding_idx (int, None) – If given, the output embedding vector at index padding_idx is initialized to zero. Default: None, meaning the feature is disabled.

Inputs:
  • x (Tensor) - Tensor of shape \((\text{batch_size}, \text{x_length})\). The elements of the Tensor must be integers and not larger than vocab_size; otherwise the corresponding embedding vector will be zero. The data type is int32 or int64.

Outputs:

Tensor of shape \((\text{batch_size}, \text{x_length}, \text{embedding_size})\).

Raises:
  • TypeError – If vocab_size or embedding_size is not an int.

  • TypeError – If use_one_hot is not a bool.

  • ValueError – If padding_idx is an int which not in range [0, vocab_size).

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.Embedding(20000, 768,  True)
>>> x = Tensor(np.ones([8, 128]), mindspore.int32)
>>> # Maps the input word IDs to word embedding.
>>> output = net(x)
>>> result = output.shape
>>> print(result)
(8, 128, 768)
class tinyms.layers.EmbeddingLookup(vocab_size, embedding_size, param_init='normal', target='CPU', slice_mode='batch_slice', manual_shapes=None, max_norm=None, sparse=True, vocab_cache_size=0)[source]

EmbeddingLookup layer. Same function as the embedding layer; mainly used in heterogeneous parallel scenarios with large-scale embedding layers under automatic or semi-automatic parallelism.

Note

When ‘target’ is set to ‘CPU’, this module will use P.EmbeddingLookup().set_device(‘CPU’) which specifies ‘offset = 0’ to look up the table. When ‘target’ is set to ‘DEVICE’, this module will use P.Gather() which specifies ‘axis = 0’ to look up the table. In field slice mode, manual_shapes must be given. It is a tuple whose i-th element, vocab[i], is the number of rows for the i-th partition.

Parameters:
  • vocab_size (int) – Size of the dictionary of embeddings.

  • embedding_size (int) – The size of each embedding vector.

  • param_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the embedding_table. Refer to class initializer for the values of string when a string is specified. Default: ‘normal’.

  • target (str) – Specifies the target where the op is executed. The value must be in [‘DEVICE’, ‘CPU’]. Default: ‘CPU’.

  • slice_mode (str) – The slicing way in semi_auto_parallel/auto_parallel. The value must be obtained through mindspore.nn.EmbeddingLookup. Default: ‘nn.EmbeddingLookup.BATCH_SLICE’.

  • manual_shapes (tuple) – The accompaniment array in field slice mode. Default: None.

  • max_norm (Union[float, None]) – A maximum clipping value. The data type must be float16, float32 or None. Default: None

  • sparse (bool) – Using sparse mode. When ‘target’ is set to ‘CPU’, ‘sparse’ has to be true. Default: True.

  • vocab_cache_size (int) – Cache size of the dictionary of embeddings. Default: 0. It is valid only in parameter server training mode with a ‘DEVICE’ target. The moment parameter of the corresponding optimizer will also be set to the cache size. In addition, note that it consumes ‘DEVICE’ memory, so it is suggested to set a reasonable value to avoid insufficient memory.

Inputs:
  • input_indices (Tensor) - The shape of the tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Values may be out of range of embedding_table, and the out-of-range part will be filled with 0 in the output. Negative values are not supported, and the result is undefined if values are negative. input_indices must be a 2d tensor in this interface when run in semi auto parallel/auto parallel mode.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Raises:
  • TypeError – If vocab_size or embedding_size or vocab_cache_size is not an int.

  • TypeError – If sparse is not a bool or manual_shapes is not a tuple.

  • ValueError – If vocab_size or embedding_size is less than 1.

  • ValueError – If vocab_cache_size is less than 0.

  • ValueError – If target is neither ‘CPU’ nor ‘DEVICE’.

  • ValueError – If slice_mode is not one of ‘batch_slice’ or ‘field_slice’ or ‘table_row_slice’ or ‘table_column_slice’.

  • ValueError – If sparse is False and target is ‘CPU’.

  • ValueError – If slice_mode is ‘field_slice’ and manual_shapes is None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = nn.EmbeddingLookup(4,2)(input_indices)
>>> print(result.shape)
(2, 2, 2)
class tinyms.layers.MultiFieldEmbeddingLookup(vocab_size, embedding_size, field_size, param_init='normal', target='CPU', slice_mode='batch_slice', feature_num_list=None, max_norm=None, sparse=True, operator='SUM')[source]

Returns a slice of input tensor based on the specified indices and the field ids. This operation supports looking up embeddings using multi hot and one hot fields simultaneously.

Note

When ‘target’ is set to ‘CPU’, this module will use P.EmbeddingLookup().set_device(‘CPU’) which specifies ‘offset = 0’ to look up the table. When ‘target’ is set to ‘DEVICE’, this module will use P.Gather() which specifies ‘axis = 0’ to look up the table. Vectors with the same field_ids will be combined by the operator, such as ‘SUM’, ‘MAX’ and ‘MEAN’. Ensure that the input_values of the padded id are zero, so that they can be ignored. The final output will be zeros if the sum of the absolute weights of the field is zero. This class only supports [‘table_row_slice’, ‘batch_slice’ and ‘table_column_slice’]. For the operation ‘MAX’ on Ascend devices, there is a constraint \(batch\_size * (seq\_length + field\_size) < 3500\).

Parameters:
  • vocab_size (int) – The size of the dictionary of embeddings.

  • embedding_size (int) – The size of each embedding vector.

  • field_size (int) – The field size of the final outputs.

  • param_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the embedding_table. Refer to class initializer for the values of string when a string is specified. Default: ‘normal’.

  • target (str) – Specifies the target where the op is executed. The value must be in [‘DEVICE’, ‘CPU’]. Default: ‘CPU’.

  • slice_mode (str) – The slicing way in semi_auto_parallel/auto_parallel. The value must be obtained through mindspore.nn.EmbeddingLookup. Default: ‘nn.EmbeddingLookup.BATCH_SLICE’.

  • feature_num_list (tuple) – The accompaniment array in field slice mode. This is unused currently. Default: None.

  • max_norm (Union[float, None]) – A maximum clipping value. The data type must be float16, float32 or None. Default: None

  • sparse (bool) – Using sparse mode. When ‘target’ is set to ‘CPU’, ‘sparse’ has to be true. Default: True.

  • operator (str) – The pooling method for the features in one field. Support ‘SUM’, ‘MEAN’ and ‘MAX’. Default: ‘SUM’.

Inputs:
  • input_indices (Tensor) - The shape of tensor is \((batch\_size, seq\_length)\). Specifies the indices of elements of the original Tensor. Input_indices must be a 2d tensor in this interface. Type is Int32, Int64.

  • input_values (Tensor) - The shape of tensor is \((batch\_size, seq\_length)\). Specifies the weights of elements of the input_indices. The looked-up vectors are multiplied by the input_values. Type is float32.

  • field_ids (Tensor) - The shape of tensor is \((batch\_size, seq\_length)\). Specifies the field id of elements of the input_indices. Type is int32.

Outputs:

Tensor, the shape of tensor is \((batch\_size, field\_size, embedding\_size)\). Type is Float32.

Raises:
  • TypeError – If vocab_size or embedding_size or field_size is not an int.

  • TypeError – If sparse is not a bool or feature_num_list is not a tuple.

  • ValueError – If vocab_size or embedding_size or field_size is less than 1.

  • ValueError – If target is neither ‘CPU’ nor ‘DEVICE’.

  • ValueError – If slice_mode is not one of ‘batch_slice’, ‘field_slice’, ‘table_row_slice’, ‘table_column_slice’.

  • ValueError – If sparse is False and target is ‘CPU’.

  • ValueError – If slice_mode is ‘field_slice’ and feature_num_list is None.

  • ValueError – If operator is not one of ‘SUM’, ‘MAX’, ‘MEAN’.

Supported Platforms:

Ascend GPU

Examples

>>> input_indices = Tensor([[2, 4, 6, 0, 0], [1, 3, 5, 0, 0]], mindspore.int32)
>>> input_values = Tensor([[1, 1, 1, 0, 0], [1, 1, 1, 0, 0]], mindspore.float32)
>>> field_ids = Tensor([[0, 1, 1, 0, 0], [0, 0, 1, 0, 0]], mindspore.int32)
>>> net = nn.MultiFieldEmbeddingLookup(10, 2, field_size=2, operator='SUM', target='DEVICE')
>>> out = net(input_indices, input_values, field_ids)
>>> print(out.shape)
(2, 2, 2)
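The operator argument only changes how vectors sharing a field id are combined, not the output shape. A minimal sketch reusing the tensors above with ‘MEAN’ (the values depend on the random table initialization, so only the shape is shown):

>>> net_mean = nn.MultiFieldEmbeddingLookup(10, 2, field_size=2, operator='MEAN', target='DEVICE')
>>> out_mean = net_mean(input_indices, input_values, field_ids)
>>> print(out_mean.shape)
(2, 2, 2)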
class tinyms.layers.AvgPool3d(kernel_size=1, stride=1, pad_mode='valid', padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)[source]

Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. Typically, the input is of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), and AvgPool3D outputs regional average in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size is \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1, s_2)\), the operation is as follows.

Warning

kernel_size is in the range [1, 255]. stride is in the range [1, 63].

\[\text{output}(N_i, C_j, d, h, w) = \frac{1}{d_{ker} * h_{ker} * w_{ker}} \sum_{l=0}^{d_{ker}-1} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]], optional) – The size of kernel used to take the average value, can be an int number or a single element tuple that represents depth, height and width, or a tuple of three positive integers that represent depth, height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, can be a positive int or a single element tuple that represents the depth, height and width of movement, or a tuple of three positive integers that represents depth, height and width of movement respectively. If the value is None, the default value kernel_size is used. Default: 1.

  • pad_mode (str, optional) –

    Specifies the padding method of pooling, optional values are “same”, “valid” or “pad”, case insensitive. Default: “valid”.

    • same: The depth, height and width of the output equal the input size divided by stride, rounded up.

    • valid: Returns the output obtained by effective calculation without padding. The excess pixels that do not meet the calculation will be discarded.

    • pad: Pads the input. Fills the front, back, top, bottom, left and right of the input with 0s of size padding. If this mode is set, padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int]), optional) –

    Pooling padding value, only ‘pad’ mode can be set to non-zero. Default: 0. Only the following paddings are supported:

    • If padding is an integer or a tuple/list containing one integer, the input will be padded in the six directions of front, back, top, bottom, left and right.

    • If padding is a tuple/list containing three integers, the front and back of the input will be padded padding[0] times, the top and bottom padding[1] times, and the left and right padding[2] times.

  • ceil_mode (bool, optional) – If True, use ceil to compute the output shape instead of floor. Default: False.

  • count_include_pad (bool, optional) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int, optional) – If it is specified as a non-zero parameter, this parameter will be used as the divisor in the average calculation. Otherwise, kernel_size will be used as the divisor. Default: None.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\). float16 and float32 data types are currently supported.

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), with the same data type as x.

When pad_mode is ‘pad’, the output shape is calculated as follows:

\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{kernel_size}[2]}{\text{stride}[2]} + 1\right\rfloor\]
Raises:
  • TypeError – If kernel_size is neither an int nor a tuple.

  • TypeError – If stride is neither an int nor a tuple.

  • TypeError – If padding is neither an int nor a tuple/list.

  • TypeError – If ceil_mode or count_include_pad is not a bool.

  • TypeError – If divisor_override is not an int.

  • ValueError – If numbers in kernel_size or stride are not positive.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 3.

  • ValueError – If padding is a tuple/list whose length is neither 1 nor 3.

  • ValueError – If element of padding is less than 0.

  • ValueError – If length of shape of x is neither 4 nor 5.

  • ValueError – If divisor_override is less than or equal to 0.

  • ValueError – If padding is non-zero when pad_mode is not ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> pool = nn.AvgPool3d(kernel_size=3, stride=1)
>>> x = ops.randn(1, 2, 4, 4, 5).astype(ms.float32)
>>> output = pool(x)
>>> print(output.shape)
(1, 2, 2, 2, 3)
>>> x1 = ops.randn(6, 5, 7, 7, 5).astype(ms.float32)
>>> pool2 = nn.AvgPool3d(4, stride=2, pad_mode='pad', padding=(2, 2, 1), divisor_override=10)
>>> output2 = pool2(x1)
>>> print(output2.shape)
(6, 5, 4, 4, 2)
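The second shape above follows directly from the pad-mode formulas: for x1 of shape (6, 5, 7, 7, 5) with kernel_size=4, stride=2 and padding=(2, 2, 1),

\[D_{out} = H_{out} = \left\lfloor\frac{7 + 2 \times 2 - 4}{2} + 1\right\rfloor = 4, \qquad W_{out} = \left\lfloor\frac{5 + 2 \times 1 - 4}{2} + 1\right\rfloor = 2\]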
class tinyms.layers.MaxPool3d(kernel_size=1, stride=1, pad_mode='valid', padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]

3D max pooling operation.

Applies a 3D max pooling over an input Tensor which can be regarded as a composition of 3D planes.

Typically the input is of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((D_{in}, H_{in}, W_{in})\)-dimension. Given kernel size is \(ks = (d_{ker}, h_{ker}, w_{ker})\) and stride is \(s = (s_0, s_1, s_2)\), the operation is as follows.

\[\text{output}(N_i, C_j, d, h, w) = \max_{l=0, \ldots, d_{ker}-1} \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times d + l, s_1 \times h + m, s_2 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number or a single element tuple that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively. The value must be a positive integer. Default: 1.

  • stride (Union[int, tuple[int]]) – The moving stride of pooling operation, an int number or a single element tuple that represents the moving stride of pooling kernel in the directions of depth, height and the width, or a tuple of three int numbers that represent depth, height and width of movement respectively. The value must be a positive integer. If the value is None, the default value kernel_size is used. Default: 1.

  • pad_mode (str) –

    Specifies the padding method of pooling, optional values are “same”, “valid” or “pad”, case insensitive. Default: “valid”.

    • same: The output shape equals the input shape divided by stride, rounded up.

    • valid: The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

    • pad: Pads the input. Fills the front, back, top, bottom, left and right of the input with padding number of zeros. If this mode is set, padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int])) – Pooling padding value. Default: 0. padding can only be an integer or a tuple/list containing one or three integers. If padding is an integer or a tuple/list containing one integer, it will be padded in six directions of front, back, top, bottom, left and right of the input. If padding is a tuple/list containing three integers, it will be padded in front and back of the input padding[0] times, up and down padding[1] times, and left and right of the input padding[2] times.

  • dilation (Union(int, tuple[int])) – The spacing between the elements of the kernel in convolution, used to increase the receptive field of the pooling operation. If it is a tuple, it must contain one or three integers. Default: 1.

  • return_indices (bool) – If True, output is a Tuple of 2 Tensors, representing the maxpool result and where the max values are generated. Otherwise, only the maxpool result is returned. Default: False.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Inputs:
  • x (Tensor) - Tensor of shape \((N_{in}, C_{in}, D_{in}, H_{in}, W_{in})\) or \((C_{in}, D_{in}, H_{in}, W_{in})\).

Outputs:

If return_indices is False, output is a Tensor, with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\) or \((C_{out}, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

If return_indices is True, output is a Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, D_{out}, H_{out}, W_{out})\) or \((C_{out}, D_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64.

When pad_mode is ‘pad’, the output shape is calculated as follows:

\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor\]
Raises:
  • ValueError – If length of shape of x is not equal to 4 or 5.

  • TypeError – If kernel_size , stride , padding or dilation is neither an int nor a tuple.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If the padding parameter is neither an integer nor a tuple of length 3.

  • ValueError – If pad_mode is not set to ‘pad’ while return_indices is set to True or dilation is set to a value other than 1.

  • ValueError – If padding is non-zero when pad_mode is not ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import numpy as np
>>> np_x = np.random.randint(0, 10, [5, 3, 4, 6, 7])
>>> x = Tensor(np_x, ms.float32)
>>> pool1 = nn.MaxPool3d(kernel_size=2, stride=1, pad_mode='pad', padding=1, dilation=3, return_indices=True)
>>> output = pool1(x)
>>> print(output[0].shape)
(5, 3, 3, 5, 6)
>>> print(output[1].shape)
(5, 3, 3, 5, 6)
>>> pool2 = nn.MaxPool3d(kernel_size=2, stride=1, pad_mode='pad', padding=1, dilation=3, return_indices=False)
>>> output2 = pool2(x)
>>> print(output2.shape)
(5, 3, 3, 5, 6)
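The shapes above can be checked against the pad-mode formulas: with \(D_{in}=4\), kernel_size=2, stride=1, padding=1 and dilation=3,

\[D_{out} = \left\lfloor\frac{4 + 2 \times 1 - 3 \times (2 - 1) - 1}{1} + 1\right\rfloor = 3,\]

and the same computation on \(H_{in}=6\) and \(W_{in}=7\) gives 5 and 6 respectively.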
class tinyms.layers.AvgPool2d(kernel_size=1, stride=1, pad_mode='valid', padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCHW')[source]

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), AvgPool2d outputs regional average in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows:

\[\text{output}(N_i, C_j, h, w) = \frac{1}{h_{ker} * w_{ker}} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the average value. The data type of kernel_size must be int or a single element tuple and the value represents the height and width, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number or a single element tuple, in which case the height and width of movement are both stride, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    Specifies the padding method of pooling, optional values are “same”, “valid” or “pad”, case insensitive. Default: “valid”.

    • same: The height and width of the output equal the input size divided by stride, rounded up.

    • valid: Returns the output obtained by effective calculation without padding. The excess pixels that do not meet the calculation will be discarded.

    • pad: pads the input. Pads the top, bottom, left, and right sides of the input with padding number of zeros. If this mode is set, padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int])) – Pooling padding value, only ‘pad’ mode can be set to non-zero. Default: 0. padding can only be an integer or a tuple/list containing one or two integers. If padding is an integer or a tuple/list containing one integer, it will be padded padding times in the four directions of the input. If padding is a tuple/list containing two integers, it will be padded padding[0] times in the up-down direction of the input and padding[1] times in the left-right direction of the input.

  • ceil_mode (bool) – If True, use ceil to compute the output shape instead of floor. Default: False.

  • count_include_pad (bool) – If True, averaging calculation will include the zero-padding. Default: True.

  • divisor_override (int) – If it is specified as a non-zero parameter, this parameter will be used as the divisor in the average calculation. Otherwise, kernel_size will be used as the divisor. Default: None.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\) or \((C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\) or \((C_{out}, H_{out}, W_{out})\).

When pad_mode is ‘pad’, the output shape is calculated as follows:

\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]
Raises:
  • TypeError – If kernel_size or stride is neither int nor tuple.

  • ValueError – If pad_mode is not ‘valid’, ‘same’ or ‘pad’ (case insensitive).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If data_format is ‘NHWC’ while padding, ceil_mode, count_include_pad or divisor_override is used, or pad_mode is ‘pad’.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If length of padding tuple/list is not 1 or 2.

  • ValueError – If length of shape of x is not equal to 3 or 4.

  • ValueError – If divisor_override is less than or equal to 0.

  • ValueError – If padding is non-zero when pad_mode is not ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> pool = nn.AvgPool2d(kernel_size=3, stride=1)
>>> x = ms.Tensor(np.random.randint(0, 10, [1, 2, 4, 4]), ms.float32)
>>> output = pool(x)
>>> print(output.shape)
(1, 2, 2, 2)
>>> x = ops.randn(6, 6, 8, 8)
>>> pool2 = nn.AvgPool2d(4, stride=1, pad_mode='pad', padding=2, divisor_override=5)
>>> output2 = pool2(x)
>>> print(output2.shape)
(6, 6, 9, 9)
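A minimal sketch of the ‘NHWC’ layout, assuming the running backend supports it (recall from the Raises list that padding, ceil_mode, count_include_pad, divisor_override and pad_mode=‘pad’ are unavailable in this layout); the pooling arithmetic is unchanged, only the channel axis moves last:

>>> pool_nhwc = nn.AvgPool2d(kernel_size=2, stride=1, data_format='NHWC')
>>> x_nhwc = ops.randn(1, 4, 4, 2)
>>> print(pool_nhwc(x_nhwc).shape)
(1, 3, 3, 2)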
class tinyms.layers.MaxPool2d(kernel_size=1, stride=1, pad_mode='valid', padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCHW')[source]

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool2d outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \((h_{ker}, w_{ker})\) and stride \((s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the max value, an int number or a single element tuple, in which case height and width are both kernel_size, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number or a single element tuple, in which case the height and width of movement are both stride, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • pad_mode (str) –

    Specifies the padding method of pooling, optional values are “same”, “valid” or “pad”, case insensitive. Default: “valid”.

    • same: The output shape equals the input shape divided by stride, rounded up.

    • valid: The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

    • pad: pads the input. Pads the top, bottom, left, and right sides of the input with padding number of zeros. If this mode is set, padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int])) – Specifies the padding value of the pooling operation. Default: 0. padding can only be an integer or a tuple/list containing one or two integers. If padding is an integer or a tuple/list containing one integer, it will be padded padding times in the four directions of the input. If padding is a tuple/list containing two integers, it will be padded padding[0] times in the up-down direction of the input and padding[1] times in the left-right direction of the input.

  • dilation (Union(int, tuple[int])) – The spacing between the elements of the kernel in convolution, used to increase the receptive field of the pooling operation. If it is a tuple, it must contain one or two integers. Default: 1.

  • return_indices (bool) – If True, the function will return both the result of max pooling and the indices of the max elements. Default: False.

  • ceil_mode (bool) – If True, use ceil to compute the output shape instead of floor. Default: False.

  • data_format (str) – The optional value for data format, is ‘NHWC’ or ‘NCHW’. Default: ‘NCHW’.

Inputs:
  • x (Tensor) - Tensor of shape \((N,C_{in},H_{in},W_{in})\) or \((C_{in},H_{in},W_{in})\).

Outputs:

If return_indices is False, output is a Tensor, with shape \((N, C, H_{out}, W_{out})\) or \((C_{out}, H_{out}, W_{out})\). It has the same data type as x.

If return_indices is True, output is a Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N_{out}, C_{out}, H_{out}, W_{out})\) or \((C_{out}, H_{out}, W_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64.

When pad_mode is ‘pad’, the output shape is calculated as follows:

\[H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding[0]} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1}{\text{stride[0]}} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding[1]} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1}{\text{stride[1]}} + 1\right\rfloor\]
Raises:
  • TypeError – If kernel_size or stride is neither int nor tuple.

  • ValueError – If pad_mode is not ‘valid’, ‘same’ or ‘pad’ (case insensitive).

  • ValueError – If data_format is neither ‘NCHW’ nor ‘NHWC’.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If length of shape of x is not equal to 3 or 4.

  • ValueError – If pad_mode is not ‘pad’ while padding, dilation, return_indices or ceil_mode is not set to its default value.

  • ValueError – If the length of the tuple/list padding parameter is not 2.

  • ValueError – If the length of the tuple dilation parameter is not 2.

  • ValueError – If dilation parameter is neither an integer nor a tuple.

  • ValueError – If pad_mode is ‘pad’ and data_format is ‘NHWC’.

  • ValueError – If padding is non-zero when pad_mode is not ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> pool = nn.MaxPool2d(kernel_size=3, stride=1)
>>> x = Tensor(np.random.randint(0, 10, [1, 2, 4, 4]), mindspore.float32)
>>> output = pool(x)
>>> print(output.shape)
(1, 2, 2, 2)
>>> np_x = np.random.randint(0, 10, [5, 3, 4, 5])
>>> x = Tensor(np_x, mindspore.float32)
>>> pool2 = nn.MaxPool2d(kernel_size=2, stride=1, pad_mode='pad', padding=1, dilation=1, return_indices=True)
>>> output = pool2(x)
>>> print(output[0].shape)
(5, 3, 5, 6)
>>> print(output[1].shape)
(5, 3, 5, 6)
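The padded example's shape again follows from the formulas: with \(H_{in}=4\), \(W_{in}=5\), kernel_size=2, stride=1, padding=1 and dilation=1,

\[H_{out} = \left\lfloor\frac{4 + 2 \times 1 - 1 \times (2 - 1) - 1}{1} + 1\right\rfloor = 5, \qquad W_{out} = \left\lfloor\frac{5 + 2 \times 1 - 1 \times (2 - 1) - 1}{1} + 1\right\rfloor = 6\]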
class tinyms.layers.AvgPool1d(kernel_size=1, stride=1, pad_mode='valid', padding=0, ceil_mode=False, count_include_pad=True)[source]

Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically the input is of shape \((N_{in}, C_{in}, L_{in})\), AvgPool1d outputs regional average in the \((L_{in})\)-dimension. Given kernel_size \(l_{ker}\) and stride \(s_0\), the operation is as follows:

\[\text{output}(N_i, C_j, l) = \frac{1}{l_{ker}} \sum_{n=0}^{l_{ker}-1} \text{input}(N_i, C_j, s_0 \times l + n)\]
Parameters:
  • kernel_size (int) – The size of the kernel window used to take the average value. Default: 1.

  • stride (int) – The distance of kernel moving, an int number that represents the width of movement. Default: 1.

  • pad_mode (str) –

    Specifies the padding method of pooling, optional values are “same”, “valid” or “pad”, case insensitive. Default: “valid”.

    • same: The width of the output equals the input size divided by stride, rounded up.

    • valid: Returns the output obtained by effective calculation without padding. The excess pixels that do not meet the calculation will be discarded.

    • pad: Performs padding on the input. Adds padding size of zeros to both ends of the input. If this mode is set, padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int])) – Pooling padding value; only ‘pad’ mode can be set to non-zero. Default: 0. padding can only be an integer or a tuple/list containing a single integer, in which case both sides of the input are padded padding or padding[0] times.

  • ceil_mode (bool) – If True, use ceil to compute the output shape instead of floor. Default: False.

  • count_include_pad (bool) – If True, averaging calculation will include the zero-padding. Default: True.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, L_{in})\) or \((C_{in}, L_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, L_{out})\) or \((C_{out}, L_{out})\).

When pad_mode is ‘pad’, the output shape is calculated as follows:

\[L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor\]
Raises:
  • TypeError – If kernel_size or stride is not an int.

  • ValueError – If pad_mode is not ‘valid’, ‘same’ or ‘pad’ (case insensitive).

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If length of padding tuple/list is not 1.

  • ValueError – If length of shape of x is not equal to 2 or 3.

  • ValueError – If padding is non-zero when pad_mode is not ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> import numpy as np
>>> pool = nn.AvgPool1d(kernel_size=6, stride=1)
>>> x = ms.Tensor(np.random.randint(0, 10, [1, 3, 6]), ms.float32)
>>> output = pool(x)
>>> result = output.shape
>>> print(result)
(1, 3, 1)
>>> pool2 = nn.AvgPool1d(4, stride=1, ceil_mode=True, pad_mode='pad', padding=2)
>>> x1 = ops.randn(6, 6, 8)
>>> output = pool2(x1)
>>> print(output.shape)
(6, 6, 9)
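For pool2 above, the pad-mode formula gives \(L_{out} = \lfloor (8 + 2 \times 2 - 4) / 1 + 1 \rfloor = 9\); with ceil_mode=True the floor is replaced by a ceiling, which makes no difference here because the quotient is already an integer.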
class tinyms.layers.MaxPool1d(kernel_size=1, stride=1, pad_mode='valid', padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]

Applies a 1D max pooling over an input Tensor which can be regarded as a composition of 1D planes.

Typically the input is of shape \((N_{in}, C_{in}, L_{in})\), MaxPool1d outputs regional maximum in the \((L_{in})\)-dimension. Given kernel size \(ks = (l_{ker})\) and stride \(s = (s_0)\), the operation is as follows:

\[\text{output}(N_i, C_j, l) = \max_{n=0, \ldots, l_{ker}-1} \text{input}(N_i, C_j, s_0 \times l + n)\]
Parameters:
  • kernel_size (int) – The size of kernel used to take the max value. Default: 1.

  • stride (int) – The distance of kernel moving, an int number that represents the width of movement. Default: 1.

  • pad_mode (str) –

    Specifies the padding method of pooling, optional values are “same”, “valid” or “pad”, case insensitive. Default: “valid”.

    • same: Adopts the way of completion. The total amount of padding is calculated in the width direction and distributed evenly to both sides if possible. Otherwise, the last extra padding is added on the right side.

    • valid: Adopts the way of discarding. The possible largest height and width of output will be returned without padding. Extra pixels will be discarded.

    • pad: Performs padding on the input. Adds padding size of zeros to both ends of the input. If this mode is set, padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int])) – Padding value for the pooling. Default: 0. padding can only be an integer or a tuple/list containing a single integer, in which case both sides of the input are padded padding or padding[0] times.

  • dilation (Union(int, tuple[int])) – The spacing between the elements of the kernel in convolution, used to increase the receptive field of the pooling operation. If it is a tuple, its length can only be 1. Default: 1.

  • return_indices (bool) – If True, the function will return both the result of max pooling and the indices of the max elements. Default: False.

  • ceil_mode (bool) – If True, use ceil to compute the output shape instead of floor. Default: False.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, L_{in})\) or \((C_{in}, L_{in})\).

Outputs:

If return_indices is False, output is a Tensor, with shape \((N, C_{out}, L_{out})\) or \((C_{out}, L_{out})\). It has the same data type as x.

If return_indices is True, output is a Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.

  • output (Tensor) - Maxpooling result, with shape \((N, C_{out}, L_{out})\) or \((C_{out}, L_{out})\). It has the same data type as x.

  • argmax (Tensor) - Index corresponding to the maximum value. Data type is int64.

When pad_mode is ‘pad’, the output shape is calculated as follows:

\[L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel_size} - 1) - 1}{\text{stride}} + 1\right\rfloor\]
Raises:
  • TypeError – If kernel_size or stride is not an int.

  • ValueError – If pad_mode is not ‘valid’, ‘same’ or ‘pad’, case-insensitive.

  • ValueError – If kernel_size or strides is less than 1.

  • ValueError – If length of shape of x is not equal to 2 or 3.

  • ValueError – If pad_mode is not ‘pad’ while padding, dilation, return_indices or ceil_mode is not set to its default value.

  • ValueError – If the length of the tuple/list padding parameter is not 1.

  • ValueError – If the length of the tuple dilation parameter is not 1.

  • ValueError – If dilation parameter is neither an integer nor a tuple.

  • ValueError – If padding is non-zero when pad_mode is not ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> mpool1 = nn.MaxPool1d(kernel_size=3, stride=1)
>>> x = Tensor(np.random.randint(0, 10, [1, 2, 4]), mindspore.float32)
>>> output = mpool1(x)
>>> result = output.shape
>>> print(result)
(1, 2, 2)
>>> np_x = np.random.randint(0, 10, [5, 3, 4])
>>> x = Tensor(np_x, mindspore.float32)
>>> mpool2 = nn.MaxPool1d(kernel_size=2, stride=1, pad_mode='pad', padding=1, dilation=1, return_indices=True)
>>> output = mpool2(x)
>>> print(output[0].shape)
(5, 3, 5)
>>> print(output[1].shape)
(5, 3, 5)
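For mpool2 above, the pad-mode formula gives \(L_{out} = \lfloor (4 + 2 \times 1 - 1 \times (2 - 1) - 1) / 1 + 1 \rfloor = 5\), matching the printed shapes.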
class tinyms.layers.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

Applies the 2D FractionalMaxPool operation over input. The output Tensor shape can be determined by either output_size or output_ratio, and the step size is determined by _random_samples. output_size and output_ratio cannot be used at the same time, nor can both be None.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively. The value must be a positive integer.

  • output_size (Union[int, tuple[int]], optional) – The Shape of the target output_size, is a positive int that represents height and width, or a tuple of two positive integers that represent height and width respectively. The value must be a positive integer. If None, the shape of the target will be determined by output_ratio. Default: None.

  • output_ratio (Union[float, tuple[float]], optional) – The ratio of target output shape to input shape, specifying the size of the output tensor by using a ratio of the input size. Data type: float16, float32, float64, and value is between (0, 1). If None, the shape of the target will be determined by output_size. Default: None.

  • return_indices (bool, optional) – Whether to return the indices of max value. Default: False.

  • _random_samples (Tensor, optional) – The random step of FractionalMaxPool2d, a Tensor of shape \((N, C, 2)\) whose elements are within the range of \((0, 1)\). Supported data types: float16, float32, float64. If None, no random step will be set. Default: None.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C, H_{in}, W_{in})\), with float16, float32, float64, int32, int64 data type.

Outputs:
  • y (Tensor) - Has the same type as the input. Has the shape \((N, C, H, W)\).

  • argmax (Tensor) - The indices along with the outputs, which is a Tensor, with the same shape as the y and int64 data type. It will be returned only when return_indices is True.

Raises:
  • TypeError – If data type of input is not one of the following: float16, float32, float64, int32, int64.

  • TypeError – If data type of _random_samples is not one of the following: float16, float32, float64.

  • ValueError – If kernel_size is neither a number nor a tuple of length 2.

  • ValueError – If output_size is neither a number nor a tuple of length 2.

  • ValueError – If kernel_size + output_size - 1 is larger than the corresponding dimension of input.

  • ValueError – If the dimension of _random_samples is not 3.

  • ValueError – If output_size and output_ratio are None at the same time.

  • ValueError – If the first dimension size of input and _random_samples is not equal.

  • ValueError – If the second dimension size of input and _random_samples is not equal.

  • ValueError – If the third dimension size of _random_samples is not 2.

Supported Platforms:

CPU

Examples

>>> # the kernel_size is an int number and the output_size is a tuple.
>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> input = Tensor(np.array([0.3220, 0.9545, 0.7879, 0.0975, 0.3698,
...                            0.5135, 0.5740, 0.3435, 0.1895, 0.8764,
...                            0.9581, 0.4760, 0.9014, 0.8522, 0.3664,
...                            0.4980, 0.9673, 0.9879, 0.6988, 0.9022,
...                            0.9304, 0.1558, 0.0153, 0.1559, 0.9852]).reshape([1, 1, 5, 5]), mstype.float32)
>>> _random_samples = Tensor(np.array([[[0.8, 0.8]]]), mstype.float32)
>>> net = nn.FractionalMaxPool2d(kernel_size=2, output_size=(2, 2), _random_samples=_random_samples,
...                              return_indices=True)
>>> y, argmax = net(input)
>>> y
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> argmax
[[[[ 1  9]
   [16 24]]]]
>>> net = nn.FractionalMaxPool2d(kernel_size=2, output_ratio=(0.5, 0.5), _random_samples=_random_samples,
...                              return_indices=True)
>>> y, argmax = net(input)
>>> print(y)
[[[[0.9545 0.8764]
   [0.9673 0.9852]]]]
>>> print(argmax)
[[[[ 1  9]
   [16 24]]]]
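The two constructions above produce identical results here because output_ratio=(0.5, 0.5) applied to the 5 x 5 input yields a target size of \(\lfloor 5 \times 0.5 \rfloor = 2\) per dimension, the same as output_size=(2, 2).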
class tinyms.layers.FractionalMaxPool3d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]

Applies the 3D FractionalMaxPool operation over input. The output Tensor shape can be determined by either output_size or output_ratio, and the step size is determined by _random_samples. output_size and output_ratio cannot be used at the same time, nor can both be None.

Refer to the paper Fractional MaxPooling by Ben Graham for more details.

The input and output data format can be “NCDHW”. N is the batch size, C is the number of channels, D is the feature depth, H is the feature height, and W is the feature width.

Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is a positive int that represents depth, height and width of the kernel, or a tuple of three positive integers that represent depth, height and width respectively.

  • output_size (Union[int, tuple[int]], optional) – The shape of the target output_size, is an int number that represents depth, height and width, or a tuple of three positive integers that represents depth, height and width respectively. If None, the shape of the target will be determined by output_ratio. Default: None.

  • output_ratio (Union[float, tuple[float]], optional) – The ratio of target output shape to input shape, specifying the size of the output tensor by using a ratio of the input size. Data type: float16, float32, float64, and value is between (0, 1). If None, the shape of the target will be determined by output_size. Default: None.

  • return_indices (bool, optional) – Whether to return the indices of max value. Default: False.

  • _random_samples (Tensor, optional) – The random step of FractionalMaxPool3d, a Tensor of shape \((N, C, 3)\) whose elements are within the range of \((0, 1)\). Supported data types: float16, float32, float64. If None, no random step will be set. Default: None.

Inputs:
  • input (Tensor) - The input of FractionalMaxPool3d, which is a 4D or 5D tensor, with data type float16, float32, float64, int32 or int64. Supported shape \((N, C, D_{in}, H_{in}, W_{in})\) .

Outputs:
  • y (Tensor) - A tensor, the output of FractionalMaxPool3d, with the same data type as input. Tensor of shape \((N, C, D, H, W)\) .

  • argmax (Tensor) - The indices along with the outputs, which is a Tensor with the same shape as y and int32 data type. It is returned only when return_indices is True.

Raises:
  • TypeError – If input is not a 4D or 5D tensor.

  • TypeError – If _random_samples is not a 3D tensor.

  • TypeError – If data type of input is not float16, float32, float64, int32, int64.

  • TypeError – If dtype of _random_samples is not float16, float32, float64.

  • TypeError – If dtype of argmax is not int32, int64.

  • ValueError – If output_size is a tuple and if output_size length is not 3.

  • ValueError – If kernel_size is a tuple and if kernel_size length is not 3.

  • ValueError – If numbers in output_size or kernel_size is not positive.

  • ValueError – If output_size and output_ratio are None at the same time.

  • ValueError – If the first dimension size of input and _random_samples is not equal.

  • ValueError – If the second dimension size of input and _random_samples is not equal.

  • ValueError – If the third dimension size of _random_samples is not 3.

Supported Platforms:

GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> x = Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
...            .reshape([1, 1, 2, 2, 4]), mstype.float32)
>>> _random_samples = Tensor(np.array([0.7, 0.7, 0.7]).reshape([1, 1, 3]), mstype.float32)
>>> net = nn.FractionalMaxPool3d(kernel_size=(1, 1, 1), output_size=(1, 1, 3),
...                              _random_samples=_random_samples, return_indices=True)
>>> output, argmax = net(x)
>>> print(output)
[[[[[13. 14. 16.]]]]]
>>> print(argmax)
[[[[[12 13 15]]]]]
>>> net = nn.FractionalMaxPool3d(kernel_size=(1, 1, 1), output_ratio=(0.5, 0.5, 0.5),
...                              _random_samples=_random_samples, return_indices=True)
>>> output, argmax = net(x)
>>> print(output)
[[[[[13. 16.]]]]]
>>> print(argmax)
[[[[[12 15]]]]]
class tinyms.layers.AdaptiveAvgPool1d(output_size)[source]

Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically, the input is of shape \((N_{in}, C_{in}, L_{in})\), AdaptiveAvgPool1d outputs regional average in the \(L_{in}\)-dimension. The output is of shape \((N_{in}, C_{in}, L_{out})\), where \(L_{out}\) is defined by output_size.

Note

\(L_{in}\) must be divisible by output_size.

Parameters:

output_size (int) – the target output size \(L_{out}\).

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, L_{in})\), with float16 or float32 data type.

Outputs:

Tensor of shape \((N, C_{in}, L_{out})\), has the same type as input.

Raises:
  • TypeError – If output_size is not an int.

  • TypeError – If input is neither float16 nor float32.

  • ValueError – If output_size is less than 1.

  • ValueError – If length of shape of input is not equal to 3.

  • ValueError – If the last dimension of input is smaller than output_size.

  • ValueError – If the last dimension of input is not divisible by output_size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> pool = nn.AdaptiveAvgPool1d(output_size=2)
>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = pool(input)
>>> result = output.shape
>>> print(result)
(1, 3, 2)
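Because \(L_{in}\) must be divisible by output_size, each output element is simply the average of a contiguous block of \(L_{in} / L_{out}\) inputs. A deterministic sketch under that reading:

>>> x = Tensor(np.array([[[1., 2., 3., 4., 5., 6.]]]), mindspore.float32)
>>> print(nn.AdaptiveAvgPool1d(output_size=2)(x))
[[[2. 5.]]]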
class tinyms.layers.AdaptiveMaxPool1d(output_size)[source]

Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Typically, the input is of shape \((N_{in}, C_{in}, L_{in})\), AdaptiveMaxPool1d outputs regional maximum in the \(L_{in}\)-dimension. The output is of shape \((N_{in}, C_{in}, L_{out})\), where \(L_{out}\) is defined by output_size.

Note

\(L_{in}\) must be divisible by output_size.

Parameters:

output_size (int) – the target output size \(L_{out}\).

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, L_{in})\), with float16 or float32 data type.

Outputs:

Tensor of shape \((N, C_{in}, L_{out})\), has the same type as x.

Raises:
  • TypeError – If x is neither float16 nor float32.

  • TypeError – If output_size is not an int.

  • ValueError – If output_size is less than 1.

  • ValueError – If the last dimension of x is smaller than output_size.

  • ValueError – If the last dimension of x is not divisible by output_size.

  • ValueError – If length of shape of x is not equal to 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> pool = nn.AdaptiveMaxPool1d(output_size=3)
>>> x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = pool(x)
>>> result = output.shape
>>> print(result)
(1, 3, 3)
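As with AdaptiveAvgPool1d, the divisibility requirement means each output element is the maximum of a contiguous block of \(L_{in} / L_{out}\) inputs. A deterministic sketch under that reading:

>>> x = Tensor(np.array([[[1., 2., 3., 4., 5., 6.]]]), mindspore.float32)
>>> print(nn.AdaptiveMaxPool1d(output_size=3)(x))
[[[2. 4. 6.]]]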
class tinyms.layers.AdaptiveMaxPool2d(output_size, return_indices=False)[source]

This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes.

The input and output data format can be “NCHW” and “CHW”. N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

For max adaptive pool2d:

\[\begin{split}\begin{align} h_{start} &= floor(i * H_{in} / H_{out})\\ h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\ w_{start} &= floor(j * W_{in} / W_{out})\\ w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\ Output(i,j) &= {\max Input[h_{start}:h_{end}, w_{start}:w_{end}]} \end{align}\end{split}\]

Note

Ascend platform only supports float16 type for input.

Parameters:
  • output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If it is None, it means the output size is the same as the input size.

  • return_indices (bool) – If return_indices is True, the indices of max value would be output. Default: False.

Inputs:
  • input (Tensor) - The input of AdaptiveMaxPool2d, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

Outputs:

Tensor, with the same type as the input. Shape of the output is input_shape[:len(input_shape) - len(out_shape)] + out_shape.

Raises:
  • TypeError – If output_size is not int or tuple.

  • TypeError – If input is not a tensor.

  • TypeError – If return_indices is not a bool.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If output_size is a tuple and the length of output_size is not 2.

  • ValueError – If input is neither a 3D (CHW) nor a 4D (NCHW) tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(None, 2)
>>> input = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                             [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> adaptive_max_pool_2d = nn.AdaptiveMaxPool2d((None, 2))
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = nn.AdaptiveMaxPool2d(2)
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = nn.AdaptiveMaxPool2d((1, 2))
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
class tinyms.layers.AdaptiveMaxPool3d(output_size, return_indices=False)[source]

Calculates the 3D adaptive max pooling for an input Tensor. That is, for any input size, the size of the specified output is \((D, H, W)\).

Parameters:
  • output_size (Union[int, tuple]) – The specified output size, which is a positive integer that represents depth, height and width, or a tuple of three positive integers that represent depth, height and width respectively. If it is None, the output size and input size of the corresponding dimension are the same.

  • return_indices (bool, optional) – If return_indices is True, the indices of max value would be output. Otherwise, the indices will not be returned. Default: False.

Inputs:
  • input (Tensor) - Tensor, has shape of \((C, D, H, W)\) or \((N, C, D, H, W)\).

Outputs:
  • y (Tensor) - Tensor, has the same number of dims and data type as the input.

  • argmax (Tensor) - Tensor, the indices of the maximum values along with the outputs, has the same shape as y and a dtype of int32. Returned only when return_indices is True.

Raises:
  • TypeError – If input is not a Tensor.

  • ValueError – If the dimensions number of input is not 4 or 5.

  • TypeError – If dtype of input is not int, uint or float.

  • ValueError – If output_size is neither an int nor a tuple with shape (3,).

Supported Platforms:

GPU CPU

Examples

>>> input = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> output_size = (1, 1, 2)
>>> net = nn.AdaptiveMaxPool3d(output_size, True)
>>> output = net(input)
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
[[[[33 35]]]]
class tinyms.layers.AdaptiveAvgPool2d(output_size)[source]

This operator applies a 2D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input features.

The input and output data format can be “NCHW” and “CHW”. N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

\[\begin{split}\begin{align} h_{start} &= floor(i * H_{in} / H_{out})\\ h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\ w_{start} &= floor(j * W_{in} / W_{out})\\ w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\ Output(i,j) &= \frac{\sum Input[h_{start}:h_{end}, w_{start}:w_{end}]}{(h_{end}- h_{start}) * (w_{end}- w_{start})} \end{align}\end{split}\]
Parameters:

output_size (Union[int, tuple]) – The target output size is H x W. output_size can be a tuple consisting of int type H and W, or a single H for H x H, or None. If it is None, it means the output size is the same as the input size.

Inputs:
  • input (Tensor) - The input of AdaptiveAvgPool2d, which is a 3D or 4D tensor, with float16, float32 or float64 data type.

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\).

Raises:
  • ValueError – If output_size is a tuple and the length of output_size is not 2.

  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If the dimension of input is less than or equal to the dimension of output_size.

Supported Platforms:

GPU

Examples

>>> pool = nn.AdaptiveAvgPool2d(2)
>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                            [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> output = pool(input_x)
>>> result = output.shape
>>> print(result)
(3, 2, 2)
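The region formula above makes the values easy to verify by hand: for the 3 x 3 input mapped to 2 x 2, output position (0, 0) averages rows \([0, 2)\) and columns \([0, 2)\), i.e. \((1 + 2 + 4 + 5) / 4 = 3\); working out the remaining positions the same way turns each channel into [[3, 4], [6, 7]].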
class tinyms.layers.AdaptiveAvgPool3d(output_size)[source]

This operator applies a 3D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is \((D, H, W)\). The number of output features is equal to the number of input planes.

Suppose the last three dimensions of input are \((inD, inH, inW)\); then the last three dimensions of output are \((outD, outH, outW)\).

\[\begin{split}\begin{array}{ll} \\ \forall \quad od \in [0,outD-1], oh \in [0,outH-1], ow \in [0,outW-1]\\ output[od,oh,ow] = \\ \qquad mean(input[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\ where,\\ \qquad istartD= \left\lceil \frac{od * inD}{outD} \right\rceil \\ \qquad iendD=\left\lfloor \frac{(od+1)* inD}{outD} \right\rfloor \\ \qquad istartH=\left\lceil \frac{oh * inH}{outH} \right\rceil \\ \qquad iendH=\left\lfloor \frac{(oh+1) * inH}{outH} \right\rfloor \\ \qquad istartW=\left\lceil \frac{ow * inW}{outW} \right\rceil \\ \qquad iendW=\left\lfloor \frac{(ow+1) * inW}{outW} \right\rfloor \end{array}\end{split}\]
Parameters:

output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((D, H, W)\), or an int D for \((D, D, D)\). \(D\), \(H\) and \(W\) can be int or None, which means the output size is the same as that of the input.

Inputs:
  • input (Tensor) - The input of AdaptiveAvgPool3d, which is a 5D or 4D Tensor, with float16, float32 or float64 data type.

Outputs:

Tensor, with the same type as the input.

Raises:
  • TypeError – If input is not a Tensor.

  • TypeError – If dtype of input is not float16, float32 or float64.

  • ValueError – If the dimension of input is not 4D or 5D.

  • ValueError – If output_size value is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> # case 1: output_size=(3, 3, 4)
>>> output_size=(3, 3, 4)
>>> input_x_val = np.random.randn(4, 3, 5, 6, 7)
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> net = nn.AdaptiveAvgPool3d(output_size)
>>> output = net(input_x)
>>> print(output.shape)
(4, 3, 3, 3, 4)
>>> # case 2: output_size=4
>>> output_size=5
>>> input_x_val = np.random.randn(2, 3, 8, 6, 12)
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> net = nn.AdaptiveAvgPool3d(output_size)
>>> output = net(input_x)
>>> print(output.shape)
(2, 3, 5, 5, 5)
>>> # case 3: output_size=(None, 4, 5)
>>> output_size=(None, 4, 5)
>>> input_x_val = np.random.randn(4, 1, 9, 10, 8)
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> net = nn.AdaptiveAvgPool3d(output_size)
>>> output = net(input_x)
>>> print(output.shape)
(4, 1, 9, 4, 5)
class tinyms.layers.MaxUnpool1d(kernel_size, stride=None, padding=0)[source]

Computes the inverse of mindspore.nn.MaxPool1d.

MaxUnpool1d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, H_{in})\) or \((C, H_{in})\), and the output is of shape \((N, C, H_{out})\) or \((C, H_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H_{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ \end{array}\end{split}\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, If stride is None, then stride equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, H_{in})\) or \((C, H_{in})\).

  • indices (Tensor) - The indices of the max values. The shape must be the same as the input x. Values of indices must be in \([0, H_{in} - 1]\). Data type must be int32 or int64.

  • output_size (tuple[int], optional) - The output size. Default: None. If output_size == (), the output shape is computed from kernel_size, stride and padding. If output_size != (), it must be \((N, C, H)\), \((C, H)\) or \((H,)\), and it must lie within \([(N, C, H_{out} - stride[0]), (N, C, H_{out} + stride[0])]\).

Outputs:

Tensor, with shape \((N, C, H_{out})\) or \((C, H_{out})\), with the same data type as x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If numbers in stride, padding (0 and (0,) are also supported) or kernel_size are not positive.

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If the rank of x is neither 2 nor 3.

  • ValueError – If type of output_size is not tuple.

  • ValueError – If the length of output_size is not 0, 2 or 3.

  • ValueError – If output_size is not close to output size computed by attr kernel_size, stride, padding.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[2, 4, 6, 8]]).astype(np.float32))
>>> indices = Tensor(np.array([[1, 3, 5, 7]]).astype(np.int64))
>>> maxunpool1d = nn.MaxUnpool1d(kernel_size=2, stride=2, padding=0)
>>> output = maxunpool1d(x, indices)
>>> print(output.asnumpy())
[[0. 2. 0. 4. 0. 6. 0. 8.]]
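MaxUnpool1d is typically paired with a MaxPool1d constructed with return_indices=True (which, per the MaxPool1d documentation above, requires pad_mode='pad'). A minimal round-trip sketch:

>>> pool = nn.MaxPool1d(kernel_size=2, stride=2, pad_mode='pad', return_indices=True)
>>> x_in = Tensor(np.array([[[1., 2., 3., 4., 5., 6., 7., 8.]]]).astype(np.float32))
>>> pooled, pool_indices = pool(x_in)
>>> unpool = nn.MaxUnpool1d(kernel_size=2, stride=2)
>>> print(unpool(pooled, pool_indices).asnumpy())
[[[0. 2. 0. 4. 0. 6. 0. 8.]]]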
class tinyms.layers.MaxUnpool2d(kernel_size, stride=None, padding=0)[source]

Computes the inverse of mindspore.nn.MaxPool2d.

MaxUnpool2d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\), and the output is of shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ H_{out} = (H_{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ W_{out} = (W_{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\ \end{array}\end{split}\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, an int number that represents height and width of the kernel, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both stride, or a tuple of two int numbers that represent height and width of movement respectively. If stride is None, then stride equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of height and width are the same, equal to padding. If padding is a tuple of two integers, the padding of height and width equal to padding[0] and padding[1] correspondingly.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).

  • indices (Tensor) - The indices of the max values. The shape must be the same as the input x. Values of indices must be in \([0, H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

  • output_size (tuple[int], optional) - The output size. Default: None. If output_size == (), the output shape is computed from kernel_size, stride and padding. If output_size != (), it must be \((N, C, H, W)\), \((C, H, W)\) or \((H, W)\), and it must lie within \([(N, C, H_{out} - stride[0], W_{out} - stride[1]), (N, C, H_{out} + stride[0], W_{out} + stride[1])]\).

Outputs:

Tensor, with shape \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), with the same data type as x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If numbers in stride, padding (0 and (0, 0) are also supported) or kernel_size are not positive.

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If kernel_size, stride or padding is a tuple whose length is not equal to 2.

  • ValueError – If the rank of x is neither 3 nor 4.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If the length of output_size is not 0, 3 or 4.

  • ValueError – If output_size is not close to output size computed by attr kernel_size, stride, padding.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[[[0, 1], [8, 9]]]]).astype(np.float32))
>>> indices = Tensor(np.array([[[[0, 1], [2, 3]]]]).astype(np.int64))
>>> maxunpool2d = nn.MaxUnpool2d(kernel_size=1, stride=1, padding=0)
>>> output = maxunpool2d(x, indices)
>>> print(output.asnumpy())
[[[[0. 1.]
   [8. 9.]]]]
class tinyms.layers.MaxUnpool3d(kernel_size, stride=None, padding=0)[source]

Computes the inverse of mindspore.nn.MaxPool3d.

MaxUnpool3d keeps the maximal values and sets all non-maximal positions to zero. Typically the input is of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\), and the output is of shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\). The operation is as follows.

\[\begin{split}\begin{array}{ll} \\ D_{out} = (D_{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\ H_{out} = (H_{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\ W_{out} = (W_{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\ \end{array}\end{split}\]
Parameters:
  • kernel_size (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, an int number that represents depth, height and width of the kernel, or a tuple of three int numbers that represent depth, height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the depth, height and width of movement, or a tuple of three int numbers that represent depth, height and width of movement respectively. If stride is None, stride is equal to kernel_size. Default: None.

  • padding (Union[int, tuple[int]]) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of depth, height and width are the same, equal to padding. If padding is a tuple of three integers, the padding of depth, height and width equal to padding[0], padding[1] and padding[2] correspondingly.

Inputs:
  • x (Tensor) - The input Tensor to invert. Tensor of shape \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).

  • indices (Tensor) - The indices of the max values. Its shape must be the same as that of the input x. Values of indices must belong to \([0, D_{in} \times H_{in} \times W_{in} - 1]\). Data type must be int32 or int64.

  • output_size (tuple[int], optional) - The output size. Default: None. If output_size == (), then the output shape is computed from kernel_size, stride and padding. If output_size != (), then output_size must be \((N, C, D, H, W)\) , \((C, D, H, W)\) or \((D, H, W)\) and output_size must belong to \([(N, C, D_{out} - stride[0], H_{out} - stride[1], W_{out} - stride[2]), (N, C, D_{out} + stride[0], H_{out} + stride[1], W_{out} + stride[2])]\).

Outputs:

Tensor, with shape \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), with the same data type as x.

Raises:
  • TypeError – If data type of x or indices is not supported.

  • TypeError – If kernel_size, stride or padding is neither an int nor a tuple.

  • ValueError – If the numbers in stride or kernel_size are not positive, or if the numbers in padding are negative (0 and (0, 0, 0) are also supported).

  • ValueError – If the shapes of x and indices are not equal.

  • ValueError – If kernel_size, stride or padding is a tuple whose length is not equal to 3.

  • ValueError – If the length of the shape of x is not 4 or 5.

  • ValueError – If the length of output_size is not 0, 4 or 5.

  • ValueError – If the type of output_size is not tuple.

  • ValueError – If output_size is not within the range computed from kernel_size, stride and padding.

Supported Platforms:

GPU CPU

Examples

>>> x = Tensor(np.array([[[[[0, 1], [8, 9]]]]]).astype(np.float32))
>>> indices = Tensor(np.array([[[[[0, 1], [2, 3]]]]]).astype(np.int64))
>>> maxunpool3d = nn.MaxUnpool3d(kernel_size=1, stride=1, padding=0)
>>> output = maxunpool3d(x, indices)
>>> print(output.asnumpy())
[[[[[0. 1.]
    [8. 9.]]]]]
class tinyms.layers.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False)[source]

Applies a 1D LPPooling operation to an input Tensor, which can be regarded as composing a 1D input plane.

Typically the input is of shape \((N_{in}, C_{in}, L_{in})\) or \((C_{in}, L_{in})\), and the output is of shape \((N_{out}, C_{out}, L_{out})\) or \((C_{out}, L_{out})\), where the batch and channel dimensions are unchanged. The operation is as follows.

\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]
Parameters:
  • norm_type (Union[int, float]) –

    Type of normalization, represents p in the formula, and can not be 0.

    • if p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling).

    • if p = \(\infty\), the result is that of maximum pooling.

  • kernel_size (int) – The size of kernel window.

  • stride (int) – The distance the kernel moves, an int number that represents the width of movement. If the value is None, the default value kernel_size is used. Default: None.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Inputs:
  • x (Tensor) - Tensor of shape \((N_{in}, C_{in}, L_{in})\) or \((C_{in}, L_{in})\).

Outputs:
  • output (Tensor) - LPPool1d result, with shape \((N_{out}, C_{out}, L_{out})\) or \((C_{out}, L_{out})\), it has the same data type as x, where

\[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor\]
Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If kernel_size or stride is not an int.

  • TypeError – If ceil_mode is not a bool.

  • TypeError – If norm_type is neither float nor int.

  • ValueError – If norm_type is equal to 0.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If length of shape of x is not equal to 2 or 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> import numpy as np
>>> a = Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
>>> net = nn.LPPool1d(norm_type=1, kernel_size=3, stride=1)
>>> out = net(a)
>>> print(out)
[[[ 3.  6.]
  [15. 18.]
  [27. 30.]]
 [[39. 42.]
  [51. 54.]
  [63. 66.]]]
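With norm_type=1, each output element reduces to the plain sum over its window, which the formula above makes explicit. A numpy cross-check of the first output element (illustrative only):

>>> window = np.arange(2 * 3 * 4).reshape((2, 3, 4))[0, 0, 0:3]
>>> # sum over the first kernel window of length 3, raised to 1/p with p = 1
>>> float(np.power(np.sum(np.power(window, 1.0)), 1.0 / 1.0))
3.0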
class tinyms.layers.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)[source]

Applies a 2D LPPooling operation to an input Tensor, which can be regarded as composing a 2D input plane.

Typically the input is of shape \((N, C, H_{in}, W_{in})\), and the output is of shape \((N, C, H_{out}, W_{out})\), where the batch and channel dimensions are unchanged. The operation is as follows.

\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]
Parameters:
  • norm_type (Union[int, float]) –

    Type of normalization, represents p in the formula, and can not be 0.

    • if p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling).

    • if p = \(\infty\), the result is that of maximum pooling.

  • kernel_size (Union[int, tuple[int]]) – The size of kernel window. The data type of kernel_size must be int and the value represents the height and width, or a tuple of two int numbers that represent height and width respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents both the height and width of movement, or a tuple of two int numbers that represent height and width of movement respectively. If the value is None, the default value kernel_size is used. Default: None.

  • ceil_mode (bool) – Whether to use ceil or floor to calculate output shape. Default: False.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C, H_{in}, W_{in})\).

Outputs:
  • output (Tensor) - LPPool2d result, with shape \((N, C, H_{out}, W_{out})\), it has the same data type as x, where

\[H_{out} = \left\lfloor\frac{H_{in} - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]
Raises:
  • TypeError – If x is not a Tensor.

  • TypeError – If kernel_size or stride is neither int nor tuple.

  • TypeError – If ceil_mode is not a bool.

  • TypeError – If norm_type is neither float nor int.

  • ValueError – If norm_type is equal to 0.

  • ValueError – If kernel_size or stride is less than 1.

  • ValueError – If kernel_size or stride is a tuple whose length is not equal to 2.

  • ValueError – If length of shape of x is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> import numpy as np
>>> a = Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
>>> net = nn.LPPool2d(norm_type=1, kernel_size=3, stride=1)
>>> out = net(a)
>>> print(out)
[[[[  54.   63.   72.]
   [  99.  108.  117.]]
  [[ 234.  243.  252.]
   [ 279.  288.  297.]]
  [[ 414.  423.  432.]
   [ 459.  468.  477.]]]
 [[[ 594.  603.  612.]
   [ 639.  648.  657.]]
  [[ 774.  783.  792.]
   [ 819.  828.  837.]]
  [[ 954.  963.  972.]
   [ 999. 1008. 1017.]]]]
class tinyms.layers.ImageGradients[source]

Returns two tensors: the first is along the height dimension and the second is along the width dimension.

Assume an image shape is \(h*w\), the gradients along the height and the width are \(dy\) and \(dx\), respectively.

\[ \begin{align}\begin{aligned}dy[i] = \begin{cases} image[i+1, :]-image[i, :], &if\ 0<=i<h-1 \cr 0, &if\ i==h-1\end{cases}\\dx[i] = \begin{cases} image[:, i+1]-image[:, i], &if\ 0<=i<w-1 \cr 0, &if\ i==w-1\end{cases}\end{aligned}\end{align} \]
Inputs:
  • images (Tensor) - The input image data, with format ‘NCHW’.

Outputs:
  • dy (Tensor) - vertical image gradients, the same type and shape as input.

  • dx (Tensor) - horizontal image gradients, the same type and shape as input.

Raises:

ValueError – If length of shape of images is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.ImageGradients()
>>> image = Tensor(np.array([[[[1, 2], [3, 4]]]]), dtype=mindspore.int32)
>>> output = net(image)
>>> print(output)
(Tensor(shape=[1, 1, 2, 2], dtype=Int32, value=
[[[[2, 2],
   [0, 0]]]]), Tensor(shape=[1, 1, 2, 2], dtype=Int32, value=
[[[[1, 0],
   [1, 0]]]]))
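The piecewise definition above amounts to forward differences with zero padding at the trailing edge. A numpy sketch of dy for the 2x2 image in the example (illustrative only, not the implementation):

>>> img = np.array([[1, 2], [3, 4]], dtype=np.int32)
>>> # forward difference along the height axis, last row filled with zeros
>>> dy = np.vstack([img[1:] - img[:-1], np.zeros((1, 2), np.int32)])
>>> dy
array([[2, 2],
       [0, 0]], dtype=int32)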
class tinyms.layers.SSIM(max_val=1.0, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03)[source]

Returns SSIM index between two images.

Its implementation is based on Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004) Image quality assessment: from error visibility to structural similarity .

SSIM is a measure of the similarity of two pictures. Like PSNR, SSIM is often used as an evaluation of image quality. SSIM is a number between 0 and 1, and the larger it is, the smaller the gap between the output image and the undistorted image, that is, the better the image quality. When the two images are exactly the same, SSIM=1.

\[\begin{split}l(x,y)&=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}, C_1=(K_1L)^2.\\ c(x,y)&=\frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2}, C_2=(K_2L)^2.\\ s(x,y)&=\frac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}, C_3=C_2/2.\\ SSIM(x,y)&=l*c*s\\&=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)} {(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}.\end{split}\]
Parameters:
  • max_val (Union[int, float]) – The dynamic range of the pixel values (255 for 8-bit grayscale images). Default: 1.0.

  • filter_size (int) – The size of the Gaussian filter. Default: 11. The value must be greater than or equal to 1.

  • filter_sigma (float) – The standard deviation of Gaussian kernel. Default: 1.5. The value must be greater than 0.

  • k1 (float) – The constant used to generate c1 in the luminance comparison function. Default: 0.01.

  • k2 (float) – The constant used to generate c2 in the contrast comparison function. Default: 0.03.

Inputs:
  • img1 (Tensor) - The first image batch with format ‘NCHW’. It must be the same shape and dtype as img2.

  • img2 (Tensor) - The second image batch with format ‘NCHW’. It must be the same shape and dtype as img1.

Outputs:

Tensor, has the same dtype as img1. It is a 1-D tensor with shape N, where N is the batch num of img1.

Raises:
  • TypeError – If max_val is neither int nor float.

  • TypeError – If k1, k2 or filter_sigma is not a float.

  • TypeError – If filter_size is not an int.

  • ValueError – If max_val or filter_sigma is less than or equal to 0.

  • ValueError – If filter_size is less than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.SSIM()
>>> img1 = Tensor(np.ones([1, 3, 16, 16]).astype(np.float32))
>>> img2 = Tensor(np.ones([1, 3, 16, 16]).astype(np.float32))
>>> output = net(img1, img2)
>>> print(output)
[1.]
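The stability constants \(C_1\) and \(C_2\) in the formula follow directly from k1, k2 and max_val. A plain-Python sketch with the default values (illustrative only):

>>> k1, k2, max_val = 0.01, 0.03, 1.0
>>> # C1 = (K1 * L)^2 and C2 = (K2 * L)^2 per the formula above
>>> round((k1 * max_val) ** 2, 6), round((k2 * max_val) ** 2, 6)
(0.0001, 0.0009)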
class tinyms.layers.MSSSIM(max_val=1.0, power_factors=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333), filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03)[source]

Returns MS-SSIM index between two images.

Its implementation is based on Multiscale structural similarity for image quality assessment by Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik, published on Signals, Systems and Computers in 2004.

\[\begin{split}l(x,y)&=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}, C_1=(K_1L)^2.\\ c(x,y)&=\frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2}, C_2=(K_2L)^2.\\ s(x,y)&=\frac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}, C_3=C_2/2.\\ MSSSIM(x,y)&=l^\alpha_M*{\prod_{1\leq j\leq M} (c^\beta_j*s^\gamma_j)}.\end{split}\]
Parameters:
  • max_val (Union[int, float]) – The dynamic range of the pixel values (255 for 8-bit grayscale images). Default: 1.0.

  • power_factors (Union[tuple, list]) – Iterable of weights for each scale. Default: (0.0448, 0.2856, 0.3001, 0.2363, 0.1333). Default values obtained by Wang et al.

  • filter_size (int) – The size of the Gaussian filter. Default: 11.

  • filter_sigma (float) – The standard deviation of Gaussian kernel. Default: 1.5.

  • k1 (float) – The constant used to generate c1 in the luminance comparison function. Default: 0.01.

  • k2 (float) – The constant used to generate c2 in the contrast comparison function. Default: 0.03.

Inputs:
  • img1 (Tensor) - The first image batch with format ‘NCHW’. It must be the same shape and dtype as img2.

  • img2 (Tensor) - The second image batch with format ‘NCHW’. It must be the same shape and dtype as img1.

Outputs:

Tensor, the value is in range [0, 1]. It is a 1-D tensor with shape N, where N is the batch num of img1.

Raises:
  • TypeError – If max_val is neither int nor float.

  • TypeError – If power_factors is neither tuple nor list.

  • TypeError – If k1, k2 or filter_sigma is not a float.

  • TypeError – If filter_size is not an int.

  • ValueError – If max_val or filter_sigma is less than or equal to 0.

  • ValueError – If filter_size is less than 0.

  • ValueError – If length of shape of img1 or img2 is not equal to 4.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.MSSSIM(power_factors=(0.033, 0.033, 0.033))
>>> img1 = Tensor(np.ones((1, 3, 128, 128)).astype(np.float32))
>>> img2 = Tensor(np.ones((1, 3, 128, 128)).astype(np.float32))
>>> output = net(img1, img2)
>>> print(output)
[1.]
class tinyms.layers.PSNR(max_val=1.0)[source]

Returns Peak Signal-to-Noise Ratio of two image batches.

It produces a PSNR value for each image in batch. Assume inputs are \(I\) and \(K\), both with shape \(h*w\). \(MAX\) represents the dynamic range of pixel values.

\[\begin{split}MSE&=\frac{1}{hw}\sum\limits_{i=0}^{h-1}\sum\limits_{j=0}^{w-1}[I(i,j)-K(i,j)]^2\\ PSNR&=10*log_{10}(\frac{MAX^2}{MSE})\end{split}\]
Parameters:

max_val (Union[int, float]) – The dynamic range of the pixel values (255 for 8-bit grayscale images). The value must be greater than 0. Default: 1.0.

Inputs:
  • img1 (Tensor) - The first image batch with format ‘NCHW’. It must be the same shape and dtype as img2.

  • img2 (Tensor) - The second image batch with format ‘NCHW’. It must be the same shape and dtype as img1.

Outputs:

Tensor, with dtype mindspore.float32. It is a 1-D tensor with shape N, where N is the batch num of img1.

Raises:
  • TypeError – If max_val is neither int nor float.

  • ValueError – If max_val is less than or equal to 0.

  • ValueError – If length of shape of img1 or img2 is not equal to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.PSNR()
>>> img1 = Tensor([[[[1, 2, 3, 4], [1, 2, 3, 4]]]])
>>> img2 = Tensor([[[[3, 4, 5, 6], [3, 4, 5, 6]]]])
>>> output = net(img1, img2)
>>> print(output)
[-6.0206]
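The printed value can be reproduced from the formula: every per-pixel difference is 2, so MSE = 4 and PSNR = 10 * log10(1 / 4) ≈ -6.0206. A numpy cross-check (illustrative only):

>>> mse = np.mean((np.array([1., 2., 3., 4.]) - np.array([3., 4., 5., 6.])) ** 2)
>>> round(float(10 * np.log10(1.0 ** 2 / mse)), 4)
-6.0206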
class tinyms.layers.CentralCrop(central_fraction)[source]

Crops the central region of the images with the central_fraction.

Parameters:

central_fraction (float) – Fraction of size to crop. It must be float and in range (0.0, 1.0].

Inputs:
  • image (Tensor) - A 3-D tensor of shape [C, H, W], or a 4-D tensor of shape [N, C, H, W].

Outputs:

Tensor, 3-D or 4-D float tensor, according to the input.

Raises:
  • TypeError – If central_fraction is not a float.

  • ValueError – If central_fraction is not in range (0.0, 1.0].

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.CentralCrop(central_fraction=0.5)
>>> image = Tensor(np.random.random((4, 3, 4, 4)), mindspore.float32)
>>> output = net(image)
>>> print(output.shape)
(4, 3, 2, 2)
class tinyms.layers.PixelShuffle(upscale_factor)[source]

Applies the PixelShuffle operation over input which implements sub-pixel convolutions with stride \(1/r\) . For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network .

Typically, the input is of shape \((*, C \times r^2, H, W)\) , and the output is of shape \((*, C, H \times r, W \times r)\), where r is an upscale factor and * is zero or more batch dimensions.

Note

The dimension of input Tensor on Ascend should be less than 7.

Parameters:

upscale_factor (int) – factor to shuffle the input, and is a positive integer. upscale_factor is the above-mentioned \(r\).

Inputs:
  • input (Tensor) - Tensor of shape \((*, C \times r^2, H, W)\) . The dimension of input is larger than 2, and the length of the third to last dimension must be divisible by upscale_factor squared.

Outputs:
  • output (Tensor) - Tensor of shape \((*, C, H \times r, W \times r)\) .

Raises:
  • ValueError – If upscale_factor is not a positive integer.

  • ValueError – If the length of third to last dimension of input is not divisible by upscale_factor squared.

  • TypeError – If the dimension of input is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> input_x = np.arange(3 * 2 * 8 * 4 * 4).reshape((3, 2, 8, 4, 4))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> pixel_shuffle = nn.PixelShuffle(2)
>>> output = pixel_shuffle(input_x)
>>> print(output.shape)
(3, 2, 2, 8, 8)
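The rearrangement itself is pure shape bookkeeping. A numpy sketch of the equivalent reshape/transpose for a plain NCHW input (an illustration under that assumption, not the implementation):

>>> n, c, r, h, w = 1, 1, 2, 4, 4
>>> x = np.arange(n * c * r * r * h * w).reshape((n, c * r * r, h, w))
>>> # split the channel axis into (c, r, r), interleave r into H and W
>>> y = (x.reshape(n, c, r, r, h, w)
...       .transpose(0, 1, 4, 2, 5, 3)
...       .reshape(n, c, h * r, w * r))
>>> y.shape
(1, 1, 8, 8)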
class tinyms.layers.PixelUnshuffle(downscale_factor)[source]

Applies the PixelUnshuffle operation over input which is the inverse of PixelShuffle. For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network .

Typically, the input is of shape \((*, C, H \times r, W \times r)\) , and the output is of shape \((*, C \times r^2, H, W)\) , where r is a downscale factor and * is zero or more batch dimensions.

Parameters:

downscale_factor (int) – factor to unshuffle the input, and is a positive integer. downscale_factor is the above-mentioned \(r\).

Inputs:
  • input (Tensor) - Tensor of shape \((*, C, H \times r, W \times r)\) . The dimension of input is larger than 2, and the lengths of the second to last dimension and the last dimension must be divisible by downscale_factor .

Outputs:
  • output (Tensor) - Tensor of shape \((*, C \times r^2, H, W)\) .

Raises:
  • ValueError – If downscale_factor is not a positive integer.

  • ValueError – If the length of the second to last dimension or the last dimension is not divisible by downscale_factor .

  • TypeError – If the dimension of input is less than 3.

Supported Platforms:

Ascend GPU CPU

Examples

>>> pixel_unshuffle = nn.PixelUnshuffle(2)
>>> input_x = np.arange(8 * 8).reshape((1, 1, 8, 8))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = pixel_unshuffle(input_x)
>>> print(output.shape)
(1, 4, 4, 4)
class tinyms.layers.ReduceLogSumExp(axis, keep_dims=False)[source]

Reduces a dimension of a tensor by calculating the exponential of all elements in the dimension and then calculating the logarithm of the sum.

\[ReduceLogSumExp(x) = \log(\sum(e^x))\]
Parameters:
  • axis (Union[int, tuple(int), list(int)]) – The dimensions to reduce. Default: (), reduce all dimensions. Only constant values are allowed.

  • keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default: False.

Inputs:
  • x (Tensor) - The input tensor. With float16 or float32 data type.

Outputs:

Tensor, has the same dtype as the x.

  • If axis is (), and keep_dims is False, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is False, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is False, the shape of output is \((x_1, x_4, ..., x_R)\).

Raises:
  • TypeError – If axis is not one of int, list, tuple.

  • TypeError – If keep_dims is not bool.

  • TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = nn.ReduceLogSumExp(1, keep_dims=True)
>>> output = op(x)
>>> print(output.shape)
(3, 1, 5, 6)
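Semantically this is log(sum(exp(x))) along the chosen axis, as the formula states. A numpy cross-check of the shape behaviour (illustrative only; a numerically stable variant would subtract the per-axis max first):

>>> x_np = np.random.randn(3, 4, 5, 6).astype(np.float32)
>>> np.log(np.sum(np.exp(x_np), axis=1, keepdims=True)).shape
(3, 1, 5, 6)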
class tinyms.layers.Range(start, limit=None, delta=1)[source]

‘nn.Range’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.range’ instead.

class tinyms.layers.LGamma[source]

Calculates LGamma using Lanczos’ approximation referring to “A Precision Approximation of the Gamma Function”. The algorithm is:

\[\begin{split}\begin{array}{ll} \\ lgamma(z + 1) = \frac{(\log(2) + \log(pi))}{2} + (z + 1/2) * log(t(z)) - t(z) + A(z) \\ t(z) = z + kLanczosGamma + 1/2 \\ A(z) = kBaseLanczosCoeff + \sum_{k=1}^n \frac{kLanczosCoefficients[k]}{z + k} \end{array}\end{split}\]

However, if the input is less than 0.5 use Euler’s reflection formula:

\[lgamma(x) = \log(pi) - lgamma(1-x) - \log(abs(sin(pi * x)))\]

And please note that

\[lgamma(+/-inf) = +inf\]

Thus, the behaviour of LGamma follows:

  • when x > 0.5, return log(Gamma(x))

  • when x < 0.5 and is not an integer, return the real part of Log(Gamma(x)) where Log is the complex logarithm

  • when x is an integer less than or equal to 0, return +inf

  • when x = +/- inf, return +inf

Inputs:
  • x (Tensor) - The input tensor. Only float16, float32 are supported.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> op = nn.LGamma()
>>> output = op(x)
>>> print(output)
[3.5762787e-07 6.9314754e-01 1.7917603e+00]
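The printed values agree with the log-Gamma function: lgamma(2) = 0, lgamma(3) = log(2), lgamma(4) = log(6). Cross-checking against Python's math.lgamma (an external reference, used here purely for illustration):

>>> import math
>>> [round(math.lgamma(v), 5) for v in (2.0, 3.0, 4.0)]
[0.0, 0.69315, 1.79176]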
class tinyms.layers.DiGamma[source]

Calculates Digamma using Lanczos’ approximation referring to “A Precision Approximation of the Gamma Function”. The algorithm is:

\[\begin{split}\begin{array}{ll} \\ digamma(z + 1) = log(t(z)) + A'(z) / A(z) - kLanczosGamma / t(z) \\ t(z) = z + kLanczosGamma + 1/2 \\ A(z) = kBaseLanczosCoeff + \sum_{k=1}^n \frac{kLanczosCoefficients[k]}{z + k} \\ A'(z) = \sum_{k=1}^n \frac{kLanczosCoefficients[k]}{(z + k)^2} \end{array}\end{split}\]

However, if the input is less than 0.5 use Euler’s reflection formula:

\[digamma(x) = digamma(1 - x) - pi * cot(pi * x)\]
Inputs:
  • x (Tensor[Number]) - The input tensor. Only float16, float32 are supported.

Outputs:

Tensor, has the same shape and dtype as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> op = nn.DiGamma()
>>> output = op(x)
>>> print(output)
[0.42278463  0.92278427 1.2561178]
class tinyms.layers.IGamma[source]

Calculates lower regularized incomplete Gamma function. The lower regularized incomplete Gamma function is defined as:

\[P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\]

where

\[gamma(a, x) = \int_0^x t^{a-1} \exp^{-t} dt\]

is the lower incomplete Gamma function.

Above, \(Q(a, x)\) is the upper regularized incomplete Gamma function.

Inputs:
  • a (Tensor) - The input tensor. With float32 data type. a should have the same dtype with x.

  • x (Tensor) - The input tensor. With float32 data type. x should have the same dtype with a.

Outputs:

Tensor, has the same dtype as a and x.

Raises:

TypeError – If the dtype of x or a is neither float16 nor float32, or if x and a have different dtypes.

Supported Platforms:

Ascend GPU CPU

Examples

>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> igamma = nn.IGamma()
>>> output = igamma(a, x)
>>> print(output)
[0.593994  0.35276785  0.21486944  0.13337152]
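For a = 2 the lower regularized incomplete Gamma function has the closed form \(P(2, x) = 1 - (1 + x)e^{-x}\), so \(P(2, 2) \approx 0.59399\), matching the first printed value. A SciPy cross-check (requires scipy, used here purely as an external reference):

>>> from scipy.special import gammainc
>>> round(float(gammainc(2.0, 2.0)), 5)
0.59399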
class tinyms.layers.LBeta[source]

This method avoids numeric cancellation by explicitly decomposing lgamma into the Stirling approximation and an explicit log_gamma_correction, and cancelling the large terms from the Stirling approximation analytically.

This is semantically equal to

\[P(x, y) = lgamma(x) + lgamma(y) - lgamma(x + y).\]

The method is more accurate for arguments above 8. The reason for accuracy loss in the naive computation is catastrophic cancellation between the lgammas.

Inputs:
  • x (Tensor) - The input tensor. With float16 or float32 data type. x should have the same dtype with y.

  • y (Tensor) - The input tensor. With float16 or float32 data type. y should have the same dtype with x.

Outputs:

Tensor, has the same dtype as x and y.

Raises:

TypeError – If dtype of x or y is neither float16 nor float32, or if x has different dtype with y.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> y = Tensor(np.array([2.0, 3.0, 14.0, 15.0]).astype(np.float32))
>>> lbeta = nn.LBeta()
>>> output = lbeta(y, x)
>>> print(output)
[-1.7917596  -4.094345  -12.000229  -14.754799]
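The first printed value can be recovered from the naive lgamma decomposition in the formula above; LBeta computes the same quantity while avoiding the catastrophic cancellation for large arguments. A math-module sketch (illustrative only):

>>> import math
>>> # lbeta(2, 2) = lgamma(2) + lgamma(2) - lgamma(4)
>>> round(math.lgamma(2.0) + math.lgamma(2.0) - math.lgamma(4.0), 5)
-1.79176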
class tinyms.layers.CosineSimilarity(dim=1, eps=1e-08)[source]

Computes cosine similarity.

\[\mathcal{K} = \frac{\textbf{x}\textbf{y}^{\top}}{\parallel \textbf{x} \parallel \parallel \textbf{y} \parallel},\]

where \(\mathcal{K}\) is the similarity, \(\textbf{x}\) is the first tensor x1, \(\textbf{y}\) is the second tensor x2.

To avoid numerical errors when dividing by small numbers, the lower bound of \(\parallel \textbf{x} \parallel \parallel \textbf{y} \parallel\) is set to eps.

Parameters:
  • dim (int, optional) – Dimension. Default: 1.

  • eps (float, optional) – Small value. Default: 1e-08.

Inputs:
  • x1 (Tensor) - The first tensor \(\textbf{x}\). Shape: \((\ast_1, D, \ast_2)\) where \(D\) is at position dim.

  • x2 (Tensor) - The second tensor \(\textbf{y}\). The shape is the same as x1.

Outputs:

Tensor, with shape \((\ast_1, \ast_2)\), the data type will be inferred automatically.

Raises:

TypeError – If x1 or x2 is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x1 = Tensor([[1.0, 3.0, 4.0, 7.0], [2.0, 4.0, 2.0, 5.0], [3.0, 1.0, 5.0, 8.0]])
>>> x2 = Tensor([[2.0, 4.0, 2.0, 5.0], [3.0, 1.0, 5.0, 8.0], [1.0, 3.0, 4.0, 7.0]])
>>> func = nn.layer.CosineSimilarity()
>>> out = func(x1, x2)
>>> print(out.asnumpy())
[0.9402562 0.8614609 0.9516245]
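The first output value follows directly from the formula: with x = (1, 3, 4, 7) and y = (2, 4, 2, 5), the dot product is 57 and the norm product is 7 * sqrt(75). A numpy cross-check (illustrative only):

>>> a = np.array([1.0, 3.0, 4.0, 7.0])
>>> b = np.array([2.0, 4.0, 2.0, 5.0])
>>> round(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))), 5)
0.94026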
class tinyms.layers.MatMul(transpose_x1=False, transpose_x2=False)[source]

The nn.MatMul interface is deprecated, please use the mindspore.ops.matmul instead.

Supported Platforms:

deprecated

class tinyms.layers.Moments(axis=None, keep_dims=None)[source]

‘nn.Moments’ is deprecated from version 2.0 and will be removed in a future version, use ‘ops.var_mean’ instead.

class tinyms.layers.MatInverse[source]

Calculates the inverse of a Positive-Definite Hermitian matrix using Cholesky decomposition.

Inputs:
  • x (Tensor[Number]) - The input tensor. It must be a positive-definite matrix. With float16 or float32 data type.

Outputs:

Tensor, has the same dtype as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> op = nn.MatInverse()
>>> output = op(x)
>>> print(output)
[[49.36112  -13.555558  2.1111116]
 [-13.555558  3.7777784  -0.5555557]
 [2.1111116  -0.5555557  0.11111113]]
class tinyms.layers.MatDet[source]

Calculates the determinant of a Positive-Definite Hermitian matrix using Cholesky decomposition.

Inputs:
  • x (Tensor[Number]) - The input tensor. It must be a positive-definite matrix. With float16 or float32 data type.

Outputs:

Tensor, has the same dtype as the x.

Raises:

TypeError – If dtype of x is neither float16 nor float32.

Supported Platforms:

GPU

Examples

>>> x = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> op = nn.MatDet()
>>> output = op(x)
>>> print(output)
35.999996
class tinyms.layers.Conv2dBnAct(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', has_bn=False, momentum=0.997, eps=1e-05, activation=None, alpha=0.2, after_fake=True)[source]

A combination of convolution, Batchnorm, and activation layer.

This part is a more detailed overview of Conv2d operation.

Parameters:
  • in_channels (int) – The number of input channel \(C_{in}\).

  • out_channels (int) – The number of output channel \(C_{out}\).

  • kernel_size (Union[int, tuple]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means the value is for both height and width of the kernel. A tuple of 2 ints means the first value is for the height and the other is for the width of the kernel.

  • stride (int) – Specifies stride for all spatial dimensions with the same value. The value of stride must be greater than or equal to 1 and lower than any one of the height and width of the x. Default: 1.

  • pad_mode (str) – Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

  • padding (int) – Implicit paddings on both sides of the x. Default: 0.

  • dilation (int) – Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and lower than any one of the height and width of the x. Default: 1.

  • group (int) – Splits filter into groups, in_channels and out_channels must be divisible by the number of groups. Default: 1.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the convolution kernel. It can be a Tensor, a string, an Initializer or a number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the bias vector. Possible Initializer and string are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

  • has_bn (bool) – Specifies whether to use batchnorm. Default: False.

  • momentum (float) – Momentum for moving average for batchnorm, must be in range [0, 1]. Default: 0.997.

  • eps (float) – Term added to the denominator to improve numerical stability for batchnorm, should be greater than 0. Default: 1e-5.

  • activation (Union[str, Cell, Primitive]) – Specifies activation type. The optional values are as following: ‘softmax’, ‘logsoftmax’, ‘relu’, ‘relu6’, ‘tanh’, ‘gelu’, ‘sigmoid’, ‘prelu’, ‘leakyrelu’, ‘hswish’, ‘hsigmoid’. Default: None.

  • alpha (float) – Slope of the activation function at x < 0 for LeakyReLU. Default: 0.2.

  • after_fake (bool) – Determine whether there must be a fake quantization operation after Conv2dBnAct. Default: True.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\). The data type is float32.

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\). The data type is float32.

Raises:
  • TypeError – If in_channels, out_channels, stride, padding or dilation is not an int.

  • TypeError – If has_bias is not a bool.

  • ValueError – If in_channels, out_channels, stride, padding or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’, ‘pad’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.Conv2dBnAct(120, 240, 4, has_bn=True, activation='relu')
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> result = net(x)
>>> output = result.shape
>>> print(output)
(1, 240, 1024, 640)
class tinyms.layers.DenseBnAct(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, has_bn=False, momentum=0.9, eps=1e-05, activation=None, alpha=0.2, after_fake=True)[source]

A combination of Dense, Batchnorm, and the activation layer.

This part is a more detailed overview of Dense op.

Parameters:
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable weight_init parameter. The dtype is same as x. The values of str refer to the function initializer. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable bias_init parameter. The dtype is same as x. The values of str refer to the function initializer. Default: ‘zeros’.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.

  • has_bn (bool) – Specifies whether to use batchnorm. Default: False.

  • momentum (float) – Momentum for moving average for batchnorm, must be in range [0, 1]. Default: 0.9.

  • eps (float) – Term added to the denominator to improve numerical stability for batchnorm, should be greater than 0. Default: 1e-5.

  • activation (Union[str, Cell, Primitive]) – Specifies activation type. The optional values are as following: ‘softmax’, ‘logsoftmax’, ‘relu’, ‘relu6’, ‘tanh’, ‘gelu’, ‘sigmoid’, ‘prelu’, ‘leakyrelu’, ‘hswish’, ‘hsigmoid’. Default: None.

  • alpha (float) – Slope of the activation function at x < 0 for LeakyReLU. Default: 0.2.

  • after_fake (bool) – Determine whether there must be a fake quantization operation after DenseBnAct. Default: True.

Inputs:
  • x (Tensor) - Tensor of shape \((N, in\_channels)\). The data type is float32.

Outputs:

Tensor of shape \((N, out\_channels)\). The data type is float32.

Raises:
  • TypeError – If in_channels or out_channels is not an int.

  • TypeError – If has_bias, has_bn or after_fake is not a bool.

  • TypeError – If momentum or eps is not a float.

  • ValueError – If momentum is not in range [0, 1.0].

Supported Platforms:

Ascend GPU CPU

Examples

>>> net = nn.DenseBnAct(3, 4)
>>> x = Tensor(np.random.randint(0, 255, [2, 3]), mindspore.float32)
>>> result = net(x)
>>> output = result.shape
>>> print(output)
(2, 4)
class tinyms.layers.TimeDistributed(layer, time_axis, reshape_with_axis=None)[source]

The time distributed layer.

Time distributed is a wrapper that applies a layer to every temporal slice of an input. The input x should be at least 3D. There are two cases in the implementation: when reshape_with_axis is provided, the reshape method is chosen, which is more efficient; otherwise, the inputs are divided along the time axis, which is more general. For example, reshape_with_axis cannot be provided when dealing with Batch Normalization.

Parameters:
  • layer (Union[Cell, Primitive]) – The Cell or Primitive which will be wrapped.

  • time_axis (int) – The axis of time_step.

  • reshape_with_axis (int) – The axis which will be reshaped with time_axis. Default: None.

Inputs:
  • x (Tensor) - Tensor of shape \((N, T, *)\), where \(*\) means any number of additional dimensions.

Outputs:

Tensor of shape \((N, T, *)\)

Raises:

TypeError – If layer is not a Cell or Primitive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> x = Tensor(np.random.random([32, 10, 3]), mindspore.float32)
>>> dense = nn.Dense(3, 6)
>>> net = nn.TimeDistributed(dense, time_axis=1, reshape_with_axis=0)
>>> output = net(x)
>>> print(output.shape)
(32, 10, 6)
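The wrapper is behaviourally equivalent to applying the wrapped layer to each time slice separately. A hedged per-slice sketch reusing the objects from the example above (Tensor slicing along the time axis is assumed here for illustration):

>>> slices = [dense(x[:, t, :]) for t in range(10)]
>>> print(len(slices), slices[0].shape)
10 (32, 6)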
class tinyms.layers.MultiheadAttention(embed_dim, num_heads, dropout=0.0, has_bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False)[source]

This is an implementation of multihead attention as described in the paper Attention is all you need. Given a query vector with target sequence length \(L\), and key and value vectors with source sequence length \(S\), the attention is performed as follows

\[MultiHeadAttention(query, key, vector) = Concat(head_1, \dots, head_h)W^O\]

where \(head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)\). The default is with a bias.

If the query, key and value tensors are the same, the computation is self-attention.

Parameters:
  • embed_dim (int) – Total dimension of MultiheadAttention.

  • num_heads (int) – Number of attention heads. Note that embed_dim will be split across num_heads (i.e. each head will have dimension embed_dim // num_heads).

  • dropout (float) – Dropout probability of attn_output_weights. Default: 0.0.

  • has_bias (bool) – Whether to add bias to the input/output projection layers. Default: True.

  • add_bias_kv (bool) – Whether to add bias to the key and value sequences at axis=0. Default: False.

  • add_zero_attn (bool) – Whether to add a new batch of zeros to the key and value sequences at axis=1. Default: False.

  • kdim (int) – Total number of features for keys. Default: None (kdim=embed_dim).

  • vdim (int) – Total number of features for values. Default: None (vdim=embed_dim).

  • batch_first (bool) – If True, then the input and output shape are \((batch, seq, feature)\) , else \((seq, batch, feature)\) . Default: False.

Inputs:
  • query (Tensor): The query embeddings. If query is unbatched, the shape is \((L, E_q)\), otherwise the shape is \((L, N, E_q)\) when batch_first=False or \((N, L, E_q)\) when batch_first=True, where \(L\) is the target sequence length, \(N\) is the batch size, and \(E_q\) is the query embedding dimension embed_dim. Queries are compared against key-value pairs to produce the output. See “Attention Is All You Need” for more details.

  • key (Tensor): The key embeddings. If key is unbatched, the shape is \((S, E_k)\), otherwise the shape is \((S, N, E_k)\) when batch_first=False or \((N, S, E_k)\) when batch_first=True, where \(S\) is the source sequence length, \(N\) is the batch size, and \(E_k\) is the key embedding dimension kdim. See “Attention Is All You Need” for more details.

  • value (Tensor): The value embeddings. If value is unbatched, the shape is \((S, E_v)\), otherwise the shape is \((S, N, E_v)\) when batch_first=False or \((N, S, E_v)\) when batch_first=True, where \(S\) is the source sequence length, \(N\) is the batch size, and \(E_v\) is the value embedding dimension vdim. See “Attention Is All You Need” for more details.

  • key_padding_mask (Tensor, optional): If specified, a mask of shape \((N, S)\) indicating which elements within key to ignore for the purpose of attention (i.e. treat as “padding”). For unbatched query, shape should be \((S)\). Binary and byte masks are supported. For a binary mask, a True value indicates that the corresponding key value will be ignored for the purpose of attention. For a float mask, it will be directly added to the corresponding key value.

  • need_weights (bool): Whether to return attn_output_weights in addition to attn_outputs. Default: True.

  • attn_mask (Tensor, optional): If specified, a 2D or 3D mask preventing attention to certain positions. Must be of shape \((L, S)\) or \((N\cdot\text{num\_heads}, L, S)\), where \(N\) is the batch size, \(L\) is the target sequence length, and \(S\) is the source sequence length. A 2D mask will be broadcasted across the batch while a 3D mask allows for a different mask for each entry in the batch. Binary, byte, and float masks are supported. For a binary mask, a True value indicates that the corresponding position is not allowed to attend. For a byte mask, a non-zero value indicates that the corresponding position is not allowed to attend. For a float mask, the mask values will be added to the attention weight.

  • average_attn_weights (bool): If true, indicates that the returned attn_weights should be averaged across heads. Otherwise, attn_weights are provided separately per head. Note that this flag only has an effect when need_weights=True. Default: True (i.e. average weights across heads)

Outputs:

Tuple, a tuple containing (attn_output, attn_output_weights)

  • attn_output - Attention outputs. If input is unbatched, the output shape is \((L, E)\), otherwise the output shape is \((L, N, E)\) when batch_first=False or \((N, L, E)\) when batch_first=True, where \(L\) is the target sequence length, \(N\) is the batch size, and \(E\) is the embedding dimension embed_dim.

  • attn_output_weights - Only returned when need_weights=True. If average_attn_weights=True, returns attention weights averaged across heads with shape \((L, S)\) when input is unbatched or \((N, L, S)\) when input is batched, where \(N\) is the batch size, \(L\) is the target sequence length, and \(S\) is the source sequence length. If average_attn_weights=False, returns attention weights per head of shape \((\text{num\_heads}, L, S)\) when input is unbatched or \((N, \text{num\_heads}, L, S)\) when input is batched.

Supported Platforms:

Ascend GPU CPU

Examples

>>> embed_dim, num_heads = 128, 8
>>> seq_length, batch_size = 10, 8
>>> query = Tensor(np.random.randn(seq_length, batch_size, embed_dim), mindspore.float32)
>>> key = Tensor(np.random.randn(seq_length, batch_size, embed_dim), mindspore.float32)
>>> value = Tensor(np.random.randn(seq_length, batch_size, embed_dim), mindspore.float32)
>>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
>>> attn_output, attn_output_weights = multihead_attn(query, key, value)
>>> print(attn_output.shape)
(10, 8, 128)
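A hedged sketch of causal self-attention using the attn_mask input documented above, reusing the objects from the example (the numpy mask construction is an illustrative assumption; per the mask semantics, a True entry blocks attention to that position):

>>> # upper-triangular True mask: each position may attend only to itself
>>> # and earlier positions
>>> causal = np.triu(np.ones((seq_length, seq_length), dtype=np.bool_), k=1)
>>> attn_mask = Tensor(causal)
>>> attn_output, _ = multihead_attn(query, query, query, attn_mask=attn_mask)
>>> print(attn_output.shape)
(10, 8, 128)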
class tinyms.layers.TransformerEncoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: Union[str, mindspore.nn.cell.Cell, callable] = 'relu', layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False)[source]

Transformer Encoder Layer. This is an implementation of a single layer of the transformer encoder, including multihead attention and feedforward layer.

Parameters:
  • d_model (int) – The number of features in the input tensor.

  • nhead (int) – The number of heads in the MultiheadAttention modules.

  • dim_feedforward (int) – The dimension of the feedforward layer. Default: 2048.

  • dropout (float) – The dropout value. Default: 0.1.

  • activation (Union[str, callable, Cell]) – The activation function of the intermediate layer, can be a string (“relu” or “gelu”), Cell instance (nn.ReLU() or nn.GELU()) or a callable (ops.relu or ops.gelu). Default: "relu".

  • layer_norm_eps (float) – The epsilon value in LayerNorm modules. Default: 1e-5.

  • batch_first (bool) –

    If batch_first = True, then the shape of input and output tensors is

    \((batch, seq, feature)\) , otherwise the shape is \((seq, batch, feature)\) .

    Default: False.

  • norm_first (bool) – If norm_first = True, layer norm is done prior to attention and feedforward operations, respectively. Default: False.

Inputs:
  • src (Tensor): the sequence to the encoder layer.

  • src_mask (Tensor, optional): the mask for the src sequence. Default: None.

  • src_key_padding_mask (Tensor, optional): the mask for the src keys per batch. Default: None.

Outputs:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> src = Tensor(np.random.rand(10, 32, 512), mindspore.float32)
>>> out = encoder_layer(src)
>>> # Alternatively, when batch_first=True:
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>>> src = Tensor(np.random.rand(32, 10, 512), mindspore.float32)
>>> out = encoder_layer(src)
>>> print(out.shape)
(32, 10, 512)
class tinyms.layers.TransformerDecoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: Union[str, mindspore.nn.cell.Cell, callable] = 'relu', layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False)[source]

Transformer Decoder Layer. This is an implementation of a single layer of the transformer decoder, including self-attention, cross attention and feedforward layer.

Parameters:
  • d_model (int) – The number of expected features in the input tensor.

  • nhead (int) – The number of heads in the MultiheadAttention modules.

  • dim_feedforward (int) – The dimension of the feedforward layer. Default: 2048.

  • dropout (float) – The dropout value. Default: 0.1.

  • activation (Union[str, callable, Cell]) – The activation function of the intermediate layer, can be a string (“relu” or “gelu”), Cell instance (nn.ReLU() or nn.GELU()) or a callable (ops.relu or ops.gelu). Default: "relu"

  • layer_norm_eps (float) – The epsilon value in LayerNorm modules. Default: 1e-5.

  • batch_first (bool) – If batch_first = True, then the shape of input and output tensors is \((batch, seq, feature)\) , otherwise the shape is \((seq, batch, feature)\). Default: False.

  • norm_first (bool) – If norm_first = True, layer norm is done prior to attention and feedforward operations, respectively. Default: False.

Inputs:
  • tgt (Tensor): The sequence to the decoder layer.

  • memory (Tensor): The sequence from the last layer of the encoder.

  • tgt_mask (Tensor, optional): The mask of the tgt sequence. Default: None.

  • memory_mask (Tensor, optional): The mask of the memory sequence. Default: None.

  • tgt_key_padding_mask (Tensor, optional): The mask of the tgt keys per batch. Default: None.

  • memory_key_padding_mask (Tensor, optional): The mask of the memory keys per batch. Default: None.

Outputs:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> memory = Tensor(np.random.rand(10, 32, 512), mindspore.float32)
>>> tgt = Tensor(np.random.rand(20, 32, 512), mindspore.float32)
>>> out = decoder_layer(tgt, memory)
>>> # Alternatively, when `batch_first` is ``True``:
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)
>>> memory = Tensor(np.random.rand(32, 10, 512), mindspore.float32)
>>> tgt = Tensor(np.random.rand(32, 20, 512), mindspore.float32)
>>> out = decoder_layer(tgt, memory)
>>> print(out.shape)
(32, 20, 512)
class tinyms.layers.TransformerEncoder(encoder_layer, num_layers, norm=None)[source]

Transformer Encoder module consisting of multiple stacked TransformerEncoderLayer layers, including multihead self attention and feedforward layers. Users can build the BERT (https://arxiv.org/abs/1810.04805) model with the corresponding parameters.

Parameters:
  • encoder_layer (Cell) – An instance of the TransformerEncoderLayer() class.

  • num_layers (int) – The number of encoder-layers in the encoder.

  • norm (Cell, optional) – The layer normalization module.

Inputs:
  • src (Tensor): The sequence to the encoder.

  • src_mask (Tensor, optional): The mask of the src sequence. Default: None.

  • src_key_padding_mask (Tensor, optional): The mask of the src keys per batch. Default: None.

Outputs:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
>>> src = Tensor(np.random.rand(10, 32, 512), mindspore.float32)
>>> out = transformer_encoder(src)
>>> print(out.shape)
(10, 32, 512)
class tinyms.layers.TransformerDecoder(decoder_layer, num_layers, norm=None)[source]

Transformer Decoder module consisting of multiple stacked TransformerDecoderLayer layers, including multihead self attention, cross attention and feedforward layers.

Parameters:
  • decoder_layer (Cell) – An instance of the mindspore.nn.TransformerDecoderLayer class.

  • num_layers (int) – The number of decoder-layers in the decoder.

  • norm (Cell, optional) – The layer normalization module.

Inputs:
  • tgt (Tensor): The sequence to the decoder.

  • memory (Tensor): The sequence from the last layer of the encoder.

  • tgt_mask (Tensor, optional): the mask of the tgt sequence. Default: None.

  • memory_mask (Tensor, optional): the mask of the memory sequence. Default: None.

  • tgt_key_padding_mask (Tensor, optional): the mask of the tgt keys per batch. Default: None.

  • memory_key_padding_mask (Tensor, optional): the mask of the memory keys per batch. Default: None.

Outputs:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
>>> memory = Tensor(np.random.rand(10, 32, 512), mindspore.float32)
>>> tgt = Tensor(np.random.rand(20, 32, 512), mindspore.float32)
>>> out = transformer_decoder(tgt, memory)
>>> print(out.shape)
(20, 32, 512)
class tinyms.layers.Transformer(d_model: int = 512, nhead: int = 8, num_encoder_layers: int = 6, num_decoder_layers: int = 6, dim_feedforward: int = 2048, dropout: float = 0.1, activation: Union[str, mindspore.nn.cell.Cell, callable] = 'relu', custom_encoder: Optional[mindspore.nn.cell.Cell] = None, custom_decoder: Optional[mindspore.nn.cell.Cell] = None, layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False)[source]

Transformer module including encoder and decoder. The difference from the original implementation is that this module applies the residual addition before the layer normalization, and the default hidden activation is gelu. The details can be found in Attention is all you need.

Parameters:
  • d_model (int) – The number of expected features in the inputs tensor. Default: 512.

  • nhead (int) – The number of heads in the MultiheadAttention modules. Default: 8.

  • num_encoder_layers (int) – The number of encoder-layers in the encoder. Default: 6.

  • num_decoder_layers (int) – The number of decoder-layers in the decoder. Default: 6.

  • dim_feedforward (int) – The dimension of the feedforward layer. Default: 2048.

  • dropout (float) – The dropout value. Default: 0.1.

  • activation (Union[str, callable, Cell]) – The activation function of the intermediate layer, can be a string (“relu” or “gelu”), Cell instance (nn.ReLU() or nn.GELU()) or a callable (ops.relu or ops.gelu). Default: "relu"

  • custom_encoder (Cell) – Custom encoder. Default: None.

  • custom_decoder (Cell) – Custom decoder. Default: None.

  • layer_norm_eps (float) – The epsilon value in the layer normalization module. Default: 1e-5.

  • batch_first (bool) – If batch_first = True, then the shape of input and output tensors is \((batch, seq, feature)\) , otherwise the shape is \((seq, batch, feature)\) . Default: False.

  • norm_first (bool) – If norm_first = True, layer norm is done prior to attention and feedforward operations, respectively. Default: False.

Inputs:
  • src (Tensor): The source sequence to the encoder.

  • tgt (Tensor): The target sequence to the decoder.

  • src_mask (Tensor, optional): The mask of the src sequence. Default: None.

  • tgt_mask (Tensor, optional): The mask of the tgt sequence. Default: None.

  • memory_mask (Tensor, optional): The additive mask of the encoder output. Default: None.

  • src_key_padding_mask (Tensor, optional): The mask of src keys per batch. Default: None.

  • tgt_key_padding_mask (Tensor, optional): The mask of tgt keys per batch. Default: None.

  • memory_key_padding_mask (Tensor, optional): The mask of memory keys per batch. Default: None.

Outputs:

Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
>>> src = Tensor(np.random.rand(10, 32, 512), mindspore.float32)
>>> tgt = Tensor(np.random.rand(20, 32, 512), mindspore.float32)
>>> out = transformer_model(src, tgt)
>>> print(out.shape)
(20, 32, 512)
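The same model runs with batch-first layouts. A sketch mirroring the layer-level examples above, using the batch_first parameter described earlier:

>>> transformer_model = nn.Transformer(d_model=512, nhead=8, batch_first=True)
>>> src = Tensor(np.random.rand(32, 10, 512), mindspore.float32)
>>> tgt = Tensor(np.random.rand(32, 20, 512), mindspore.float32)
>>> out = transformer_model(src, tgt)
>>> print(out.shape)
(32, 20, 512)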
class tinyms.layers.DenseThor(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, activation=None)[source]

A dense connected layer that saves the information needed for THOR.

Applies a dense connected layer to the input and saves the information A and G of the dense connected layer needed for THOR.

This layer implements the operation as:

\[\text{outputs} = \text{activation}(\text{inputs} * \text{kernel} + \text{bias}),\]

where \(\text{activation}\) is the activation function , \(\text{kernel}\) is a weight matrix with the same data type as the inputs created by the layer, and \(\text{bias}\) is a bias vector with the same data type as the inputs created by the layer (only if has_bias is True).

Parameters:
  • in_channels (int) – The number of the input channels.

  • out_channels (int) – The number of the output channels.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable weight_init parameter. The dtype is same as x. The values of str refer to the function initializer. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – The trainable bias_init parameter. The dtype is same as x. The values of str refer to the function initializer. Default: ‘zeros’.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.

  • activation (str) – activate function applied to the output of the fully connected layer, eg. ‘ReLU’. Default: None.

Inputs:
  • x (Tensor) - Tensor of shape \((N, in\_channels)\).

Outputs:

Tensor of shape \((N, out\_channels)\).

Raises:

ValueError – If the shape of weight_init or bias_init is incorrect.

Supported Platforms:

Ascend GPU

Examples

>>> x = Tensor(np.array([[1, 2, 3], [3, 4, 5]]), mindspore.float32)
>>> net = nn.DenseThor(3, 4, weight_init="ones")
>>> output = net(x)
>>> print(output)
[[ 6.  6.  6.  6.]
 [12. 12. 12. 12.]]
save_gradient(dout)[source]

This function is only used by the THOR optimizer to save the gradient.

class tinyms.layers.Conv2dThor(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')[source]

A 2D convolution layer that saves the information needed for THOR.

Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C_{in}\) is channel number, and \(H_{in}, W_{in}\) are height and width. And saves the information A and G in the 2D convolution layer needed for THOR.

For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is the cross-correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to the \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel and it has shape \((\text{ks_h}, \text{ks_w})\), where \(\text{ks_h}\) and \(\text{ks_w}\) are the height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} // \text{group}, \text{ks_h}, \text{ks_w})\), where group is the group number to split the input x in the channel dimension.

If the ‘pad_mode’ is set to be “valid”, the output height and width will be \(\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} - (\text{ks_h} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} - (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) respectively.

Note

For Ascend, the type of inputs should be subclass of Tensor[Float16], Tensor[Int8]. For GPU, the type of inputs should be subclass of Tensor[Float32].

Parameters:
  • in_channels (int) – The number of the input channel \(C_{in}\).

  • out_channels (int) – The number of the output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the height and width of the 2D convolution window. Single int means that the value is not only the height, but also the width of the kernel. A tuple of 2 integers means the height and the width of the kernel respectively.

  • stride (Union[int, tuple[int]]) – The distance of kernel moving, an int number represents the height and width of movement, or a tuple of two int numbers that represent height and width of movement, respectively. Default: 1.

  • pad_mode (str) –

    Specifies padding mode. The optional values are “same”, “valid”, “pad”. Default: “same”.

    • same: Pads the input so that the shape of the output is the same as that of the input x. The total amount of padding in the horizontal and vertical directions is computed and distributed evenly to the top and bottom, left and right whenever possible; otherwise, the last extra padding is applied to the bottom and the right side. If this mode is set, padding must be 0.

    • valid: Applies no padding and returns the largest possible output height and width; extra pixels are discarded. If this mode is set, padding must be 0.

    • pad: Implicitly pads both sides of the input x. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union[int, tuple[int]]) – Implicit paddings on both sides of the input x. If padding is an integer, the paddings of top, bottom, left and right are the same, equal to padding. If padding is a tuple with four integers, the paddings of top, bottom, left and right will be equal to padding[0], padding[1], padding[2], and padding[3] accordingly. Default: 0.

  • dilation (Union[int, tuple[int]]) – The data type is int or a tuple of 2 integers. Specifies the dilation rate to use for dilated convolution. If set to be \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater or equal to 1 and bounded by the height and width of the input x. Default: 1.

  • group (int) – Splits the filter into groups; in_channels and out_channels must both be divisible by the number of groups. If group equals in_channels and out_channels, this 2D convolution layer can also be called a 2D depthwise convolution layer. Default: 1.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: False.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializes the convolution kernel. It can be a Tensor, a string, an Initializer or a number. When a string is specified, values from ‘TruncatedNormal’, ‘Normal’, ‘Uniform’, ‘HeUniform’ and ‘XavierUniform’ distributions as well as constant ‘One’ and ‘Zero’ distributions are possible. Alias ‘xavier_uniform’, ‘he_uniform’, ‘ones’ and ‘zeros’ are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: ‘normal’.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializes the bias vector. Possible Initializer and string are the same as ‘weight_init’. Refer to the values of Initializer for more details. Default: ‘zeros’.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\).

Supported Platforms:

Ascend GPU

Examples

>>> net = nn.Conv2dThor(120, 240, 4, has_bias=False, weight_init='normal')
>>> # for Ascend
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float16)
>>> print(net(x).shape)
(1, 240, 1024, 640)
save_gradient(dout)[source]

Saves the gradient; this function is used only by the THOR optimizer.

class tinyms.layers.EmbeddingThor(vocab_size, embedding_size, use_one_hot=False, embedding_table='normal', dtype=mindspore.float32, padding_idx=None)[source]

A simple lookup table that stores embeddings of a fixed dictionary and size, and saves the information needed for THOR.

This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings. It also saves the information A and G needed by THOR for this layer.

Note

When ‘use_one_hot’ is set to True, the type of the input x must be mindspore.int32.

Parameters:
  • vocab_size (int) – The size of the dictionary of embeddings.

  • embedding_size (int) – The size of each embedding vector.

  • use_one_hot (bool) – Specifies whether to apply one_hot encoding form. Default: False.

  • embedding_table (Union[Tensor, str, Initializer, numbers.Number]) – Initializes the embedding_table. Refer to class initializer for the values of string when a string is specified. Default: ‘normal’.

  • dtype (mindspore.dtype) – Data type of input x. Default: mindspore.float32.

  • padding_idx (int, None) – If given, the embedding vector at index padding_idx is initialized to zero. Default: None, which deactivates this feature.

Inputs:
  • x (Tensor) - Tensor of input shape \((\text{batch_size}, \text{x_length})\). The elements of the Tensor must be integers and not larger than vocab_size; otherwise, the corresponding embedding vector will be zero.

Outputs:

Tensor of output shape \((\text{batch_size}, \text{x_length}, \text{embedding_size})\).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> net = nn.EmbeddingThor(20000, 768, True)
>>> x = Tensor(np.ones([8, 128]), mindspore.int32)
>>>
>>> # Maps the input word IDs to word embedding.
>>> output = net(x)
>>> output.shape
(8, 128, 768)
save_gradient(dout)[source]

Saves the gradient; this function is used only by the THOR optimizer.

class tinyms.layers.EmbeddingLookupThor(vocab_size, embedding_size, param_init='normal', target='CPU', slice_mode='batch_slice', manual_shapes=None, max_norm=None, sparse=True, vocab_cache_size=0)[source]

Returns a slice of the input tensor based on the specified indices, and saves the information needed for THOR.

This module has the same function as EmbeddingLookup, but additionally saves the information A and G needed by THOR for the embedding lookup layer.

Parameters:
  • vocab_size (int) – The size of the dictionary of embeddings.

  • embedding_size (int) – The size of each embedding vector.

  • param_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the embedding_table. Refer to class initializer for the values of string when a string is specified. Default: ‘normal’.

  • target (str) – Specifies the target where the op is executed. The value must be in [‘DEVICE’, ‘CPU’]. Default: ‘CPU’.

  • slice_mode (str) – The slicing way in semi_auto_parallel/auto_parallel mode. The value must be obtained through nn.EmbeddingLookup. Default: nn.EmbeddingLookup.BATCH_SLICE.

  • manual_shapes (tuple) – The accompaniment array in field slice mode. Default: None.

  • max_norm (Union[float, None]) – A maximum clipping value. The data type must be float16, float32 or None. Default: None

  • sparse (bool) – Whether to use sparse mode. When target is set to ‘CPU’, sparse has to be True. Default: True.

  • vocab_cache_size (int) – Cache size of the dictionary of embeddings. Default: 0. It is valid only for the ‘DEVICE’ target, and the moment parameter of the corresponding optimizer will also be set to the cache size. In addition, note that the cache consumes ‘DEVICE’ memory, so it is suggested to set a reasonable value to avoid running out of memory.

Inputs:
  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\).

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Raises:
  • ValueError – If target is neither ‘CPU’ nor ‘DEVICE’.

  • ValueError – If slice_mode is not one of ‘batch_slice’ or ‘field_slice’ or ‘table_row_slice’ or ‘table_column_slice’.

  • ValueError – If sparse is False and target is ‘CPU’.

  • ValueError – If slice_mode is ‘field_slice’ and manual_shapes is None.

  • TypeError – If vocab_size or embedding_size or vocab_cache_size is not an int.

  • TypeError – If sparse is not a bool or manual_shapes is not a tuple.

  • ValueError – If vocab_size or embedding_size is less than 1.

  • ValueError – If vocab_cache_size is less than 0.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = nn.EmbeddingLookupThor(4, 2)(input_indices)
>>> print(result.shape)
(2, 2, 2)
save_gradient(dout)[source]

Saves the gradient; this function is used only by the THOR optimizer.

class tinyms.layers.ConstantPad1d(padding, value)[source]

Pads the last dimension of the input tensor with a given constant value.

Parameters:
  • padding (Union[int, tuple]) – The padding size to pad the last dimension of the input tensor. If it is an int, the same padding is used on both boundaries of the input’s last dimension. If it is a 2-tuple, uses (padding_0, padding_1) to pad. If the input is x, the size of the last dimension of the output is \(padding\_0 + x.shape[-1] + padding\_1\). The remaining dimensions of the output are consistent with those of the input.

  • value (Union[int, float]) – Padding value.

Returns:

Tensor, the tensor after padding.

Raises:
  • TypeError – If padding is not a tuple or int.

  • TypeError – If value is not int or float.

  • ValueError – If the length of padding with tuple type is not equal to 2.

  • ValueError – If the output shape after padding is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ConstantPad1d
>>> x = np.ones(shape=(1, 2, 3, 4)).astype(np.float32)
>>> x = Tensor(x)
>>> # padding is tuple
>>> padding = (0, 1)
>>> value = 0.5
>>> pad1d = ConstantPad1d(padding, value)
>>> out = pad1d(x)
>>> print(out)
[[[[1.  1.  1.  1.  0.5]
   [1.  1.  1.  1.  0.5]
   [1.  1.  1.  1.  0.5]]
  [[1.  1.  1.  1.  0.5]
   [1.  1.  1.  1.  0.5]
   [1.  1.  1.  1.  0.5]]]]
>>> print(out.shape)
(1, 2, 3, 5)
>>> # padding is int
>>> padding = 1
>>> value = 0.5
>>> pad1d = ConstantPad1d(padding, value)
>>> out = pad1d(x)
>>> print(out)
[[[[0.5 1.  1.  1.  1.  0.5]
   [0.5 1.  1.  1.  1.  0.5]
   [0.5 1.  1.  1.  1.  0.5]]
  [[0.5 1.  1.  1.  1.  0.5]
   [0.5 1.  1.  1.  1.  0.5]
   [0.5 1.  1.  1.  1.  0.5]]]]
>>> print(out.shape)
(1, 2, 3, 6)
>>> # padding is negative
>>> padding = (-1, 0)
>>> value = 0.5
>>> pad1d = ConstantPad1d(padding, value)
>>> out = pad1d(x)
>>> print(out)
[[[[1. 1. 1.]
   [1. 1. 1.]
   [1. 1. 1.]]
  [[1. 1. 1.]
   [1. 1. 1.]
   [1. 1. 1.]]]]
>>> print(out.shape)
(1, 2, 3, 3)
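
For non-negative paddings, np.pad with mode='constant' gives the same result and is a handy way to sanity-check expected outputs; a small illustrative sketch for the (0, 1) case above:

>>> import numpy as np
>>> a = np.ones((1, 2, 3, 4), dtype=np.float32)
>>> np.pad(a, ((0, 0), (0, 0), (0, 0), (0, 1)), mode='constant', constant_values=0.5).shape
(1, 2, 3, 5)
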
class tinyms.layers.ConstantPad2d(padding, value)[source]

Pads the last two dimensions of the input tensor with a given constant value.

Parameters:
  • padding (Union[int, tuple]) – The padding size to pad the last two dimensions of the input tensor. If it is an int, the same padding is used on the boundaries of the input’s last two dimensions. If it is a tuple of length 4, uses (padding_0, padding_1, padding_2, padding_3) to pad. If the input is x, the size of the last dimension of the output is \(padding\_0 + x.shape[-1] + padding\_1\), and the size of the penultimate dimension is \(padding\_2 + x.shape[-2] + padding\_3\). The remaining dimensions of the output are consistent with those of the input.

  • value (Union[int, float]) – Padding value.

Returns:

Tensor, the tensor after padding.

Raises:
  • TypeError – If padding is not a tuple or int.

  • TypeError – If value is not int or float.

  • ValueError – If the length of padding is more than 4 or not a multiple of 2.

  • ValueError – If the output shape after padding is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ConstantPad2d
>>> x = np.ones(shape=(1, 2, 3, 4)).astype(np.float32)
>>> x = Tensor(x)
>>> padding = (-1, 1, 0, 1)
>>> value = 0.5
>>> pad2d = ConstantPad2d(padding, value)
>>> out = pad2d(x)
>>> print(out)
[[[[1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [0.5 0.5 0.5 0.5]]
  [[1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [0.5 0.5 0.5 0.5]]]]
>>> print(out.shape)
(1, 2, 4, 4)
class tinyms.layers.ConstantPad3d(padding, value)[source]

Pads the last three dimensions of the input tensor with a given constant value.

Parameters:
  • padding (Union[int, tuple]) – The padding size to pad the last three dimensions of the input tensor. If it is an int, the same padding is used on the boundaries of the input’s last three dimensions. If it is a tuple of length 6, uses (padding_0, padding_1, padding_2, padding_3, padding_4, padding_5) to pad. If the input is x, the size of the last dimension of the output is \(padding\_0 + x.shape[-1] + padding\_1\), the size of the penultimate dimension is \(padding\_2 + x.shape[-2] + padding\_3\), and the size of the third-to-last dimension is \(padding\_4 + x.shape[-3] + padding\_5\). The remaining dimensions of the output are consistent with those of the input.

  • value (Union[int, float]) – Padding value.

Returns:

Tensor, the tensor after padding.

Raises:
  • TypeError – If padding is not a tuple or int.

  • TypeError – If value is not int or float.

  • ValueError – If the length of padding is more than 6 or not a multiple of 2.

  • ValueError – If the output shape after padding is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ConstantPad3d
>>> x = np.ones(shape=(1, 2, 3, 4)).astype(np.float32)
>>> x = Tensor(x)
>>> padding = (-1, 1, 0, 1, 1, 0)
>>> value = 0.5
>>> pad3d = ConstantPad3d(padding, value)
>>> out = pad3d(x)
>>> print(out)
[[[[0.5 0.5 0.5 0.5]
   [0.5 0.5 0.5 0.5]
   [0.5 0.5 0.5 0.5]
   [0.5 0.5 0.5 0.5]]
  [[1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [0.5 0.5 0.5 0.5]]
  [[1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [1.  1.  1.  0.5]
   [0.5 0.5 0.5 0.5]]]]
>>> print(out.shape)
(1, 3, 4, 4)
class tinyms.layers.ReflectionPad1d(padding)[source]

Pads the last dimension of the input tensor by reflection, according to the given padding.

Parameters:

padding (union[int, tuple]) – The padding size to pad the last dimension of input tensor. If padding is an integer: all directions will be padded with the same size. If padding is a tuple: uses \((pad\_left, pad\_right)\) to pad.

Inputs:
  • x (Tensor) - 2D or 3D, shape: \((C, W_{in})\) or \((N, C, W_{in})\).

Outputs:

Tensor, after padding. Shape: \((C, W_{out})\) or \((N, C, W_{out})\), where \(W_{out} = W_{in} + pad\_left + pad\_right\).

Raises:
  • TypeError – If ‘padding’ is not a tuple or int.

  • TypeError – If there is an element in ‘padding’ that is not int.

  • ValueError – If the length of ‘padding’ is not divisible by 2.

  • ValueError – If there is an element in ‘padding’ that is negative.

  • ValueError – If there is a dimension mismatch between the padding and the tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ReflectionPad1d
>>> x = Tensor(np.array([[[0, 1, 2, 3], [4, 5, 6, 7]]]).astype(np.float32))
>>> # x has shape (1, 2, 4)
>>> padding = (3, 1)
>>> # The first and the second dimension of x remain the same.
>>> # The third dimension of x: W_out = W_in + pad_left + pad_right = 4 + 3 + 1 = 8
>>> pad1d = ReflectionPad1d(padding)
>>> out = pad1d(x)
>>> # The shape of out is (1, 2, 8)
>>> print(out)
[[[3. 2. 1. 0. 1. 2. 3. 2.]
  [7. 6. 5. 4. 5. 6. 7. 6.]]]
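
np.pad with mode='reflect' reproduces this behavior and is a convenient way to sanity-check expected outputs; an illustrative sketch, not part of the API:

>>> import numpy as np
>>> a = np.array([[[0, 1, 2, 3], [4, 5, 6, 7]]], dtype=np.float32)
>>> print(np.pad(a, ((0, 0), (0, 0), (3, 1)), mode='reflect'))
[[[3. 2. 1. 0. 1. 2. 3. 2.]
  [7. 6. 5. 4. 5. 6. 7. 6.]]]
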
class tinyms.layers.ReflectionPad2d(padding)[source]

Pads the last two dimensions of the input tensor by reflection, according to the given padding.

Parameters:

padding (union[int, tuple]) – The padding size to pad the input tensor. If padding is an integer: all directions will be padded with the same size. If padding is a tuple: uses \((pad\_left, pad\_right, pad\_up, pad\_down)\) to pad.

Inputs:
  • x (Tensor) - 3D or 4D, shape: \((C, H_{in}, W_{in})\) or \((N, C, H_{in}, W_{in})\).

Outputs:

Tensor, after padding. Shape: \((C, H_{out}, W_{out})\) or \((N, C, H_{out}, W_{out})\), where \(H_{out} = H_{in} + pad\_up + pad\_down\), \(W_{out} = W_{in} + pad\_left + pad\_right\).

Raises:
  • TypeError – If ‘padding’ is not a tuple or int.

  • TypeError – If there is an element in ‘padding’ that is not int.

  • ValueError – If the length of ‘padding’ is not divisible by 2.

  • ValueError – If there is an element in ‘padding’ that is negative.

  • ValueError – If there is a dimension mismatch between the padding and the tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ReflectionPad2d
>>> x = Tensor(np.array([[[0, 1, 2], [3, 4, 5], [6, 7, 8]]]).astype(np.float32))
>>> # x has shape (1, 3, 3)
>>> padding = (1, 1, 2, 0)
>>> pad2d = ReflectionPad2d(padding)
>>> # The first dimension of x remains the same.
>>> # The second dimension of x: H_out = H_in + pad_up + pad_down = 3 + 2 + 0 = 5
>>> # The third dimension of x: W_out = W_in + pad_left + pad_right = 3 + 1 + 1 = 5
>>> out = pad2d(x)
>>> # The shape of out is (1, 5, 5)
>>> print(out)
[[[7. 6. 7. 8. 7.]
  [4. 3. 4. 5. 4.]
  [1. 0. 1. 2. 1.]
  [4. 3. 4. 5. 4.]
  [7. 6. 7. 8. 7.]]]
class tinyms.layers.ReflectionPad3d(padding)[source]

Pad the given tensor in a reflecting way using the input boundaries as the axis of symmetry.

Note

ReflectionPad3d has not supported 5D tensor yet.

Parameters:

padding (union[int, tuple]) – The padding size to pad the input tensor. If padding is an integer: all directions will be padded with the same size. If padding is a tuple: uses \((pad\_left, pad\_right, pad\_up, pad\_down, pad\_front, pad\_back)\) to pad.

Inputs:
  • x (Tensor) - 4D Tensor, shape: \((N, D_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, after padding. Shape: \((N, D_{out}, H_{out}, W_{out})\), where \(D_{out} = D_{in} + pad\_front + pad\_back\), \(H_{out} = H_{in} + pad\_up + pad\_down\), \(W_{out} = W_{in} + pad\_left + pad\_right\).

Raises:
  • TypeError – If ‘padding’ is not a tuple or int.

  • TypeError – If there is an element in ‘padding’ that is not int.

  • ValueError – If the length of ‘padding’ is not divisible by 2.

  • ValueError – If there is an element in ‘padding’ that is negative.

  • ValueError – If there is a dimension mismatch between the padding and the tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ReflectionPad3d
>>> arr = np.arange(8).astype(np.float32).reshape((1, 2, 2, 2))
>>> x = Tensor(arr)
>>> # x has shape (1, 2, 2, 2)
>>> padding = (1, 1, 1, 0, 0, 1)
>>> pad3d = ReflectionPad3d(padding)
>>> out = pad3d(x)
>>> # The first dimension of x remains the same.
>>> # The second dimension of x: D_out = D_in + pad_front + pad_back = 2 + 0 + 1 = 3
>>> # The third dimension of x: H_out = H_in + pad_up + pad_down = 2 + 1 + 0 = 3
>>> # The last dimension of x: W_out = W_in + pad_left + pad_right = 2 + 1 + 1 = 4
>>> # The shape of out is (1, 3, 3, 4)
>>> print(out)
[[[[3. 2. 3. 2.]
   [1. 0. 1. 0.]
   [3. 2. 3. 2.]]
  [[7. 6. 7. 6.]
   [5. 4. 5. 4.]
   [7. 6. 7. 6.]]
  [[3. 2. 3. 2.]
   [1. 0. 1. 0.]
   [3. 2. 3. 2.]]]]
class tinyms.layers.ZeroPad2d(padding)[source]

Pads the last two dimensions of the input tensor with zeros.

Parameters:

padding (Union[int, tuple]) – The padding size to pad the last two dimensions of the input tensor. If it is an int, the same padding is used on the boundaries of the input’s last two dimensions. If it is a tuple of length 4, uses (padding_0, padding_1, padding_2, padding_3) to pad. If the input is x, the size of the last dimension of the output is \(padding\_0 + x.shape[-1] + padding\_1\), and the size of the penultimate dimension is \(padding\_2 + x.shape[-2] + padding\_3\). The remaining dimensions of the output are consistent with those of the input.

Returns:

Tensor, the tensor after padding.

Raises:
  • TypeError – If padding is not a tuple or int.

  • ValueError – If the length of padding is more than 4 or not a multiple of 2.

  • ValueError – If the output shape after padding is not positive.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import ZeroPad2d
>>> x = np.ones(shape=(1, 2, 3, 4)).astype(np.float32)
>>> x = Tensor(x)
>>> padding = (-1, 1, 0, 1)
>>> pad = ZeroPad2d(padding)
>>> out = pad(x)
>>> print(out)
[[[[1. 1. 1. 0.]
   [1. 1. 1. 0.]
   [1. 1. 1. 0.]
   [0. 0. 0. 0.]]
  [[1. 1. 1. 0.]
   [1. 1. 1. 0.]
   [1. 1. 1. 0.]
   [0. 0. 0. 0.]]]]
>>> print(out.shape)
(1, 2, 4, 4)
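
Negative padding crops instead of padding. A NumPy sketch of the (-1, 1, 0, 1) case above (illustrative only): crop one column on the left, then zero-pad one column on the right and one row at the bottom:

>>> import numpy as np
>>> a = np.ones((1, 2, 3, 4), dtype=np.float32)
>>> cropped = a[..., 1:]  # pad_left = -1 removes one column
>>> padded = np.pad(cropped, ((0, 0), (0, 0), (0, 1), (0, 1)), mode='constant')
>>> print(padded.shape)
(1, 2, 4, 4)
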
class tinyms.layers.ReplicationPad1d(padding)[source]

Pads the W dimension of the input x according to padding.

Parameters:

padding (union[int, tuple]) –

The padding size to pad the last dimension of x .

  • If padding is an integer, all directions will be padded with the same size.

  • If padding is a tuple, uses \((pad_{left}, pad_{right})\) to pad.

Inputs:
  • x (Tensor) - 2D or 3D, shape: \((C, W_{in})\) or \((N, C, W_{in})\).

Outputs:

Tensor, after padding. Shape: \((C, W_{out})\) or \((N, C, W_{out})\), where \(W_{out} = W_{in} + pad_{left} + pad_{right}\)

Raises:
  • TypeError – If padding is neither a tuple nor an int.

  • TypeError – If there is an element in padding that is not int.

  • ValueError – If padding is tuple and the length of padding is not divisible by 2.

  • ValueError – If padding is tuple and there is a dimension mismatch between the padding and the tensor.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.nn import ReplicationPad1d
>>> pad1d = ReplicationPad1d(2)
>>> input = Tensor(np.arange(0, 8).reshape(1, 2, 4), mindspore.float32)
>>> print(input)
[[[0. 1. 2. 3.]
  [4. 5. 6. 7.]]]
>>> out = pad1d(input)
>>> print(out)
[[[0. 0. 0. 1. 2. 3. 3. 3.]
  [4. 4. 4. 5. 6. 7. 7. 7.]]]
>>> pad1d = ReplicationPad1d((3, 1))
>>> out = pad1d(input)
>>> print(out)
[[[0. 0. 0. 0. 1. 2. 3. 3.]
  [4. 4. 4. 4. 5. 6. 7. 7.]]]
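
np.pad with mode='edge' mirrors this replication behavior, which makes it easy to sanity-check expected outputs; an illustrative sketch:

>>> import numpy as np
>>> a = np.arange(0, 8, dtype=np.float32).reshape(1, 2, 4)
>>> print(np.pad(a, ((0, 0), (0, 0), (2, 2)), mode='edge'))
[[[0. 0. 0. 1. 2. 3. 3. 3.]
  [4. 4. 4. 5. 6. 7. 7. 7.]]]
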
class tinyms.layers.ReplicationPad2d(padding)[source]

Pads the H and W dimensions of the input x according to padding.

Parameters:

padding (union[int, tuple]) –

The padding size to pad the last two dimension of x .

  • If padding is an integer, all directions will be padded with the same size.

  • If padding is a tuple, uses \((pad_{left}, pad_{right}, pad_{up}, pad_{down})\) to pad.

Inputs:
  • x (Tensor) - 3D or 4D, shape: \((C, H_{in}, W_{in})\) or \((N, C, H_{in}, W_{in})\).

Outputs:

Tensor, after padding. Shape: \((C, H_{out}, W_{out})\) or \((N, C, H_{out}, W_{out})\), where \(H_{out} = H_{in} + pad_{up} + pad_{down}\), \(W_{out} = W_{in} + pad_{left} + pad_{right}\).

Raises:
  • TypeError – If padding is neither a tuple nor an int.

  • TypeError – If there is an element in padding that is not int.

  • ValueError – If padding is tuple and the length of padding is not divisible by 2.

  • ValueError – If padding is tuple and there is a dimension mismatch between the padding and the tensor.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.nn import ReplicationPad2d
>>> pad2d = ReplicationPad2d(2)
>>> input = Tensor(np.arange(0, 9).reshape(1, 1, 3, 3), mindspore.float32)
>>> print(input)
[[[[0. 1. 2.]
   [3. 4. 5.]
   [6. 7. 8.]]]]
>>> out = pad2d(input)
>>> print(out)
[[[[0. 0. 0. 1. 2. 2. 2.]
   [0. 0. 0. 1. 2. 2. 2.]
   [0. 0. 0. 1. 2. 2. 2.]
   [3. 3. 3. 4. 5. 5. 5.]
   [6. 6. 6. 7. 8. 8. 8.]
   [6. 6. 6. 7. 8. 8. 8.]
   [6. 6. 6. 7. 8. 8. 8.]]]]
>>> pad2d = ReplicationPad2d((1, 1, 2, 0))
>>> out = pad2d(input)
>>> print(out)
[[[[0. 0. 1. 2. 2.]
   [0. 0. 1. 2. 2.]
   [0. 0. 1. 2. 2.]
   [3. 3. 4. 5. 5.]
   [6. 6. 7. 8. 8.]]]]
class tinyms.layers.ReplicationPad3d(padding)[source]

Pads the D, H and W dimensions of the input x according to padding.

Parameters:

padding (union[int, tuple]) –

The padding size to pad the last three dimension of x .

  • If padding is an integer, all directions will be padded with the same size.

  • If padding is a tuple, uses \((pad_{left}, pad_{right}, pad_{up}, pad_{down}, pad_{front}, pad_{back})\) to pad.

Inputs:
  • x (Tensor) - 4D or 5D, shape: \((C, D_{in}, H_{in}, W_{in})\) or \((N, C, D_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, after padding, shape: \((C, D_{out}, H_{out}, W_{out})\) or \((N, C, D_{out}, H_{out}, W_{out})\), where \(D_{out} = D_{in} + pad_{front} + pad_{back}\), \(H_{out} = H_{in} + pad_{up} + pad_{down}\), \(W_{out} = W_{in} + pad_{left} + pad_{right}\).

Raises:
  • TypeError – If padding is neither a tuple nor an int.

  • TypeError – If there is an element in padding that is not int.

  • ValueError – If padding is tuple and the length of padding is not divisible by 2.

  • ValueError – If padding is tuple and there is a dimension mismatch between the padding and the tensor.

Supported Platforms:

GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.nn import ReplicationPad3d
>>> pad3d = ReplicationPad3d(1)
>>> input = Tensor(np.arange(0, 9).reshape(1, 1, 1, 3, 3), mindspore.float32)
>>> out = pad3d(input)
>>> print(out)
[[[[[0. 0. 1. 2. 2.]
    [0. 0. 1. 2. 2.]
    [3. 3. 4. 5. 5.]
    [6. 6. 7. 8. 8.]
    [6. 6. 7. 8. 8.]]
   [[0. 0. 1. 2. 2.]
    [0. 0. 1. 2. 2.]
    [3. 3. 4. 5. 5.]
    [6. 6. 7. 8. 8.]
    [6. 6. 7. 8. 8.]]
   [[0. 0. 1. 2. 2.]
    [0. 0. 1. 2. 2.]
    [3. 3. 4. 5. 5.]
    [6. 6. 7. 8. 8.]
    [6. 6. 7. 8. 8.]]]]]
class tinyms.layers.ChannelShuffle(groups)[source]

Divides the channels of a Tensor of shape \((*, C, H, W)\) into \(g\) groups to obtain a Tensor of shape \((*, \frac{C}{g}, g, H, W)\), transposes along the \(\frac{C}{g}\) and \(g\) axes, and then restores the Tensor to its original shape.

Parameters:

groups (int) – Number of groups to divide channels into; must be greater than 0. Referred to as \(g\) above.

Inputs:
  • x (Tensor) - Tensor of shape \((*, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with the same type and shape as x.

Raises:
  • TypeError – If groups is not an int.

  • ValueError – If groups is less than 1.

  • ValueError – If the dimension of x is less than 3.

  • ValueError – If the number of channels is not divisible by groups.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.nn as nn
>>> channel_shuffle = nn.ChannelShuffle(2)
>>> x = Tensor(np.arange(16).astype(np.int32).reshape(1, 4, 2, 2))
>>> print(x)
[[[[ 0  1]
   [ 2  3]]
  [[ 4  5]
   [ 6  7]]
  [[ 8  9]
   [10 11]]
  [[12 13]
   [14 15]]]]
>>> output = channel_shuffle(x)
>>> print(output)
[[[[ 0  1]
   [ 2  3]]
  [[ 8  9]
   [10 11]]
  [[ 4  5]
   [ 6  7]]
  [[12 13]
   [14 15]]]]
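
The shuffle is equivalent to a reshape–transpose–reshape, as this NumPy sketch (illustrative only) shows for the example above:

>>> import numpy as np
>>> x = np.arange(16).reshape(1, 4, 2, 2)
>>> n, c, h, w = x.shape
>>> g = 2
>>> y = x.reshape(n, g, c // g, h, w).transpose(0, 2, 1, 3, 4).reshape(n, c, h, w)
>>> print(y[0, :, 0, 0])  # channel order becomes 0, 2, 1, 3
[ 0  8  4 12]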

tinyms.model

tinyms.initializers

class tinyms.initializers.Initializer(**kwargs)[source]

The abstract base class of the initializer.

Parameters:

kwargs (dict) – Keyword arguments for Initializer.

tinyms.initializers.initializer(init, shape=None, dtype=mindspore.float32)[source]

Create and initialize a tensor.

Parameters:
  • init (Union[Tensor, str, Initializer, numbers.Number]) –

    Initialize value.

    • str: The init should be the alias of the class inheriting from Initializer and the corresponding class will be called in practice. The value of ‘init’ can be “normal”, “ones” or “zeros”, etc.

    • Initializer: The init should be the class inheriting from Initializer to initialize tensor.

    • numbers.Number: The Constant will be called to initialize tensor.

    • Tensor: The tensor will be called to initialize tensor.

  • shape (Union[tuple, list, int]) – The shape of the initialized tensor. Default: None.

  • dtype (mindspore.dtype) – The type of data in initialized tensor. Default: mindspore.float32.

Returns:

Tensor, the initialized tensor.

Raises:
  • TypeError – The type of the argument ‘init’ is not correct.

  • ValueError – The shape of the tensor which is passed through ‘init’ is not the same as that passed by ‘shape’.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.common.initializer import initializer, One
>>> data = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> tensor1 = initializer(data, [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)
>>> tensor3 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor4 = initializer(0, [1, 2, 3], mindspore.float32)
class tinyms.initializers.TruncatedNormal(sigma=0.01)[source]

Generates an array with values sampled from Truncated Normal distribution in order to initialize a tensor.

Parameters:

sigma (float) – The standard deviation of Truncated Normal distribution. Default: 0.01.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, TruncatedNormal
>>> tensor1 = initializer(TruncatedNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('truncatedNormal', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Normal(sigma=0.01, mean=0.0)[source]

Generates an array with values sampled from the Normal distribution \({N}(\text{mean}, \text{sigma}^2)\) in order to initialize a tensor.

\[f(x) = \frac{1}{\sqrt{2\pi}\,\text{sigma}} \exp\left(-\frac{(x - \text{mean})^2}{2\,\text{sigma}^2}\right)\]
Parameters:
  • sigma (float) – The standard deviation of Normal distribution. Default: 0.01.

  • mean (float) – The mean of Normal distribution. Default: 0.0.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Normal
>>> tensor1 = initializer(Normal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('normal', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Uniform(scale=0.07)[source]

Generates an array with values sampled from Uniform distribution \({U}(-\text{scale}, \text{scale})\) in order to initialize a tensor.

Parameters:

scale (float) – The bound of the Uniform distribution. Default: 0.07.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Uniform
>>> tensor1 = initializer(Uniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('uniform', [1, 2, 3], mindspore.float32)
class tinyms.initializers.HeUniform(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Generates an array with values sampled from HeKaiming Uniform distribution \({U}(-\text{boundary}, \text{boundary})\) in order to initialize a tensor, where

\[boundary = \text{gain} \times \sqrt{\frac{3}{fan\_mode}}\]

where \(gain\) is an optional scaling factor. If \(fan\_mode\) is ‘fan_in’, it is the number of input units of the weight tensor. If \(fan\_mode\) is ‘fan_out’, it is the number of output units of the weight tensor.

For details of HeUniform algorithm, please check https://arxiv.org/abs/1502.01852.
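
As an illustration, the boundary can be evaluated directly; this sketch assumes the standard Kaiming gain for leaky_relu, \(\text{gain} = \sqrt{2 / (1 + \text{negative_slope}^2)}\):

>>> import math
>>> def he_uniform_boundary(fan_mode, negative_slope=0):
...     # gain for leaky_relu, per the standard Kaiming formulation
...     gain = math.sqrt(2 / (1 + negative_slope ** 2))
...     return gain * math.sqrt(3 / fan_mode)
>>> round(he_uniform_boundary(100), 4)
0.2449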

Parameters:
  • negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

  • mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass. Default: ‘fan_in’.

  • nonlinearity (str) – The non-linear function, recommended to use only with ‘relu’ or ‘leaky_relu’. Default: ‘leaky_relu’.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeUniform
>>> tensor1 = initializer(HeUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_uniform', [1, 2, 3], mindspore.float32)
class tinyms.initializers.HeNormal(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Generates an array with values sampled from HeKaiming Normal distribution \({N}(0, \text{sigma}^2)\) in order to initialize a tensor, where

\[sigma = \frac{gain} {\sqrt{fan\_mode}}\]

where \(gain\) is an optional scaling factor, and \(fan\_mode\) is the number of input or output units of the weight tensor, depending on whether mode is ‘fan_in’ or ‘fan_out’.

For details of HeNormal algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters:
  • negative_slope (int, float) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

  • mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass. Default: ‘fan_in’.

  • nonlinearity (str) – The non-linear function, recommended to use only with ‘relu’ or ‘leaky_relu’. Default: ‘leaky_relu’.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeNormal
>>> tensor1 = initializer(HeNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_normal', [1, 2, 3], mindspore.float32)
class tinyms.initializers.XavierUniform(gain=1)[source]

Generates an array with values sampled from Xavier uniform distribution \({U}(-\text{boundary}, \text{boundary})\) in order to initialize a tensor, where

\[boundary = gain * \sqrt{\frac{6}{n_{in} + n_{out}}}\]

where \(gain\) is an optional scaling factor. \(n_{in}\) is the number of input units in the weight tensor, \(n_{out}\) is the number of output units in the weight tensor.

For details of XavierUniform algorithm, please check http://proceedings.mlr.press/v9/glorot10a.html.
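
The boundary formula is easy to evaluate by hand; a small illustrative check:

>>> import math
>>> def xavier_boundary(n_in, n_out, gain=1):
...     # boundary = gain * sqrt(6 / (n_in + n_out)), per the formula above
...     return gain * math.sqrt(6 / (n_in + n_out))
>>> round(xavier_boundary(100, 50), 4)
0.2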

Parameters:

gain (float) – An optional scaling factor. Default: 1.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, XavierUniform
>>> tensor1 = initializer(XavierUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('xavier_uniform', [1, 2, 3], mindspore.float32)
class tinyms.initializers.One(**kwargs)[source]

Generates an array with constant value of one in order to initialize a tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Zero(**kwargs)[source]

Generates an array with constant value of zero in order to initialize a tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Zero
>>> tensor1 = initializer(Zero(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('zeros', [1, 2, 3], mindspore.float32)
class tinyms.initializers.Constant(value)[source]

Generates an array with constant value in order to initialize a tensor.

Parameters:

value (Union[int, numpy.ndarray]) – The value to initialize.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Constant
>>> tensor1 = initializer(Constant(3), [1, 2, 3], mindspore.float32)

tinyms.losses

tinyms.optimizers

tinyms.callbacks

Callback related classes and functions in model training phase.

class tinyms.callbacks.LossTimeMonitor(lr_init=None)[source]

Monitor loss and time.

Parameters:

lr_init (numpy.ndarray) – Train learning rate. Default: None.

Returns:

None

Examples

>>> from tinyms import Tensor
>>> from tinyms.callbacks import LossTimeMonitor
>>>
>>> LossTimeMonitor(lr_init=Tensor([0.05] * 100).asnumpy())
class tinyms.callbacks.LossTimeMonitorV2[source]

Monitor loss and time, version 2.0. Unlike LossTimeMonitor, this version does not show the learning rate.

Returns:

None

Examples

>>> from tinyms.callbacks import LossTimeMonitorV2
>>>
>>> LossTimeMonitorV2()
class tinyms.callbacks.BertLossCallBack(dataset_size=1)[source]

Monitor the loss during training. If the loss is NAN or INF, training is terminated.

Parameters:

dataset_size (int) – Dataset size, i.e. the number of steps in one epoch; used to compute the epoch progress when printing the loss. Default: 1.

Returns:

None

Examples

>>> from tinyms.callbacks import BertLossCallBack
>>>
>>> BertLossCallBack(dataset_size=1)
step_end(run_context)[source]

Print the loss after each step.

class tinyms.callbacks.Callback[source]

Abstract base class used to build a Callback class. Callbacks are context managers which will be entered and exited when passed into the Model. You can use this mechanism to do some custom operations.

Each method of the Callback class corresponds to a stage in the training or eval process, and those methods have the same input run_context, which holds context information of the model in the training or eval process. When defining a Callback subclass or creating a custom Callback, note that you should override methods with names prefixed with “on_train” or “on_eval”; otherwise a ValueError will be raised if the customized Callback is used in model.fit.

When creating a custom Callback, model context information can be obtained in Callback methods by calling RunContext.original_args(), which is a dictionary variable recording current attributes. Users can add customized attributes to the information. The training process can also be stopped by calling the request_stop method. For details of custom Callback, please check Callback.

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore import dataset as ds
>>> from mindspore.train import Model, Callback
>>> class Print_info(Callback):
...     def step_end(self, run_context):
...         cb_params = run_context.original_args()
...         print("step_num: ", cb_params.cur_step_num)
>>>
>>> print_cb = Print_info()
>>> data = {"x": np.float32(np.random.rand(64, 10)), "y": np.random.randint(0, 5, (64,))}
>>> dataset = ds.NumpySlicesDataset(data=data).batch(32)
>>> net = nn.Dense(10, 5)
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> model.train(1, dataset, callbacks=print_cb)
step_num: 2
begin(run_context)[source]

Called once before the network executes. A backwards compatibility alias for on_train_begin and on_eval_begin.

Parameters:

run_context (RunContext) – Include some information of the model.

end(run_context)[source]

Called once after network training. A backwards compatibility alias for on_train_end and on_eval_end.

Parameters:

run_context (RunContext) – Include some information of the model.

epoch_begin(run_context)[source]

Called before each epoch beginning. A backwards compatibility alias for on_train_epoch_begin and on_eval_epoch_begin.

Parameters:

run_context (RunContext) – Include some information of the model.

epoch_end(run_context)[source]

Called after each epoch finished. A backwards compatibility alias for on_train_epoch_end and on_eval_epoch_end.

Parameters:

run_context (RunContext) – Include some information of the model.

on_eval_begin(run_context)[source]

Called before eval begin.

Parameters:

run_context (RunContext) – Include some information of the model.

on_eval_end(run_context)[source]

Called after eval end.

Parameters:

run_context (RunContext) – Include some information of the model.

on_eval_epoch_begin(run_context)[source]

Called before eval epoch begin.

Parameters:

run_context (RunContext) – Include some information of the model.

on_eval_epoch_end(run_context)[source]

Called after eval epoch end.

Parameters:

run_context (RunContext) – Include some information of the model.

on_eval_step_begin(run_context)[source]

Called before each eval step begin.

Parameters:

run_context (RunContext) – Include some information of the model.

on_eval_step_end(run_context)[source]

Called after each eval step end.

Parameters:

run_context (RunContext) – Include some information of the model.

on_train_begin(run_context)[source]

Called once before the network training.

Parameters:

run_context (RunContext) – Include some information of the model.

on_train_end(run_context)[source]

Called after training end.

Parameters:

run_context (RunContext) – Include some information of the model.

on_train_epoch_begin(run_context)[source]

Called before each training epoch begin.

Parameters:

run_context (RunContext) – Include some information of the model.

on_train_epoch_end(run_context)[source]

Called after each training epoch end.

Parameters:

run_context (RunContext) – Include some information of the model.

on_train_step_begin(run_context)[source]

Called before each training step begin.

Parameters:

run_context (RunContext) – Include some information of the model.

on_train_step_end(run_context)[source]

Called after each training step end.

Parameters:

run_context (RunContext) – Include some information of the model.

step_begin(run_context)[source]

Called before each step beginning. A backwards compatibility alias for on_train_step_begin and on_eval_step_begin.

Parameters:

run_context (RunContext) – Include some information of the model.

step_end(run_context)[source]

Called after each step finished. A backwards compatibility alias for on_train_step_end and on_eval_step_end.

Parameters:

run_context (RunContext) – Include some information of the model.

class tinyms.callbacks.LossMonitor(per_print_times=1)[source]

Monitor the loss in train or monitor the loss and eval metrics in fit.

If the loss is NAN or INF, it will terminate training.

Note

If per_print_times is 0, do not print loss.

Parameters:

per_print_times (int) – The number of steps between loss prints. During sink mode, the loss will be printed at the nearest step. Default: 1.

Raises:

ValueError – If per_print_times is not an integer or less than zero.

Examples

Note

Before running the following example, you need to customize the network LeNet5 and dataset preparation function create_dataset. Refer to Building a Network and Dataset .

>>> from mindspore import nn
>>> from mindspore.train import Model, LossMonitor
>>>
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> loss_monitor = LossMonitor()
>>> model.train(10, dataset, callbacks=loss_monitor)
on_train_epoch_end(run_context)[source]

When LossMonitor is used in model.fit, print the eval metrics at the end of the epoch if the current epoch should do evaluation.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

step_end(run_context)[source]

Print training loss at the end of step.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

class tinyms.callbacks.TimeMonitor(data_size=None)[source]

Monitor the time in train or eval process.

Parameters:

data_size (int) – The number of steps between prints of the time information. If the program gets batch_num during training, data_size will be set to batch_num; otherwise data_size will be used. Default: None.

Raises:

ValueError – If data_size is not a positive int.

Examples

Note

Before running the following example, you need to customize the network LeNet5 and dataset preparation function create_dataset. Refer to Building a Network and Dataset .

>>> from mindspore import nn
>>> from mindspore.train import Model, TimeMonitor
>>>
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> time_monitor = TimeMonitor()
>>> model.train(10, dataset, callbacks=time_monitor)
epoch_begin(run_context)[source]

Record time at the beginning of epoch.

Parameters:

run_context (RunContext) – Context of the process running. For more details, please refer to mindspore.train.RunContext.

epoch_end(run_context)[source]

Print process cost time at the end of epoch.

Parameters:

run_context (RunContext) – Context of the process running. For more details, please refer to mindspore.train.RunContext.

class tinyms.callbacks.ModelCheckpoint(prefix='CKP', directory=None, config=None)[source]

The checkpoint callback class.

It is called to combine with train process and save the model and network parameters after training.

Note

In the distributed training scenario, please specify different directories for each training process to save the checkpoint file; otherwise, the training may fail. If this callback is used in the model function, the checkpoint file will save the parameters of the optimizer by default.

Parameters:
  • prefix (str) – The prefix name of checkpoint files. Default: “CKP”.

  • directory (str) – The path of the folder in which the checkpoint file will be saved. By default, the file is saved in the current directory. Default: None.

  • config (CheckpointConfig) – Checkpoint strategy configuration. Default: None.

Raises:
  • ValueError – If prefix is not str or contains the ‘/’ character.

  • ValueError – If directory is not str.

  • TypeError – If the config is not CheckpointConfig type.

end(run_context)[source]

Save the last checkpoint after training finished.

Parameters:

run_context (RunContext) – Context of the train running.

property latest_ckpt_file_name

Return the latest checkpoint path and file name.

step_end(run_context)[source]

Save the checkpoint at the end of step.

Parameters:

run_context (RunContext) – Context of the train running.

class tinyms.callbacks.SummaryCollector(summary_dir, collect_freq=10, collect_specified_data=None, keep_default_action=True, custom_lineage_data=None, collect_tensor_freq=None, max_file_size=None, export_options=None)[source]

SummaryCollector can help you to collect some common information.

It can help you to collect loss, learning rate, computational graph and so on. SummaryCollector also enables the summary operator to collect data to summary files.

Note

  1. When using SummaryCollector, you need to run the code in if __name__ == “__main__” .

  2. Multiple SummaryCollector instances in callback list are not allowed.

  3. Not all information is collected at the training phase or at the eval phase.

  4. SummaryCollector always records the data collected by the summary operator.

  5. SummaryCollector only supports Linux systems.

  6. Summary is not supported when the source is compiled with the -s on option.

Parameters:
  • summary_dir (str) – The collected data will be persisted to this directory. If the directory does not exist, it will be created automatically.

  • collect_freq (int) – Set the frequency of data collection; it should be greater than zero, and the unit is step. If a frequency is set, data will be collected when (current steps % freq) equals 0, and the first step will always be collected. Note that if the data sink mode is used, the unit becomes the epoch. It is not recommended to collect data too frequently, as this can affect performance. Default: 10.

  • collect_specified_data (Union[None, dict]) –

    Perform custom operations on the collected data. By default, if set to None, all data is collected as the default behavior. You can customize the collected data with a dictionary. For example, you can set {‘collect_metric’: False} to control not collecting metrics. The data that supports control is shown below. Default: None.

    • collect_metric (bool): Whether to collect training metrics, currently only the loss is collected. The first output will be treated as the loss and it will be averaged. Default: True.

    • collect_graph (bool): Whether to collect the computational graph. Currently, only training computational graph is collected. Default: True.

    • collect_train_lineage (bool): Whether to collect lineage data for the training phase, this field will be displayed on the lineage page of MindInsight. Default: True.

    • collect_eval_lineage (bool): Whether to collect lineage data for the evaluation phase, this field will be displayed on the lineage page of MindInsight. Default: True.

    • collect_input_data (bool): Whether to collect dataset for each training. Currently only image data is supported. If there are multiple columns of data in the dataset, the first column should be image data. Default: True.

    • collect_dataset_graph (bool): Whether to collect dataset graph for the training phase. Default: True.

    • histogram_regular (Union[str, None]): Collect weight and bias for the parameter distribution page displayed in MindInsight. This field allows regular expressions to control which parameters to collect. It is not recommended to collect too many parameters at once, as this can affect performance. Note that if you collect too many parameters and run out of memory, the training will fail. Default: None, which means only the first five parameters are collected.

    • collect_landscape (Union[dict,None]): Whether to collect the parameters needed to create the loss landscape. If set to None, collect_landscape parameters will not be collected. All parameter information is collected by default and stored in file {summary_dir}/ckpt_dir/train_metadata.json.

      • landscape_size (int): Specify the image resolution of the generated loss landscape. For example, if it is set to 128, the resolution of the landscape is 128 * 128. The calculation time increases with the increase of resolution. Default: 40. Optional values: between 3 and 256.

      • unit (str): Specifies the interval unit of the training process. Default: “step”. Optional values: epoch/step.

      • create_landscape (dict): Selects how to create the loss landscape: for the training process (train) and/or the training result (result). Default: {“train”: True, “result”: True}. Optional: True/False.

      • num_samples (int): The size of the dataset used to create the loss landscape. For example, in an image dataset, you can set num_samples to 128, which means that 128 images are used to create the loss landscape. Default: 128.

      • intervals (List[List[int]]): Specifies the intervals over which the loss landscape is created. For example, if the user wants to create loss landscapes of two training processes, covering epochs 1-5 and 6-10 respectively, they can set [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]. Note: each interval must have at least three epochs.

  • keep_default_action (bool) – This field affects the collection behavior of the ‘collect_specified_data’ field. True: it means that after specified data is set, non-specified data is collected as the default behavior. False: it means that after specified data is set, only the specified data is collected, and the others are not collected. Default: True.

  • custom_lineage_data (Union[dict, None]) – Allows you to customize the data and present it on the MindInsight lineage page. In the custom data, the type of the key supports str, and the type of value supports str, int and float. Default: None, which means there is no custom data.

  • collect_tensor_freq (Optional[int]) – Has the same semantics as collect_freq, but controls TensorSummary only. Because TensorSummary data is too large to be compared with other summary data, this parameter is used to reduce its collection. By default, the maximum number of steps for collecting TensorSummary data is 20, but it will not exceed the number of steps for collecting other summary data. For example, given collect_freq=10, when the total number of steps is 600, TensorSummary will be collected for 20 steps, while other summary data for 61 steps; but when the total number of steps is 20, both TensorSummary and other summary data will be collected for 3 steps. Also note that in parallel mode the total steps will be split evenly, which will affect the number of steps for which TensorSummary is collected. Default: None, which means to follow the behavior described above.

  • max_file_size (Optional[int]) – The maximum size in bytes of each file that can be written to the disk. For example, to write not larger than 4GB, specify max_file_size=4*1024**3. Default: None, which means no limit.

  • export_options (Union[None, dict]) –

    Perform custom operations on the export data. Note that the size of export files is not limited by the max_file_size. You can customize the export data with a dictionary. For example, you can set {‘tensor_format’: ‘npy’} to export tensor as npy file. The data that supports control is shown below. Default: None, it means that the data is not exported.

    • tensor_format (Union[str, None]): Customize the export tensor format. Supports [“npy”, None]. Default: None, it means that the tensor is not exported.

      • npy: export tensor as npy file.

Raises:

ValueError – Summary is not supported; please recompile the source without the -s on option.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore.train import Model, SummaryCollector
>>> from mindspore.nn import Accuracy
>>>
>>> if __name__ == '__main__':
...     # If the device_target is GPU, set the device_target to "GPU"
...     ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
...     mnist_dataset_dir = '/path/to/mnist_dataset_directory'
...     # The detail of create_dataset method shown in model_zoo.official.cv.lenet.src.dataset.py
...     ds_train = create_dataset(mnist_dataset_dir, 32)
...     # The detail of LeNet5 shown in model_zoo.official.cv.lenet.src.lenet.py
...     network = LeNet5(10)
...     net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
...     net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
...     model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()}, amp_level="O2")
...
...     # Simple usage:
...     summary_collector = SummaryCollector(summary_dir='./summary_dir')
...     model.train(1, ds_train, callbacks=[summary_collector], dataset_sink_mode=False)
...
...     # Do not collect metric and collect the first layer parameter, others are collected by default
...     specified={'collect_metric': False, 'histogram_regular': '^conv1.*'}
...     summary_collector = SummaryCollector(summary_dir='./summary_dir', collect_specified_data=specified)
...     model.train(1, ds_train, callbacks=[summary_collector], dataset_sink_mode=False)
class tinyms.callbacks.CheckpointConfig(save_checkpoint_steps=1, save_checkpoint_seconds=0, keep_checkpoint_max=5, keep_checkpoint_per_n_minutes=0, integrated_save=True, async_save=False, saved_network=None, append_info=None, enc_key=None, enc_mode='AES-GCM', exception_save=False)[source]

The configuration of model checkpoint.

Note

During the training process, if dataset is transmitted through the data channel, it is suggested to set ‘save_checkpoint_steps’ to an integer multiple of loop_size. Otherwise, the time to save the checkpoint may be biased. It is recommended to set only one save strategy and one keep strategy at the same time. If both save_checkpoint_steps and save_checkpoint_seconds are set, save_checkpoint_seconds will be invalid. If both keep_checkpoint_max and keep_checkpoint_per_n_minutes are set, keep_checkpoint_per_n_minutes will be invalid.

Parameters:
  • save_checkpoint_steps (int) – Steps to save checkpoint. Default: 1.

  • save_checkpoint_seconds (int) – Seconds to save checkpoint. Can’t be used with save_checkpoint_steps at the same time. Default: 0.

  • keep_checkpoint_max (int) – Maximum number of checkpoint files can be saved. Default: 5.

  • keep_checkpoint_per_n_minutes (int) – Save the checkpoint file every keep_checkpoint_per_n_minutes minutes. Can’t be used with keep_checkpoint_max at the same time. Default: 0.

  • integrated_save (bool) – Whether to merge and save the split Tensor in the automatic parallel scenario. Integrated save function is only supported in automatic parallel scene, not supported in manual parallel. Default: True.

  • async_save (bool) – Whether asynchronous execution saves the checkpoint to a file. Default: False.

  • saved_network (Cell) – Network to be saved in checkpoint file. If the saved_network has no relation with the network in training, the initial value of saved_network will be saved. Default: None.

  • append_info (list) – The information saved to the checkpoint file. Supports “epoch_num”, “step_num” and dict. The key of the dict must be str, and the value of the dict must be one of int, float, bool, Parameter or Tensor. Default: None.

  • enc_key (Union[None, bytes]) – Byte type key used for encryption. If the value is None, the encryption is not required. Default: None.

  • enc_mode (str) – This parameter is valid only when enc_key is not set to None. Specifies the encryption mode, currently supports ‘AES-GCM’, ‘AES-CBC’ and ‘SM4-CBC’. Default: ‘AES-GCM’.

  • exception_save (bool) – Whether to save the current checkpoint when an exception occurs. Default: False.

Raises:

ValueError – If an input parameter is not of the correct type.

Examples

Note

Before running the following example, you need to customize the network LeNet5 and dataset preparation function create_dataset. Refer to Building a Network and Dataset .

>>> from mindspore import nn
>>> from mindspore.common.initializer import Normal
>>> from mindspore.train import Model, CheckpointConfig, ModelCheckpoint
>>>
>>> class LeNet5(nn.Cell):
...     def __init__(self, num_class=10, num_channel=1):
...         super(LeNet5, self).__init__()
...         self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
...         self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
...         self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
...         self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.flatten = nn.Flatten()
...
...     def construct(self, x):
...         x = self.max_pool2d(self.relu(self.conv1(x)))
...         x = self.max_pool2d(self.relu(self.conv2(x)))
...         x = self.flatten(x)
...         x = self.relu(self.fc1(x))
...         x = self.relu(self.fc2(x))
...         x = self.fc3(x)
...         return x
>>>
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> config = CheckpointConfig(saved_network=net)
>>> ckpoint_cb = ModelCheckpoint(prefix='LeNet5', directory='./checkpoint', config=config)
>>> model.train(10, dataset, callbacks=ckpoint_cb)
property append_dict

Get the value of information dict saved to checkpoint file.

Returns:

Dict, the information saved to checkpoint file.

property async_save

Get whether the checkpoint is saved to a file asynchronously.

Returns:

Bool, whether the checkpoint is saved to a file asynchronously.

property enc_key

Get the value of byte type key used for encryption.

Returns:

(None, bytes), byte type key used for encryption.

property enc_mode

Get the value of the encryption mode.

Returns:

str, encryption mode.

get_checkpoint_policy()[source]

Get the checkpoint policy.

Returns:

Dict, the information of checkpoint policy.
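
A minimal usage sketch; the exact keys of the returned dict are not specified here, so no output is shown:

>>> from mindspore.train import CheckpointConfig
>>> config = CheckpointConfig(save_checkpoint_steps=100, keep_checkpoint_max=10)
>>> policy = config.get_checkpoint_policy()  # dict describing the configured save/keep policy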

property integrated_save

Get the value of whether to merge and save the split Tensor in the automatic parallel scenario.

Returns:

Bool, whether to merge and save the split Tensor in the automatic parallel scenario.

property keep_checkpoint_max

Get the maximum number of checkpoint files that can be saved.

Returns:

Int, maximum number of checkpoint files that can be saved.

property keep_checkpoint_per_n_minutes

Get the interval in minutes for saving the checkpoint file.

Returns:

Int, the interval in minutes for saving the checkpoint file.

property save_checkpoint_seconds

Get the interval in seconds for saving the checkpoint file.

Returns:

Int, seconds to save the checkpoint file.

property save_checkpoint_steps

Get the value of steps to save checkpoint.

Returns:

Int, steps to save checkpoint.

property saved_network

Get the value of network to be saved in checkpoint file.

Returns:

Cell, network to be saved in checkpoint file.

class tinyms.callbacks.RunContext(original_args)[source]

Hold and manage information about the model.

RunContext is mainly used to collect context-related information about the model during training or eval and pass it into the Callback object as an input parameter to share information.

Callback objects can not only obtain the Model context information by calling RunContext.original_args() and add extra attributes to the information, but can also stop the training process by calling the request_stop method. For details on custom Callbacks, please check Callback.

RunContext.original_args() holds the model context information as a dictionary variable; different attributes are stored in the dictionary depending on whether it is the training or eval process. Details are as follows:

| Attributes supported in train | Attributes supported in eval | Meaning |
| --- | --- | --- |
| train_network | | train network with optimizer and loss |
| epoch_num | | number of train epochs |
| train_dataset | | the train dataset |
| loss_fn | | the loss function |
| optimizer | | the optimizer |
| parallel_mode | | the parallel mode |
| device_number | | the device number |
| train_dataset_element | | the train data element of the current step |
| last_save_ckpt_step | | the last step num of save ckpt |
| latest_ckpt_file | | the ckpt file |
| cur_epoch_num | | number of the current epoch |
| | eval_network | the evaluate network |
| | valid_dataset | the valid dataset |
| | metrics | the evaluate metrics |
| mode | mode | “train” or “eval” |
| batch_num | batch_num | the train/eval batch number |
| list_callback | list_callback | callback list |
| network | network | basic network |
| cur_step_num | cur_step_num | the train/eval step number |
| dataset_sink_mode | dataset_sink_mode | the train/eval sink mode |
| net_outputs | net_outputs | network output results |

Parameters:

original_args (dict) – Holds the related information of the model.

get_stop_requested()[source]

Return whether a stop is requested or not.

Returns:

bool, if True, model.train() stops iterations.

original_args()[source]

Get the _original_args object.

Returns:

Dict, an object that holds the original arguments of model.

request_stop()[source]

Set stop requirement during training or eval.

Callbacks can use this function to request stop of iterations. model.train() checks whether this is called or not.
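
As a minimal sketch of this pattern (the StopAtLoss class and its threshold are illustrative, not part of the API), a custom Callback can read the context via original_args() and stop training via request_stop():

>>> from mindspore.train import Callback
>>>
>>> class StopAtLoss(Callback):
...     """Stop training once the loss falls below a threshold."""
...     def __init__(self, threshold=0.05):
...         super(StopAtLoss, self).__init__()
...         self.threshold = threshold
...
...     def step_end(self, run_context):
...         cb_params = run_context.original_args()
...         # in a default training loop, net_outputs is the scalar loss Tensor
...         loss = cb_params.net_outputs
...         if loss is not None and loss.asnumpy() < self.threshold:
...             print("Stopping at step", cb_params.cur_step_num)
...             run_context.request_stop()
>>>
>>> # used like any other callback, e.g. model.train(10, dataset, callbacks=[StopAtLoss()])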

class tinyms.callbacks.LearningRateScheduler(learning_rate_function)[source]

Change the learning_rate during training.

Parameters:

learning_rate_function (Function) – The function about how to change the learning rate during training.

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.train import Model, LearningRateScheduler
>>> from mindspore import dataset as ds
...
>>> def learning_rate_function(lr, cur_step_num):
...     if cur_step_num%1000 == 0:
...         lr = lr*0.1
...     return lr
...
>>> lr = 0.1
>>> momentum = 0.9
>>> net = nn.Dense(10, 5)
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
...
>>> data = {"x": np.float32(np.random.rand(64, 10)), "y": np.random.randint(0, 5, (64,))}
>>> dataset = ds.NumpySlicesDataset(data=data).batch(32)
>>> model.train(1, dataset, callbacks=[LearningRateScheduler(learning_rate_function)],
...             dataset_sink_mode=False)
step_end(run_context)[source]

Change the learning_rate at the end of step.

Parameters:

run_context (RunContext) – Include some information of the model.

class tinyms.callbacks.SummaryLandscape(summary_dir)[source]

SummaryLandscape can help you collect loss landscape information. It can create a landscape in the PCA direction or a random direction by calculating the loss.

Note

  1. When using SummaryLandscape, you need to run the code in if __name__ == "__main__".

  2. SummaryLandscape only supports Linux systems.

Parameters:

summary_dir (str) – The path of summary is used to save the model weight, metadata and other data required to create landscape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore.nn import Loss, Accuracy
>>> from mindspore.train import Model, SummaryCollector, SummaryLandscape
>>>
>>> if __name__ == '__main__':
...     # If the device_target is Ascend, set the device_target to "Ascend"
...     ms.set_context(mode=ms.GRAPH_MODE, device_target="GPU")
...     mnist_dataset_dir = '/path/to/mnist_dataset_directory'
...     # The detail of create_dataset method shown in model_zoo.official.cv.lenet.src.dataset.py
...     ds_train = create_dataset(mnist_dataset_dir, 32)
...     # The detail of LeNet5 shown in model_zoo.official.cv.lenet.src.lenet.py
...     network = LeNet5(10)
...     net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
...     net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
...     model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
...     # Simple usage for collect landscape information:
...     interval_1 = [1, 2, 3, 4, 5]
...     summary_collector = SummaryCollector(summary_dir='./summary/lenet_interval_1',
...                                          collect_specified_data={'collect_landscape':{"landscape_size": 4,
...                                                                                        "unit": "step",
...                                                                          "create_landscape":{"train":True,
...                                                                                             "result":False},
...                                                                          "num_samples": 2048,
...                                                                          "intervals": [interval_1]}
...                                                                    })
...     model.train(1, ds_train, callbacks=[summary_collector], dataset_sink_mode=False)
...
...     # Simple usage for visualization landscape:
...     def callback_fn():
...         network = LeNet5(10)
...         net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
...         metrics = {"Loss": Loss()}
...         model = Model(network, net_loss, metrics=metrics)
...         mnist_dataset_dir = '/path/to/mnist_dataset_directory'
...         ds_eval = create_dataset(mnist_dataset_dir, 32)
...         return model, network, ds_eval, metrics
...
...     summary_landscape = SummaryLandscape('./summary/lenet_interval_1')
...     # parameters of collect_landscape can be modified or unchanged
...     summary_landscape.gen_landscapes_with_multi_process(callback_fn,
...                                                        collect_landscape={"landscape_size": 4,
...                                                                         "create_landscape":{"train":False,
...                                                                                            "result":False},
...                                                                          "num_samples": 2048,
...                                                                          "intervals": [interval_1]},
...                                                         device_ids=[1])
clean_ckpt()[source]

Clean the checkpoint.

gen_landscapes_with_multi_process(callback_fn, collect_landscape=None, device_ids=None, output=None)[source]

Use multiple processes to generate the landscape.

Parameters:
  • callback_fn (python function) –

    A Python function object. The user needs to write a function that takes no input; its return values are required to be as follows.

    • mindspore.train.Model: User’s model object.

    • mindspore.nn.Cell: User’s network object.

    • mindspore.dataset: User’s dataset object for creating the loss landscape.

    • mindspore.train.Metrics: User’s metrics object.

  • collect_landscape (Union[dict, None]) –

    The meaning of these parameters when creating the loss landscape is consistent with the fields of the same name in SummaryCollector. Setting them here allows users to freely modify the creation parameters. Default: None.

    • landscape_size (int): Specify the image resolution of the generated loss landscape. For example, if it is set to 128, the resolution of the landscape is 128 * 128. The calculation time increases with the increase of resolution. Default: 40. Optional values: between 3 and 256.

    • create_landscape (dict): Select how to create the loss landscape: the training process loss landscape (train) and the training result loss landscape (result). Default: {“train”: True, “result”: True}. Optional: True/False.

    • num_samples (int): The size of the dataset used to create the loss landscape. For example, for an image dataset, you can set num_samples to 2048, which means that 2048 images are used to create the loss landscape. Default: 2048.

    • intervals (List[List[int]]): Specifies the intervals in which the loss landscape is created. For example, to create the loss landscape of two training processes covering epochs 1-5 and 6-10 respectively, set [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]. Note: each interval must contain at least three epochs.

  • device_ids (List(int)) – Specifies which devices are used to create loss landscape. For example: [0, 1] refers to creating loss landscape with device 0 and device 1. Default: None.

  • output (str) – Specifies the path to save the loss landscape. Default: None. The default save path is the same as the summary file.

class tinyms.callbacks.History[source]

Records the network outputs and metrics information into a History object.

The network output information will be the loss value if the train network or eval network is not customized; if the customized network returns a Tensor or numpy.ndarray, the mean value of the network outputs will be recorded; if the customized network returns a tuple or list, the first element of the network outputs will be recorded.

Note

Normally used in mindspore.train.Model.train or mindspore.train.Model.fit.

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> from mindspore import nn
>>> from mindspore.train import Model, History
>>> data = {"x": np.float32(np.random.rand(64, 10)), "y": np.random.randint(0, 5, (64,))}
>>> train_dataset = ds.NumpySlicesDataset(data=data).batch(32)
>>> net = nn.Dense(10, 5)
>>> crit = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> history_cb = History()
>>> model = Model(network=net, optimizer=opt, loss_fn=crit, metrics={"recall"})
>>> model.train(2, train_dataset, callbacks=[history_cb])
>>> print(history_cb.epoch)
{'epoch': [1, 2]}
>>> print(history_cb.history)
{'net_output': [1.607877, 1.6033841]}
begin(run_context)[source]

Initialize the epoch property at the beginning of training.

Parameters:

run_context (RunContext) – Context of the mindspore.train.Model.{train | eval}. For more details, please refer to mindspore.train.RunContext.

epoch_end(run_context)[source]

Records the first element of network outputs and metrics information at the end of epoch.

Parameters:

run_context (RunContext) – Context of the mindspore.train.Model.{train | eval}. For more details, please refer to mindspore.train.RunContext.

class tinyms.callbacks.LambdaCallback(on_train_epoch_begin=None, on_train_epoch_end=None, on_train_step_begin=None, on_train_step_end=None, on_train_begin=None, on_train_end=None, on_eval_epoch_begin=None, on_eval_epoch_end=None, on_eval_step_begin=None, on_eval_step_end=None, on_eval_begin=None, on_eval_end=None)[source]

Callback for creating simple, custom callbacks.

This callback is constructed with anonymous functions that will be called at the appropriate time (during mindspore.train.Model.{train | eval | fit}). Note that each stage of callbacks expects one positional argument: run_context.

Warning

This is an experimental API that is subject to change or deletion.

Parameters:
  • on_train_epoch_begin (Function) – called at each train epoch begin.

  • on_train_epoch_end (Function) – called at each train epoch end.

  • on_train_step_begin (Function) – called at each train step begin.

  • on_train_step_end (Function) – called at each train step end.

  • on_train_begin (Function) – called at the beginning of model train.

  • on_train_end (Function) – called at the end of model train.

  • on_eval_epoch_begin (Function) – called at eval epoch begin.

  • on_eval_epoch_end (Function) – called at eval epoch end.

  • on_eval_step_begin (Function) – called at each eval step begin.

  • on_eval_step_end (Function) – called at each eval step end.

  • on_eval_begin (Function) – called at the beginning of model eval.

  • on_eval_end (Function) – called at the end of model eval.

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> from mindspore import nn
>>> from mindspore.train import Model, LambdaCallback
>>> data = {"x": np.float32(np.random.rand(64, 10)), "y": np.random.randint(0, 5, (64,))}
>>> train_dataset = ds.NumpySlicesDataset(data=data).batch(32)
>>> net = nn.Dense(10, 5)
>>> crit = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> lambda_callback = LambdaCallback(on_train_epoch_end=
... lambda run_context: print("loss: ", run_context.original_args().net_outputs))
>>> model = Model(network=net, optimizer=opt, loss_fn=crit, metrics={"recall"})
>>> model.train(2, train_dataset, callbacks=[lambda_callback])
loss: 1.6127687
loss: 1.6106578
class tinyms.callbacks.ReduceLROnPlateau(monitor='eval_loss', factor=0.1, patience=10, verbose=False, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)[source]

Reduce learning rate when the monitor has stopped improving.

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors the training process and if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced.

Note

Learning rate grouping is not supported now.

Parameters:
  • monitor (str) – quantity to be monitored. If evaluation is performed at the end of train epochs, the valid monitors can be “loss”, “eval_loss” or the metric names passed when instantiating the Model; otherwise the valid monitor is “loss”. When monitor is “loss”, if the train network has multiple outputs, the first element will be returned as the training loss.

  • factor (float) – factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.

  • patience (int) – A monitor value that improves on the history best value by more than min_delta counts as an improvement; patience is the number of epochs with no improvement that will be waited. When the waiting counter self.wait is larger than or equal to patience, the lr will be reduced. Default: 10.

  • verbose (bool) – If False: quiet, if True: print related information. Default: False.

  • mode (str) – one of {‘auto’, ‘min’, ‘max’}. In “min” mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in “max” mode it will be reduced when the quantity monitored has stopped increasing; in “auto” mode, the direction is automatically inferred from the name of the monitored quantity. Default: “auto”.

  • min_delta (float) – threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.

  • cooldown (int) – number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.

  • min_lr (float) – lower bound on the learning rate. Default: 0.

Raises:
  • ValueError – mode not in ‘auto’, ‘min’ or ‘max’.

  • ValueError – The monitor value is not a scalar.

  • ValueError – The learning rate is not a Parameter.

Examples

Note

Before running the following example, you need to customize the network LeNet5 and dataset preparation function create_dataset. Refer to Building a Network and Dataset.

>>> from mindspore import nn
>>> from mindspore.train import Model, ReduceLROnPlateau
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={"acc"})
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> cb = ReduceLROnPlateau(monitor="acc", patience=3, verbose=True)
>>> model.fit(10, dataset, callbacks=cb)
on_train_begin(run_context)[source]

Initialize variables at the beginning of training.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_epoch_end(run_context)[source]

Monitor the training process; if no improvement in the monitored value is seen for patience epochs, the learning rate is reduced.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

class tinyms.callbacks.EarlyStopping(monitor='eval_loss', min_delta=0, patience=0, verbose=False, mode='auto', baseline=None, restore_best_weights=False)[source]

Stop training when a monitored metric has stopped improving.

Assuming monitor is “accuracy”, mode would be “max” since the goal of training is to maximize the accuracy. The model.fit() training loop checks at the end of each epoch whether the accuracy is still increasing, considering min_delta and patience if applicable. Once it is found to be no longer increasing, run_context.request_stop() is called and the training terminates.

Parameters:
  • monitor (str) – quantity to be monitored. If evaluation is performed at the end of train epochs, the valid monitors can be “loss”, “eval_loss” or the metric names passed when instantiating the Model; otherwise the valid monitor is “loss”. When monitor is “loss”, if the train network has multiple outputs, the first element will be returned as the training loss. Default: “eval_loss”.

  • patience (int) – A monitor value that improves on the history best value by more than min_delta counts as an improvement; patience is the number of epochs with no improvement that will be waited. When the waiting counter self.wait is larger than or equal to patience, the training process will be stopped. Default: 0.

  • verbose (bool) – If False: quiet, if True: print related information. Default: False.

  • mode (str) – one of {‘auto’, ‘min’, ‘max’}. In “min” mode, training will be stopped when the quantity monitored has stopped decreasing; in “max” mode it will be stopped when the quantity monitored has stopped increasing; in “auto” mode, the direction is automatically inferred from the name of the monitored quantity. Default: “auto”.

  • min_delta (float) – threshold for measuring the new optimum, to only focus on significant changes. Default: 0.

  • baseline (float) – Baseline value for the monitor. When the monitor value shows improvement over the history best value and the baseline, the internal wait counter will be set to zero. Default: None.

  • restore_best_weights (bool) – Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. Default: False.

Raises:
  • ValueError – mode not in ‘auto’, ‘min’ or ‘max’.

  • ValueError – The monitor value is not a scalar.

Examples

Note

Before running the following example, you need to customize the network LeNet5 and dataset preparation function create_dataset. Refer to Building a Network and Dataset.

>>> from mindspore import nn
>>> from mindspore.train import Model, EarlyStopping
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={"acc"})
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> cb = EarlyStopping(monitor="acc", patience=3, verbose=True)
>>> model.fit(10, dataset, callbacks=cb)
on_train_begin(run_context)[source]

Initialize variables at the beginning of training.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_end(run_context)[source]

If verbose is True, print the stopped epoch.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_epoch_end(run_context)[source]

Monitor the training process; if no improvement in the monitored value is seen for patience epochs, the training process is stopped.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

class tinyms.callbacks.OnRequestExit(save_ckpt=True, save_mindir=True, file_name='Net', directory='./', sig=<Signals.SIGTERM: 15>)[source]

Respond to the user’s closing request, exit the training or eval process, and save the checkpoint and mindir.

Register the OnRequestExit callback before training. When the user wants to exit the training process and save the training data, they can send the registered exit signal sig to the training process. After the training process executes the current step, it saves the current training status, including checkpoint and mindir, and then exits.

Parameters:
  • save_ckpt (bool) – Whether save the checkpoint before the training process exit. Default: True.

  • save_mindir (bool) – Whether save the mindir before the training process exit. Default: True.

  • file_name (str) – The file name for the saved checkpoint and mindir; the checkpoint file gets the suffix ‘.ckpt’ and the mindir file the suffix ‘.mindir’. Default: ‘Net’.

  • directory (str) – The directory where the checkpoint and mindir are saved. Default: ‘./’.

  • sig (int) – The user-registered exit signal; it must be a signal that can be caught and ignored. When the process receives the signal, it exits the training or eval process. Default: signal.SIGTERM.

Raises:
  • ValueError – If the ‘save_ckpt’ is not a bool.

  • ValueError – If the ‘save_mindir’ is not a bool.

  • ValueError – If the ‘file_name’ is not a str.

  • ValueError – If the ‘directory’ is not a str.

  • ValueError – If the ‘sig’ is not an int or the ‘sig’ is signal.SIGKILL.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import dataset as ds
>>> from mindspore import nn
>>>
>>> # Define the forward net
>>> class ForwardNet(nn.Cell):
...     def __init__(self, num_class=10, channel=1):
...         super(ForwardNet, self).__init__()
...         self.param = ms.Parameter(1.0)
...         self.relu = ms.ops.ReLU()
...
...     def construct(self, x):
...         return self.relu(x + self.param)
>>> forward_net = ForwardNet()
>>> loss = nn.MAELoss()
>>> opt = nn.Momentum(forward_net.trainable_params(), 0.01, 0.9)
>>> model = ms.Model(forward_net, loss_fn=loss, optimizer=opt)
>>>
>>> # Create dataset
>>> def generator_multi_column():
...     i = 0
...     while i < 1000:
...         i += 1
...         yield np.ones((1, 32, 32)).astype(np.float32) * 0.01, np.array(1).astype(np.int32)
>>> dataset = ds.GeneratorDataset(source=generator_multi_column, column_names=["data", "label"])
>>> dataset = dataset.batch(32, drop_remainder=True)
>>>
>>> on_request_exit = ms.train.OnRequestExit(file_name='LeNet5')
>>> model.train(10, dataset, callbacks=on_request_exit)
>>> # If the user sends the signal SIGTERM to the training process,
>>> # the process saves the checkpoint and mindir, and then exits the training process.
on_eval_begin(run_context)[source]

When the eval begins, register the handler for the exit signal sent by the user.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_eval_end(run_context)[source]

When the eval ends, if the exit signal has been received, the checkpoint and mindir are saved according to the user's configuration.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

on_eval_step_end(run_context)[source]

When an eval step ends, if the exit signal has been received, set the ‘run_context’ attribute ‘_stop_requested’ to True; the eval process then exits after this step's evaluation.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_begin(run_context)[source]

When the training begins, register the handler for the exit signal sent by the user.

Parameters:

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_end(run_context)[source]

When the training ends, if the exit signal has been received, the checkpoint and mindir are saved according to the user's configuration.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_epoch_end(run_context)[source]

When a training epoch ends, if the exit signal has been received, set the ‘run_context’ attribute ‘_stop_requested’ to True; the training process then exits after this epoch's training.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_step_end(run_context)[source]

When a training step ends, if the exit signal has been received, set the ‘run_context’ attribute ‘_stop_requested’ to True; the training process then exits after this step's training.

Parameters:

run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.train.RunContext.

class tinyms.callbacks.BackupAndRestore(backup_dir, save_freq='epoch', delete_checkpoint=True)[source]

Callback to back up and restore the parameters during training.

Note

This callback can only be used during training.

Parameters:
  • backup_dir (str) – Path to store and load the checkpoint file.

  • save_freq (Union['epoch', int]) – When set to ‘epoch’ the callback saves the checkpoint at the end of each epoch. When set to an integer, the callback saves the checkpoint every save_freq epoch. Default: ‘epoch’.

  • delete_checkpoint (bool) – If delete_checkpoint=True, the checkpoint will be deleted after training is finished. Default: True.


Examples

Note

Before running the following example, you need to customize the network LeNet5 and dataset preparation function create_dataset. Refer to Building a Network and Dataset.

>>> from mindspore import nn
>>> from mindspore.train import Model, BackupAndRestore
>>>
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> backup_ckpt = BackupAndRestore("backup")
>>> model.train(10, dataset, callbacks=backup_ckpt)
on_train_begin(run_context)[source]

Load the backup checkpoint file at the beginning of training.

Parameters:

run_context (RunContext) – Context of the process running. For more details, please refer to mindspore.train.RunContext.

on_train_end(run_context)[source]

Delete the checkpoint file at the end of training.

Parameters:

run_context (RunContext) – Context of the process running. For more details, please refer to mindspore.train.RunContext.

on_train_epoch_end(run_context)[source]

Back up the checkpoint file at the end of a training epoch.

Parameters:

run_context (RunContext) – Context of the process running. For more details, please refer to mindspore.train.RunContext.

tinyms.metrics

Metrics module provides functions to measure the performance of the machine learning models on the evaluation dataset. It’s used to choose the best model.

class tinyms.metrics.AUCMetric[source]

Calculates the AUC value, implementing the AUC metric method.

Computes the Area Under the Curve (AUC) using the trapezoidal rule. This is a general function that, given points on a curve, computes the area under the ROC curve.

Parameters:
  • x (Union[np.array, list]) – From the ROC curve(fpr), np.array with false positive rates. If multiclass, this is a list of such np.array, one for each class. The shape \((N)\).

  • y (Union[np.array, list]) – From the ROC curve(tpr), np.array with true positive rates. If multiclass, this is a list of such np.array, one for each class. The shape \((N)\).

  • reorder (boolean) – If True, assume that the curve is ascending in the case of ties, as for an ROC curve. If the curve is non-ascending, the result will be wrong. Default: False.

Returns:

Compute result.

Return type:

area (float)

Examples

>>> from tinyms.metrics import AUCMetric
>>>
>>> metric = AUCMetric()
clear()[source]

Clear the internal evaluation result.

tinyms.metrics.names()[source]

Gets all names of the metric methods.

Returns:

List, the name list of metric methods.

Supported Platforms:

Ascend GPU CPU
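
Examples

A minimal usage sketch; the returned name list depends on the installed version, so no output is shown.

>>> from tinyms import metrics
>>> print(metrics.names())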

tinyms.metrics.get_metric_fn(name, *args, **kwargs)[source]

Gets the metric method based on the input name.

Parameters:
  • name (str) – The name of the metric method. Names can be obtained by mindspore.train.names().

  • args – Arguments for the metric function.

  • kwargs – Keyword arguments for the metric function.

Returns:

Metric object, class instance of the metric method.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from tinyms import metrics
>>> metric = metrics.get_metric_fn('precision', eval_type='classification')
tinyms.metrics.get_metrics(metrics)[source]

Get metrics used in evaluation.

Parameters:

metrics (Union[dict, set]) – Dict or set of metrics to be evaluated by the model during training and testing, e.g. {‘accuracy’, ‘recall’}.

Returns:

dict, the key is metric name, the value is class instance of metric method.

Raises:

TypeError – If the type of argument ‘metrics’ is not None, dict or set.
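
Examples

A minimal usage sketch based on the signature above; per the description, the keys of the returned dict are the metric names.

>>> from tinyms import metrics
>>> metric_fns = metrics.get_metrics({'accuracy', 'recall'})
>>> print(sorted(metric_fns.keys()))
['accuracy', 'recall']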

class tinyms.metrics.Accuracy(eval_type='classification')[source]

Calculates the accuracy for classification and multilabel data.

The accuracy class creates two local variables, the correct number and the total number that are used to compute the frequency with which y_pred matches y. This frequency is the accuracy.

\[\text{accuracy} =\frac{\text{true_positive} + \text{true_negative}} {\text{true_positive} + \text{true_negative} + \text{false_positive} + \text{false_negative}}\]
Parameters:

eval_type (str) – The metric to calculate the accuracy over a dataset. Supports ‘classification’ and ‘multilabel’. ‘classification’ means the dataset label is single. ‘multilabel’ means the dataset has multiple labels. Default: ‘classification’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.train import Accuracy
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]), mindspore.float32)
>>> y = Tensor(np.array([1, 0, 1]), mindspore.float32)
>>> metric = Accuracy('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> accuracy = metric.eval()
>>> print(accuracy)
0.6666666666666666
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the accuracy.

Returns:

np.float64, the computed result.

Raises:

RuntimeError – If the sample size is 0.

update(*inputs)[source]

Updates the local variables. For ‘classification’, if the index of the maximum of the predicted value matches the label, the prediction is correct. For ‘multilabel’, if the predicted values match the label, the prediction is correct.

Parameters:

inputs

Logits and labels. y_pred stands for logits, y stands for labels. y_pred and y must be a Tensor, a list or an array.

  • For the ‘classification’ evaluation type, y_pred is a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\) in most cases (not strictly), where \(N\) is the number of cases and \(C\) is the number of categories. y must be in one-hot format with shape \((N, C)\), or in category-index format with shape \((N,)\) that can be transformed to one-hot.

  • For ‘multilabel’ evaluation type, the value of y_pred and y can only be 0 or 1, indices with 1 indicate the positive category. The shape of y_pred and y are both \((N, C)\).

Raises:
  • ValueError – If the number of the inputs is not 2.

  • ValueError – If the class number of the current predicted data does not match that of the previously input predicted data.

class tinyms.metrics.MAE[source]

Calculates the mean absolute error(MAE).

Creates a criterion that measures the MAE between each element in the input \(x\) and the target \(y\).

\[\text{MAE} = \frac{\sum_{i=1}^n \|{y\_pred}_i - y_i\|}{n}\]

where \(n\) is batch size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.train import MAE
>>>
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.7, 0.9]), mindspore.float32)
>>> error = MAE()
>>> error.clear()
>>> error.update(x, y)
>>> result = error.eval()
>>> print(result)
0.037499990314245224
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the mean absolute error(MAE).

Returns:

numpy.float64. The computed result.

Raises:

RuntimeError – If the total number of samples is 0.

update(*inputs)[source]

Updates the internal evaluation result \(y_{pred}\) and \(y\).

Parameters:

inputs – Input y_pred and y for calculating MAE where the shape of y_pred and y are both N-D and the shape should be the same.

Raises:

ValueError – If the number of the input is not 2.

class tinyms.metrics.MSE[source]

Measures the mean squared error(MSE).

Creates a criterion that measures the MSE (squared L2 norm) between each element in the prediction \(x\) and the ground truth \(y\).

\[\text{MSE}(x,\ y) = \frac{\sum_{i=1}^n({y\_pred}_i - y_i)^2}{n}\]

where \(n\) is batch size.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.train import MSE
>>>
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.5, 0.9]), mindspore.float32)
>>> error = MSE()
>>> error.clear()
>>> error.update(x, y)
>>> result = error.eval()
>>> print(result)
0.0031250009778887033
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the mean squared error(MSE).

Returns:

numpy.float64. The computed result.

Raises:

RuntimeError – If the number of samples is 0.

update(*inputs)[source]

Updates the internal evaluation result \(y_{pred}\) and \(y\).

Parameters:

inputs – Input y_pred and y for calculating the MSE where the shape of y_pred and y are both N-D and the shape should be the same.

Raises:

ValueError – If the number of inputs is not 2.

class tinyms.metrics.Metric[source]

Base class of metric, which is used to evaluate metrics.

The clear, update, and eval methods should be called when evaluating a metric, and they should be overridden by subclasses. update accumulates intermediate results in the evaluation process, eval evaluates the final result, and clear reinitializes the intermediate results.

Never use this class directly; instantiate one of its subclasses instead, for example, mindspore.train.MAE or mindspore.train.Recall.

Supported Platforms:

Ascend GPU CPU

abstract clear()[source]

An interface describes the behavior of clearing the internal evaluation result.

Note

All subclasses must override this interface.

abstract eval()[source]

An interface describes the behavior of computing the evaluation result.

Note

All subclasses must override this interface.

property indexes

Get the current indexes value. The default value is None and can be changed by set_indexes.

set_indexes(indexes)[source]

This interface is to rearrange the inputs of update.

Given (label0, label1, logits), setting the indexes to [2, 1] means that (logits, label1) will be the actual inputs of update.

Note

When customizing a metric, decorate the update function with the decorator mindspore.train.rearrange_inputs() for the indexes to take effect.

Parameters:

indexes (List(int)) – The order of logits and labels to be rearranged.

Outputs:

Metric, its original Class instance.

Raises:

ValueError – If the type of input ‘indexes’ is not a list or its elements are not all int.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Accuracy
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> y2 = Tensor(np.array([0, 0, 1]))
>>> metric = Accuracy('classification').set_indexes([0, 2])
>>> metric.clear()
>>> # indexes is [0, 2], using x as logits, y2 as label.
>>> metric.update(x, y, y2)
>>> accuracy = metric.eval()
>>> print(accuracy)
0.3333333333333333
abstract update(*inputs)[source]

An interface describes the behavior of updating the internal evaluation result.

Note

All subclasses must override this interface.

Parameters:

inputs – A variable-length input argument list, usually the logits and the corresponding labels.
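
To make the clear/update/eval protocol concrete, here is a minimal custom metric sketch (the MeanAbsoluteError class below is illustrative, not part of the API):

>>> import numpy as np
>>> from tinyms.metrics import Metric
>>>
>>> class MeanAbsoluteError(Metric):
...     def __init__(self):
...         super(MeanAbsoluteError, self).__init__()
...         self.clear()
...
...     def clear(self):
...         # reinitialize the intermediate results
...         self._abs_error_sum = 0.0
...         self._samples_num = 0
...
...     def update(self, *inputs):
...         # accumulate the absolute error of one batch
...         y_pred = np.asarray(inputs[0])
...         y = np.asarray(inputs[1])
...         self._abs_error_sum += np.abs(y - y_pred).sum()
...         self._samples_num += y.shape[0]
...
...     def eval(self):
...         # compute the final result from the accumulated state
...         return self._abs_error_sum / self._samples_num
>>>
>>> metric = MeanAbsoluteError()
>>> metric.clear()
>>> metric.update(np.array([0.1, 0.2]), np.array([0.1, 0.25]))
>>> result = metric.eval()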

tinyms.metrics.rearrange_inputs(func)[source]

This decorator is used to rearrange the inputs according to the class's indexes attribute.

This decorator is currently applied on the update of mindspore.train.Metric.

Parameters:

func (Callable) – A candidate function to be wrapped whose input will be rearranged.

Returns:

Callable, used to exchange metadata between functions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore.nn import rearrange_inputs
>>> class RearrangeInputsExample:
...     def __init__(self):
...         self._indexes = None
...
...     @property
...     def indexes(self):
...         return getattr(self, '_indexes', None)
...
...     def set_indexes(self, indexes):
...         self._indexes = indexes
...         return self
...
...     @rearrange_inputs
...     def update(self, *inputs):
...         return inputs
>>>
>>> rearrange_inputs_example = RearrangeInputsExample().set_indexes([1, 0])
>>> outs = rearrange_inputs_example.update(5, 9)
>>> print(outs)
(9, 5)
class tinyms.metrics.Precision(eval_type='classification')[source]

Calculates precision for classification and multilabel data.

The precision function creates two local variables, \(\text{true_positive}\) and \(\text{false_positive}\), which are used to compute the precision. The calculation formula is:

\[\text{precision} = \frac{\text{true_positive}}{\text{true_positive} + \text{false_positive}}\]

Note

In the multi-label cases, the elements of \(y\) and \(y_{pred}\) must be 0 or 1.

Parameters:

eval_type (str) – ‘classification’ or ‘multilabel’ are supported. Default: ‘classification’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Precision
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Precision('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> precision = metric.eval()
>>> print(precision)
[0.5 1. ]
clear()[source]

Clears the internal evaluation result.

eval(average=False)[source]

Computes the precision.

Parameters:

average (bool) – Specify whether to calculate the average precision. Default: False.

Returns:

numpy.float64, the computed result.

update(*inputs)[source]

Updates the internal evaluation result with y_pred and y.

Parameters:

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. For ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate positive category. The shape of y_pred and y are both \((N, C)\).

Raises:

ValueError – If the number of inputs is not 2.

class tinyms.metrics.HausdorffDistance(distance_metric='euclidean', percentile=None, directed=False, crop=True)[source]

Calculates the Hausdorff distance. Hausdorff distance is the maximum and minimum distance between two point sets. Given two feature sets A and B, the Hausdorff distance between two point sets A and B is defined as follows:

\[\begin{split}\begin{array}{ll} \\ H(A, B) = \text{max}[h(A, B), h(B, A)]\\ h(A, B) = \underset{a \in A}{\text{max}}\{\underset{b \in B}{\text{min}} \rVert a - b \rVert \}\\ h(B, A) = \underset{b \in B}{\text{max}}\{\underset{a \in A}{\text{min}} \rVert b - a \rVert \} \end{array}\end{split}\]

where \(h(A, B)\) is the maximum distance of a set A to the nearest point in the set B, and \(h(B, A)\) is the maximum distance of a set B to the nearest point in the set A. The distance calculation is oriented, which means that most of the time \(h(A, B)\) is not equal to \(h(B, A)\). \(H(A, B)\) is the two-way Hausdorff distance.

Parameters:
  • distance_metric (string) – Three distance measurement methods are supported: “euclidean”, “chessboard” or “taxicab”. Default: “euclidean”.

  • percentile (float) – Floating point numbers between 0 and 100. Specify the percentile parameter to get the percentile of the Hausdorff distance. Default: None.

  • directed (bool) – If True, it only calculates h(y_pred, y) distance, otherwise, max(h(y_pred, y), h(y, y_pred)) will be returned. Default: False.

  • crop (bool) – Crop input images and only keep the foregrounds. In order to maintain two inputs’ shapes, here the bounding box is achieved by (y_pred | y) which represents the union set of two images. Default: True.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import HausdorffDistance
>>>
>>> x = Tensor(np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]))
>>> y = Tensor(np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]))
>>> metric = HausdorffDistance()
>>> metric.clear()
>>> metric.update(x, y, 0)
>>> mean_average_distance = metric.eval()
>>> print(mean_average_distance)
1.4142135623730951
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculate the no-directed or directed Hausdorff distance.

Returns:

numpy.float64, the hausdorff distance.

Raises:

RuntimeError – If the update method is not called first, an error will be reported.

update(*inputs)[source]

Updates the internal evaluation result with the inputs: ‘y_pred’, ‘y’ and ‘label_idx’.

Parameters:

inputs – Input ‘y_pred’, ‘y’ and ‘label_idx’. ‘y_pred’ and ‘y’ are a Tensor, list or numpy.ndarray. ‘y_pred’ is the predicted binary image. ‘y’ is the actual binary image. Data type of ‘label_idx’ is int or float.

Raises:
  • ValueError – If the number of the inputs is not 3.

  • TypeError – If the data type of label_idx is not int or float.

  • ValueError – If the value of label_idx is not in y_pred or y.

  • ValueError – If y_pred and y have different shapes.

class tinyms.metrics.Recall(eval_type='classification')[source]

Calculates recall for classification and multilabel data.

The recall class creates two local variables, \(\text{true_positive}\) and \(\text{false_negative}\), that are used to compute the recall. The calculation formula is:

\[\text{recall} = \frac{\text{true_positive}}{\text{true_positive} + \text{false_negative}}\]

Note

In the multi-label cases, the elements of \(y\) and \(y_{pred}\) must be 0 or 1.

Parameters:

eval_type (str) – ‘classification’ or ‘multilabel’ are supported. Default: ‘classification’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Recall
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Recall('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> recall = metric.eval()
>>> print(recall)
[1. 0.5]
clear()[source]

Clears the internal evaluation result.

eval(average=False)[source]

Computes the recall.

Parameters:

average (bool) – Specify whether to calculate the average recall. Default: False.

Returns:

numpy.float64, the computed result.

update(*inputs)[source]

Updates the internal evaluation result with y_pred and y.

Parameters:

inputs – Input y_pred and y. y_pred and y are a Tensor, a list or an array. For ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate positive category. The shape of y_pred and y are both \((N, C)\).

Raises:

ValueError – If the number of inputs is not 2.

class tinyms.metrics.Fbeta(beta)[source]

Calculates the Fbeta score.

Fbeta score is a weighted mean of precision and recall.

\[F_\beta=\frac{(1+\beta^2) \cdot true\_positive} {(1+\beta^2) \cdot true\_positive +\beta^2 \cdot false\_negative + false\_positive}\]
Parameters:

beta (Union[float, int]) – Beta coefficient in the F measure. beta should be greater than 0.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Fbeta
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Fbeta(1)
>>> metric.clear()
>>> metric.update(x, y)
>>> fbeta = metric.eval()
>>> print(fbeta)
[0.66666667 0.66666667]
clear()[source]

Clears the internal evaluation result.

eval(average=False)[source]

Computes the fbeta.

Parameters:

average (bool) – Whether to calculate the average fbeta. Default: False.

Returns:

numpy.ndarray or numpy.float64, the computed result.

update(*inputs)[source]

Updates the internal evaluation result y_pred and y.

Parameters:

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.

Raises:
  • ValueError – If the class number of the current predicted data does not match that of the previously input predicted data.

  • ValueError – If the predicted value and true value contain different classes.

class tinyms.metrics.BleuScore(n_gram=4, smooth=False)[source]

Calculates the BLEU score. BLEU (bilingual evaluation understudy) is a metric for evaluating the quality of text translated by machine.

Parameters:
  • n_gram (int) – The n_gram value ranges from 1 to 4. Default: 4.

  • smooth (bool) – Whether or not to apply smoothing. Default: False.

Raises:

ValueError – If the value range of n_gram is not from 1 to 4.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore.train import BleuScore
>>>
>>> candidate_corpus = [['i', 'have', 'a', 'pen', 'on', 'my', 'desk']]
>>> reference_corpus = [[['i', 'have', 'a', 'pen', 'in', 'my', 'desk'],
...                      ['there', 'is', 'a', 'pen', 'on', 'the', 'desk']]]
>>> metric = BleuScore()
>>> metric.clear()
>>> metric.update(candidate_corpus, reference_corpus)
>>> bleu_score = metric.eval()
>>> print(bleu_score)
0.5946035575013605
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the bleu score.

Returns:

numpy.float64, the bleu score.

Raises:

RuntimeError – If the update method is not called first, an error will be reported.

update(*inputs)[source]

Updates the internal evaluation result with candidate_corpus and reference_corpus.

Parameters:

inputs (iterator) – Input candidate_corpus and reference_corpus. candidate_corpus and reference_corpus are both a list. The candidate_corpus is an iterable of machine translated corpus. The reference_corpus is an iterable object of iterables of reference corpus.

Raises:
  • ValueError – If the number of inputs is not 2.

  • ValueError – If the lengths of candidate_corpus and reference_corpus are not equal.

class tinyms.metrics.CosineSimilarity(similarity='cosine', reduction='none', zero_diagonal=True)[source]

Computes representation similarity.

Parameters:
  • similarity (str) – ‘dot’ or ‘cosine’. Default: ‘cosine’.

  • reduction (str) – ‘none’, ‘sum’, ‘mean’ (all along dim -1). Default: ‘none’.

  • zero_diagonal (bool) – If True, diagonals of results will be set to zero. Default: True.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore.train import CosineSimilarity
>>>
>>> test_data = np.array([[1, 3, 4, 7], [2, 4, 2, 5], [3, 1, 5, 8]])
>>> metric = CosineSimilarity()
>>> metric.clear()
>>> metric.update(test_data)
>>> square_matrix = metric.eval()
>>> print(square_matrix)
[[0.  0.94025615  0.95162452]
 [0.94025615  0.  0.86146098]
 [0.95162452  0.86146098  0.]]
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the similarity matrix.

Returns:

numpy.ndarray. The similarity matrix.

Raises:

RuntimeError – If the update method is not called first, an error will be reported.

update(inputs)[source]

Updates the internal evaluation result with ‘inputs’.

Parameters:

inputs (Union[Tensor, list, numpy.ndarray]) – The input matrix.

class tinyms.metrics.OcclusionSensitivity(pad_val=0.0, margin=2, n_batch=128, b_box=None)[source]

Calculates the occlusion sensitivity of the model for a given image, which illustrates which parts of an image are most important for a network’s classification.

Occlusion sensitivity refers to how the predicted probability changes with the change of the occluded part of an image. The higher the value in the output image is, the greater the decline of certainty, indicating that the occluded area is more important in the decision-making process.

Parameters:
  • pad_val (float) – The padding value of the occluded part in an image. Default: 0.0.

  • margin (Union[int, Sequence]) – Create a cuboid / cube around the voxel you want to occlude. Default: 2.

  • n_batch (int) – number of images in a batch. Default: 128.

  • b_box (Sequence) – Bounding box on which to perform the analysis. The output image will also match in size. There should be a minimum and maximum for all dimensions except batch: [min1, max1, min2, max2,...]. If no bounding box is supplied, this will be the same size as the input image. If a bounding box is used, the output image will be cropped to this size. Default: None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>> from mindspore.train import OcclusionSensitivity
>>>
>>> class DenseNet(nn.Cell):
...     def __init__(self):
...         super(DenseNet, self).__init__()
...         w = np.array([[0.1, 0.8, 0.1, 0.1],[1, 1, 1, 1]]).astype(np.float32)
...         b = np.array([0.3, 0.6]).astype(np.float32)
...         self.dense = nn.Dense(4, 2, weight_init=Tensor(w), bias_init=Tensor(b))
...
...     def construct(self, x):
...         return self.dense(x)
>>>
>>> model = DenseNet()
>>> test_data = np.array([[0.1, 0.2, 0.3, 0.4]]).astype(np.float32)
>>> label = np.array(1).astype(np.int32)
>>> metric = OcclusionSensitivity()
>>> metric.clear()
>>> metric.update(model, test_data, label)
>>> score = metric.eval()
>>> print(score)
[0.29999995    0.6    1.    0.9]
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the occlusion_sensitivity.

Returns:

A numpy ndarray.

Raises:

RuntimeError – If the update method is not called first, an error will be reported.

update(*inputs)[source]

Updates input, including model, y_pred and label.

Parameters:

inputs – model, y_pred and label. y_pred and label are a Tensor, list or numpy.ndarray. y_pred: a batch of images to test, which can be 2D or 3D. label: classification labels to check for changes; label is normally the true label, but does not have to be. model is the neural network.

Raises:
  • ValueError – If the number of inputs is not 3.

  • RuntimeError – If y_pred.shape[0] is not 1.

  • RuntimeError – If the number of labels is different from the number of batches.

class tinyms.metrics.F1[source]

Calculates the F1 score. F1 is a special case of Fbeta when beta is 1. Refer to class mindspore.train.Fbeta for more details.

\[F_1=\frac{2\cdot true\_positive}{2\cdot true\_positive + false\_negative + false\_positive}\]
Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import F1
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = F1()
>>> metric.update(x, y)
>>> result = metric.eval()
>>> print(result)
[0.66666667 0.66666667]
class tinyms.metrics.Dice(smooth=1e-05)[source]

The Dice coefficient is a set similarity metric. It is used to calculate the similarity between two samples. The value of the Dice coefficient is 1 when the segmentation result is the best and is 0 when the segmentation result is the worst. The Dice coefficient indicates the ratio of the area between two objects to the total area. The function is shown as follows:

\[dice = \frac{2 * (pred \bigcap true)}{pred \bigcup true}\]
Parameters:

smooth (float) – A term added to the denominator to improve numerical stability. Should be greater than 0. Default: 1e-5.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Dice
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([[0, 1], [1, 0], [0, 1]]))
>>> metric = Dice(smooth=1e-5)
>>> metric.clear()
>>> metric.update(x, y)
>>> dice = metric.eval()
>>> print(dice)
0.20467791371802546
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the Dice.

Returns:

Float, the computed result.

Raises:

RuntimeError – If the total number of samples is 0.

update(*inputs)[source]

Updates the internal evaluation result y_pred and y.

Parameters:

inputs (tuple) – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is the predicted value, y is the true value. The shape of y_pred and y are both \((N, ...)\).

Raises:
  • ValueError – If the number of the inputs is not 2.

  • ValueError – If y_pred and y do not have the same shape.

class tinyms.metrics.ROC(class_num=None, pos_label=None)[source]

Calculates the ROC curve. It is suitable for binary and multiclass classification problems. In the multiclass case, the values are calculated based on a one-vs-the-rest approach.

Parameters:
  • class_num (int) – The number of classes. It is not necessary to provide this argument under the binary classification scenario. Default: None.

  • pos_label (int) – The integer label of the positive class. For binary problems, it defaults to 1. For multiclass problems, this argument should not be set, as it is iteratively changed over the range [0, class_num-1]. Default: None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import ROC
>>>
>>> # 1) binary classification example
>>> x = Tensor(np.array([3, 1, 4, 2]))
>>> y = Tensor(np.array([0, 1, 2, 3]))
>>> metric = ROC(pos_label=2)
>>> metric.clear()
>>> metric.update(x, y)
>>> fpr, tpr, thresholds = metric.eval()
>>> print(fpr)
[0. 0. 0.33333333 0.66666667 1.]
>>> print(tpr)
[0. 1. 1. 1. 1.]
>>> print(thresholds)
[5 4 3 2 1]
>>>
>>> # 2) multiclass classification example
>>> x = Tensor(np.array([[0.28, 0.55, 0.15, 0.05], [0.10, 0.20, 0.05, 0.05], [0.20, 0.05, 0.15, 0.05],
...                     [0.05, 0.05, 0.05, 0.75]]))
>>> y = Tensor(np.array([0, 1, 2, 3]))
>>> metric = ROC(class_num=4)
>>> metric.clear()
>>> metric.update(x, y)
>>> fpr, tpr, thresholds = metric.eval()
>>> print(fpr)
[array([0., 0., 0.33333333, 0.66666667, 1.]), array([0., 0.33333333, 0.33333333, 1.]),
array([0., 0.33333333, 1.]), array([0., 0., 1.])]
>>> print(tpr)
[array([0., 1., 1., 1., 1.]), array([0., 0., 1., 1.]), array([0., 1., 1.]), array([0., 1., 1.])]
>>> print(thresholds)
[array([1.28, 0.28, 0.2, 0.1, 0.05]), array([1.55, 0.55, 0.2, 0.05]), array([1.15, 0.15, 0.05]),
array([1.75, 0.75, 0.05])]
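
To see where the numbers in the binary example come from, the curve can be traced by hand: each threshold classifies scores greater than or equal to it as positive, and fpr/tpr are the resulting false- and true-positive rates over the 3 negatives and 1 positive. An illustrative NumPy sweep (not part of the API):

>>> scores = np.array([3, 1, 4, 2])
>>> positive = np.array([0, 1, 2, 3]) == 2  # one-vs-rest binarization with pos_label=2
>>> for thr in (5, 4, 3, 2, 1):
...     pred = scores >= thr
...     print(thr, (pred & ~positive).sum() / 3, (pred & positive).sum() / 1)
5 0.0 0.0
4 0.0 1.0
3 0.3333333333333333 1.0
2 0.6666666666666666 1.0
1 1.0 1.0
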
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the ROC curve.

Returns:

A tuple, composed of fpr, tpr, and thresholds.

  • fpr (np.array) - False positive rate. In the binary classification case, an fpr numpy array under different thresholds is returned; in the multiclass case, a list of fpr numpy arrays is returned, with each element representing one category.

  • tpr (np.array) - True positive rate. In the binary classification case, a tpr numpy array under different thresholds is returned; in the multiclass case, a list of tpr numpy arrays is returned, with each element representing one category.

  • thresholds (np.array) - Thresholds used for computing fpr and tpr.

Raises:

RuntimeError – If the update method has not been called first.

update(*inputs)[source]

Update state with predictions and targets.

Parameters:

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. In most cases (not strictly), y_pred is a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.

tinyms.metrics.auc(x, y, reorder=False)[source]

Computes the AUC (Area Under the Curve) using the trapezoidal rule. This is a general function: given points on a curve, it computes the area under it. It is typically used to compute the area under the ROC curve.

Parameters:
  • x (Union[np.array, list]) – From the ROC curve (fpr), an np.array of false positive rates. If multiclass, a list of such np.array, one for each class. The shape is \((N,)\).

  • y (Union[np.array, list]) – From the ROC curve (tpr), an np.array of true positive rates. If multiclass, a list of such np.array, one for each class. The shape is \((N,)\).

  • reorder (bool) – If False, x must be monotonically increasing or decreasing. If True, x will be sorted in ascending order. Default: False.

Returns:

float, the area under the ROC-curve.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore.train import ROC, auc
>>>
>>> y_pred = np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]])
>>> y = np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]])
>>> metric = ROC(pos_label=2)
>>> metric.clear()
>>> metric.update(y_pred, y)
>>> fpr, tpr, thre = metric.eval()
>>> output = auc(fpr, tpr)
>>> print(output)
0.5357142857142857
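
Since auc simply applies the trapezoidal rule to the (fpr, tpr) points, the result can be cross-checked with NumPy's trapezoidal integration (an illustrative check reusing the fpr and tpr arrays from the example above, rounded for readability):

>>> print(round(np.trapz(tpr, fpr), 6))
0.535714
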
class tinyms.metrics.TopKCategoricalAccuracy(k)[source]

Calculates the top-k categorical accuracy.

Parameters:

k (int) – Specifies the top-k categorical accuracy to compute.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import TopKCategoricalAccuracy
>>>
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...         [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = TopKCategoricalAccuracy(3)
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
0.6666666666666666
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the top-k categorical accuracy.

Returns:

numpy.float64, computed result.

update(*inputs)[source]

Updates the internal evaluation result y_pred and y.

Parameters:

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.

Note

The method update must receive input of the form \((y_{pred}, y)\). If some samples have the same accuracy, the first sample will be chosen.
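
The result of 0.6666… in the example above can be reproduced by hand: a sample counts as correct when its label appears among the k highest-scoring classes. An illustrative NumPy check (not part of the API):

>>> scores = np.array([[0.2, 0.5, 0.3, 0.6, 0.2],
...                    [0.1, 0.35, 0.5, 0.2, 0.],
...                    [0.9, 0.6, 0.2, 0.01, 0.3]])
>>> labels = np.array([2, 0, 1])
>>> top3 = np.argsort(scores, axis=1)[:, -3:]  # indices of the 3 largest scores per row
>>> print(np.mean([labels[i] in top3[i] for i in range(3)]))
0.6666666666666666
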

class tinyms.metrics.Top1CategoricalAccuracy[source]

Calculates the top-1 categorical accuracy. This class is a specialized class for TopKCategoricalAccuracy. Refer to TopKCategoricalAccuracy for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Top1CategoricalAccuracy
>>>
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...         [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = Top1CategoricalAccuracy()
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
0.0
class tinyms.metrics.Top5CategoricalAccuracy[source]

Calculates the top-5 categorical accuracy. This class is a specialized class for TopKCategoricalAccuracy. Refer to TopKCategoricalAccuracy for more details.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Top5CategoricalAccuracy
>>>
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...            [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = Top5CategoricalAccuracy()
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
1.0
class tinyms.metrics.Loss[source]

Calculates the average of the loss. If the update method is called \(n\) times, the result of evaluation will be:

\[loss = \frac{\sum_{k=1}^{n}loss_k}{n}\]
Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.train import Loss
>>>
>>> x = Tensor(np.array(0.2), mindspore.float32)
>>> loss = Loss()
>>> loss.clear()
>>> loss.update(x)
>>> result = loss.eval()
>>> print(result)
0.20000000298023224
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculates the average of the loss.

Returns:

numpy.float64. The average of the loss.

Raises:

RuntimeError – If the total number is 0.

update(*inputs)[source]

Updates the internal evaluation result.

Parameters:

inputs – Inputs contain only one element: the loss. The dimension of the loss must be 0 or 1.

Raises:
  • ValueError – If the length of inputs is not 1.

  • ValueError – If the dimension of loss is not 1 or 0.

class tinyms.metrics.MeanSurfaceDistance(symmetric=False, distance_metric='euclidean')[source]

Computes the Average Surface Distance from y_pred to y under the default setting. It measures how much, on average, the surface varies between the segmentation and the GT (ground truth).

Given two sets A and B, let S(A) denote the set of surface voxels of A. The shortest distance of an arbitrary voxel v to S(A) is defined as:

\[{\text{dis}}\left (v, S(A)\right ) = \underset{s_{A} \in S(A)}{\text{min }}\lVert v - s_{A} \rVert\]

The Average Surface Distance from set(B) to set(A) is given by:

\[AvgSurDis(B \rightarrow A) = \frac{\sum_{s_{B} \in S(B)}^{} {\text{dis} \left ( s_{B}, S(A) \right )} } {\left | S(B) \right |}\]

where \(\lVert \cdot \rVert\) denotes a distance measure and \(\left | \cdot \right |\) denotes the number of elements.

The mean of the surface distances from set(B) to set(A) and from set(A) to set(B) is:

\[MeanSurDis(A \leftrightarrow B) = \frac{\sum_{s_{A} \in S(A)}^{} {\text{dis} \left ( s_{A}, S(B) \right )} + \sum_{s_{B} \in S(B)}^{} {\text{dis} \left ( s_{B}, S(A) \right )} }{\left | S(A) \right | + \left | S(B) \right |}\]
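
To make these definitions concrete, here is a minimal NumPy/SciPy sketch of \(AvgSurDis(B \rightarrow A)\) for boolean masks. It is illustrative only: the helper names are hypothetical, it assumes scipy is available, and it is not how the class is implemented internally.

>>> import numpy as np
>>> from scipy import ndimage
>>>
>>> def surface(mask):
...     # surface voxels: the mask minus its binary erosion
...     return mask & ~ndimage.binary_erosion(mask)
>>>
>>> def avg_surface_distance(b, a):
...     # mean distance from each surface voxel of B to the nearest surface voxel of A
...     dist_to_a = ndimage.distance_transform_edt(~surface(a))
...     return dist_to_a[surface(b)].mean()
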
Parameters:
  • distance_metric (string) – Three measurement methods are supported: “euclidean”, “chessboard” or “taxicab”. Default: “euclidean”.

  • symmetric (bool) – Whether to calculate the symmetric Mean Surface Distance between y_pred and y. If False, only \(AvgSurDis({y\_pred} \rightarrow y)\) is calculated; otherwise, the mean of the distances from y_pred to y and from y to y_pred, i.e. \(MeanSurDis({y\_pred} \leftrightarrow y)\), is returned. Default: False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import MeanSurfaceDistance
>>> x = Tensor(np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]))
>>> y = Tensor(np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]))
>>> metric = MeanSurfaceDistance(symmetric=False, distance_metric="euclidean")
>>> metric.clear()
>>> metric.update(x, y, 0)
>>> mean_average_distance = metric.eval()
>>> print(mean_average_distance)
0.8047378541243649
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculates the mean surface distance.

Returns:

numpy.float64. The mean surface distance value.

Raises:

RuntimeError – If the update method has not been called first.

update(*inputs)[source]

Updates the internal evaluation result ‘y_pred’, ‘y’ and ‘label_idx’.

Parameters:

inputs – Input y_pred, y and label_idx. y_pred and y are a Tensor, list or numpy.ndarray: y_pred is the predicted binary image and y is the actual binary image. label_idx is the label value to be evaluated; its data type is int.

Raises:
  • ValueError – If the number of the inputs is not 3.

  • TypeError – If the data type of label_idx is not int or float.

  • ValueError – If the value of label_idx is not in y_pred or y.

  • ValueError – If y_pred and y have different shapes.

class tinyms.metrics.RootMeanSquareDistance(symmetric=False, distance_metric='euclidean')[source]

Computes the Root Mean Square Surface Distance from y_pred to y under the default setting.

Given two sets A and B, let S(A) denote the set of surface voxels of A. The shortest distance of an arbitrary voxel v to S(A) is defined as:

\[{\text{dis}}\left (v, S(A)\right ) = \underset{s_{A} \in S(A)}{\text{min }}\lVert v - s_{A} \rVert\]

The Root Mean Square Surface Distance from set(B) to set(A) is:

\[RmsSurDis(B \rightarrow A) = \sqrt{\frac{\sum_{s_{B} \in S(B)}^{} {\text{dis}^2 \left ( s_{B}, S(A) \right )} }{\left | S(B) \right |}}\]

where \(\lVert \cdot \rVert\) denotes a distance measure and \(\left | \cdot \right |\) denotes the number of elements.

The symmetric Root Mean Square Surface Distance between set(A) and set(B) is:

\[RmsSurDis(A \leftrightarrow B) = \sqrt{\frac{\sum_{s_{A} \in S(A)}^{} {\text{dis} \left ( s_{A}, S(B) \right ) ^{2}} + \sum_{s_{B} \in S(B)}^{} {\text{dis} \left ( s_{B}, S(A) \right ) ^{2}}}{\left | S(A) \right | + \left | S(B) \right |}}\]
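
Relative to the mean surface distance above, only the aggregation changes: the surface distances are squared before averaging, and a square root is taken at the end. In the hypothetical avg_surface_distance sketch shown for MeanSurfaceDistance, the return line would become:

    return np.sqrt((dist_to_a[surface(b)] ** 2).mean())
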
Parameters:
  • distance_metric (string) – Three measurement methods are supported: “euclidean”, “chessboard” or “taxicab”. Default: “euclidean”.

  • symmetric (bool) – Whether to calculate the symmetric root mean square surface distance between y_pred and y. If False, only \(RmsSurDis({y\_pred} \rightarrow y)\) is calculated; otherwise, the symmetric distance \(RmsSurDis({y\_pred} \leftrightarrow y)\) is returned. Default: False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import RootMeanSquareDistance
>>>
>>> x = Tensor(np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]))
>>> y = Tensor(np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]))
>>> metric = RootMeanSquareDistance(symmetric=False, distance_metric="euclidean")
>>> metric.clear()
>>> metric.update(x, y, 0)
>>> root_mean_square_distance = metric.eval()
>>> print(root_mean_square_distance)
1.0000000000000002
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculates the root mean square surface distance.

Returns:

numpy.float64, root mean square surface distance.

Raises:

RuntimeError – If the update method has not been called first.

update(*inputs)[source]

Updates the internal evaluation result ‘y_pred’, ‘y’ and ‘label_idx’.

Parameters:

inputs – Input y_pred, y and label_idx. y_pred and y are Tensor, list or numpy.ndarray: y_pred is the predicted binary image and y is the actual binary image. label_idx is the label value to be evaluated; its data type is int.

Raises:
  • ValueError – If the number of the inputs is not 3.

  • TypeError – If the data type of label_idx is not int or float.

  • ValueError – If the value of label_idx is not in y_pred or y.

  • ValueError – If y_pred and y have different shapes.

class tinyms.metrics.Perplexity(ignore_label=None)[source]

Computes perplexity. Perplexity is a measure of how well a probability distribution or model predicts a sample. A low perplexity indicates that the model predicts the sample well. The function is shown as follows:

\[PP(W)=P(w_{1}w_{2}...w_{N})^{-\frac{1}{N}}=\sqrt[N]{\frac{1}{P(w_{1}w_{2}...w_{N})}}\]

Where \(w\) represents words in corpus.

Parameters:

ignore_label (Union[int, None]) – Index of an invalid label to be ignored when counting. If set to None, it will include all entries. Default: None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Perplexity
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Perplexity(ignore_label=None)
>>> metric.clear()
>>> metric.update(x, y)
>>> perplexity = metric.eval()
>>> print(perplexity)
2.231443166940565
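
The reported value follows directly from the formula: the probabilities assigned to the true labels are 0.5, 0.3 and 0.6, and the perplexity is the inverse geometric mean of these probabilities. An illustrative NumPy check (rounded for readability):

>>> p = np.array([0.5, 0.3, 0.6])  # probabilities of the true labels
>>> print(round(np.exp(-np.mean(np.log(p))), 6))
2.231443
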
clear()[source]

Clears the internal evaluation result.

eval()[source]

Returns the current evaluation result.

Returns:

numpy.float64. The computed result.

Raises:

RuntimeError – If the sample size is 0.

update(*inputs)[source]

Updates the internal evaluation result preds and labels.

Parameters:

inputs – Input preds and labels. preds and labels are a Tensor, list or numpy.ndarray. preds is the predicted values and labels is the labels of the data. The shapes of preds and labels are both \((N, C)\).

Raises:
  • ValueError – If the number of the inputs is not 2.

  • RuntimeError – If preds and labels have different lengths.

  • RuntimeError – If label shape is not equal to pred shape.

class tinyms.metrics.ConfusionMatrix(num_classes, normalize='no_norm', threshold=0.5)[source]

Computes the confusion matrix, which is commonly used to evaluate the performance of classification models, including binary classification and multiple classification.

If you only need the confusion matrix, use this class. If you want to calculate other metrics, such as ‘PPV’, ‘TPR’, ‘TNR’, etc., use the class mindspore.train.ConfusionMatrixMetric .

Parameters:
  • num_classes (int) – Number of classes in the dataset.

  • normalize (str) –

    Normalization mode for confusion matrix. Default: “no_norm”. Choose from:

    • ’no_norm’ - No normalization is used. This is the default.

    • ’target’ (str) - Normalization based on target value.

    • ’prediction’ (str) - Normalization based on predicted value.

    • ’all’ (str) - Normalization over the whole matrix.

  • threshold (float) – The threshold used to compare with the input tensor. Default: 0.5.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import ConfusionMatrix
>>>
>>> x = Tensor(np.array([1, 0, 1, 0]))
>>> y = Tensor(np.array([1, 0, 0, 1]))
>>> metric = ConfusionMatrix(num_classes=2, normalize='no_norm', threshold=0.5)
>>> metric.clear()
>>> metric.update(x, y)
>>> output = metric.eval()
>>> print(output)
[[1. 1.]
 [1. 1.]]
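
The output can be verified by hand. Using one common convention (rows indexed by the true class, columns by the predicted class), each of the four (truth, prediction) combinations occurs exactly once in this example, so the matrix is all ones either way round. An illustrative NumPy tally:

>>> y_pred = np.array([1, 0, 1, 0])
>>> y_true = np.array([1, 0, 0, 1])
>>> cm = np.zeros((2, 2))
>>> for t, p in zip(y_true, y_pred):
...     cm[t, p] += 1
>>> print(cm)
[[1. 1.]
 [1. 1.]]
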
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes confusion matrix.

Returns:

numpy.ndarray, the computed result.

update(*inputs)[source]

Update state with y_pred and y.

Parameters:

inputs (tuple) – Input y_pred and y. y_pred and y are a Tensor, list or numpy.ndarray. y_pred is the predicted value, y is the true value. The shape of y_pred is \((N, C, ...)\) or \((N, ...)\). The shape of y is \((N, ...)\).

Raises:
  • ValueError – If the number of inputs is not 2.

  • ValueError – If the dim of y_pred and y are not equal.

class tinyms.metrics.ConfusionMatrixMetric(skip_channel=True, metric_name='sensitivity', calculation_method=False, decrease='mean')[source]

Computes metrics related to the confusion matrix. The calculation is based on the full-scale tensor, and average values over batch, class channel and iteration are collected. All metrics supported by the interface are listed in the comments of metric_name.

If you want to calculate metrics related to the confusion matrix, such as ‘PPV’, ‘TPR’, ‘TNR’, use this class. If you only want the confusion matrix itself, please use mindspore.train.ConfusionMatrix .

Parameters:
  • skip_channel (bool) – Whether to skip the measurement calculation on the first channel of the predicted output. Default: True.

  • metric_name (str) – The name of the metric to compute; users can also use common industry aliases. Choose from: [“sensitivity”, “specificity”, “precision”, “negative predictive value”, “miss rate”, “fall out”, “false discovery rate”, “false omission rate”, “prevalence threshold”, “threat score”, “accuracy”, “balanced accuracy”, “f1 score”, “matthews correlation coefficient”, “fowlkes mallows index”, “informedness”, “markedness”]. Default: “sensitivity”.

  • calculation_method (bool) – If True, the measurement for each sample is calculated first; if False, the confusion matrix over all samples is accumulated first. For classification tasks, calculation_method should be False. Default: False.

  • decrease (str) – The reduction method on data batch. decrease takes effect only when calculation_method is True. Default: “mean”. Choose from: [“none”, “mean”, “sum”, “mean_batch”, “sum_batch”, “mean_channel”, “sum_channel”].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import ConfusionMatrixMetric
>>>
>>> metric = ConfusionMatrixMetric(skip_channel=True, metric_name="tpr",
...                                   calculation_method=False, decrease="mean")
>>> metric.clear()
>>> x = Tensor(np.array([[[0], [1]], [[1], [0]]]))
>>> y = Tensor(np.array([[[0], [1]], [[0], [1]]]))
>>> metric.update(x, y)
>>> avg_output = metric.eval()
>>> print(avg_output)
[0.5]
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes confusion matrix metric.

Returns:

ndarray, the computed result.

update(*inputs)[source]

Update state with predictions and targets.

Parameters:

inputs (tuple) –

Input y_pred and y. y_pred and y are a Tensor, list or numpy.ndarray.

  • y_pred (ndarray): The batch data shape is \((N, C, ...)\) or \((N, ...)\), representing one-hot format or category index format respectively. For classification tasks, y_pred should have the shape [BN] where N is larger than 1. For segmentation tasks, the shape should be [BNHW] or [BNHWD].

  • y (ndarray): It must be one-hot format. The batch data shape is \((N, C, ...)\).

Raises:

ValueError – If the number of the inputs is not 2.

tinyms.hub

tinyms.serving

tinyms.app

Release Roadmap

v0.2.0

Major Features Planning

  • RNN support with initial focus on LSTM and SentimentNet

  • ModelPark additions: Alex, Densenet100, Bert.

  • TinyMS Hub, for one-click pretrained model deployment and inference.

  • Adding TinyMS into the MindSpore CI pipeline to ensure that no breaking changes affect the high-level API experience.

The plan will be finalized at the upcoming community design meeting.

Contributing Guidelines

First of all, a warm welcome to anyone who wants to participate in the TinyMS community 👏. For those who are not familiar with how this community works, here are some guidelines to help you get started quickly.

Contributor License Agreement

You are required to sign the CLA before your first code submission to the TinyMS community.

For individual contributors, please refer to the ICLA sign page for detailed information.

Getting Started

Contribution Workflow

Code style

Please follow this style to make TinyMS easy to review, maintain and develop.

  • Coding guidelines

    The Python coding style suggested by Python PEP 8 Coding Style is adopted in the TinyMS community.

  • Unittest guidelines

    The Python unittest style suggested by pytest is adopted in the TinyMS community.

  • Autodoc guidelines

    The autodoc style generated by Sphinx is adopted in the TinyMS community.

Fork-Pull development model

  • Fork TinyMS repository

    Before submitting code to the TinyMS project, please make sure that it has been forked to your own repository. This means there will be parallel development between the TinyMS repository and your own, so be careful to avoid inconsistency between them.

    NOTICE: The default branch name of TinyMS project is main instead of master.

  • Clone the remote repository

    If you want to download the code to the local machine, git is the best choice:

    git clone https://github.com/{insert_your_forked_repo}/tinyms.git
    git remote add upstream https://github.com/tinyms-ai/tinyms.git
    
  • Develop code locally

    To avoid inconsistencies between multiple branches, it is SUGGESTED to check out a new branch for every pull request:

    git checkout -b {new_branch_name}
    

    NOTICE: Please pull the latest code from the upstream repository (git pull upstream main) every time before checking out a new branch.

    Then you can modify the code as needed.

  • Push the code to the remote repository

    After updating the code, push your changes in the standard way:

    git add .
    git status # Check the update status
    git commit -m "Your commit title"
    git commit -s --amend # Add the concrete description of your commit
    git push origin {new_branch_name}
    
  • Pull a request to TinyMS repository

    In the last step, you need to open a pull request comparing your new branch with the TinyMS main branch. Once the pull request is created, Travis CI will automatically run the build tests.

Report issues

A great way to contribute to the project is to send a detailed report when you encounter an issue. We always appreciate a well-written, thorough bug report, and will thank you for it!🤝

When reporting issues, refer to this format:

  • What versions of your environment (tinyms, mindspore, os, python, etc.) are you using?

  • Is this a BUG REPORT or FEATURE REQUEST?

  • What happened?

  • What did you expect to happen?

  • How to reproduce it? (as minimally and precisely as possible)

  • Special notes for your reviewers?
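
For instance, a purely hypothetical report following this format might look like:

    Env: tinyms x.y.z, mindspore x.y.z, Ubuntu, Python 3.7
    Type: BUG REPORT
    What happened: model.eval() raised a RuntimeError
    What I expected: evaluation to complete and print the metrics
    How to reproduce: run the quickstart script with the attached dataset
    Notes for reviewers: none
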

Issues advisory:

  • If you find an open issue that matches the problem you are going to solve, please comment on that issue to let others know that you are taking charge of it.

  • If an issue has been open for a while, it is recommended that contributors double-check its status before working on it.

  • If you resolve an issue that you reported yourself, you are also required to let others know before closing it.

Propose PRs

Working on your first Pull Request? 📚You can learn how from this free series How to Contribute to an Open Source Project on GitHub📚

When proposing pull requests, please adhere to these rules:

  • Raise your idea as an issue on GitHub.

  • If it is a new feature that needs lots of design details, a design proposal should also be submitted.

  • After reaching consensus in the issue discussions and design proposal reviews, complete the development on the forked repo and submit a PR.

  • No PR may be merged until it receives 2+ LGTM from approvers. Please NOTICE that an approver is NOT allowed to add LGTM to their own PR.

  • After the PR has been sufficiently discussed, it will be merged, abandoned or rejected depending on the outcome of the discussion.

PRs advisory:

  • Any irrelevant changes should be avoided.

  • Keep your commit history clean and ordered.

  • Always keep your branch up to date with the main branch.

  • For bug-fix PRs, make sure all related issues are linked.

Community Support

Whenever you feel confused in this community, please feel free to reach out for help from the TinyMS community through any of these channels:

  • WeChat communication. Add the WeChat ID mindspore0328 to ask for help.

  • QQ group. TBD.

  • Slack channel. Join the tinyms channel in MindSpore Slack to communicate with each other.

Communication

Technical Discussion

  • Issues and PRs are always the preferred way to hold technical discussions. Please refer to the Contributing Guidelines above.

Community Support

See the Community Support channels listed in the Contributing Guidelines above.

FAQ

TODO