tinyms.model

class tinyms.model.Model(network)[source]

High-Level API for Training or Evaluation.

Model groups layers into an object with training and inference features.

Parameters

network (layers.Layer) – A training or testing network.

Examples

>>> from tinyms.model import Model, lenet5
>>> from tinyms.losses import SoftmaxCrossEntropyWithLogits
>>> from tinyms.optimizers import Momentum
>>>
>>> net = lenet5(class_num=10)
>>> model = Model(net)
>>> net_loss = SoftmaxCrossEntropyWithLogits()
>>> net_opt = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model.compile(loss_fn=net_loss, optimizer=net_opt, metrics=None)
>>> # For details about how to build the dataset, please refer to the API document on the official website.
>>> ds_train = create_custom_dataset()
>>> model.train(2, ds_train)
compile(loss_fn=None, optimizer=None, metrics=None, eval_network=None, amp_level='O0', **kwargs)[source]

High-Level API for configuring the training or evaluation network.

Parameters
  • loss_fn (layers.Layer) – Objective function. If loss_fn is None, the network should contain the logic of loss and gradient calculation, and the parallel logic if needed. Default: None.

  • optimizer (layers.Layer) – Optimizer for updating the weights. Default: None.

  • metrics (Union[dict, set]) – A dictionary or a set of metrics to be evaluated by the model during training and testing, e.g. {'accuracy', 'recall'}. Default: None.

  • eval_network (layers.Layer) – Network for evaluation. If not defined, network and loss_fn would be wrapped as eval_network. Default: None.

  • amp_level (str) –

    Option for the level argument in mindspore.amp.build_train_network, the level of mixed precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".

    • O0: Do not change anything; keep the network in its original precision.

    • O2: Cast the network to float16, keep batchnorm running in float32, and use dynamic loss scale.

    • O3: Cast the network to float16, with the additional property keep_batchnorm_fp32=False.

    • auto: Set the level to the recommended level for the current device: O2 on GPU, O3 on Ascend. The recommended levels are chosen by expert experience and cannot always generalize, so users should specify the level explicitly for special networks.

    O2 is recommended on GPU, O3 is recommended on Ascend, as shown in the example below.
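
Examples

A minimal sketch of enabling mixed precision on a GPU, reusing the net, net_loss and net_opt objects from the class-level example above ("O2" follows the GPU recommendation):

>>> model = Model(net)
>>> model.compile(loss_fn=net_loss, optimizer=net_opt, metrics=None, amp_level="O2")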

eval(valid_dataset, callbacks=None, dataset_sink_mode=True)[source]

Evaluation API where the iteration is controlled by the Python front end.

When the mode is PyNative or the device target is CPU, the evaluation process will be performed in dataset non-sink mode.

Note

If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, data features will be transferred one by one; the size of each data transfer is limited to 256 MB.

Parameters
  • valid_dataset (Dataset) – Dataset to evaluate the model.

  • callbacks (list) – List of callback objects to be executed while evaluating. Default: None.

  • dataset_sink_mode (bool) – Determines whether to pass the data through dataset channel. Default: True.

Returns

Dict, containing the loss value and metrics values of the model in test mode.

Examples

>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = SoftmaxCrossEntropyWithLogits()
>>> model = Model(net)
>>> model.compile(loss_fn=loss, optimizer=None, metrics={'acc'})
>>> acc = model.eval(dataset, dataset_sink_mode=False)
export(*inputs, file_name, file_format='MINDIR', **kwargs)[source]

Export the TinyMS prediction model to a file in the specified format.

Parameters
  • inputs (Tensor) – Inputs of the net.

  • file_name (str) – File name of the model to be exported.

  • file_format (str) –

    MindSpore currently supports the AIR, ONNX and MINDIR formats for exported models. Default: MINDIR.

    • AIR: Ascend Intermediate Representation. An intermediate representation format of Ascend models. The recommended suffix for the output file is '.air'.

    • ONNX: Open Neural Network eXchange. An open format built to represent machine learning models. The recommended suffix for the output file is '.onnx'.

    • MINDIR: MindSpore Native Intermediate Representation for Anf. An intermediate representation format for MindSpore models. The recommended suffix for the output file is '.mindir'.

  • kwargs (dict) –

    Configuration options dictionary.

    • quant_mode: The mode of quantization.

    • mean: Input data mean. Default: 127.5.

    • std_dev: Input data standard deviation. Default: 127.5.
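
Examples

A minimal usage sketch, assuming model wraps a trained LeNet5 whose input shape is [1, 1, 32, 32]; the file suffix follows the format notes above:

>>> input_tensor = Tensor(np.random.randint(0, 255, [1, 1, 32, 32]), mindspore.float32)
>>> model.export(input_tensor, file_name="lenet5", file_format="MINDIR")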

infer_predict_layout(*predict_data)[source]

Generate parameter layout for the predict network in auto or semi auto parallel mode.

Data could be a single tensor or multiple tensors.

Note

Batch data should be put together in one tensor.

Parameters

predict_data (Tensor) – One tensor or multiple tensors of predict data.

Returns

Parameter layout dictionary used for loading distributed checkpoints.

Return type

parameter_layout_dict (dict)

Examples

>>> context.set_context(mode=context.GRAPH_MODE)
>>> context.set_auto_parallel_context(full_batch=True, parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL)
>>> input_data = Tensor(np.random.randint(0, 255, [1, 3, 224, 224]), mindspore.float32)
>>> model = Model(Net())
>>> model.infer_predict_layout(input_data)
load_checkpoint(ckpt_file_name, strict_load=False)[source]

Loads checkpoint info from a specified file.

Parameters
  • ckpt_file_name (str) – Checkpoint file name.

  • strict_load (bool) – Whether to strictly load the parameters into the network. If False, parameters in param_dict with the same suffix will be loaded into the network. Default: False.

Returns

Dict, key is parameter name, value is a Parameter.

Raises

ValueError – Checkpoint file is incorrect.

Examples

>>> ckpt_file_name = "./checkpoint/LeNet5-1_32.ckpt"
>>> param_dict = model.load_checkpoint(ckpt_file_name)
predict(*predict_data)[source]

Generate output predictions for the input samples.

Data can be a single tensor, a list of tensors, or a tuple of tensors.

Note

Batch data should be put together in one tensor.

Parameters

predict_data – The predict data, which can be a bool, int, float, str, None, Tensor, or a tuple, list or dict that stores these types.

Returns

Tensor, array(s) of predictions.

Examples

>>> input_data = Tensor(np.random.randint(0, 255, [1, 1, 32, 32]), mindspore.float32)
>>> model = Model(Net())
>>> result = model.predict(input_data)
save_checkpoint(ckpt_file_name)[source]

Saves checkpoint info to a specified file.

Parameters

ckpt_file_name (str) – Checkpoint file name. If the file name already exists, it will be overwritten.

Raises

TypeError – If the network to be saved is not of layers.Layer or list type.
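
Examples

A minimal usage sketch, assuming model wraps a trained network; an existing file at the given path is overwritten, as noted above:

>>> model.save_checkpoint("./checkpoint/lenet5.ckpt")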

train(epoch, train_dataset, callbacks=None, dataset_sink_mode=True, sink_size=-1)[source]

Training API where the iteration is controlled by the Python front end.

When the mode is PyNative or the device target is CPU, the training process will be performed in dataset non-sink mode.

Note

If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, data features will be transferred one by one; the size of each data transfer is limited to 256 MB. If sink_size > 0, each epoch may traverse the dataset an unlimited number of times until sink_size elements have been fetched; the next epoch continues traversing from the end position of the previous traversal.

Parameters
  • epoch (int) – The number of training epochs; generally, each epoch iterates over the entire dataset. When dataset_sink_mode is set to True and sink_size > 0, each epoch sinks sink_size steps of data instead of the total number of iterations.

  • train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned. The data and label would be passed to the network and loss function respectively.

  • callbacks (list) – List of callback objects which should be executed while training. Default: None.

  • dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. Default: True. When the mode is PyNative or the device target is CPU, training will be performed in dataset non-sink mode regardless of this setting.

  • sink_size (int) – Controls the amount of data in each sink. If sink_size = -1, the complete dataset is sunk for each epoch. If sink_size > 0, sink_size elements of data are sunk for each epoch. If dataset_sink_mode is False, sink_size is ignored. Default: -1.

Examples

>>> from mindspore.train.loss_scale_manager import FixedLossScaleManager
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = SoftmaxCrossEntropyWithLogits()
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net)
>>> model.compile(loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.train(2, dataset)
tinyms.model.lenet5(class_num=10)[source]

Get LeNet5 neural network.

Parameters

class_num (int) – Class number. Default: 10.

Returns

layers.Layer, layer instance of LeNet5 neural network.

Examples

>>> net = lenet5(class_num=10)
class tinyms.model.LeNet(class_num=10, channel_num=1)[source]

LeNet architecture.

Parameters
  • class_num (int) – The number of classes that the training images belong to.

  • channel_num (int) – The channel number.

Returns

Tensor, output tensor.

Examples

>>> LeNet(class_num=10)
tinyms.model.resnet50(class_num=10)[source]

Get ResNet50 neural network.

Parameters

class_num (int) – Class number. Default: 10.

Returns

layers.Layer, layer instance of ResNet50 neural network.

Examples

>>> net = resnet50(10)
class tinyms.model.ResNet(block, layer_nums, in_channels, out_channels, strides, num_classes)[source]

ResNet architecture.

Parameters
  • block (layers.Layer) – Block for network.

  • layer_nums (list) – Numbers of blocks in different layers.

  • in_channels (list) – Input channel in each layer.

  • out_channels (list) – Output channel in each layer.

  • strides (list) – Stride size in each layer.

  • num_classes (int) – The number of classes that the training images belong to.

Returns

Tensor, output tensor.

Examples

>>> ResNet(ResidualBlock,
...        [3, 4, 6, 3],
...        [64, 256, 512, 1024],
...        [256, 512, 1024, 2048],
...        [1, 2, 2, 2],
...        10)
tinyms.model.mobilenetv2(class_num=1000, is_training=True)[source]

Get MobileNetV2 instance for training.

Parameters
  • class_num (int) – The number of classes.

  • is_training (bool) – Whether to do the training job. Default: True.

Returns

MobileNetV2 instance.
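
Examples

>>> net = mobilenetv2(class_num=1000, is_training=True)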

tinyms.model.mobilenetv2_infer(class_num=1000)[source]

Get MobileNetV2 instance for prediction.

Parameters

class_num (int) – The number of classes.

Returns

MobileNetV2 instance.
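
Examples

>>> net = mobilenetv2_infer(class_num=1000)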

class tinyms.model.MobileNetV2(class_num=1000, width_mult=1.0, round_nearest=8, input_channel=32, last_channel=1280, is_training=True)[source]

MobileNetV2 architecture.

Parameters
  • class_num (int) – The number of classes.

  • width_mult (float) – Channel width multiplier, used when rounding channel counts to multiples of 8, 16 and so on. Default: 1.0.

  • round_nearest (int) – The multiple that channel counts are rounded to. Default: 8.

  • input_channel (int) – Input channel. Default: 32.

  • last_channel (int) – The channel number of the last layer. Default: 1280.

  • is_training (bool) – Whether the network is used for training. Default: True.

Returns

Tensor, output tensor.
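
Examples

A minimal construction sketch using the defaults documented above:

>>> net = MobileNetV2(class_num=1000, width_mult=1.0)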

class tinyms.model.SSD300(backbone, class_num=21, is_training=True)[source]

SSD300 Network. Default backbone is MobileNetV2.

Parameters
  • backbone (layers.Layer) – Backbone of the SSD300 model.

  • class_num (int) – number of classes. Default: 21.

  • is_training (bool) – Specifies whether the network is in the training step. Default: True.

Returns

Tensor, localization predictions. Tensor, class confidence scores.
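
Examples

A hedged sketch, assuming a MobileNetV2 instance built via mobilenetv2 (the default backbone mentioned above) can be passed directly as the backbone:

>>> backbone = mobilenetv2(class_num=21, is_training=True)
>>> net = SSD300(backbone, class_num=21, is_training=True)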

tinyms.model.cycle_gan(G_A, G_B)[source]

Get Cycle GAN network.

Parameters
  • G_A (layers.Layer) – The generator network that maps domain A to domain B; currently its architecture should be one of [resnet, unet].

  • G_B (layers.Layer) – The generator network that maps domain B to domain A; currently its architecture should be one of [resnet, unet].

Returns

Cycle GAN instance.

Examples

>>> gan_net = cycle_gan(G_A, G_B)
tinyms.model.cycle_gan_infer(g_model='resnet')[source]

Get Cycle GAN network for predict.

Parameters
g_model (str) – The generator architecture to use for prediction; currently it should be one of ['resnet', 'unet']. Default: 'resnet'.

Returns

Cycle GAN instance.

Examples

>>> gan_net = cycle_gan_infer(g_model='resnet')