tinyms.callbacks

Callback-related classes and functions for the model training phase.
class tinyms.callbacks.LossTimeMonitor(lr_init=None)

Monitor loss and time.

- Parameters
lr_init (numpy.ndarray) – Training learning rate. Default: None.
- Returns
None
Examples
>>> from tinyms import Tensor
>>> from tinyms.callbacks import LossTimeMonitor
>>>
>>> LossTimeMonitor(lr_init=Tensor([0.05] * 100).asnumpy())
class tinyms.callbacks.LossTimeMonitorV2

Monitor loss and time, version 2.0. Unlike LossTimeMonitor, this version does not show the learning rate.
- Returns
None
Examples
>>> from tinyms.callbacks import LossTimeMonitorV2
>>>
>>> LossTimeMonitorV2()
class tinyms.callbacks.BertLossCallBack(dataset_size=1)

Monitor the loss during training. If the loss is NAN or INF, training is terminated.

- Parameters
dataset_size (int) – Number of steps in one epoch, used to compute the epoch progress when printing the loss. Default: 1.
- Returns
None
Examples
>>> from tinyms.callbacks import BertLossCallBack
>>>
>>> BertLossCallBack(dataset_size=1)
class tinyms.callbacks.Callback

Abstract base class used to build a callback class. Callbacks are context managers that are entered and exited when passed into the Model. You can use this mechanism to initialize and release resources automatically.

Callback functions execute some operations in the current step or epoch.

It holds the information of the model, such as network, train_network, epoch_num, batch_num, loss_fn, optimizer, parallel_mode, device_number, list_callback, cur_epoch_num, cur_step_num, dataset_sink_mode, net_outputs and so on.
Examples
>>> from mindspore import Model, nn
>>> from mindspore.train.callback import Callback
>>> class Print_info(Callback):
...     def step_end(self, run_context):
...         cb_params = run_context.original_args()
...         print("step_num: ", cb_params.cur_step_num)
>>>
>>> print_cb = Print_info()
>>> # create_custom_dataset and Net are user-defined helpers
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> model.train(1, dataset, callbacks=print_cb)
step_num: 1
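Because callbacks are entered and exited as context managers, begin and end are natural places to acquire and release resources. A minimal sketch, with a hypothetical log-file path:

from tinyms.callbacks import Callback

class FileLogger(Callback):
    """Open a log file when training begins and close it when training ends."""
    def __init__(self, log_path="./train.log"):  # hypothetical path
        super(FileLogger, self).__init__()
        self.log_path = log_path
        self.log_file = None

    def begin(self, run_context):
        # Called once before the network starts executing: acquire the resource.
        self.log_file = open(self.log_path, "w")

    def step_end(self, run_context):
        cb_params = run_context.original_args()
        self.log_file.write("step {}: loss {}\n".format(
            cb_params.cur_step_num, cb_params.net_outputs))

    def end(self, run_context):
        # Called once after training has finished: release the resource.
        if self.log_file is not None:
            self.log_file.close()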
begin(run_context)

Called once before the network starts executing.

- Parameters
run_context (RunContext) – Includes some information about the model.

end(run_context)

Called once after network training has finished.

- Parameters
run_context (RunContext) – Includes some information about the model.

epoch_begin(run_context)

Called at the beginning of each epoch.

- Parameters
run_context (RunContext) – Includes some information about the model.

epoch_end(run_context)

Called at the end of each epoch.

- Parameters
run_context (RunContext) – Includes some information about the model.

step_begin(run_context)

Called at the beginning of each step.

- Parameters
run_context (RunContext) – Includes some information about the model.

step_end(run_context)

Called at the end of each step.

- Parameters
run_context (RunContext) – Includes some information about the model.
class tinyms.callbacks.LossMonitor(per_print_times=1)

Monitor the loss in training.

If the loss is NAN or INF, it will terminate training.

Note
If per_print_times is 0, the loss is not printed.

- Parameters
per_print_times (int) – Print the loss every per_print_times steps. Default: 1.
- Raises
ValueError – If per_print_times is not an integer or less than zero.
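A minimal usage sketch, assuming a compiled Model named model and a dataset named dataset:

>>> from tinyms.callbacks import LossMonitor
>>>
>>> # Print the loss every 100 steps.
>>> model.train(1, dataset, callbacks=[LossMonitor(per_print_times=100)])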
step_end(run_context)

Print the training loss at the end of a step.
- Parameters
run_context (RunContext) – Context of the train running.
class tinyms.callbacks.TimeMonitor(data_size=None)

Monitor the time in training.

- Parameters
data_size (int) – The number of steps between each print of timing information. If the program obtains batch_num during training, data_size is set to batch_num; otherwise the given data_size is used. Default: None.
- Raises
ValueError – If data_size is not a positive integer.
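A minimal usage sketch, assuming a compiled Model named model and a dataset named dataset:

>>> from tinyms.callbacks import TimeMonitor
>>>
>>> # Print epoch and step timing, using the dataset size as the interval.
>>> model.train(1, dataset, callbacks=[TimeMonitor(data_size=dataset.get_dataset_size())])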
epoch_begin(run_context)

Record the time at the beginning of an epoch.
- Parameters
run_context (RunContext) – Context of the process running.
epoch_end(run_context)

Print the elapsed time at the end of an epoch.
- Parameters
run_context (RunContext) – Context of the process running.
class tinyms.callbacks.ModelCheckpoint(prefix='CKP', directory=None, config=None)

The checkpoint callback class.

It works with the training process to save the model and network parameters during and after training.
Note
In the distributed training scenario, please specify different directories for each training process to save the checkpoint file; otherwise, the training may fail (see the sketch after the parameter list below).
- Parameters
prefix (str) – The prefix name of checkpoint files. Default: “CKP”.
directory (str) – The folder path where the checkpoint files will be saved. By default, the files are saved in the current working directory. Default: None.
config (CheckpointConfig) – Checkpoint strategy configuration. Default: None.
- Raises
ValueError – If the prefix is invalid.
TypeError – If the config is not CheckpointConfig type.
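For the distributed scenario described in the note above, one way to give each training process its own directory is to key it by rank. A minimal sketch, assuming the distributed communication has been initialized:

import os
from mindspore.communication import get_rank
from tinyms.callbacks import ModelCheckpoint, CheckpointConfig

# Each training process saves its checkpoints to a rank-specific folder.
rank_id = get_rank()
ckpt_dir = os.path.join("./checkpoint", "rank_{}".format(rank_id))
config = CheckpointConfig(save_checkpoint_steps=100, keep_checkpoint_max=5)
ckpt_cb = ModelCheckpoint(prefix="CKP", directory=ckpt_dir, config=config)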
end(run_context)

Save the last checkpoint after training has finished.
- Parameters
run_context (RunContext) – Context of the train running.
property latest_ckpt_file_name

Return the latest checkpoint path and file name.
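For example, after training you can query the most recently written file, assuming the ModelCheckpoint instance ckpt_cb from the sketch above; the printed path is illustrative:

>>> print(ckpt_cb.latest_ckpt_file_name)
./checkpoint/rank_0/CKP-1_100.ckpt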
step_end(run_context)

Save the checkpoint at the end of a step.
- Parameters
run_context (RunContext) – Context of the train running.
class tinyms.callbacks.SummaryCollector(summary_dir, collect_freq=10, collect_specified_data=None, keep_default_action=True, custom_lineage_data=None, collect_tensor_freq=None, max_file_size=None, export_options=None)

SummaryCollector can help you collect some common information.

It can help you collect the loss, learning rate, computational graph and so on. SummaryCollector also enables the summary operator to collect data to summary files.
Note
Multiple SummaryCollector instances in callback list are not allowed.
Not all information is collected at the training phase or at the eval phase.
SummaryCollector always records the data collected by the summary operator.
SummaryCollector only supports Linux systems.
- Parameters
summary_dir (str) – The collected data will be persisted to this directory. If the directory does not exist, it will be created automatically.
collect_freq (int) – Set the frequency of data collection; it should be greater than zero, and the unit is the step. If a frequency is set, data will be collected when (current steps % freq) equals 0, and the first step will always be collected. It is important to note that if data sink mode is used, the unit becomes the epoch. It is not recommended to collect data too frequently, which can affect performance. Default: 10.
collect_specified_data (Union[None, dict]) –
Perform custom operations on the collected data. By default, if set to None, all data is collected as the default behavior. You can customize the collected data with a dictionary. For example, you can set {‘collect_metric’: False} to control not collecting metrics. The data that supports control is shown below. Default: None.
collect_metric (bool): Whether to collect training metrics, currently only the loss is collected. The first output will be treated as the loss and it will be averaged. Optional: True/False. Default: True.
collect_graph (bool): Whether to collect the computational graph. Currently, only training computational graph is collected. Optional: True/False. Default: True.
collect_train_lineage (bool): Whether to collect lineage data for the training phase, this field will be displayed on the lineage page of Mindinsight. Optional: True/False. Default: True.
collect_eval_lineage (bool): Whether to collect lineage data for the evaluation phase, this field will be displayed on the lineage page of Mindinsight. Optional: True/False. Default: True.
collect_input_data (bool): Whether to collect dataset for each training. Currently only image data is supported. If there are multiple columns of data in the dataset, the first column should be image data. Optional: True/False. Default: True.
collect_dataset_graph (bool): Whether to collect dataset graph for the training phase. Optional: True/False. Default: True.
histogram_regular (Union[str, None]): Collect weights and biases for the parameter distribution page displayed in MindInsight. This field accepts a regular expression string to control which parameters to collect. It is not recommended to collect too many parameters at once, as this can affect performance. Note that if you collect too many parameters and run out of memory, the training will fail. Default: None, which means only the first five parameters are collected.
keep_default_action (bool) – This field affects the collection behavior of the ‘collect_specified_data’ field. True: it means that after specified data is set, non-specified data is collected as the default behavior. False: it means that after specified data is set, only the specified data is collected, and the others are not collected. Optional: True/False, Default: True.
custom_lineage_data (Union[dict, None]) – Allows you to customize the data and present it on the MindInsight lineage page. In the custom data, the type of the key supports str, and the type of the value supports str, int and float. Default: None, which means there is no custom data.
collect_tensor_freq (Optional[int]) – The same semantics as collect_freq, but controls TensorSummary only. Because TensorSummary data is too large to be compared with other summary data, this parameter is used to reduce its collection. By default, the maximum number of steps for collecting TensorSummary data is 20, but it will not exceed the number of steps for collecting other summary data. For example, given collect_freq=10, when the total number of steps is 600, TensorSummary will be collected for 20 steps, while other summary data for 61 steps; but when the total number of steps is 20, both TensorSummary and other summary data will be collected for 3 steps. Also note that in parallel mode, the total steps will be split evenly, which will affect the number of steps for which TensorSummary is collected. Default: None, which means to follow the behavior described above.
max_file_size (Optional[int]) – The maximum size in bytes of each file that can be written to the disk. For example, to write not larger than 4GB, specify max_file_size=4*1024**3. Default: None, which means no limit.
export_options (Union[None, dict]) –
Perform custom operations on the export data. Note that the size of exported files is not limited by max_file_size. You can customize the export data with a dictionary. For example, you can set {'tensor_format': 'npy'} to export tensors as npy files (see the sketch after the Examples below). The data that supports control is shown below. Default: None, which means that the data is not exported.
tensor_format (Union[str, None]): Customize the export tensor format. Supports [“npy”, None]. Default: None, it means that the tensor is not exported.
npy: export tensor as npy file.
- Raises
ValueError – If the parameter value is not expected.
TypeError – If the parameter type is not expected.
RuntimeError – If an error occurs during data collection.
Examples
>>> import mindspore.nn as nn
>>> from mindspore import context
>>> from mindspore.train.callback import SummaryCollector
>>> from mindspore import Model
>>> from mindspore.nn import Accuracy
>>>
>>> if __name__ == '__main__':
...     # If the device_target is GPU, set the device_target to "GPU"
...     context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
...     mnist_dataset_dir = '/path/to/mnist_dataset_directory'
...     # The detail of create_dataset method shown in model_zoo.official.cv.lenet.src.dataset.py
...     ds_train = create_dataset(mnist_dataset_dir, 32)
...     # The detail of LeNet5 shown in model_zoo.official.cv.lenet.src.lenet.py
...     network = LeNet5(10)
...     net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
...     net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
...     model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()}, amp_level="O2")
...
...     # Simple usage:
...     summary_collector = SummaryCollector(summary_dir='./summary_dir')
...     model.train(1, ds_train, callbacks=[summary_collector], dataset_sink_mode=False)
...
...     # Do not collect metric and collect the first layer parameter, others are collected by default
...     specified = {'collect_metric': False, 'histogram_regular': '^conv1.*'}
...     summary_collector = SummaryCollector(summary_dir='./summary_dir', collect_specified_data=specified)
...     model.train(1, ds_train, callbacks=[summary_collector], dataset_sink_mode=False)
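As described under export_options, collected tensors can additionally be exported as npy files. A minimal sketch, reusing model and ds_train from the example above:

>>> from mindspore.train.callback import SummaryCollector
>>>
>>> # Export TensorSummary data as npy files; exported files are not limited by max_file_size.
>>> summary_collector = SummaryCollector(summary_dir='./summary_dir',
...                                      export_options={'tensor_format': 'npy'})
>>> model.train(1, ds_train, callbacks=[summary_collector], dataset_sink_mode=False)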
class tinyms.callbacks.CheckpointConfig(save_checkpoint_steps=1, save_checkpoint_seconds=0, keep_checkpoint_max=5, keep_checkpoint_per_n_minutes=0, integrated_save=True, async_save=False, saved_network=None, append_info=None, enc_key=None, enc_mode='AES-GCM')

The configuration of model checkpoint.

Note
During the training process, if the dataset is transmitted through the data channel, it is suggested to set save_checkpoint_steps to an integer multiple of loop_size. Otherwise, the time to save the checkpoint may be biased. It is recommended to set only one save strategy and one keep strategy at the same time. If both save_checkpoint_steps and save_checkpoint_seconds are set, save_checkpoint_seconds will be invalid. If both keep_checkpoint_max and keep_checkpoint_per_n_minutes are set, keep_checkpoint_per_n_minutes will be invalid.
- Parameters
save_checkpoint_steps (int) – Steps to save checkpoint. Default: 1.
save_checkpoint_seconds (int) – Seconds to save checkpoint. Can’t be used with save_checkpoint_steps at the same time. Default: 0.
keep_checkpoint_max (int) – Maximum number of checkpoint files that can be saved. Default: 5.
keep_checkpoint_per_n_minutes (int) – Save the checkpoint file every keep_checkpoint_per_n_minutes minutes. Can’t be used with keep_checkpoint_max at the same time. Default: 0.
integrated_save (bool) – Whether to merge and save the split Tensor in the automatic parallel scenario. Integrated save function is only supported in automatic parallel scene, not supported in manual parallel. Default: True.
async_save (bool) – Whether asynchronous execution saves the checkpoint to a file. Default: False.
saved_network (Cell) – Network to be saved in checkpoint file. If the saved_network has no relation with the network in training, the initial value of saved_network will be saved. Default: None.
append_info (list) – The information to be saved to the checkpoint file. Supports "epoch_num", "step_num", and dict; the keys of the dict must be str, and the values must be one of int, float, or bool (see the sketch after the Examples below). Default: None.
enc_key (Union[None, bytes]) – Byte type key used for encryption. If the value is None, the encryption is not required. Default: None.
enc_mode (str) – This parameter is valid only when enc_key is not set to None. Specifies the encryption mode, currently supports ‘AES-GCM’ and ‘AES-CBC’. Default: ‘AES-GCM’.
- Raises
ValueError – If an input parameter is not of the correct type.
Examples
>>> from mindspore import Model, nn
>>> from mindspore.common.initializer import Normal
>>> from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
>>>
>>> class LeNet5(nn.Cell):
...     def __init__(self, num_class=10, num_channel=1):
...         super(LeNet5, self).__init__()
...         self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
...         self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
...         self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
...         self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.flatten = nn.Flatten()
...
...     def construct(self, x):
...         x = self.max_pool2d(self.relu(self.conv1(x)))
...         x = self.max_pool2d(self.relu(self.conv2(x)))
...         x = self.flatten(x)
...         x = self.relu(self.fc1(x))
...         x = self.relu(self.fc2(x))
...         x = self.fc3(x)
...         return x
>>>
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
>>> data_path = './MNIST_Data'
>>> # create_dataset is a user-defined helper
>>> dataset = create_dataset(data_path)
>>> config = CheckpointConfig(saved_network=net)
>>> ckpoint_cb = ModelCheckpoint(prefix='LeNet5', directory='./checkpoint', config=config)
>>> model.train(10, dataset, callbacks=ckpoint_cb)
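Following the note above on combining save and keep strategies, a sketch of a configuration that saves every 100 steps, keeps at most 10 files, and appends extra information to each checkpoint (the "version" key is a hypothetical example):

>>> from tinyms.callbacks import CheckpointConfig
>>>
>>> # save_checkpoint_steps is set, so save_checkpoint_seconds stays at its
>>> # default of 0 and is inactive, as recommended in the note above.
>>> config = CheckpointConfig(save_checkpoint_steps=100,
...                           keep_checkpoint_max=10,
...                           append_info=["epoch_num", "step_num", {"version": 1}])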
property append_dict

Get the value of append_dict.

property async_save

Get the value of _async_save.

property enc_key

Get the value of _enc_key.

property enc_mode

Get the value of _enc_mode.

property integrated_save

Get the value of _integrated_save.

property keep_checkpoint_max

Get the value of _keep_checkpoint_max.

property keep_checkpoint_per_n_minutes

Get the value of _keep_checkpoint_per_n_minutes.

property save_checkpoint_seconds

Get the value of _save_checkpoint_seconds.

property save_checkpoint_steps

Get the value of _save_checkpoint_steps.

property saved_network

Get the value of _saved_network.
class tinyms.callbacks.RunContext(original_args)

Provide information about the model.

Provide information about the original request to the model function. Callback objects can stop the loop by calling request_stop() of run_context (see the sketch at the end of this class).

- Parameters
original_args (dict) – Holds the related information of the model.
get_stop_requested()

Return whether a stop is requested or not.
- Returns
bool, if true, model.train() stops iterations.
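A sketch of a callback that uses run_context to stop training early; the loss threshold is hypothetical, and net_outputs is assumed to be a scalar loss Tensor:

from tinyms.callbacks import Callback

class StopAtLoss(Callback):
    """Request a stop once the training loss falls below a threshold."""
    def __init__(self, threshold=0.05):  # hypothetical threshold
        super(StopAtLoss, self).__init__()
        self.threshold = threshold

    def step_end(self, run_context):
        cb_params = run_context.original_args()
        loss = cb_params.net_outputs  # assumed to be a scalar loss Tensor
        if loss is not None and float(loss.asnumpy()) < self.threshold:
            # model.train() stops iterating after this step.
            run_context.request_stop()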
class tinyms.callbacks.LearningRateScheduler(learning_rate_function)

Change the learning rate during training.

- Parameters
learning_rate_function (Function) – The function that defines how to change the learning rate during training.
Examples
>>> from mindspore import Model
>>> from mindspore.train.callback import LearningRateScheduler
>>> import mindspore.nn as nn
...
>>> def learning_rate_function(lr, cur_step_num):
...     if cur_step_num % 1000 == 0:
...         lr = lr * 0.1
...     return lr
...
>>> lr = 0.1
>>> momentum = 0.9
>>> # Net and create_custom_dataset are user-defined helpers
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> optim = nn.Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> model = Model(net, loss_fn=loss, optimizer=optim)
...
>>> dataset = create_custom_dataset("custom_dataset_path")
>>> model.train(1, dataset, callbacks=[LearningRateScheduler(learning_rate_function)],
...             dataset_sink_mode=False)
step_end(run_context)

Change the learning rate at the end of a step.

- Parameters
run_context (RunContext) – Context of the train running.