tinyms.metrics

Metrics module provides functions to measure the performance of the machine learning models on the evaluation dataset. It’s used to choose the best model.

class tinyms.metrics.AUCMetric[source]

Calculates the AUC value. Implements the AUC metric method.

Computes the Area Under the Curve (AUC) using the trapezoidal rule. This is a general function that works with any set of points on a curve; it is typically used to compute the area under the ROC curve.

Parameters
  • x (Union[np.array, list]) – np.array with false positive rates from the ROC curve (fpr). If multiclass, this is a list of such np.array, one for each class. The shape is \((N)\).

  • y (Union[np.array, list]) – np.array with true positive rates from the ROC curve (tpr). If multiclass, this is a list of such np.array, one for each class. The shape is \((N)\).

  • reorder (bool) – If True, assume that the curve is ascending in the case of ties, as for an ROC curve. If the curve is non-ascending, the result will be wrong. Default: False.

Returns

The computed result.

Return type

area (float)

Examples

>>> from tinyms.metrics import AUCMetric
>>>
>>> metric = AUCMetric()
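
The example above only constructs the metric. The trapezoidal-rule computation it performs can be sketched with plain NumPy (a minimal sketch over hypothetical fpr/tpr points, independent of the class API):

>>> import numpy as np
>>>
>>> fpr = np.array([0.0, 0.5, 1.0])   # hypothetical false positive rates
>>> tpr = np.array([0.0, 0.75, 1.0])  # hypothetical true positive rates
>>> # area under the piecewise-linear curve, trapezoid by trapezoid
>>> float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))
0.625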
clear()[source]

Clear the internal evaluation result.

tinyms.metrics.names()[source]

Gets the names of the metric methods.

Returns

List, the name list of metric methods.
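
A minimal usage sketch (the exact names returned depend on the TinyMS version):

>>> from tinyms.metrics import names
>>>
>>> metric_names = names()   # e.g. entries such as 'accuracy' or 'precision'
>>> isinstance(metric_names, list)
True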

tinyms.metrics.get_metric_fn(name, *args, **kwargs)[source]

Gets the metric method based on the input name.

Parameters
  • name (str) – The name of metric method. Refer to the ‘__factory__’ object for the currently supported metrics.

  • args – Arguments for the metric function.

  • kwargs – Keyword arguments for the metric function.

Returns

Metric object, class instance of the metric method.

Examples

>>> from mindspore import nn
>>>
>>> metric = nn.get_metric_fn('precision', eval_type='classification')
class tinyms.metrics.Accuracy(eval_type='classification')[source]

Calculates the accuracy for classification and multilabel data.

The accuracy class creates two local variables, the correct number and the total number that are used to compute the frequency with which y_pred matches y. This frequency is ultimately returned as the accuracy: an idempotent operation that simply divides the correct number by the total number.

\[\text{accuracy} =\frac{\text{true_positive} + \text{true_negative}} {\text{true_positive} + \text{true_negative} + \text{false_positive} + \text{false_negative}}\]
Parameters

eval_type (str) – The metric to calculate the accuracy over a dataset; supports ‘classification’ (single-label) and ‘multilabel’ (multilabel classification). Default: ‘classification’.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]), mindspore.float32)
>>> y = Tensor(np.array([1, 0, 1]), mindspore.float32)
>>> metric = nn.Accuracy('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> accuracy = metric.eval()
>>> print(accuracy)
0.6666666666666666
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the accuracy.

Returns

Float, the computed result.

Raises

RuntimeError – If the sample size is 0.

update(*inputs)[source]

Updates the internal evaluation result \(y_{pred}\) and \(y\).

Parameters

inputs – Input y_pred and y. y_pred and y are a Tensor, a list or an array. For the ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate the positive category. The shape of y_pred and y are both \((N, C)\).

Raises

ValueError – If the number of inputs is not 2.
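
For the one-hot label format described in update above, a minimal sketch (reusing the data from the earlier example with y = [1, 0, 1] rewritten as one-hot rows; the result should match the index-label form):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]), mindspore.float32)
>>> y_onehot = Tensor(np.array([[0, 1], [1, 0], [0, 1]]), mindspore.float32)
>>> metric = nn.Accuracy('classification')
>>> metric.clear()
>>> metric.update(x, y_onehot)
>>> print(metric.eval())
0.6666666666666666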

class tinyms.metrics.MAE[source]

Calculates the mean absolute error (MAE).

Creates a criterion that measures the MAE between each element in the input: \(x\) and the target: \(y\).

\[\text{MAE} = \frac{\sum_{i=1}^n \|y_i - x_i\|}{n}\]

Here \(x_i\) is the prediction, \(y_i\) is the true value, and \(n\) is the number of elements.

Note

The method update must be called with the form update(y_pred, y).

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.7, 0.9]), mindspore.float32)
>>> error = nn.MAE()
>>> error.clear()
>>> error.update(x, y)
>>> result = error.eval()
>>> print(result)
0.037499990314245224
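
The printed value can be checked against the formula with plain NumPy (rounded to suppress float noise):

>>> import numpy as np
>>>
>>> pred = np.array([0.1, 0.2, 0.6, 0.9])   # x from the example (predictions)
>>> true = np.array([0.1, 0.25, 0.7, 0.9])  # y from the example (true values)
>>> round(float(np.abs(true - pred).mean()), 4)
0.0375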
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the mean absolute error (MAE).

Returns

Float, the computed result.

Raises

RuntimeError – If the total number of samples is 0.

update(*inputs)[source]

Updates the internal evaluation result \(y_{pred}\) and \(y\).

Parameters

inputs – Input y_pred and y for calculating the MAE. y_pred and y are both N-D tensors with the same shape.

Raises

ValueError – If the number of inputs is not 2.

class tinyms.metrics.MSE[source]

Measures the mean squared error (MSE).

Creates a criterion that measures the MSE (squared L2 norm) between each element in the input: \(x\) and the target: \(y\).

\[\text{MSE}(x,\ y) = \frac{\sum_{i=1}^n(y_i - x_i)^2}{n}\]

where \(n\) is batch size.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.5, 0.9]), mindspore.float32)
>>> error = nn.MSE()
>>> error.clear()
>>> error.update(x, y)
>>> result = error.eval()
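
The example omits the printed result; a quick NumPy check of the formula yields it (rounded to suppress float noise):

>>> import numpy as np
>>>
>>> pred = np.array([0.1, 0.2, 0.6, 0.9])   # x from the example
>>> true = np.array([0.1, 0.25, 0.5, 0.9])  # y from the example
>>> round(float(((true - pred) ** 2).mean()), 6)
0.003125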
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the mean squared error (MSE).

Returns

Float, the computed result.

Raises

RuntimeError – If the number of samples is 0.

update(*inputs)[source]

Updates the internal evaluation result \(y_{pred}\) and \(y\).

Parameters

inputs – Input y_pred and y for calculating the MSE. y_pred and y are both N-D tensors with the same shape.

Raises

ValueError – If the number of inputs is not 2.

class tinyms.metrics.Metric[source]

Base class of metric.

Note

For examples of subclasses, please refer to the definition of class MAE, Recall etc.

abstract clear()[source]

An interface describing the behavior of clearing the internal evaluation result.

Note

All subclasses must override this interface.

abstract eval()[source]

An interface describing the behavior of computing the evaluation result.

Note

All subclasses must override this interface.

property indexes

_indexes is a private attribute; you can retrieve it through self.indexes.

set_indexes(indexes)[source]

_indexes is a private attribute, and you can modify it with this function. This allows you to determine the order of logits and labels to be calculated in the inputs, especially when you call the method update within this metric.

Note

It has been applied in subclasses of Metric, e.g. Accuracy, BleuScore, ConfusionMatrix, CosineSimilarity, MAE, and MSE.

Parameters

indexes (List(int)) – The order of logits and labels to be rearranged.

Outputs:

Metric, its original Class instance.

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> y2 = Tensor(np.array([0, 0, 1]))
>>> metric = nn.Accuracy('classification').set_indexes([0, 2])
>>> metric.clear()
>>> metric.update(x, y, y2)
>>> accuracy = metric.eval()
>>> print(accuracy)
0.3333333333333333
abstract update(*inputs)[source]

An interface describing the behavior of updating the internal evaluation result.

Note

All subclasses must override this interface.

Parameters

inputs – A variable-length input argument list.

tinyms.metrics.rearrange_inputs(func)[source]

This decorator is used to rearrange the inputs according to the _indexes attribute, which is specified by the set_indexes method.

Examples

>>> from tinyms.metrics import rearrange_inputs
>>>
>>> class RearrangeInputsExample:
...     def __init__(self):
...         self._indexes = None
...
...     @property
...     def indexes(self):
...         return getattr(self, '_indexes', None)
...
...     def set_indexes(self, indexes):
...         self._indexes = indexes
...         return self
...
...     @rearrange_inputs
...     def update(self, *inputs):
...         return inputs
>>>
>>> rearrange_inputs_example = RearrangeInputsExample().set_indexes([1, 0])
>>> outs = rearrange_inputs_example.update(5, 9)
>>> print(outs)
(9, 5)
Parameters

func (Callable) – A candidate function to be wrapped whose input will be rearranged.

Returns

Callable, the wrapped function whose inputs are rearranged according to _indexes before being passed to func.

class tinyms.metrics.Precision(eval_type='classification')[source]

Calculates precision for classification and multilabel data.

The precision function creates two local variables, \(\text{true_positive}\) and \(\text{false_positive}\), that are used to compute the precision. This value is ultimately returned as the precision, an idempotent operation that simply divides \(\text{true_positive}\) by the sum of \(\text{true_positive}\) and \(\text{false_positive}\).

\[\text{precision} = \frac{\text{true_positive}}{\text{true_positive} + \text{false_positive}}\]

Note

In the multi-label cases, the elements of \(y\) and \(y_{pred}\) must be 0 or 1.

Parameters

eval_type (str) – Metric to calculate accuracy over a dataset, for classification or multilabel. Default: ‘classification’.

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Precision('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> precision = metric.eval()
>>> print(precision)
[0.5 1. ]
clear()[source]

Clears the internal evaluation result.

eval(average=False)[source]

Computes the precision.

Parameters

average (bool) – Specifies whether to calculate the average precision. Default: False.

Returns

Float, the computed result.
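
Continuing the example above, a hedged sketch of average=True (assuming eval simply averages the per-class values [0.5, 1.0]):

>>> average_precision = metric.eval(average=True)
>>> print(average_precision)
0.75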

update(*inputs)[source]

Updates the internal evaluation result with y_pred and y.

Parameters

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. For ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate positive category. The shape of y_pred and y are both \((N, C)\).

Raises

ValueError – If the number of inputs is not 2.

class tinyms.metrics.HausdorffDistance(distance_metric='euclidean', percentile=None, directed=False, crop=True)[source]

Calculates the Hausdorff distance. The Hausdorff distance is the maximum distance from a point in one set to the nearest point in the other set, computed in both directions. Given two feature sets A and B, the Hausdorff distance between A and B is defined as follows:

\[H(A, B) = \text{max}[h(A, B), h(B, A)]\]

\[h(A, B) = \underset{a \in A}{\text{max}}\{\underset{b \in B}{\text{min}} \lVert a - b \rVert \}\]

\[h(B, A) = \underset{b \in B}{\text{max}}\{\underset{a \in A}{\text{min}} \lVert b - a \rVert \}\]
Parameters
  • distance_metric (string) – The parameter of calculating Hausdorff distance supports three measurement methods, “euclidean”, “chessboard” or “taxicab”. Default: “euclidean”.

  • percentile (float) – A floating point number between 0 and 100. Specify the percentile parameter to get the corresponding percentile of the Hausdorff distance rather than the maximum. Default: None.

  • directed (bool) – Whether to calculate the directed Hausdorff distance. By default the non-directed (symmetric) Hausdorff distance is returned. Default: False.

  • crop (bool) – Crop input images and only keep the foregrounds. In order to maintain two inputs’ shapes, here the bounding box is achieved by (y_pred | y) which represents the union set of two images. Default: True.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]))
>>> y = Tensor(np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]))
>>> metric = nn.HausdorffDistance()
>>> metric.clear()
>>> metric.update(x, y, 0)
>>> mean_average_distance = metric.eval()
>>> print(mean_average_distance)
1.4142135623730951
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculates the non-directed or directed Hausdorff distance.

Returns

A float with the Hausdorff distance.

Raises

RuntimeError – If the update method is not called first.

update(*inputs)[source]

Updates the internal evaluation result ‘y_pred’, ‘y’ and ‘label_idx’.

Parameters

inputs – Input ‘y_pred’, ‘y’ and ‘label_idx’. ‘y_pred’ and ‘y’ are a Tensor or numpy.ndarray: ‘y_pred’ is the predicted binary image and ‘y’ is the actual binary image. ‘label_idx’ is an int specifying the label to be considered.

Raises

ValueError – If the number of inputs is not 3.

class tinyms.metrics.Recall(eval_type='classification')[source]

Calculates recall for classification and multilabel data.

The recall class creates two local variables, \(\text{true_positive}\) and \(\text{false_negative}\), that are used to compute the recall. This value is ultimately returned as the recall, an idempotent operation that simply divides \(\text{true_positive}\) by the sum of \(\text{true_positive}\) and \(\text{false_negative}\).

\[\text{recall} = \frac{\text{true_positive}}{\text{true_positive} + \text{false_negative}}\]

Note

In the multi-label cases, the elements of \(y\) and \(y_{pred}\) must be 0 or 1.

Parameters

eval_type (str) – The metric to calculate the recall over a dataset, for classification or multilabel. Default: ‘classification’.

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Recall('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> recall = metric.eval()
>>> print(recall)
[1. 0.5]
clear()[source]

Clears the internal evaluation result.

eval(average=False)[source]

Computes the recall.

Parameters

average (bool) – Specifies whether to calculate the average recall. Default: False.

Returns

Float, the computed result.

update(*inputs)[source]

Updates the internal evaluation result with y_pred and y.

Parameters

inputs – Input y_pred and y. y_pred and y are a Tensor, a list or an array. For ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate positive category. The shape of y_pred and y are both \((N, C)\).

Raises

ValueError – If the number of inputs is not 2.

class tinyms.metrics.Fbeta(beta)[source]

Calculates the fbeta score.

Fbeta score is a weighted harmonic mean of precision and recall.

\[F_\beta=\frac{(1+\beta^2) \cdot true\_positive} {(1+\beta^2) \cdot true\_positive +\beta^2 \cdot false\_negative + false\_positive}\]
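
Since \(precision = tp/(tp+fp)\) and \(recall = tp/(tp+fn)\), this is algebraically equivalent to the more familiar precision/recall form:

\[F_\beta=\frac{(1+\beta^2) \cdot precision \cdot recall} {\beta^2 \cdot precision + recall}\]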
Parameters

beta (Union[float, int]) – The weight of precision.

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Fbeta(1)
>>> metric.clear()
>>> metric.update(x, y)
>>> fbeta = metric.eval()
>>> print(fbeta)
[0.66666667 0.66666667]
clear()[source]

Clears the internal evaluation result.

eval(average=False)[source]

Computes the fbeta.

Parameters

average (bool) – Whether to calculate the average Fbeta. Default: False.

Returns

Float, computed result.

update(*inputs)[source]

Updates the internal evaluation result y_pred and y.

Parameters

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.

class tinyms.metrics.BleuScore(n_gram=4, smooth=False)[source]

Calculates BLEU score of machine translated text with one or more references.

Parameters
  • n_gram (int) – The n_gram value ranges from 1 to 4. Default: 4.

  • smooth (bool) – Whether or not to apply smoothing. Default: False.

Raises

ValueError – If the value of n_gram is not in the range [1, 4].

Supported Platforms:

Ascend GPU CPU

Examples

>>> from tinyms.metrics import BleuScore
>>>
>>> candidate_corpus = [['i', 'have', 'a', 'pen', 'on', 'my', 'desk']]
>>> reference_corpus = [[['i', 'have', 'a', 'pen', 'in', 'my', 'desk'],
...                      ['there', 'is', 'a', 'pen', 'on', 'the', 'desk']]]
>>> metric = BleuScore()
>>> metric.clear()
>>> metric.update(candidate_corpus, reference_corpus)
>>> bleu_score = metric.eval()
>>> print(bleu_score)
0.5946035575013605
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the BLEU score.

Returns

A numpy value with the BLEU score.

Raises

RuntimeError – If the update method is not called first.

update(*inputs)[source]

Updates the internal evaluation result with candidate_corpus and reference_corpus.

Parameters

inputs – Input candidate_corpus and reference_corpus, both lists. The candidate_corpus is an iterable of machine-translated corpora, and the reference_corpus is an iterable of iterables of reference corpora.

Raises

ValueError – If the number of inputs is not 2.

class tinyms.metrics.CosineSimilarity(similarity='cosine', reduction='none', zero_diagonal=True)[source]

Computes representation similarity.

Parameters
  • similarity (str) – ‘dot’ or ‘cosine’. Default: ‘cosine’

  • reduction (str) – ‘none’, ‘sum’, ‘mean’ (all along dim -1). Default: ‘none’

  • zero_diagonal (bool) – If True, the diagonals are set to zero. Default: True

Returns

A square matrix (input1, input1) with the similarity scores between all elements. If ‘sum’ or ‘mean’ reduction is used, then returns (b, 1) with the reduced value for each row.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn
>>>
>>> test_data = np.array([[1, 3, 4, 7], [2, 4, 2, 5], [3, 1, 5, 8]])
>>> metric = nn.CosineSimilarity()
>>> metric.clear()
>>> metric.update(test_data)
>>> square_matrix = metric.eval()
>>> print(square_matrix)
[[0.  0.94025615  0.95162452]
 [0.94025615  0.  0.86146098]
 [0.95162452  0.86146098  0.]]
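
Each off-diagonal entry is the cosine similarity between two rows of test_data; for instance, the entry for rows 0 and 1 can be checked directly with NumPy (rounded for comparison):

>>> import numpy as np
>>>
>>> a, b = np.array([1, 3, 4, 7]), np.array([2, 4, 2, 5])
>>> round(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))), 6)
0.940256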
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the cosine similarity square matrix.

Returns

A square matrix.

Raises

RuntimeError – If the update method is not called first.

update(inputs)[source]

Updates the internal evaluation result with ‘input1’.

Parameters

inputs – The input data input1. input1 is a Tensor or numpy.ndarray.

class tinyms.metrics.OcclusionSensitivity(pad_val=0.0, margin=2, n_batch=128, b_box=None)[source]

This function is used to calculate the occlusion sensitivity of the model for a given image. Occlusion sensitivity refers to how the probability of a given prediction changes as parts of the image are occluded.

For a given result, the output probability is the probability of a region.

The higher the value in the output image is, the greater the decline of certainty, indicating that the occluded area is more important in the decision-making process.

Parameters
  • pad_val (float) – The padding value used to fill the occluded part of the image. Default: 0.0.

  • margin (Union[int, Sequence]) – Create a cuboid / cube around the voxel you want to occlude. Default: 2.

  • n_batch (int) – Number of images in a batch before inference. Default: 128.

  • b_box (Sequence) – Bounding box on which to perform the analysis. The output image will also match in size. There should be a minimum and maximum for all dimensions except batch: [min1, max1, min2, max2,...]. If no bounding box is supplied, this will be the same size as the input image. If a bounding box is used, the output image will be cropped to this size. Default: None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> class DenseNet(nn.Cell):
...     def __init__(self):
...         super(DenseNet, self).__init__()
...         w = np.array([[0.1, 0.8, 0.1, 0.1],[1, 1, 1, 1]]).astype(np.float32)
...         b = np.array([0.3, 0.6]).astype(np.float32)
...         self.dense = nn.Dense(4, 2, weight_init=Tensor(w), bias_init=Tensor(b))
...
...     def construct(self, x):
...         return self.dense(x)
>>>
>>> model = DenseNet()
>>> test_data = np.array([[0.1, 0.2, 0.3, 0.4]]).astype(np.float32)
>>> label = np.array(1).astype(np.int32)
>>> metric = nn.OcclusionSensitivity()
>>> metric.clear()
>>> metric.update(model, test_data, label)
>>> score = metric.eval()
>>> print(score)
[0.29999995    0.6    1    0.9]
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the occlusion_sensitivity.

Returns

A numpy ndarray.

Raises

RuntimeError – If the update method is not called first.

update(*inputs)[source]

Updates input, including model, y_pred and label.

Inputs:
  • model (nn.Cell) - classification model to use for inference.

  • y_pred (Union[Tensor, list, np.ndarray]) - image to test. Should be a tensor consisting of 1 batch, can be 2- or 3D.

  • label (Union[int, Tensor]) - classification label to check for changes (normally the true label, but does not have to be).

Raises
  • ValueError – If the number of inputs is not 3.

  • RuntimeError – If the batch size is not 1.

  • RuntimeError – If the number of labels is different from the number of batches.

class tinyms.metrics.F1[source]

Calculates the F1 score. F1 is a special case of Fbeta when beta is 1. Refer to class mindspore.nn.Fbeta for more details.

\[F_1=\frac{2\cdot true\_positive}{2\cdot true\_positive + false\_negative + false\_positive}\]

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.F1()
>>> metric.update(x, y)
>>> result = metric.eval()
>>> print(result)
[0.66666667 0.66666667]
class tinyms.metrics.Dice(smooth=1e-05)[source]

The Dice coefficient is a set similarity metric used to calculate the similarity between two samples. Its value is 1 when the segmentation result is the best and 0 when the segmentation result is the worst. The Dice coefficient indicates the ratio of twice the overlapping area of the two objects to their total area. The function is shown as follows:

\[dice = \frac{2 * (pred \bigcap true)}{pred \bigcup true}\]
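
For two binary masks, the formula (ignoring the smooth term) reduces to a one-line NumPy expression; a minimal sketch with made-up masks:

>>> import numpy as np
>>>
>>> pred = np.array([1, 1, 0, 1])  # hypothetical binary prediction
>>> true = np.array([1, 0, 0, 1])  # hypothetical binary ground truth
>>> float(2 * np.sum(pred * true) / (np.sum(pred) + np.sum(true)))
0.8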
Parameters

smooth (float) – A term added to the denominator to improve numerical stability. Should be greater than 0. Default: 1e-5.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([[0, 1], [1, 0], [0, 1]]))
>>> metric = nn.Dice(smooth=1e-5)
>>> metric.clear()
>>> metric.update(x, y)
>>> dice = metric.eval()
>>> print(dice)
0.20467791371802546
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the Dice.

Returns

Float, the computed result.

Raises

RuntimeError – If the total number of samples is 0.

update(*inputs)[source]

Updates the internal evaluation result \(y_{pred}\) and \(y\).

Parameters

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is the predicted value, y is the true value. The shape of y_pred and y are both \((N, ...)\).

Raises
  • ValueError – If the number of inputs is not 2.

  • RuntimeError – If y_pred and y do not have the same shape.

class tinyms.metrics.ROC(class_num=None, pos_label=None)[source]

Calculates the ROC curve. It is suitable for solving binary classification and multiclass classification problems. In the multiclass case, the values are calculated based on a one-vs-the-rest approach.

Parameters
  • class_num (int) – Integer with the number of classes. For the problem of binary classification, it is not necessary to provide this argument. Default: None.

  • pos_label (int) – Determines the integer of the positive class. For binary problems, it is translated to 1. For multiclass problems, this argument should not be set, as it is iteratively changed in the range [0, num_classes-1]. Default: None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> # 1) binary classification example
>>> x = Tensor(np.array([3, 1, 4, 2]))
>>> y = Tensor(np.array([0, 1, 2, 3]))
>>> metric = nn.ROC(pos_label=2)
>>> metric.clear()
>>> metric.update(x, y)
>>> fpr, tpr, thresholds = metric.eval()
>>> print(fpr)
[0. 0. 0.33333333 0.66666667 1.]
>>> print(tpr)
[0. 1. 1. 1. 1.]
>>> print(thresholds)
[5 4 3 2 1]
>>>
>>> # 2) multiclass classification example
>>> x = Tensor(np.array([[0.28, 0.55, 0.15, 0.05], [0.10, 0.20, 0.05, 0.05], [0.20, 0.05, 0.15, 0.05],
...                     [0.05, 0.05, 0.05, 0.75]]))
>>> y = Tensor(np.array([0, 1, 2, 3]))
>>> metric = nn.ROC(class_num=4)
>>> metric.clear()
>>> metric.update(x, y)
>>> fpr, tpr, thresholds = metric.eval()
>>> print(fpr)
[array([0., 0., 0.33333333, 0.66666667, 1.]), array([0., 0.33333333, 0.33333333, 1.]),
array([0., 0.33333333, 1.]), array([0., 0., 1.])]
>>> print(tpr)
[array([0., 1., 1., 1., 1.]), array([0., 0., 1., 1.]), array([0., 1., 1.]), array([0., 1., 1.])]
>>> print(thresholds)
[array([1.28, 0.28, 0.2, 0.1, 0.05]), array([1.55, 0.55, 0.2, 0.05]), array([1.15, 0.15, 0.05]),
array([1.75, 0.75, 0.05])]
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the ROC curve.

Returns

A tuple, composed of fpr, tpr, and thresholds.

  • fpr (np.array) - np.array with false positive rates. If multiclass, this is a list of such np.array, one for each class.

  • tpr (np.array) - np.array with true positive rates. If multiclass, this is a list of such np.array, one for each class.

  • thresholds (np.array) - thresholds used for computing false- and true positive rates.

Raises

RuntimeError – If the update method is not called first.

roc(y_pred, y, class_num=None, pos_label=None, sample_weights=None)[source]

update(*inputs)[source]

Update state with predictions and targets.

Parameters

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. In most cases (not strictly), y_pred is a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers.

tinyms.metrics.auc(x, y, reorder=False)[source]

Computes the AUC (Area Under the Curve) using the trapezoidal rule. This is a general function that works with any set of points on a curve; it is typically used to compute the area under the ROC curve.

Parameters
  • x (Union[np.array, list]) – np.array with false positive rates from the ROC curve (fpr). If multiclass, this is a list of such np.array, one for each class. The shape is \((N)\).

  • y (Union[np.array, list]) – np.array with true positive rates from the ROC curve (tpr). If multiclass, this is a list of such np.array, one for each class. The shape is \((N)\).

  • reorder (bool) – If True, assume that the curve is ascending in the case of ties, as for an ROC curve. If the curve is non-ascending, the result will be wrong. Default: False.

Returns

The computed result.

Return type

area (float)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from tinyms.metrics import auc
>>>
>>> y_pred = np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]])
>>> y = np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]])
>>> metric = nn.ROC(pos_label=2)
>>> metric.clear()
>>> metric.update(y_pred, y)
>>> fpr, tpr, thre = metric.eval()
>>> output = auc(fpr, tpr)
>>> print(output)
0.5357142857142857
class tinyms.metrics.TopKCategoricalAccuracy(k)[source]

Calculates the top-k categorical accuracy.

Note

The method update must receive input of the form \((y_{pred}, y)\). If some samples have the same accuracy, the first sample will be chosen.

Parameters

k (int) – Specifies the top-k categorical accuracy to compute.

Raises
  • TypeError – If k is not an int.

  • ValueError – If k is less than 1.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...         [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = nn.TopKCategoricalAccuracy(3)
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
0.6666666666666666
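
The printed value follows from the top-k rule: a sample counts as correct if its label appears among the k largest scores of its row. A direct NumPy check for the example above:

>>> import numpy as np
>>>
>>> scores = np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...                    [0.9, 0.6, 0.2, 0.01, 0.3]])
>>> labels = np.array([2, 0, 1])
>>> top3 = np.argsort(scores, axis=1)[:, -3:]   # indices of the 3 largest scores per row
>>> float(np.mean([label in row for row, label in zip(top3, labels)]))
0.6666666666666666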
clear()[source]

Clear the internal evaluation result.

eval()[source]

Computes the top-k categorical accuracy.

Returns

Float, computed result.

update(*inputs)[source]

Updates the internal evaluation result y_pred and y.

Parameters

inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.

class tinyms.metrics.Top1CategoricalAccuracy[source]

Calculates the top-1 categorical accuracy. This class is a specialized class for TopKCategoricalAccuracy. Refer to TopKCategoricalAccuracy for more details.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...         [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = nn.Top1CategoricalAccuracy()
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
0.0
class tinyms.metrics.Top5CategoricalAccuracy[source]

Calculates the top-5 categorical accuracy. This class is a specialized class for TopKCategoricalAccuracy. Refer to TopKCategoricalAccuracy for more details.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...            [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = nn.Top5CategoricalAccuracy()
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
1.0
class tinyms.metrics.Loss[source]

Calculates the average of the loss. If method ‘update’ is called every \(n\) iterations, the result of evaluation will be:

\[loss = \frac{\sum_{k=1}^{n}loss_k}{n}\]

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array(0.2), mindspore.float32)
>>> loss = nn.Loss()
>>> loss.clear()
>>> loss.update(x)
>>> result = loss.eval()
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculates the average of the loss.

Returns

Float, the average of the loss.

Raises

RuntimeError – If the total number is 0.

update(*inputs)[source]

Updates the internal evaluation result.

Parameters

inputs – Inputs contain only one element: the loss. The dimension of the loss must be 0 or 1.

Raises
  • ValueError – If the length of inputs is not 1.

  • ValueError – If the dimension of loss is not 0 or 1.

class tinyms.metrics.MeanSurfaceDistance(symmetric=False, distance_metric='euclidean')[source]

This function is used to compute the Average Surface Distance from y_pred to y under the default setting. For the Mean Surface Distance (MSD), the mean of the distance vector is taken. This tells us how much, on average, the surface varies between the segmentation and the ground truth (GT).

Parameters
  • distance_metric (string) – The parameter of calculating the surface distance supports three measurement methods, “euclidean”, “chessboard” or “taxicab”. Default: “euclidean”.

  • symmetric (bool) – Whether to calculate the symmetric average surface distance between y_pred and y. If symmetric = True, the average symmetric surface distance between these two inputs will be returned. Default: False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]))
>>> y = Tensor(np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]))
>>> metric = nn.MeanSurfaceDistance(symmetric=False, distance_metric="euclidean")
>>> metric.clear()
>>> metric.update(x, y, 0)
>>> mean_average_distance = metric.eval()
>>> print(mean_average_distance)
0.8047378541243649
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculate mean surface distance.

Returns

A float with mean surface distance.

Raises

RuntimeError – If the update method is not called first.

update(*inputs)[source]

Updates the internal evaluation result ‘y_pred’, ‘y’ and ‘label_idx’.

Parameters

inputs – Input ‘y_pred’, ‘y’ and ‘label_idx’. ‘y_pred’ and ‘y’ are a Tensor or numpy.ndarray: ‘y_pred’ is the predicted binary image and ‘y’ is the actual binary image. ‘label_idx’ is an int or float specifying the label to be considered.

Raises
  • ValueError – If the number of inputs is not 3.

  • TypeError – If the data type of label_idx is not int or float.

  • ValueError – If the value of label_idx is not in y_pred or y.

  • ValueError – If y_pred and y have different shapes.

class tinyms.metrics.RootMeanSquareDistance(symmetric=False, distance_metric='euclidean')[source]

This function is used to compute the Residual Mean Square Distance from y_pred to y under the default setting. For the Residual Mean Square Distance (RMS), the residual between each pair of corresponding points is squared (to remove negative signs), the squares are summed and averaged, and then the square root is taken. Measured in mm.

Parameters
  • distance_metric (string) – The parameter of calculating the surface distance supports three measurement methods, “euclidean”, “chessboard” or “taxicab”. Default: “euclidean”.

  • symmetric (bool) – Whether to calculate the symmetric average surface distance between y_pred and y. If symmetric = True, the average symmetric surface distance between these two inputs will be returned. Default: False.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[3, 0, 1], [1, 3, 0], [1, 0, 2]]))
>>> y = Tensor(np.array([[0, 2, 1], [1, 2, 1], [0, 0, 1]]))
>>> metric = nn.RootMeanSquareDistance(symmetric=False, distance_metric="euclidean")
>>> metric.clear()
>>> metric.update(x, y, 0)
>>> root_mean_square_distance = metric.eval()
>>> print(root_mean_square_distance)
1.0000000000000002
clear()[source]

Clears the internal evaluation result.

eval()[source]

Calculate residual mean square surface distance.

Returns

A float with residual mean square surface distance.

Raises

RuntimeError – If the update method is not called first.

update(*inputs)[source]

Updates the internal evaluation result ‘y_pred’, ‘y’ and ‘label_idx’.

Parameters

inputs – Input ‘y_pred’, ‘y’ and ‘label_idx’. ‘y_pred’ and ‘y’ are a Tensor or numpy.ndarray: ‘y_pred’ is the predicted binary image and ‘y’ is the actual binary image. ‘label_idx’ is an int or float specifying the label to be considered.

Raises
  • ValueError – If the number of inputs is not 3.

  • TypeError – If the data type of label_idx is not int or float.

  • ValueError – If the value of label_idx is not in y_pred or y.

  • ValueError – If y_pred and y have different shapes.

class tinyms.metrics.Perplexity(ignore_label=None)[source]

Computes perplexity. Perplexity is a measure of how well a probability distribution or a model predicts a sample. A low perplexity indicates that the model predicts the sample well. The function is shown as follows:

\[PP(W)=P(w_{1}w_{2}...w_{N})^{-\frac{1}{N}}=\sqrt[N]{\frac{1}{P(w_{1}w_{2}...w_{N})}}\]
Parameters

ignore_label (int) – Index of an invalid label to be ignored when counting. If set to None, all entries are included. Default: None.

Supported Platforms:

Ascend GPU CPU

Note

The method update must be called with the form update(preds, labels).

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Perplexity(ignore_label=None)
>>> metric.clear()
>>> metric.update(x, y)
>>> perplexity = metric.eval()
>>> print(perplexity)
2.231443166940565
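
The printed value matches the formula above: it is the exponential of the average negative log-probability that x assigns to the true labels (0.5, 0.3 and 0.6), assuming the entries of x are used directly as probabilities:

>>> import numpy as np
>>>
>>> probs = np.array([0.5, 0.3, 0.6])   # probability of the true label in each row of x
>>> round(float(np.exp(-np.log(probs).mean())), 6)
2.231443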
clear()[source]

Clears the internal evaluation result.

eval()[source]

Returns the current evaluation result.

Returns

float, the computed result.

Raises

RuntimeError – If the sample size is 0.

update(*inputs)[source]

Updates the internal evaluation result \(preds\) and \(labels\).

Parameters

inputs – Input preds and labels. preds and labels are Tensor, list or numpy.ndarray. preds is the predicted values and labels is the label of the data. The shape of preds and labels are both \((N, C)\).

Raises
  • ValueError – If the number of inputs is not 2.

  • RuntimeError – If preds and labels have different lengths.

  • RuntimeError – If label shape is not equal to pred shape.

class tinyms.metrics.ConfusionMatrix(num_classes, normalize='no_norm', threshold=0.5)[source]

Computes the confusion matrix, which is used to evaluate the performance of a classification model whose output is binary or multiclass. An array of shape [B, C, 4] is returned, where B is the batch size and C is the number of classes to be calculated; the third dimension represents each channel of each sample in the input batch.

If you only want to find confusion matrix, use this class. If you want to find ‘PPV’, ‘TPR’, ‘TNR’, etc., use class ‘mindspore.metrics.ConfusionMatrixMetric’.

Parameters
  • num_classes (int) – Number of classes in the dataset.

  • normalize (str) –

    The parameter of calculating ConfusionMatrix supports four normalization modes. Choose from:

    • ’no_norm’ (None) - No normalization is used. Default: ’no_norm’.

    • ’target’ (str) - Normalization based on target value.

    • ’prediction’ (str) - Normalization based on predicted value.

    • ’all’ (str) - Normalization over the whole matrix.

  • threshold (float) – A threshold, which is used to compare with the input tensor. Default: 0.5.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import nn, Tensor
>>>
>>> x = Tensor(np.array([1, 0, 1, 0]))
>>> y = Tensor(np.array([1, 0, 0, 1]))
>>> metric = nn.ConfusionMatrix(num_classes=2, normalize='no_norm', threshold=0.5)
>>> metric.clear()
>>> metric.update(x, y)
>>> output = metric.eval()
>>> print(output)
[[1. 1.]
 [1. 1.]]
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes confusion matrix.

Returns

numpy.ndarray, the computed result.

update(*inputs)[source]

Update state with y_pred and y.

Parameters

inputs – Input y_pred and y. y_pred and y are a Tensor, a list or an array. y_pred is the predicted value, y is the true value. The shape of y_pred is \((N, C, ...)\) or \((N, ...)\). The shape of y is \((N, ...)\).

Raises

ValueError – If the number of the inputs is not 2.

class tinyms.metrics.ConfusionMatrixMetric(skip_channel=True, metric_name='sensitivity', calculation_method=False, decrease='mean')[source]

Computes measures derived from the confusion matrix for a classification model whose output is binary or multiclass. The confusion-matrix-related measure is calculated from the full-scale tensor, and the average values over the batch, class channel and iteration are collected. This class supports the calculation of all measures listed for the parameter metric_name below.

If you want to calculate measures derived from the confusion matrix, such as ‘PPV’, ‘TPR’ or ‘TNR’, use this class. If you only want to calculate the confusion matrix, please use ‘mindspore.metrics.ConfusionMatrix’.

Parameters
  • skip_channel (bool) – Whether to skip the measurement calculation on the first channel of the predicted output. Default: True.

  • metric_name (str) – The names of indicators are in the following range. Of course, you can also set the industry common aliases for these indicators. Choose from: [“sensitivity”, “specificity”, “precision”, “negative predictive value”, “miss rate”, “fall out”, “false discovery rate”, “false omission rate”, “prevalence threshold”, “threat score”, “accuracy”, “balanced accuracy”, “f1 score”, “matthews correlation coefficient”, “fowlkes mallows index”, “informedness”, “markedness”].

  • calculation_method (bool) – If true, the measurement for each sample will be calculated first. If not, the confusion matrix of all samples will be accumulated first. As for classification task, ‘calculation_method’ should be False. Default: False.

  • decrease (str) – Define the mode to reduce the calculation result of one batch of data. Decrease is used only if calculation_method is True. Default: “mean”. Choose from: [“none”, “mean”, “sum”, “mean_batch”, “sum_batch”, “mean_channel”, “sum_channel”].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from tinyms.metrics import ConfusionMatrixMetric
>>>
>>> metric = ConfusionMatrixMetric(skip_channel=True, metric_name="tpr",
...                                calculation_method=False, decrease="mean")
>>> metric.clear()
>>> x = Tensor(np.array([[[0], [1]], [[1], [0]]]))
>>> y = Tensor(np.array([[[0], [1]], [[0], [1]]]))
>>> metric.update(x, y)
>>> avg_output = metric.eval()
>>> print(avg_output)
[0.5]
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes confusion matrix metric.

Returns

ndarray, the computed result.

update(*inputs)[source]

Update state with predictions and targets.

Inputs:

Input y_pred and y. y_pred and y are a Tensor, a list or an array.

  • y_pred (ndarray) - Input data to compute. It must be in one-hot format and its first dim is batch. The shape of y_pred is \((N, C, ...)\) or \((N, ...)\). For classification tasks, y_pred should have the shape [BN] where N is larger than 1. For segmentation tasks, the shape should be [BNHW] or [BNHWD].

  • y (ndarray) - The ground truth used to compute the measure. It must be in one-hot format and its first dim is batch. The shape of y is \((N, C, ...)\).

Raises

ValueError – If the number of inputs is not 2.