tinyms.metrics¶
Metrics module provides functions to measure the performance of the machine learning models on the evaluation dataset. It’s used to choose the best model.
tinyms.metrics.names()[source]¶
Gets the names of the metric methods.
- Returns
List, the name list of metric methods.
tinyms.metrics.get_metric_fn(name, *args, **kwargs)[source]¶
Gets the metric method based on the input name.
- Parameters
name (str) – The name of the metric method. Refer to the ‘__factory__’ object for the currently supported metrics.
args – Arguments for the metric function.
kwargs – Keyword arguments for the metric function.
- Returns
Metric object, class instance of the metric method.
Examples
>>> metric = nn.get_metric_fn('precision', eval_type='classification')
class tinyms.metrics.Accuracy(eval_type='classification')[source]¶
Calculates the accuracy for classification and multilabel data.
The accuracy class creates two local variables, the correct number and the total number, that are used to compute the frequency with which predictions match labels. This frequency is ultimately returned as the accuracy: an idempotent operation that simply divides the correct number by the total number.
\[\text{accuracy} =\frac{\text{true_positive} + \text{true_negative}} {\text{true_positive} + \text{true_negative} + \text{false_positive} + \text{false_negative}}\]
- Parameters
eval_type (str) – Metric to calculate the accuracy over a dataset, for classification (single-label), and multilabel (multilabel classification). Default: ‘classification’.
Examples
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]), mindspore.float32)
>>> y = Tensor(np.array([1, 0, 1]), mindspore.float32)
>>> metric = nn.Accuracy('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> accuracy = metric.eval()
>>> print(accuracy)
0.6666666666666666
eval()[source]¶
Computes the accuracy.
- Returns
Float, the computed result.
- Raises
RuntimeError – If the sample size is 0.
update(*inputs)[source]¶
Updates the internal evaluation result \(y_{pred}\) and \(y\).
- Parameters
inputs – Input y_pred and y. y_pred and y are a Tensor, a list or an array. For the ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate the positive category. The shape of y_pred and y are both \((N, C)\).
- Raises
ValueError – If the number of the inputs is not 2.
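For intuition, the ‘classification’ accuracy above can be reproduced with plain NumPy. This is a minimal sketch, not the tinyms implementation; the function name classification_accuracy is hypothetical:

```python
import numpy as np

def classification_accuracy(y_pred, y):
    """Sketch of 'classification' accuracy: reduce class scores with
    argmax, then take the fraction of samples whose predicted index
    matches the label index."""
    pred_idx = np.argmax(np.asarray(y_pred), axis=1)
    return float(np.mean(pred_idx == np.asarray(y)))

# Same data as the Accuracy example above
x = [[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]
y = [1, 0, 1]
print(classification_accuracy(x, y))  # 2 of 3 predictions match: 0.666...
```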
class tinyms.metrics.MAE[source]¶
Calculates the mean absolute error.
Creates a criterion that measures the mean absolute error (MAE) between each element in the input: \(x\) and the target: \(y\).
\[\text{MAE} = \frac{\sum_{i=1}^n |y_i - x_i|}{n}\]
Here \(x_i\) is the prediction and \(y_i\) is the true value.
Note
The method update must be called with the form update(y_pred, y).
Examples
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.7, 0.9]), mindspore.float32)
>>> error = nn.MAE()
>>> error.clear()
>>> error.update(x, y)
>>> result = error.eval()
>>> print(result)
0.037499990314245224
eval()[source]¶
Computes the mean absolute error.
- Returns
Float, the computed result.
- Raises
RuntimeError – If the number of the total samples is 0.
update(*inputs)[source]¶
Updates the internal evaluation result \(y_{pred}\) and \(y\).
- Parameters
inputs – Input y_pred and y for calculating the mean absolute error, where y_pred and y are both N-dimensional and have the same shape.
- Raises
ValueError – If the number of inputs is not 2.
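The MAE formula above amounts to one line of NumPy. A hedged sketch (the function name mean_absolute_error is an assumption, not part of tinyms):

```python
import numpy as np

def mean_absolute_error(y_pred, y):
    """Sketch of MAE: the mean of elementwise absolute differences."""
    y_pred = np.asarray(y_pred, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean(np.abs(y - y_pred)))

# Same data as the MAE example above; float64 gives ~0.0375,
# close to the float32 result 0.037499990...
x = [0.1, 0.2, 0.6, 0.9]
y = [0.1, 0.25, 0.7, 0.9]
print(mean_absolute_error(x, y))
```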
class tinyms.metrics.MSE[source]¶
Measures the mean squared error.
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input: \(x\) and the target: \(y\).
\[\text{MSE}(x,\ y) = \frac{\sum_{i=1}^n(y_i - x_i)^2}{n}\]
where \(n\) is the batch size.
Examples
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.5, 0.9]), mindspore.float32)
>>> error = nn.MSE()
>>> error.clear()
>>> error.update(x, y)
>>> result = error.eval()
eval()[source]¶
Computes the mean squared error.
- Returns
Float, the computed result.
- Raises
RuntimeError – If the number of samples is 0.
update(*inputs)[source]¶
Updates the internal evaluation result \(y_{pred}\) and \(y\).
- Parameters
inputs – Input y_pred and y for calculating the mean squared error, where y_pred and y are both N-dimensional and have the same shape.
- Raises
ValueError – If the number of inputs is not 2.
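Like MAE, the MSE formula can be sketched directly in NumPy (an illustration only; mean_squared_error is a hypothetical name):

```python
import numpy as np

def mean_squared_error(y_pred, y):
    """Sketch of MSE: the mean of elementwise squared differences."""
    y_pred = np.asarray(y_pred, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((y - y_pred) ** 2))

# Same data as the MSE example above:
# differences are [0, 0.05, -0.1, 0], so MSE = (0.0025 + 0.01) / 4
x = [0.1, 0.2, 0.6, 0.9]
y = [0.1, 0.25, 0.5, 0.9]
print(mean_squared_error(x, y))
```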
class tinyms.metrics.Metric[source]¶
Base class of metric.
[source]¶ Base class of metric.
Note
For examples of subclasses, please refer to the definitions of classes such as MAE and Recall.
abstract clear()[source]¶
An interface describing the behavior of clearing the internal evaluation result.
Note
All subclasses must override this interface.
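The clear/update/eval contract that Metric subclasses follow can be illustrated with a plain Python class. This is a hypothetical stand-in (it does not inherit from tinyms.metrics.Metric), showing only the protocol:

```python
import numpy as np

class MaxAbsError:
    """Hypothetical metric following the Metric protocol:
    clear() resets state, update() accumulates, eval() returns the result."""
    def __init__(self):
        self.clear()

    def clear(self):
        # Reset the internal evaluation result.
        self._max_err = None

    def update(self, *inputs):
        if len(inputs) != 2:
            raise ValueError('MaxAbsError needs 2 inputs (y_pred, y), got {}'.format(len(inputs)))
        y_pred, y = (np.asarray(v, dtype=np.float64) for v in inputs)
        err = float(np.max(np.abs(y - y_pred)))
        self._max_err = err if self._max_err is None else max(self._max_err, err)

    def eval(self):
        if self._max_err is None:
            raise RuntimeError('Call update() at least once before eval().')
        return self._max_err

metric = MaxAbsError()
metric.clear()
metric.update([0.1, 0.2], [0.1, 0.25])
print(metric.eval())  # largest absolute difference, ~0.05
```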
class tinyms.metrics.Precision(eval_type='classification')[source]¶
Calculates precision for classification and multilabel data.
The precision function creates two local variables, \(\text{true_positive}\) and \(\text{false_positive}\), that are used to compute the precision. This value is ultimately returned as the precision, an idempotent operation that simply divides \(\text{true_positive}\) by the sum of \(\text{true_positive}\) and \(\text{false_positive}\).
\[\text{precision} = \frac{\text{true_positive}}{\text{true_positive} + \text{false_positive}}\]
Note
In the multi-label cases, the elements of \(y\) and \(y_{pred}\) must be 0 or 1.
- Parameters
eval_type (str) – Metric to calculate accuracy over a dataset, for classification or multilabel. Default: ‘classification’.
Examples
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Precision('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> precision = metric.eval()
>>> print(precision)
[0.5 1. ]
eval(average=False)[source]¶
Computes the precision.
- Parameters
average (bool) – Specifies whether to calculate the average precision. Default: False.
- Returns
Float, the computed result.
update(*inputs)[source]¶
Updates the internal evaluation result with y_pred and y.
- Parameters
inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. For ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate positive category. The shape of y_pred and y are both \((N, C)\).
- Raises
ValueError – If the number of inputs is not 2.
class tinyms.metrics.Recall(eval_type='classification')[source]¶
Calculates recall for classification and multilabel data.
The recall class creates two local variables, \(\text{true_positive}\) and \(\text{false_negative}\), that are used to compute the recall. This value is ultimately returned as the recall, an idempotent operation that simply divides \(\text{true_positive}\) by the sum of \(\text{true_positive}\) and \(\text{false_negative}\).
\[\text{recall} = \frac{\text{true_positive}}{\text{true_positive} + \text{false_negative}}\]
Note
In the multi-label cases, the elements of \(y\) and \(y_{pred}\) must be 0 or 1.
- Parameters
eval_type (str) – Metric to calculate the recall over a dataset, for classification or multilabel. Default: ‘classification’.
Examples
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Recall('classification')
>>> metric.clear()
>>> metric.update(x, y)
>>> recall = metric.eval()
>>> print(recall)
[1. 0.5]
eval(average=False)[source]¶
Computes the recall.
- Parameters
average (bool) – Specifies whether to calculate the average recall. Default: False.
- Returns
Float, the computed result.
update(*inputs)[source]¶
Updates the internal evaluation result with y_pred and y.
- Parameters
inputs – Input y_pred and y. y_pred and y are a Tensor, a list or an array. For ‘classification’ evaluation type, y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. Shape of y can be \((N, C)\) with values 0 and 1 if one-hot encoding is used or the shape is \((N,)\) with integer values if index of category is used. For ‘multilabel’ evaluation type, y_pred and y can only be one-hot encoding with values 0 or 1. Indices with 1 indicate positive category. The shape of y_pred and y are both \((N, C)\).
- Raises
ValueError – If the number of inputs is not 2.
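Recall differs from precision only in the denominator: it divides by the number of actual positives (TP + FN) rather than predicted positives. A hedged NumPy sketch for the ‘classification’ case (classification_recall is a hypothetical name):

```python
import numpy as np

def classification_recall(y_pred, y, num_classes):
    """Sketch of per-class recall: TP / (TP + FN), where predictions
    come from argmax over the class scores."""
    pred = np.argmax(np.asarray(y_pred), axis=1)
    y = np.asarray(y)
    out = []
    for c in range(num_classes):
        actual_c = y == c
        tp = np.sum((pred == c) & actual_c)
        denom = np.sum(actual_c)  # TP + FN
        out.append(tp / denom if denom else 0.0)
    return np.array(out)

# Same data as the Recall example above
x = [[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]
y = [1, 0, 1]
print(classification_recall(x, y, 2))  # [1.  0.5]
```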
class tinyms.metrics.Fbeta(beta)[source]¶
Calculates the fbeta score.
Fbeta score is a weighted mean of precision and recall.
\[F_\beta=\frac{(1+\beta^2) \cdot true\_positive} {(1+\beta^2) \cdot true\_positive +\beta^2 \cdot false\_negative + false\_positive}\]
Examples
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.Fbeta(1)
>>> metric.clear()
>>> metric.update(x, y)
>>> fbeta = metric.eval()
>>> print(fbeta)
[0.66666667 0.66666667]
eval(average=False)[source]¶
Computes the fbeta.
- Parameters
average (bool) – Whether to calculate the average fbeta. Default value is False.
- Returns
Float, computed result.
update(*inputs)[source]¶
Updates the internal evaluation result with y_pred and y.
- Parameters
inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.
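The Fbeta formula above can be evaluated directly from per-class TP/FP/FN counts. A hedged NumPy sketch (fbeta_score is a hypothetical name; zero-denominator handling is an assumption):

```python
import numpy as np

def fbeta_score(y_pred, y, beta, num_classes):
    """Sketch of Fbeta: (1+b^2)*TP / ((1+b^2)*TP + b^2*FN + FP) per class,
    with predictions taken as argmax over the class scores."""
    pred = np.argmax(np.asarray(y_pred), axis=1)
    y = np.asarray(y)
    b2 = beta ** 2
    out = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (y == c))
        fp = np.sum((pred == c) & (y != c))
        fn = np.sum((pred != c) & (y == c))
        denom = (1 + b2) * tp + b2 * fn + fp
        out.append((1 + b2) * tp / denom if denom else 0.0)
    return np.array(out)

# Same data as the Fbeta example above; beta=1 reduces to F1
x = [[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]
y = [1, 0, 1]
print(fbeta_score(x, y, 1, 2))  # [0.66666667 0.66666667]
```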
class tinyms.metrics.F1[source]¶
Calculates the F1 score. F1 is a special case of Fbeta when beta is 1. Refer to class mindspore.nn.Fbeta for more details.
\[F_1=\frac{2\cdot true\_positive}{2\cdot true\_positive + false\_negative + false\_positive}\]
Examples
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = nn.F1()
>>> metric.update(x, y)
>>> result = metric.eval()
>>> print(result)
[0.66666667 0.66666667]
class tinyms.metrics.TopKCategoricalAccuracy(k)[source]¶
Calculates the top-k categorical accuracy.
Note
The method update must receive input of the form \((y_{pred}, y)\). If some samples have the same accuracy, the first sample will be chosen.
- Parameters
k (int) – Specifies the top-k categorical accuracy to compute.
- Raises
TypeError – If k is not int.
ValueError – If k is less than 1.
Examples
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...                      [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = nn.TopKCategoricalAccuracy(3)
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
0.6666666666666666
update(*inputs)[source]¶
Updates the internal evaluation result y_pred and y.
- Parameters
inputs – Input y_pred and y. y_pred and y are Tensor, list or numpy.ndarray. y_pred is in most cases (not strictly) a list of floating numbers in range \([0, 1]\) and the shape is \((N, C)\), where \(N\) is the number of cases and \(C\) is the number of categories. y contains values of integers. The shape is \((N, C)\) if one-hot encoding is used. Shape can also be \((N,)\) if category index is used.
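Top-k accuracy counts a sample as correct when its label appears among the k highest-scored classes. A hedged NumPy sketch (topk_accuracy is a hypothetical name, not the tinyms implementation):

```python
import numpy as np

def topk_accuracy(y_pred, y, k):
    """Sketch of top-k categorical accuracy: a sample is a hit when its
    label index is among the k highest-scored class indices."""
    scores = np.asarray(y_pred)
    labels = np.asarray(y, dtype=int)
    # argsort ascending, then take the last k columns as the top-k indices
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))

# Same data as the TopKCategoricalAccuracy example above
x = [[0.2, 0.5, 0.3, 0.6, 0.2],
     [0.1, 0.35, 0.5, 0.2, 0.0],
     [0.9, 0.6, 0.2, 0.01, 0.3]]
y = [2, 0, 1]
print(topk_accuracy(x, y, 3))  # rows 1 and 3 are hits: 0.666...
```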
class tinyms.metrics.Top1CategoricalAccuracy[source]¶
Calculates the top-1 categorical accuracy. This class is a specialized class for TopKCategoricalAccuracy. Refer to class ‘TopKCategoricalAccuracy’ for more details.
Examples
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...                      [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = nn.Top1CategoricalAccuracy()
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
0.0
class tinyms.metrics.Top5CategoricalAccuracy[source]¶
Calculates the top-5 categorical accuracy. This class is a specialized class for TopKCategoricalAccuracy. Refer to class ‘TopKCategoricalAccuracy’ for more details.
Examples
>>> x = Tensor(np.array([[0.2, 0.5, 0.3, 0.6, 0.2], [0.1, 0.35, 0.5, 0.2, 0.],
...                      [0.9, 0.6, 0.2, 0.01, 0.3]]), mindspore.float32)
>>> y = Tensor(np.array([2, 0, 1]), mindspore.float32)
>>> topk = nn.Top5CategoricalAccuracy()
>>> topk.clear()
>>> topk.update(x, y)
>>> output = topk.eval()
>>> print(output)
1.0
class tinyms.metrics.Loss[source]¶
Calculates the average of the loss. If method ‘update’ is called every \(n\) iterations, the result of evaluation will be:
\[loss = \frac{\sum_{k=1}^{n}loss_k}{n}\]
Examples
>>> x = Tensor(np.array(0.2), mindspore.float32)
>>> loss = nn.Loss()
>>> loss.clear()
>>> loss.update(x)
>>> result = loss.eval()
eval()[source]¶
Calculates the average of the loss.
- Returns
Float, the average of the loss.
- Raises
RuntimeError – If the total number is 0.
update(*inputs)[source]¶
Updates the internal evaluation result.
- Parameters
inputs – Inputs contain only one element: the loss. The dimension of the loss must be 0 or 1.
- Raises
ValueError – If the length of inputs is not 1.
ValueError – If the dimension of the loss is not 0 or 1.
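The running-average formula above can be sketched as a small accumulator class following the same clear/update/eval protocol (a hypothetical stand-in, not the tinyms implementation):

```python
class RunningLoss:
    """Sketch of the Loss metric: accumulate scalar losses, average on eval()."""
    def __init__(self):
        self.clear()

    def clear(self):
        # Reset the accumulated sum and count.
        self._sum = 0.0
        self._num = 0

    def update(self, loss):
        # loss is a scalar (dimension 0) value here
        self._sum += float(loss)
        self._num += 1

    def eval(self):
        if self._num == 0:
            raise RuntimeError('Total number is 0; call update() before eval().')
        return self._sum / self._num

loss = RunningLoss()
loss.clear()
for step_loss in (0.2, 0.4, 0.6):
    loss.update(step_loss)
print(loss.eval())  # average of the three losses, ~0.4
```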