accuracy

getml.pipeline.metrics.accuracy = 'accuracy'

Accuracy measures the share of correct predictions relative to the total number of samples in the testing set.
Used for classification problems.
\[accuracy = \frac{number \; of \; correct \; predictions}{number \; of \; all \; predictions}\]

The number of correct predictions depends on the threshold used: for instance, we could interpret all predictions with a probability greater than 0.5 as positive and all others as negative. But the threshold does not have to be 0.5; we might as well use any other value, and the choice of threshold affects the calculated accuracy.
When calculating the accuracy, the value returned is the one achieved by the best threshold.
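The threshold dependence described above can be sketched in plain Python. This is a minimal illustration with hypothetical probabilities and labels, not getML's internal implementation: we evaluate the accuracy at a grid of thresholds and keep the best value.

```python
def accuracy_at(probabilities, labels, threshold):
    """Share of correct predictions when probabilities above
    `threshold` are interpreted as positives."""
    correct = sum(
        (p > threshold) == bool(y)
        for p, y in zip(probabilities, labels)
    )
    return correct / len(labels)

# Hypothetical predicted probabilities and true labels.
probs = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]
labels = [0, 0, 1, 1, 1, 0]

# Evaluate a grid of thresholds and keep the best accuracy.
thresholds = [i / 10 for i in range(1, 10)]
best = max(accuracy_at(probs, labels, t) for t in thresholds)
```

Because one negative sample (probability 0.4) scores higher than one positive sample (probability 0.35), no threshold separates the classes perfectly here, and the best accuracy stays below 1.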
Even though accuracy is the most intuitive way to evaluate a classification algorithm, it can be very misleading when the classes are heavily imbalanced. For instance, if only 2% of the samples are positive, a predictor that always predicts negative outcomes will have an accuracy of 98%. This sounds very good to the layperson, but the predictor in this example actually has no predictive value.
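The imbalance pitfall above is easy to verify numerically. This sketch uses a hypothetical dataset with 2% positives and a degenerate predictor that always answers "negative":

```python
n_samples = 1000
labels = [1] * 20 + [0] * 980   # 2% positive class
predictions = [0] * n_samples   # always predict negative

# Share of correct predictions: all 980 negatives are right,
# all 20 positives are wrong.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / n_samples
```

The result is 0.98, even though the predictor never identifies a single positive sample. Metrics such as AUC or cross entropy are more informative in this setting.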