
Evaluate the performance of a ModelClassification object on a binary classification problem using accuracy. Accuracy is computed as the number of correctly classified observations divided by the total number of observations.
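Conceptually, the computation thresholds each predicted probability into a class label and then takes the mean agreement with the true values. A minimal standalone sketch (the function and argument names below are illustrative, not the package internals; whether the comparison at the threshold is >= or > is an assumption):

# Map probabilities to classes at the threshold, then average the agreement
accuracy_sketch <- function(probs, truth, threshold = 0.5) {
  predicted_class <- as.integer(probs >= threshold)  # assumed >= at the boundary
  mean(predicted_class == truth)
}
accuracy_sketch(c(0.8, 0.2, 0.6), c(1, 0, 1))
#> [1] 1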

Usage

EvaluatorAccuracy(.prediction, .dataset, .target, .threshold = 0.5)

Arguments

.prediction

A data.frame containing the predictions and the true values as columns, or a numeric vector containing only the predictions. The true values must be encoded as 0/1 or TRUE/FALSE. The predicted values must be numeric and lie in the range 0 to 1.

.dataset

An optional Dataset or data.frame object that must be provided if .prediction is a numeric vector.

.target

A character vector of length one giving the name of the target variable contained as a column in .dataset.

.threshold

An optional numeric value setting the threshold at which a prediction is assigned to a class. Defaults to 0.5.

See also

EvaluatorAUC() for evaluating the AUC of a classifier, EvaluatorMAE() for computing the mean absolute error, and EvaluatorMSE() for the mean squared error (corresponding to the Brier score in binary classification).
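As a quick aside on the MSE/Brier-score correspondence: with 0/1 true values, the mean squared error of the predicted probabilities is exactly the Brier score. A standalone illustration, not package code:

probs <- c(0.8, 0.2, 0.6)
truth <- c(1, 0, 1)
mean((probs - truth)^2)  # Brier score of the probabilistic predictions
#> [1] 0.08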

Examples

# Predictions as a numeric vector, with the true values taken from a dataset
x <- data.frame(var1 = c(1, 2, 3, 4, 5, 6, 7), target = c(1, 1, 1, 1, 0, 1, 0))
predictions <- rep(1, 7)
EvaluatorAccuracy(predictions, x, "target")
#> [1] 0.7142857
# Predictions and true values combined in one data.frame
predictions <- data.frame(prediction = c(0.8, 0.2, 0.6, 0.8, 0.8), truth = c(1, 0, 1, 1, 1))
EvaluatorAccuracy(predictions)
#> [1] 1
# A stricter threshold turns the 0.6 prediction into class 0
EvaluatorAccuracy(.prediction = predictions, .threshold = 0.7)
#> [1] 0.8