
Evaluate the performance of a 'ModelClassification' object on a binary classification problem using the area under the ROC curve.

Usage

EvaluatorAUC(.prediction, .dataset, .target)

Arguments

.prediction

A data.frame containing the predictions and the true values as columns, or a numeric vector containing only the predictions. The true values must be encoded as 0/1 or as TRUE/FALSE. The predicted values must be numeric and lie in the range 0 to 1.

.dataset

An optional Dataset or data.frame object that must be provided if .prediction is a numeric vector.

.target

A character vector of length one giving the name of the target variable contained as a column in .dataset.

See also

EvaluatorAccuracy() for evaluating the accuracy of a classifier, EvaluatorMAE() for computing the mean absolute error, and EvaluatorMSE() for the mean squared error (which corresponds to the Brier score in binary classification).
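
As a quick illustration of that correspondence (not part of the package API), the Brier score is simply the mean squared error of the predicted probabilities:

# Brier score = mean squared error of probabilistic predictions
pred  <- c(0.8, 0.2, 0.6)
truth <- c(1, 0, 1)
mean((pred - truth)^2)
#> [1] 0.08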

Examples

x <- data.frame(var1 = c(1, 2, 3, 4, 5, 6, 7), target = c(1, 1, 1, 1, 0, 1, 0))
# A constant prediction for all 7 rows carries no information, so the AUC is 0.5.
predictions <- rep(1, 7)
EvaluatorAUC(predictions, x, "target")
#> Area under the curve: 0.5
# Predictions and true values supplied together as a data.frame (0/1 encoding).
predictions <- data.frame(prediction = c(0.8, 0.2, 0.6, 0.8, 0.8), truth = c(1, 0, 1, 1, 1))
EvaluatorAUC(predictions)
#> Area under the curve: 1
# The same data with the true values encoded as TRUE/FALSE.
predictions <- data.frame(prediction = c(0.8, 0.2, 0.6, 0.8, 0.8), truth = c(TRUE, FALSE, TRUE, TRUE, TRUE))
EvaluatorAUC(predictions)
#> Area under the curve: 1
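
For reference, the last value can be reproduced by hand with the rank-based (Mann-Whitney) view of the AUC: the proportion of positive/negative pairs in which the positive case receives the higher score, with ties counting one half. This is only an illustrative check and does not rely on the package:

# Mann-Whitney formulation: fraction of (positive, negative) pairs where the
# positive case scores strictly higher, counting ties as one half.
pos <- predictions$prediction[predictions$truth == TRUE]
neg <- predictions$prediction[predictions$truth == FALSE]
mean(outer(pos, neg, ">") + 0.5 * outer(pos, neg, "=="))
#> [1] 1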