Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
$L$ is the weighted average classification loss.
$n$ is the sample size.
For binary classification:
$y_j$ is the observed class label. The software codes it as $-1$ or 1, indicating the negative or positive class, respectively.
$f(X_j)$ is the raw classification score for observation (row) $j$ of the predictor data $X$.
$m_j = y_j f(X_j)$ is the classification score for classifying observation $j$ into the class corresponding to $y_j$. Positive values of $m_j$ indicate correct classification and do not contribute much to the average loss. Negative values of $m_j$ indicate incorrect classification and contribute significantly to the average loss.
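For concreteness, here is a minimal MATLAB sketch of the binary margin computation; the variable names y and score and their values are made up for illustration:

    % Hypothetical observed labels, coded as -1 (negative class) or 1 (positive class).
    y = [1; -1; 1];

    % Hypothetical raw classification scores f(Xj) for each observation.
    score = [2.3; 0.4; -1.1];

    % Margin mj = yj * f(Xj); a positive margin means correct classification.
    m = y .* score    % returns [2.3; -0.4; -1.1]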
For algorithms that support multiclass classification (that is, $K \ge 3$):
$y_j^*$ is a vector of $K - 1$ zeros, with 1 in the position corresponding to the true, observed class $y_j$. For example, if the true class of the second observation is the third class and $K = 4$, then $y_2^* = [0\ 0\ 1\ 0]'$. The order of the classes corresponds to the order in the ClassNames property of the input model.
$f(X_j)$ is the length-$K$ vector of class scores for observation $j$ of the predictor data $X$. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.
$m_j = y_j^{*\prime} f(X_j)$. Therefore, $m_j$ is the scalar classification score that the model predicts for the true, observed class.
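A similar sketch for the multiclass case, assuming a made-up score vector f for one observation whose true class is the third of $K = 4$ classes:

    % Indicator vector yj* with 1 in the position of the true class (class 3 of 4).
    K = 4;
    yStar = zeros(K, 1);
    yStar(3) = 1;                % yj* = [0 0 1 0]'

    % Hypothetical length-K vector of class scores f(Xj), in ClassNames order.
    f = [0.1; 0.2; 0.6; 0.1];

    % mj = yj*' * f(Xj): the scalar score predicted for the true class.
    m = yStar' * f               % returns 0.6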
The weight for observation $j$ is $w_j$. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so they sum to 1. Therefore, $\sum_{j=1}^{n} w_j = 1$.
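The following sketch shows one way to carry out this normalization, assuming hypothetical raw weights w, integer class labels y coded as 1 through K, and prior probabilities prior:

    % Hypothetical raw observation weights, class labels, and priors (K = 2).
    w = [1; 2; 1; 4];
    y = [1; 1; 2; 2];
    prior = [0.4; 0.6];

    % Normalize the prior probabilities so they sum to 1.
    prior = prior / sum(prior);

    % Rescale the weights within each class to sum to that class's prior.
    for k = 1:numel(prior)
        idx = (y == k);
        w(idx) = prior(k) * w(idx) / sum(w(idx));
    end
    % The normalized weights now satisfy sum(w) == 1.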
Given this scenario, the following table describes the supported loss functions that you can specify by using the 'LossFun' name-value pair argument.
| Loss Function | Value of 'LossFun' | Equation |
| --- | --- | --- |
| Binomial deviance | 'binodeviance' | $L = \sum_{j=1}^{n} w_j \log\{1 + \exp[-2m_j]\}$ |
| Exponential loss | 'exponential' | $L = \sum_{j=1}^{n} w_j \exp(-m_j)$ |
| Classification error | 'classiferror' | $L = \sum_{j=1}^{n} w_j I\{\hat{y}_j \neq y_j\}$. The classification error is the weighted fraction of misclassified observations, where $\hat{y}_j$ is the class label corresponding to the class with the maximal posterior probability and $I\{x\}$ is the indicator function. |
| Hinge loss | 'hinge' | $L = \sum_{j=1}^{n} w_j \max\{0, 1 - m_j\}$ |
| Logit loss | 'logit' | $L = \sum_{j=1}^{n} w_j \log(1 + \exp(-m_j))$ |
| Minimal cost | 'mincost' | $L = \sum_{j=1}^{n} w_j c_j$, computed by the procedure described after this table. |
| Quadratic loss | 'quadratic' | $L = \sum_{j=1}^{n} w_j (1 - m_j)^2$ |

For 'mincost', the software computes the weighted minimal cost using this procedure for observations $j = 1, \ldots, n$.

1. Estimate the 1-by-$K$ vector of expected classification costs for observation $j$: $\gamma_j = f(X_j)' C$. Here $f(X_j)$ is the column vector of class posterior probabilities for binary and multiclass classification, and $C$ is the cost matrix stored by the input model in the Cost property.
2. For observation $j$, predict the class label corresponding to the minimum expected classification cost: $\hat{y}_j = \arg\min_{k=1,\ldots,K} \gamma_{jk}$.
3. Using $C$, identify the cost incurred ($c_j$) for making the prediction.

The weighted average minimal cost loss is $L = \sum_{j=1}^{n} w_j c_j$.
This figure compares the loss functions (except 'mincost') for one observation over $m$. Some functions are normalized to pass through the point (0,1).
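A comparison along those lines can be sketched as follows; the 1/log(2) rescaling of the binomial deviance and logit losses is an assumption about how those curves are normalized to pass through (0,1):

    % Plot the margin-based losses for one observation over m.
    m = linspace(-2, 2, 400);
    plot(m, log(1 + exp(-2*m)) / log(2), ...  % binomial deviance (normalized)
         m, exp(-m), ...                      % exponential loss
         m, double(m < 0), ...                % classification error
         m, max(0, 1 - m), ...                % hinge loss
         m, log(1 + exp(-m)) / log(2), ...    % logit loss (normalized)
         m, (1 - m).^2)                       % quadratic loss
    xlabel('Margin m')
    ylabel('Loss')
    legend('Binomial deviance', 'Exponential', 'Classification error', ...
        'Hinge', 'Logit', 'Quadratic')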