Package deepnetts.eval
Class ClassificationMetrics
java.lang.Object
javax.visrec.ml.eval.EvaluationMetrics
deepnetts.eval.ClassificationMetrics
public final class ClassificationMetrics
extends javax.visrec.ml.eval.EvaluationMetrics
Calculates various classification metrics that tell us how good a classifier is; used for classifier evaluation.
For multi-class classification, it enables setting the specific class that the metric values refer to.
Nested Class Summary
static final class ClassificationMetrics.Stats
Average values of commonly used classification metrics.
-
Field Summary
Fields inherited from class javax.visrec.ml.eval.EvaluationMetrics
ACCURACY, F_STAT, F1SCORE, MEAN_ABSOLUTE_ERROR, MEAN_SQUARED_ERROR, PRECISION, R_SQUARED, RECALL, RESIDUAL_SQUARE_SUM, RESIDUAL_STANDARD_ERROR, ROOT_MEAN_SQUARED_ERROR
-
Constructor Summary
ClassificationMetrics(int trueNegative, int falsePositive, int falseNegative, int truePositive)
Constructs a new classification metrics using specified arguments.
ClassificationMetrics(ConfusionMatrix confMatrix)
Constructs a new classification metrics from specified confusion matrix.
ClassificationMetrics(ConfusionMatrix confMatrix, String classLabel, int classIdx)
Constructs a new classification metrics of a single class for multi-class classification.
-
Method Summary
static ClassificationMetrics.Stats average(ClassificationMetrics[] results)
static ClassificationMetrics[] createFrom(ConfusionMatrix confusionMatrix)
    Creates classification metrics from the given confusion matrix.
float getAccuracy()
    Percent of correct classifications (for both positive and negative classes).
double getBalancedAccuracy()
    Balanced accuracy is a good metric to use when the data set is not balanced.
int getClassIdx()
String getClassLabel()
    Returns the class label that these metrics correspond to (used for multi-class classification).
ConfusionMatrix getConfusionMatrix()
    Returns the confusion matrix that was used to generate these metrics.
float getErrorRate()
    The percent of wrong classifications/predictions made.
float getF1Score()
    Calculates and returns the F1 score, a balance between recall and precision.
float getFalseDiscoveryRate()
    When it's actually no, how often is it classified as yes?
float getFalseNegativeRate()
    When it's actually yes, how often does it predict no?
float getFalsePositiveRate()
    When it's actually no, how often does it predict yes? FP / actual no
float getFScore(int beta)
    Balance between precision and recall.
double getMatthewsCorrelationCoefficient()
    Calculates and returns the Matthews correlation coefficient.
float getPrecision()
    What percent of those predicted as positive are really positive.
float getRecall()
    Ratio of those classified as positive to those that are actually positive.
float getSpecificity()
    Specificity, or true negative rate.
int getTotal()
    Returns the total number of classifications.
float negativeFreqency()
    How often the negative class actually occurs in the sample.
float positiveFreqency()
    How often the positive class actually occurs in the sample.
void setClassLabel(String classLabel)
    Sets the class label to which these metrics correspond.
String toString()
Methods inherited from class javax.visrec.ml.eval.EvaluationMetrics
get, set
-
Constructor Details
-
ClassificationMetrics
public ClassificationMetrics(ConfusionMatrix confMatrix)
Constructs a new classification metrics from the specified confusion matrix.
- Parameters:
confMatrix - confusion matrix to extract metrics from.
-
ClassificationMetrics
public ClassificationMetrics(ConfusionMatrix confMatrix, String classLabel, int classIdx)
Constructs a new classification metrics of a single class for multi-class classification.
- Parameters:
confMatrix - confusion matrix to extract metrics from
classLabel - label of the class these metrics refer to
classIdx - index of the class within the confusion matrix
-
-
ClassificationMetrics
public ClassificationMetrics(int trueNegative, int falsePositive, int falseNegative, int truePositive)
Constructs a new classification metrics using the specified classification counts.
- Parameters:
trueNegative - number of true negative classifications
falsePositive - number of false positive classifications
falseNegative - number of false negative classifications
truePositive - number of true positive classifications
-
-
-
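For illustration, a minimal usage sketch; the evaluation counts below (50 true negatives, 5 false positives, 10 false negatives, 35 true positives) are hypothetical:

    // Build metrics directly from hypothetical binary classification counts.
    ClassificationMetrics cm = new ClassificationMetrics(50, 5, 10, 35);
    System.out.println(cm.getAccuracy());  // (tp + tn) / total
    System.out.println(cm.getPrecision()); // tp / (tp + fp)
    System.out.println(cm.getRecall());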
Method Details
-
getClassLabel
public String getClassLabel()
Returns the class label that these metrics correspond to (used for multi-class classification). When there are multiple classes, each class has its own classification metrics.
- Returns:
- class label
-
setClassLabel
public void setClassLabel(String classLabel)
Sets the class label to which these metrics correspond.
- Parameters:
classLabel - the class label
-
-
getClassIdx
public int getClassIdx()
Returns the index of the class that these metrics refer to (used for multi-class classification).
-
getConfusionMatrix
public ConfusionMatrix getConfusionMatrix()
Returns the confusion matrix that was used to generate these metrics.
- Returns:
- the confusion matrix behind these metrics
-
getAccuracy
public float getAccuracy()
Percent of correct classifications (for both positive and negative classes). Answers the question: how often does the classifier give the correct answer? Accuracy is a good measure when the classes in the data are nearly balanced, but it can be misleading when they are not.
Accuracy = (TruePositive + TrueNegative) / Total
- Returns:
- how often is the classifier correct
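A quick worked check of the formula, reusing the hypothetical counts from the constructor sketch above:

    // accuracy = (tp + tn) / total = (35 + 50) / 100 = 0.85
    float accuracy = new ClassificationMetrics(50, 5, 10, 35).getAccuracy();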
-
getErrorRate
public float getErrorRate()
The percent of wrong classifications/predictions made. Answers the question: how often does the classifier give a wrong answer?
error = (fp + fn) / total = 1 - accuracy
- Returns:
- classification error rate
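With the same hypothetical counts:

    // error = (fp + fn) / total = (5 + 10) / 100 = 0.15 = 1 - accuracy
    float errorRate = new ClassificationMetrics(50, 5, 10, 35).getErrorRate();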
-
getPrecision
public float getPrecision()
What percent of those predicted as positive are really positive. Answers the question: when it predicts yes, how often is it correct?
precision = truePositive / (truePositive + falsePositive)
- Returns:
- percent of those predicted as positive that are really positive.
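Worked with the same hypothetical counts:

    // precision = tp / (tp + fp) = 35 / (35 + 5) = 0.875
    float precision = new ClassificationMetrics(50, 5, 10, 35).getPrecision();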
-
getRecall
public float getRecall()
Ratio of those classified as positive to those that are actually positive. Also called sensitivity or true positive rate.
- Returns:
- how often classifier predicts yes, when actual class is yes
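Worked with the same hypothetical counts, assuming the standard definition tp / (tp + fn):

    // recall = tp / (tp + fn) = 35 / (35 + 10) ≈ 0.778
    float recall = new ClassificationMetrics(50, 5, 10, 35).getRecall();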
-
getSpecificity
public float getSpecificity()
Specificity, or true negative rate. When it's actually no, how often does it predict no?
- Returns:
- true negative rate
-
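Assuming the standard definition tn / (tn + fp), with the same hypothetical counts:

    // specificity = tn / (tn + fp) = 50 / (50 + 5) ≈ 0.909
    float specificity = new ClassificationMetrics(50, 5, 10, 35).getSpecificity();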
getF1Score
public float getF1Score()
Calculates and returns the F1 score, a balance between recall and precision.
f1 = 2 * (precision * recall) / (precision + recall)
- Returns:
- f-score metric (harmonic average of recall and precision)
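Worked with the precision and recall values from the sketches above:

    // f1 = 2 * (0.875 * 0.778) / (0.875 + 0.778) ≈ 0.82
    float f1 = new ClassificationMetrics(50, 5, 10, 35).getF1Score();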
-
getFScore
public float getFScore(int beta)
Balance between precision and recall, weighted by beta.
- Parameters:
beta - weight that sets the relative importance of recall vs. precision
- Returns:
- f-score
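For reference, the textbook F-beta formula is shown below; that getFScore uses exactly this weighting is an assumption here:

    // F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall)
    // beta = 1 reduces to the F1 score; beta = 2 weights recall more heavily than precision.
    float f2 = new ClassificationMetrics(50, 5, 10, 35).getFScore(2);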
-
getTotal
public int getTotal()
Returns the total number of classifications.
- Returns:
- total number of classifications
-
getFalsePositiveRate
public float getFalsePositiveRate()
When it's actually no, how often does it predict yes? FP / actual no
- Returns:
- false positive rate
-
positiveFreqency
public float positiveFreqency()
How often the positive class actually occurs in the sample.
- Returns:
- frequency of the positive class
-
negativeFreqency
public float negativeFreqency()
How often the negative class actually occurs in the sample.
- Returns:
- frequency of the negative class
-
getFalseNegativeRate
public float getFalseNegativeRate()
When it's actually yes, how often does it predict no?
- Returns:
- false negative rate
-
getFalseDiscoveryRate
public float getFalseDiscoveryRate()
When it's actually no, how often is it classified as yes?
- Returns:
- false discovery rate
-
getMatthewsCorrelationCoefficient
public double getMatthewsCorrelationCoefficient()
Calculates and returns the Matthews correlation coefficient (MCC), a measure of the quality of binary (two-class) classifications. Unlike the F1 score, which combines only precision and recall, the MCC takes into account true and false positives and negatives, and is generally regarded as a balanced measure that can be used even if the classes are of very different sizes. It is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction, and -1 total disagreement between prediction and observation.
- Returns:
- value of the Matthews correlation coefficient
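For reference, the textbook MCC formula, worked with the hypothetical counts used throughout these sketches (the exact implementation may differ in rounding or edge cases):

    // mcc = (tp*tn - fp*fn) / sqrt((tp+fp) * (tp+fn) * (tn+fp) * (tn+fn))
    //     = (35*50 - 5*10) / sqrt(40 * 45 * 55 * 60) = 1700 / sqrt(5940000) ≈ 0.70
    double mcc = new ClassificationMetrics(50, 5, 10, 35).getMatthewsCorrelationCoefficient();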
-
getBalancedAccuracy
public double getBalancedAccuracy()
Balanced accuracy is a good metric to use when the data set is not balanced. It is the average of specificity and recall (sensitivity).
- Returns:
- balanced accuracy
-
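Worked with the recall and specificity values from the sketches above:

    // balanced accuracy = (recall + specificity) / 2 = (0.778 + 0.909) / 2 ≈ 0.84
    double balancedAcc = new ClassificationMetrics(50, 5, 10, 35).getBalancedAccuracy();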
toString
public String toString()
- Overrides:
toString in class javax.visrec.ml.eval.EvaluationMetrics
-
createFrom
public static ClassificationMetrics[] createFrom(ConfusionMatrix confusionMatrix)
Creates classification metrics from the given confusion matrix. Returns an array of ClassificationMetrics objects, one for each class.
- Parameters:
confusionMatrix - confusion matrix to create the metrics from
- Returns:
- classification metrics, one per class
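A usage sketch for multi-class evaluation; confusionMatrix stands for a previously computed ConfusionMatrix and is assumed to exist:

    // One ClassificationMetrics instance per class in the confusion matrix.
    ClassificationMetrics[] perClass = ClassificationMetrics.createFrom(confusionMatrix);
    for (ClassificationMetrics m : perClass) {
        System.out.println(m.getClassLabel() + " f1=" + m.getF1Score());
    }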
-
average
public static ClassificationMetrics.Stats average(ClassificationMetrics[] results)
- Parameters:
results - list of different metric results computed on different sets of data
- Returns:
- average metrics computed from the different metric results
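For example, to summarize the per-class results produced by createFrom in the sketch above:

    // Average the per-class metrics into a single Stats summary.
    ClassificationMetrics.Stats stats = ClassificationMetrics.average(perClass);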
-