PyCM

Version : 1.3


Overview

PyCM is a multi-class confusion matrix library written in Python that supports both input data vectors and direct matrix input, and it is a proper tool for post-classification model evaluation that supports most class and overall statistics parameters. PyCM is the Swiss-army knife of confusion matrices, targeted mainly at data scientists who need a broad array of metrics for predictive models and an accurate evaluation of a large variety of classifiers.

Installation

Source Code

  • Download Version 1.3 or Latest Source
  • Run pip install -r requirements.txt or pip3 install -r requirements.txt (Need root access)
  • Run python3 setup.py install or python setup.py install (Need root access)

PyPI

Easy Install

  • Run easy_install --upgrade pycm (Need root access)

Usage

From Vector

In [1]:
from pycm import *
In [2]:
y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
In [3]:
cm = ConfusionMatrix(y_actu, y_pred,digit=5)
  • Notice : digit (the number of digits to the right of the decimal point in a number) is new in version 0.6 (default value : 5)
  • Only applies to printing and saving
In [4]:
cm
Out[4]:
pycm.ConfusionMatrix(classes: [0, 1, 2])
In [5]:
cm.actual_vector
Out[5]:
[2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
In [6]:
cm.predict_vector
Out[6]:
[0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
In [7]:
cm.classes
Out[7]:
[0, 1, 2]
In [8]:
cm.class_stat
Out[8]:
{'ACC': {0: 0.8333333333333334, 1: 0.75, 2: 0.5833333333333334},
 'BM': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
 'CEN': {0: 0.25, 1: 0.49657842846620864, 2: 0.6044162769630221},
 'DOR': {0: 'None', 1: 3.999999999999998, 2: 1.9999999999999998},
 'ERR': {0: 0.16666666666666663, 1: 0.25, 2: 0.41666666666666663},
 'F0.5': {0: 0.6521739130434783,
  1: 0.45454545454545453,
  2: 0.5769230769230769},
 'F1': {0: 0.75, 1: 0.4, 2: 0.5454545454545454},
 'F2': {0: 0.8823529411764706, 1: 0.35714285714285715, 2: 0.5172413793103449},
 'FDR': {0: 0.4, 1: 0.5, 2: 0.4},
 'FN': {0: 0, 1: 2, 2: 3},
 'FNR': {0: 0.0, 1: 0.6666666666666667, 2: 0.5},
 'FOR': {0: 0.0, 1: 0.19999999999999996, 2: 0.4285714285714286},
 'FP': {0: 2, 1: 1, 2: 2},
 'FPR': {0: 0.2222222222222222,
  1: 0.11111111111111116,
  2: 0.33333333333333337},
 'G': {0: 0.7745966692414834, 1: 0.408248290463863, 2: 0.5477225575051661},
 'IS': {0: 1.263034405833794, 1: 1.0, 2: 0.2630344058337938},
 'J': {0: 0.6, 1: 0.25, 2: 0.375},
 'LR+': {0: 4.5, 1: 2.9999999999999987, 2: 1.4999999999999998},
 'LR-': {0: 0.0, 1: 0.7500000000000001, 2: 0.75},
 'MCC': {0: 0.6831300510639732, 1: 0.25819888974716115, 2: 0.1690308509457033},
 'MCEN': {0: 0.2643856189774724, 1: 0.5, 2: 0.6875},
 'MK': {0: 0.6000000000000001, 1: 0.30000000000000004, 2: 0.17142857142857126},
 'N': {0: 9, 1: 9, 2: 6},
 'NPV': {0: 1.0, 1: 0.8, 2: 0.5714285714285714},
 'P': {0: 3, 1: 3, 2: 6},
 'POP': {0: 12, 1: 12, 2: 12},
 'PPV': {0: 0.6, 1: 0.5, 2: 0.6},
 'PRE': {0: 0.25, 1: 0.25, 2: 0.5},
 'RACC': {0: 0.10416666666666667,
  1: 0.041666666666666664,
  2: 0.20833333333333334},
 'RACCU': {0: 0.1111111111111111,
  1: 0.04340277777777778,
  2: 0.21006944444444442},
 'TN': {0: 7, 1: 8, 2: 4},
 'TNR': {0: 0.7777777777777778, 1: 0.8888888888888888, 2: 0.6666666666666666},
 'TON': {0: 7, 1: 10, 2: 7},
 'TOP': {0: 5, 1: 2, 2: 5},
 'TP': {0: 3, 1: 1, 2: 3},
 'TPR': {0: 1.0, 1: 0.3333333333333333, 2: 0.5}}
  • Notice : cm.statistic_result in previous versions (<0.2)
In [9]:
cm.overall_stat
Out[9]:
{'95% CI': (0.30438856248221097, 0.8622781041844558),
 'Bennett_S': 0.37500000000000006,
 'Chi-Squared': 6.6,
 'Chi-Squared DF': 4,
 'Conditional Entropy': 0.9591479170272448,
 'Cramer_V': 0.5244044240850757,
 'Cross Entropy': 1.5935164295556343,
 'Gwet_AC1': 0.3893129770992367,
 'Hamming Loss': 0.41666666666666663,
 'Joint Entropy': 2.4591479170272446,
 'KL Divergence': 0.09351642955563438,
 'Kappa': 0.35483870967741943,
 'Kappa 95% CI': (-0.07707577422109269, 0.7867531935759315),
 'Kappa No Prevalence': 0.16666666666666674,
 'Kappa Standard Error': 0.2203645326012817,
 'Kappa Unbiased': 0.34426229508196726,
 'Lambda A': 0.16666666666666666,
 'Lambda B': 0.42857142857142855,
 'Mutual Information': 0.5242078379544426,
 'NIR': 0.5,
 'Overall_ACC': 0.5833333333333334,
 'Overall_CEN': 0.4638112995385119,
 'Overall_J': (1.225, 0.4083333333333334),
 'Overall_MCEN': 0.5189369467580801,
 'Overall_RACC': 0.3541666666666667,
 'Overall_RACCU': 0.3645833333333333,
 'P-Value': 0.38720703125,
 'PPV_Macro': 0.5666666666666668,
 'PPV_Micro': 0.5833333333333334,
 'Phi-Squared': 0.5499999999999999,
 'Reference Entropy': 1.5,
 'Response Entropy': 1.4833557549816874,
 'Scott_PI': 0.34426229508196726,
 'Standard Error': 0.14231876063832777,
 'Strength_Of_Agreement(Altman)': 'Fair',
 'Strength_Of_Agreement(Cicchetti)': 'Poor',
 'Strength_Of_Agreement(Fleiss)': 'Poor',
 'Strength_Of_Agreement(Landis and Koch)': 'Fair',
 'TPR_Macro': 0.611111111111111,
 'TPR_Micro': 0.5833333333333334,
 'Zero-one Loss': 5}
  • Notice : new in version 0.3
In [10]:
cm.table
Out[10]:
{0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}
In [11]:
import numpy
In [12]:
y_actu = numpy.array([2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2])
y_pred = numpy.array([0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2])
In [13]:
cm = ConfusionMatrix(y_actu, y_pred,digit=5)
In [14]:
cm
Out[14]:
pycm.ConfusionMatrix(classes: [0, 1, 2])
  • Notice : numpy.array support in version > 0.7

Direct CM

In [15]:
cm2 = ConfusionMatrix(matrix={0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}},digit=5)
In [16]:
cm2
Out[16]:
pycm.ConfusionMatrix(classes: [0, 1, 2])
In [17]:
cm2.actual_vector
In [18]:
cm2.predict_vector
In [19]:
cm2.classes
Out[19]:
[0, 1, 2]
In [20]:
cm2.class_stat
Out[20]:
{'ACC': {0: 0.8333333333333334, 1: 0.75, 2: 0.5833333333333334},
 'BM': {0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652},
 'CEN': {0: 0.25, 1: 0.49657842846620864, 2: 0.6044162769630221},
 'DOR': {0: 'None', 1: 3.999999999999998, 2: 1.9999999999999998},
 'ERR': {0: 0.16666666666666663, 1: 0.25, 2: 0.41666666666666663},
 'F0.5': {0: 0.6521739130434783,
  1: 0.45454545454545453,
  2: 0.5769230769230769},
 'F1': {0: 0.75, 1: 0.4, 2: 0.5454545454545454},
 'F2': {0: 0.8823529411764706, 1: 0.35714285714285715, 2: 0.5172413793103449},
 'FDR': {0: 0.4, 1: 0.5, 2: 0.4},
 'FN': {0: 0, 1: 2, 2: 3},
 'FNR': {0: 0.0, 1: 0.6666666666666667, 2: 0.5},
 'FOR': {0: 0.0, 1: 0.19999999999999996, 2: 0.4285714285714286},
 'FP': {0: 2, 1: 1, 2: 2},
 'FPR': {0: 0.2222222222222222,
  1: 0.11111111111111116,
  2: 0.33333333333333337},
 'G': {0: 0.7745966692414834, 1: 0.408248290463863, 2: 0.5477225575051661},
 'IS': {0: 1.263034405833794, 1: 1.0, 2: 0.2630344058337938},
 'J': {0: 0.6, 1: 0.25, 2: 0.375},
 'LR+': {0: 4.5, 1: 2.9999999999999987, 2: 1.4999999999999998},
 'LR-': {0: 0.0, 1: 0.7500000000000001, 2: 0.75},
 'MCC': {0: 0.6831300510639732, 1: 0.25819888974716115, 2: 0.1690308509457033},
 'MCEN': {0: 0.2643856189774724, 1: 0.5, 2: 0.6875},
 'MK': {0: 0.6000000000000001, 1: 0.30000000000000004, 2: 0.17142857142857126},
 'N': {0: 9, 1: 9, 2: 6},
 'NPV': {0: 1.0, 1: 0.8, 2: 0.5714285714285714},
 'P': {0: 3, 1: 3, 2: 6},
 'POP': {0: 12, 1: 12, 2: 12},
 'PPV': {0: 0.6, 1: 0.5, 2: 0.6},
 'PRE': {0: 0.25, 1: 0.25, 2: 0.5},
 'RACC': {0: 0.10416666666666667,
  1: 0.041666666666666664,
  2: 0.20833333333333334},
 'RACCU': {0: 0.1111111111111111,
  1: 0.04340277777777778,
  2: 0.21006944444444442},
 'TN': {0: 7, 1: 8, 2: 4},
 'TNR': {0: 0.7777777777777778, 1: 0.8888888888888888, 2: 0.6666666666666666},
 'TON': {0: 7, 1: 10, 2: 7},
 'TOP': {0: 5, 1: 2, 2: 5},
 'TP': {0: 3, 1: 1, 2: 3},
 'TPR': {0: 1.0, 1: 0.3333333333333333, 2: 0.5}}
In [21]:
cm.overall_stat
Out[21]:
{'95% CI': (0.30438856248221097, 0.8622781041844558),
 'Bennett_S': 0.37500000000000006,
 'Chi-Squared': 6.6,
 'Chi-Squared DF': 4,
 'Conditional Entropy': 0.9591479170272448,
 'Cramer_V': 0.5244044240850757,
 'Cross Entropy': 1.5935164295556343,
 'Gwet_AC1': 0.3893129770992367,
 'Hamming Loss': 0.41666666666666663,
 'Joint Entropy': 2.4591479170272446,
 'KL Divergence': 0.09351642955563438,
 'Kappa': 0.35483870967741943,
 'Kappa 95% CI': (-0.07707577422109269, 0.7867531935759315),
 'Kappa No Prevalence': 0.16666666666666674,
 'Kappa Standard Error': 0.2203645326012817,
 'Kappa Unbiased': 0.34426229508196726,
 'Lambda A': 0.16666666666666666,
 'Lambda B': 0.42857142857142855,
 'Mutual Information': 0.5242078379544426,
 'NIR': 0.5,
 'Overall_ACC': 0.5833333333333334,
 'Overall_CEN': 0.4638112995385119,
 'Overall_J': (1.225, 0.4083333333333334),
 'Overall_MCEN': 0.5189369467580801,
 'Overall_RACC': 0.3541666666666667,
 'Overall_RACCU': 0.3645833333333333,
 'P-Value': 0.38720703125,
 'PPV_Macro': 0.5666666666666668,
 'PPV_Micro': 0.5833333333333334,
 'Phi-Squared': 0.5499999999999999,
 'Reference Entropy': 1.5,
 'Response Entropy': 1.4833557549816874,
 'Scott_PI': 0.34426229508196726,
 'Standard Error': 0.14231876063832777,
 'Strength_Of_Agreement(Altman)': 'Fair',
 'Strength_Of_Agreement(Cicchetti)': 'Poor',
 'Strength_Of_Agreement(Fleiss)': 'Poor',
 'Strength_Of_Agreement(Landis and Koch)': 'Fair',
 'TPR_Macro': 0.611111111111111,
 'TPR_Micro': 0.5833333333333334,
 'Zero-one Loss': 5}
  • Notice : new in version 0.8.1
  • In direct matrix mode, actual_vector and predict_vector are empty

Activation Threshold

threshold is added in Version 0.9 for real-valued prediction.

For more information visit Example3

  • Notice : new in version 0.9
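
A minimal sketch of how it can be used (the vectors and the 0.5 cutoff below are illustrative, not part of the library): threshold is applied to each element of the predict vector to turn a score into a class label.

>>> from pycm import ConfusionMatrix
>>> y_actu = [1, 0, 1, 1]
>>> y_score = [0.9, 0.3, 0.4, 0.8]
>>> # each score is mapped to a label by the threshold function
>>> cm = ConfusionMatrix(y_actu, y_score, threshold=lambda x: 1 if x >= 0.5 else 0)
>>> cm.classes
[0, 1]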

Load From File

file is added in Version 0.9.5 in order to load a saved confusion matrix in .obj format generated by the save_obj method.

For more information visit Example4

  • Notice : new in version 0.9.5
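
A minimal sketch of the round trip (the file name cm_backup is illustrative):

>>> from pycm import ConfusionMatrix
>>> cm = ConfusionMatrix([0, 1, 1, 0], [0, 1, 0, 0])
>>> cm.save_obj("cm_backup")["Status"]  # writes cm_backup.obj
True
>>> cm_loaded = ConfusionMatrix(file=open("cm_backup.obj", "r"))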

Sample Weights

sample_weight is added in Version 1.2 in order to set a weight for each sample.

For more information visit Example5

  • Notice : new in version 1.2
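
A minimal sketch, assuming each weight counts its sample that many times when the matrix is built (the vectors and weights here are illustrative):

>>> from pycm import ConfusionMatrix
>>> y_actu = [0, 0, 1, 1]
>>> y_pred = [0, 1, 1, 1]
>>> # the second sample (an actual 0 predicted as 1) is counted twice
>>> cm = ConfusionMatrix(y_actu, y_pred, sample_weight=[1, 2, 1, 1])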

Transpose

transpose is added in Version 1.2 in order to transpose the input matrix (only in Direct CM mode)

In [22]:
cm = ConfusionMatrix(matrix={0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}},digit=5,transpose=True)
In [23]:
cm.matrix()
Predict          0    1    2    
Actual
0                3    0    2    
1                0    1    1    
2                0    2    3    

  • Notice : new in version 1.2

Online Help

online_help function is added in Version 1.1 in order to open each statistic's definition in a web browser

>>> from pycm import online_help
>>> online_help("J")
>>> online_help("Strength_Of_Agreement(Landis and Koch)")
>>> online_help(2)
  • The list of items is available by calling online_help() (without an argument)
In [24]:
online_help()
Please choose one parameter : 

Example : online_help("J") or online_help(2)

1-95% CI
2-ACC
3-BM
4-Bennett_S
5-CEN
6-Chi-Squared
7-Chi-Squared DF
8-Conditional Entropy
9-Cramer_V
10-Cross Entropy
11-DOR
12-ERR
13-F0.5
14-F1
15-F2
16-FDR
17-FN
18-FNR
19-FOR
20-FP
21-FPR
22-G
23-Gwet_AC1
24-Hamming Loss
25-IS
26-J
27-Joint Entropy
28-KL Divergence
29-Kappa
30-Kappa 95% CI
31-Kappa No Prevalence
32-Kappa Standard Error
33-Kappa Unbiased
34-LR+
35-LR-
36-Lambda A
37-Lambda B
38-MCC
39-MCEN
40-MK
41-Mutual Information
42-N
43-NIR
44-NPV
45-Overall_ACC
46-Overall_CEN
47-Overall_J
48-Overall_MCEN
49-Overall_RACC
50-Overall_RACCU
51-P
52-P-Value
53-POP
54-PPV
55-PPV_Macro
56-PPV_Micro
57-PRE
58-Phi-Squared
59-RACC
60-RACCU
61-Reference Entropy
62-Response Entropy
63-Scott_PI
64-Standard Error
65-Strength_Of_Agreement(Altman)
66-Strength_Of_Agreement(Cicchetti)
67-Strength_Of_Agreement(Fleiss)
68-Strength_Of_Agreement(Landis and Koch)
69-TN
70-TNR
71-TON
72-TOP
73-TP
74-TPR
75-TPR_Macro
76-TPR_Micro
77-Zero-one Loss

Acceptable Data Types

  1. actual_vector : python list or numpy array of any stringable objects
  2. predict_vector : python list or numpy array of any stringable objects
  3. matrix : dict
  4. digit : int
  5. threshold : FunctionType (function or lambda)
  6. file : File object
  7. sample_weight : python list or numpy array of numbers
  8. transpose : bool
  • run help(ConfusionMatrix) for more information
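
Since actual_vector and predict_vector accept any stringable objects, class labels do not have to be numeric; a small illustrative example:

>>> from pycm import ConfusionMatrix
>>> cm = ConfusionMatrix(["cat", "dog", "cat", "bird"], ["cat", "cat", "cat", "bird"])
>>> cm.classes
['bird', 'cat', 'dog']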

Basic Parameters

TP (True positive / hit)

A true positive test result is one that detects the condition when the condition is present. (correctly identified)

In [25]:
cm.TP
Out[25]:
{0: 3, 1: 1, 2: 3}

TN (True negative/correct rejection)

A true negative test result is one that does not detect the condition when the condition is absent. (correctly rejected)

In [26]:
cm.TN
Out[26]:
{0: 7, 1: 8, 2: 4}

FP (False positive/false alarm/Type I error)

A false positive test result is one that detects the condition when the condition is absent. (incorrectly identified)

In [27]:
cm.FP
Out[27]:
{0: 0, 1: 2, 2: 3}

FN (False negative/miss/Type II error)

A false negative test result is one that does not detect the condition when the condition is present. (incorrectly rejected)

In [28]:
cm.FN
Out[28]:
{0: 2, 1: 1, 2: 2}

P (Condition positive)

(number of) positive samples. Also known as support (the number of occurrences of each class in y_true)

In [29]:
cm.P
Out[29]:
{0: 5, 1: 2, 2: 5}

N (Condition negative)

(number of) negative samples

In [30]:
cm.N
Out[30]:
{0: 7, 1: 10, 2: 7}

TOP (Test outcome positive)

(number of) positive outcomes

In [31]:
cm.TOP
Out[31]:
{0: 3, 1: 3, 2: 6}

TON (Test outcome negative)

(number of) negative outcomes

In [32]:
cm.TON
Out[32]:
{0: 9, 1: 9, 2: 6}

POP (Population)

In [33]:
cm.POP
Out[33]:
{0: 12, 1: 12, 2: 12}
  • For more information visit here
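
These counts are tied together by simple identities: for every class, TOP = TP + FP, TON = TN + FN, P = TP + FN, N = TN + FP, and POP = P + N = TOP + TON. A quick consistency check against the matrix above:

>>> all(cm.TOP[c] == cm.TP[c] + cm.FP[c] for c in cm.classes)
True
>>> all(cm.POP[c] == cm.P[c] + cm.N[c] for c in cm.classes)
True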

Class Statistics

TPR (sensitivity, recall, hit rate, or true positive rate)

Sensitivity (also called the true positive rate, the recall, or probability of detection in some fields) measures the proportion of positives that are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the condition).

For more information visit here

$$TPR=\frac{TP}{P}=\frac{TP}{TP+FN}$$

In [34]:
cm.TPR
Out[34]:
{0: 0.6, 1: 0.5, 2: 0.6}

TNR (specificity or true negative rate)

Specificity (also called the true negative rate) measures the proportion of negatives that are correctly identified as such (e.g. the percentage of healthy people who are correctly identified as not having the condition).

For more information visit here

$$TNR=\frac{TN}{N}=\frac{TN}{TN+FP}$$

In [35]:
cm.TNR
Out[35]:
{0: 1.0, 1: 0.8, 2: 0.5714285714285714}

PPV (precision or positive predictive value)

Predictive value positive is the proportion of positives that correspond to the presence of the condition

For more information visit here

$$PPV=\frac{TP}{TP+FP}$$

In [36]:
cm.PPV
Out[36]:
{0: 1.0, 1: 0.3333333333333333, 2: 0.5}

NPV (negative predictive value)

Predictive value negative is the proportion of negatives that correspond to the absence of the condition

For more information visit here

$$NPV=\frac{TN}{TN+FN}$$

In [37]:
cm.NPV
Out[37]:
{0: 0.7777777777777778, 1: 0.8888888888888888, 2: 0.6666666666666666}

FNR (miss rate or false negative rate)

The false negative rate is the proportion of positives which yield negative test outcomes with the test, i.e., the conditional probability of a negative test result given that the condition being looked for is present.

For more information visit here

$$FNR=\frac{FN}{P}=\frac{FN}{FN+TP}=1-TPR$$

In [38]:
cm.FNR
Out[38]:
{0: 0.4, 1: 0.5, 2: 0.4}

FPR (fall-out or false positive rate)

The false positive rate is the proportion of all negatives that still yield positive test outcomes, i.e., the conditional probability of a positive test result given an event that was not present.

In the context of statistical hypothesis testing, the false positive rate is equal to the significance level, and the specificity of the test is equal to 1 minus the false positive rate.

For more information visit here

$$FPR=\frac{FP}{N}=\frac{FP}{FP+TN}=1-TNR$$

In [39]:
cm.FPR
Out[39]:
{0: 0.0, 1: 0.19999999999999996, 2: 0.4285714285714286}

FDR (false discovery rate)

The false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections)

For more information visit here

$$FDR=\frac{FP}{FP+TP}=1-PPV$$

In [40]:
cm.FDR
Out[40]:
{0: 0.0, 1: 0.6666666666666667, 2: 0.5}

FOR (false omission rate)

False omission rate (FOR) is a statistical method used in multiple hypothesis testing to correct for multiple comparisons, and it is the complement of the negative predictive value. It measures the proportion of negative outcomes that are false negatives (incorrect rejections).

For more information visit here

$$FOR=\frac{FN}{FN+TN}=1-NPV$$

In [41]:
cm.FOR
Out[41]:
{0: 0.2222222222222222, 1: 0.11111111111111116, 2: 0.33333333333333337}

ACC (accuracy)

Accuracy is the proportion of correct predictions among all predictions made

For more information visit here

$$ACC=\frac{TP+TN}{P+N}=\frac{TP+TN}{TP+TN+FP+FN}$$

In [42]:
cm.ACC
Out[42]:
{0: 0.8333333333333334, 1: 0.75, 2: 0.5833333333333334}

ERR (Error rate)

The error rate is the proportion of incorrect predictions among all predictions made

$$ERR=\frac{FP+FN}{P+N}=\frac{FP+FN}{TP+TN+FP+FN}=1-ACC$$

In [43]:
cm.ERR
Out[43]:
{0: 0.16666666666666663, 1: 0.25, 2: 0.41666666666666663}
  • Notice : new in version 0.4

FBeta-Score

In statistical analysis of classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score. The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.

For more information visit here

$$F_{\beta}=(1+\beta^2).\frac{PPV.TPR}{(\beta^2.PPV)+TPR}=\frac{(1+\beta^2).TP}{(1+\beta^2).TP+FP+\beta^2.FN}$$

In [44]:
cm.F1
Out[44]:
{0: 0.75, 1: 0.4, 2: 0.5454545454545454}
In [45]:
cm.F05
Out[45]:
{0: 0.8823529411764706, 1: 0.35714285714285715, 2: 0.5172413793103449}
In [46]:
cm.F2
Out[46]:
{0: 0.6521739130434783, 1: 0.45454545454545453, 2: 0.5769230769230769}
In [47]:
cm.F_beta(Beta=4)
Out[47]:
{0: 0.6144578313253012, 1: 0.4857142857142857, 2: 0.5930232558139535}
  • Notice : new in version 0.4

MCC (Matthews correlation coefficient)

The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975. It takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes. The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction and −1 indicates total disagreement between prediction and observation.

For more information visit here

$$MCC=\frac{TP \times TN-FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$$

In [48]:
cm.MCC
Out[48]:
{0: 0.6831300510639732, 1: 0.25819888974716115, 2: 0.1690308509457033}

BM (Informedness or Bookmaker Informedness)

The informedness of a prediction method as captured by a contingency matrix is defined as the probability that the prediction method will make a correct decision as opposed to guessing and is calculated using the bookmaker algorithm.

$$BM=TPR+TNR-1$$

In [49]:
cm.BM
Out[49]:
{0: 0.6000000000000001, 1: 0.30000000000000004, 2: 0.17142857142857126}

MK (Markedness)

In statistics and psychology, the social science concept of markedness is quantified as a measure of how much one variable is marked as a predictor or possible cause of another, and is also known as Δp (deltaP) in simple two-choice cases

$$MK=PPV+NPV-1$$

In [50]:
cm.MK
Out[50]:
{0: 0.7777777777777777, 1: 0.2222222222222221, 2: 0.16666666666666652}

PLR (Positive likelihood ratio)

Likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954.

For more information visit here

$$(LR+)=\frac{TPR}{FPR}$$

In [51]:
cm.PLR
Out[51]:
{0: 'None', 1: 2.5000000000000004, 2: 1.4}

NLR (Negative likelihood ratio)

Likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954.

For more information visit here

$$(LR-)=\frac{FNR}{TNR}$$

In [52]:
cm.NLR
Out[52]:
{0: 0.4, 1: 0.625, 2: 0.7000000000000001}

DOR (Diagnostic odds ratio)

The diagnostic odds ratio is a measure of the effectiveness of a diagnostic test. It is defined as the ratio of the odds of the test being positive if the subject has a disease relative to the odds of the test being positive if the subject does not have the disease.

For more information visit here

$$DOR=\frac{LR+}{LR-}$$

In [53]:
cm.DOR
Out[53]:
{0: 'None', 1: 4.000000000000001, 2: 1.9999999999999998}

PRE (Prevalence)

Prevalence is a statistical concept referring to the number of cases of a disease that are present in a particular population at a given time (Reference Likelihood)

For more information visit here

$$Prevalence=\frac{P}{Population}$$

In [54]:
cm.PRE
Out[54]:
{0: 0.4166666666666667, 1: 0.16666666666666666, 2: 0.4166666666666667}

G (G-measure geometric mean of precision and sensitivity)

Geometric mean of precision and sensitivity

For more information visit here

$$G=\sqrt{PPV.TPR}$$

In [55]:
cm.G
Out[55]:
{0: 0.7745966692414834, 1: 0.408248290463863, 2: 0.5477225575051661}

RACC (Random accuracy)

The expected accuracy from a strategy of randomly guessing categories according to reference and response distributions

$$RACC=\frac{TOP\times P}{Population^2}$$

In [56]:
cm.RACC
Out[56]:
{0: 0.10416666666666667, 1: 0.041666666666666664, 2: 0.20833333333333334}
  • Notice : new in version 0.3
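
For class 0 in the matrix above, RACC = (TOP × P) / Population² = (3 × 5) / 12² ≈ 0.10417, which can be recomputed from the class counts:

>>> cm.TOP[0] * cm.P[0] / cm.POP[0] ** 2
0.10416666666666667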

RACCU (Random accuracy unbiased)

The expected accuracy from a strategy of randomly guessing categories according to the average of the reference and response distributions

$$RACCU=(\frac{TOP+P}{2\times Population})^2$$

In [57]:
cm.RACCU
Out[57]:
{0: 0.1111111111111111, 1: 0.04340277777777778, 2: 0.21006944444444442}
  • Notice : new in version 0.8.1

J (Jaccard index)

The Jaccard index, also known as Intersection over Union and the Jaccard similarity coefficient (originally coined coefficient de communauté by Paul Jaccard), is a statistic used for comparing the similarity and diversity of sample sets.

For more information visit here

$$J(A,B)=\frac{|A\cap B|}{|A\cup B|}=\frac{|A\cap B|}{|A|+|B|-|A\cap B|}$$

In [58]:
cm.J
Out[58]:
{0: 0.6, 1: 0.25, 2: 0.375}
  • Notice : new in version 0.9

IS (Information Score)

The amount of information needed to correctly classify an example into class C, whose prior probability is p(C), is defined as -log2(p(C)).

For more information visit here

$$IS=-log_2(\frac{TP+FN}{POP})+log_2(\frac{TP}{TP+FP})$$

In [59]:
cm.IS
Out[59]:
{0: 1.2630344058337937, 1: 0.9999999999999998, 2: 0.26303440583379367}
  • Notice : new in version 1.3

CEN (Confusion Entropy)

CEN is based upon the concept of entropy for evaluating classifier performance. By exploiting the misclassification information of confusion matrices, it evaluates the confusion level of the class distribution of misclassified samples. Both theoretical analysis and statistical results show that CEN is more discriminating than accuracy and RCI while remaining relatively consistent with them, and that it is more capable of measuring how well the samples of different classes are separated from each other.

For more information visit here

$$P_{i,j}^{j}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}(Matrix(j,k)+Matrix(k,j))}$$

$$P_{i,j}^{i}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}(Matrix(i,k)+Matrix(k,i))}$$

$$CEN_j=-\sum_{k=1,k\neq j}^{|C|}(P_{j,k}^jlog_{2(|C|-1)}(P_{j,k}^j)+P_{k,j}^jlog_{2(|C|-1)}(P_{k,j}^j))$$

In [60]:
cm.CEN
Out[60]:
{0: 0.25, 1: 0.49657842846620864, 2: 0.6044162769630221}
  • Notice : new in version 1.3

MCEN (Modified Confusion Entropy)

Modified version of CEN

For more information visit here

$$P_{i,j}^{j}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}(Matrix(j,k)+Matrix(k,j))-Matrix(j,j)}$$

$$P_{i,j}^{i}=\frac{Matrix(i,j)}{\sum_{k=1}^{|C|}(Matrix(i,k)+Matrix(k,i))-Matrix(i,i)}$$

$$MCEN_j=-\sum_{k=1,k\neq j}^{|C|}(P_{j,k}^jlog_{2(|C|-1)}(P_{j,k}^j)+P_{k,j}^jlog_{2(|C|-1)}(P_{k,j}^j))$$

In [61]:
cm.MCEN
Out[61]:
{0: 0.2643856189774724, 1: 0.5, 2: 0.6875}
  • Notice : new in version 1.3

Overall Statistics

Kappa (Nominal)

Kappa is a statistic which measures inter-rater agreement for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as kappa takes into account the possibility of the agreement occurring by chance.

For more information visit here

$$Kappa=\frac{ACC_{Overall}-RACC_{Overall}}{1-RACC_{Overall}}$$

In [62]:
cm.Kappa
Out[62]:
0.35483870967741943
  • Notice : new in version 0.3
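
The reported value can be recovered from the overall and random accuracies shown earlier (a consistency check, not an alternative API):

>>> (cm.Overall_ACC - cm.Overall_RACC) / (1 - cm.Overall_RACC)  # ≈ 0.35484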

Kappa Unbiased

The unbiased kappa value is defined in terms of total accuracy and a slightly different computation of expected likelihood that averages the reference and response probabilities

$$Kappa_{Unbiased}=\frac{ACC_{Overall}-RACCU_{Overall}}{1-RACCU_{Overall}}$$

In [63]:
cm.KappaUnbiased
Out[63]:
0.34426229508196726
  • Notice : new in version 0.8.1

Kappa No Prevalence

The kappa statistic adjusted for prevalence

$$Kappa_{NoPrevalence}=2 \times ACC_{Overall}-1$$

In [64]:
cm.KappaNoPrevalence
Out[64]:
0.16666666666666674
  • Notice : new in version 0.8.1

Kappa 95% CI

Kappa 95% Confidence Interval

$$SE_{Kappa}=\sqrt{\frac{ACC_{Overall}\times (1-ACC_{Overall})}{Population\times (1-RACC_{Overall})^2}}$$

$$Kappa \pm 1.96\times SE_{Kappa}$$

In [65]:
cm.Kappa_SE
Out[65]:
0.2203645326012817
In [66]:
cm.Kappa_CI
Out[66]:
(-0.07707577422109269, 0.7867531935759315)
  • Notice : new in version 0.7

Chi-Squared

Pearson's chi-squared test is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is suitable for unpaired data from large samples.

For more information visit here

$$\chi^2=\sum_{i=1}^n\sum_{j=1}^n\frac{(Matrix(i,j)-E(i,j))^2}{E(i,j)}$$

$$E(i,j)=\frac{TOP_j\times P_i}{Population}$$

In [67]:
cm.Chi_Squared
Out[67]:
6.6000000000000005
  • Notice : new in version 0.7
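
A short pure-Python sketch recomputing the statistic from the table above using these definitions (the population of this example is 12):

>>> pop = 12
>>> expected = {i: {j: cm.TOP[j] * cm.P[i] / pop for j in cm.classes} for i in cm.classes}
>>> sum((cm.table[i][j] - expected[i][j]) ** 2 / expected[i][j]
...     for i in cm.classes for j in cm.classes)  # ≈ 6.6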

Chi-Squared DF

Number of degrees of freedom of this confusion matrix for the chi-squared statistic

$$DF=(|Classes|-1)^2$$

In [68]:
cm.DF
Out[68]:
4
  • Notice : new in version 0.7

Phi-Squared

In statistics, the phi coefficient (or mean square contingency coefficient) is a measure of association for two binary variables. Introduced by Karl Pearson, this measure is similar to the Pearson correlation coefficient in its interpretation. In fact, a Pearson correlation coefficient estimated for two binary variables will return the phi coefficient

For more information visit here

$$\phi^2=\frac{\chi^2}{Population}$$

In [69]:
cm.Phi_Squared
Out[69]:
0.55
  • Notice : new in version 0.7

Cramer's V

In statistics, Cramér's V (sometimes referred to as Cramér's phi) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.

For more information visit here

$$V=\sqrt{\frac{\phi^2}{|Classes|-1}}$$

In [70]:
cm.V
Out[70]:
0.5244044240850758
  • Notice : new in version 0.7

95% CI

In statistics, a confidence interval (CI) is a type of interval estimate (of a population parameter) that is computed from the observed data. The confidence level is the frequency (i.e., the proportion) of possible confidence intervals that contain the true value of their corresponding parameter. In other words, if confidence intervals are constructed using a given confidence level in an infinite number of independent experiments, the proportion of those intervals that contain the true value of the parameter will match the confidence level.

For more information visit here

$$SE_{ACC}=\sqrt{\frac{ACC\times (1-ACC)}{Population}}$$

$$ACC \pm 1.96\times SE_{ACC}$$

In [71]:
cm.CI
Out[71]:
(0.30438856248221097, 0.8622781041844558)
In [72]:
cm.SE
Out[72]:
0.14231876063832777
  • Notice : new in version 0.7

Bennett et al.'s S score (Nominal)

Bennett, Alpert & Goldstein’s S is a statistical measure of inter-rater agreement. It was created by Bennett et al. in 1954. Bennett et al. suggested that adjusting inter-rater reliability to accommodate the percentage of rater agreement that might be expected by chance was a better measure than simple agreement between raters.

For more information visit here

$$p_c=\frac{1}{|C|}$$

$$S=\frac{ACC_{Overall}-p_c}{1-p_c}$$

In [73]:
cm.S
Out[73]:
0.37500000000000006
  • Notice : new in version 0.5

Scott's pi (Nominal)

Scott's pi (named after William A. Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement between the annotators, one of which is Scott's pi. Since automatically annotating text is a popular problem in natural language processing, and the goal is to get the computer program being developed to agree with the humans in the annotations it creates, assessing the extent to which humans agree with each other is important for establishing a reasonable upper limit on computer performance.

For more information visit here

$$p_c=\sum_{i=1}^{|C|}(\frac{TOP_i + P_i}{2\times Population})^2$$

$$\pi=\frac{ACC_{Overall}-p_c}{1-p_c}$$

In [74]:
cm.PI
Out[74]:
0.34426229508196726
  • Notice : new in version 0.5

Gwet's AC1

AC1 was originally introduced by Gwet in 2001 (Gwet, 2001). The interpretation of AC1 is similar to that of generalized kappa (Fleiss, 1971), which is used to assess inter-rater reliability when there are multiple raters. Gwet (2002) demonstrated that AC1 can overcome the limitations of kappa, which is sensitive to trait prevalence and raters' classification probabilities (i.e., marginal probabilities), and that AC1 provides a more robust measure of inter-rater reliability.

$$\pi_i=\frac{TOP_i + P_i}{2\times Population}$$

$$p_c=\frac{1}{|C|-1}\sum_{i=1}^{|C|}(\pi_i\times (1-\pi_i))$$

$$AC1=\frac{ACC_{Overall}-p_c}{1-p_c}$$

In [75]:
cm.AC1
Out[75]:
0.3893129770992367
  • Notice : new in version 0.5

Reference Entropy

The entropy of the decision problem itself as defined by the counts for the reference. The entropy of a distribution is the average negative log probability of outcomes

$$Likelihood_{Reference}=\frac{P_i}{Population}$$

$$Entropy_{Reference}=-\sum_{i=1}^{|C|}Likelihood_{Reference}(i)\times\log_{2}{Likelihood_{Reference}(i)}$$

In [76]:
cm.ReferenceEntropy
Out[76]:
1.4833557549816874
  • Notice : new in version 0.8.1

Response Entropy

The entropy of the response distribution. The entropy of a distribution is the average negative log probability of outcomes

$$Likelihood_{Response}=\frac{TOP_i}{Population}$$

$$Entropy_{Response}=-\sum_{i=1}^{|C|}Likelihood_{Response}(i)\times\log_{2}{Likelihood_{Response}(i)}$$

In [77]:
cm.ResponseEntropy
Out[77]:
1.5
  • Notice : new in version 0.8.1

Cross Entropy

The cross-entropy of the response distribution against the reference distribution. The cross-entropy is defined by the negative log probabilities of the response distribution weighted by the reference distribution

$$Likelihood_{Reference}=\frac{P_i}{Population}$$

$$Likelihood_{Response}=\frac{TOP_i}{Population}$$

$$Entropy_{Cross}=-\sum_{i=1}^{|C|}Likelihood_{Reference}(i)\times\log_{2}{Likelihood_{Response}(i)}$$

In [78]:
cm.CrossEntropy
Out[78]:
1.5833333333333335
  • Notice : new in version 0.8.1

Joint Entropy

The entropy of the joint reference and response distribution as defined by the underlying matrix

$$P^{'}(i,j)=\frac{Matrix(i,j)}{Population}$$

$$Entropy_{Joint}=-\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}P^{'}(i,j)\times\log_{2}{P^{'}(i,j)}$$

$$0\times\log_{2}{0}\equiv0$$

In [79]:
cm.JointEntropy
Out[79]:
2.4591479170272446
  • Notice : new in version 0.8.1

Conditional Entropy

The entropy of the distribution of categories in the response given that the reference category was as specified

$$P^{'}(j|i)=\frac{Matrix(i,j)}{P_i}$$

$$Entropy_{Conditional}=\sum_{i=1}^{|C|}\frac{P_i}{Population}\times(-\sum_{j=1}^{|C|}P^{'}(j|i)\times\log_{2}{P^{'}(j|i)})$$

In [80]:
cm.ConditionalEntropy
Out[80]:
0.9757921620455572
  • Notice : new in version 0.8.1

Kullback–Leibler (KL) divergence

In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution

For more information visit here

$$Likelihood_{Response}=\frac{TOP_i}{Population}$$

$$Likelihood_{Reference}=\frac{P_i}{Population}$$

$$Divergence=\sum_{i=1}^{|C|}Likelihood_{Reference}(i)\times\log_{2}{\frac{Likelihood_{Reference}(i)}{Likelihood_{Response}(i)}}$$

In [81]:
cm.KL
Out[81]:
0.09997757835164581
  • Notice : new in version 0.8.1

Mutual Information

Mutual information is defined as the Kullback–Leibler divergence between the product of the individual distributions and the joint distribution. Mutual information is symmetric. We could also subtract the conditional entropy of the response given the reference from the response entropy to get the same result.

$$P^{'}(i,j)=\frac{Matrix(i,j)}{Population}$$

$$Likelihood_{Reference}=\frac{P_i}{Population}$$

$$Likelihood_{Response}=\frac{TOP_i}{Population}$$

$$MI=\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}P^{'}(i,j)\times\log_{2}{\frac{P^{'}(i,j)}{Likelihood_{Reference}(i)\times Likelihood_{Response}(j)}}$$

$$MI=Entropy_{Response}-Entropy_{Conditional}$$

In [82]:
cm.MutualInformation
Out[82]:
0.5242078379544428
  • Notice : new in version 0.8.1
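
The second identity can be checked directly against the values reported above:

>>> cm.ResponseEntropy - cm.ConditionalEntropy  # ≈ 0.52421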

Goodman and Kruskal's lambda A

In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis.

For more information visit here

$$\lambda_A=\frac{\sum_{j=1}^{|C|}Max(Matrix(-,j))-Max(P)}{Population-Max(P)}$$

In [83]:
cm.LambdaA
Out[83]:
0.42857142857142855
  • Notice : new in version 0.8.1

Goodman and Kruskal's lambda B

In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis

For more information visit here

$$\lambda_B=\frac{\sum_{i=1}^{|C|}Max(Matrix(i,-))-Max(TOP)}{Population-Max(TOP)}$$

In [84]:
cm.LambdaB
Out[84]:
0.16666666666666666
  • Notice : new in version 0.8.1

SOA1 (Strength of Agreement: Landis and Koch benchmark)

Kappa           Strength of Agreement
< 0             Poor
0 – 0.20        Slight
0.21 – 0.40     Fair
0.41 – 0.60     Moderate
0.61 – 0.80     Substantial
0.81 – 1.00     Almost perfect
In [85]:
cm.SOA1
Out[85]:
'Fair'
  • Notice : new in version 0.3

SOA2 (Strength of Agreement: Fleiss’ benchmark)

Kappa           Strength of Agreement
< 0.40          Poor
0.40 – 0.75     Intermediate to Good
> 0.75          Excellent
In [86]:
cm.SOA2
Out[86]:
'Poor'
  • Notice : new in version 0.4

SOA3 (Strength of Agreement: Altman’s benchmark)

Kappa           Strength of Agreement
< 0.20          Poor
0.21 – 0.40     Fair
0.41 – 0.60     Moderate
0.61 – 0.80     Good
0.81 – 1.00     Very Good
In [87]:
cm.SOA3
Out[87]:
'Fair'
  • Notice : new in version 0.4

SOA4 (Strength of Agreement: Cicchetti’s benchmark)

Kappa           Strength of Agreement
< 0.40          Poor
0.40 – 0.59     Fair
0.60 – 0.74     Good
0.75 – 1.00     Excellent
In [88]:
cm.SOA4
Out[88]:
'Poor'
  • Notice : new in version 0.7

Overall_ACC

$$ACC_{Overall}=\frac{\sum_{i=1}^{|C|}TP_i}{Population}$$

In [89]:
cm.Overall_ACC
Out[89]:
0.5833333333333334
  • Notice : new in version 0.4

Overall_RACC

$$RACC_{Overall}=\sum_{i=1}^{|C|}RACC_i$$

In [90]:
cm.Overall_RACC
Out[90]:
0.3541666666666667
  • Notice : new in version 0.4

Overall_RACCU

$$RACCU_{Overall}=\sum_{i=1}^{|C|}RACCU_i$$

In [91]:
cm.Overall_RACCU
Out[91]:
0.3645833333333333
  • Notice : new in version 0.8.1

PPV_Micro

$$PPV_{Micro}=\frac{\sum_{i=1}^{|C|}TP_i}{\sum_{i=1}^{|C|}(TP_i+FP_i)}$$

In [92]:
cm.PPV_Micro
Out[92]:
0.5833333333333334
  • Notice : new in version 0.4

TPR_Micro

$$TPR_{Micro}=\frac{\sum_{i=1}^{|C|}TP_i}{\sum_{i=1}^{|C|}(TP_i+FN_i)}$$

In [93]:
cm.TPR_Micro
Out[93]:
0.5833333333333334
  • Notice : new in version 0.4

PPV_Macro

$$PPV_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TP_i}{TP_i+FP_i}$$

In [94]:
cm.PPV_Macro
Out[94]:
0.611111111111111
  • Notice : new in version 0.4

TPR_Macro

$$TPR_{Macro}=\frac{1}{|C|}\sum_{i=1}^{|C|}\frac{TP_i}{TP_i+FN_i}$$

In [95]:
cm.TPR_Macro
Out[95]:
0.5666666666666668
  • Notice : new in version 0.4

Overall_J

$$J_{Mean}=\frac{1}{|C|}\sum_{i=1}^{|C|}J_i$$

$$J_{Sum}=\sum_{i=1}^{|C|}J_i$$

$$J_{Overall}=(J_{Sum},J_{Mean})$$

In [96]:
cm.Overall_J
Out[96]:
(1.225, 0.4083333333333334)
  • Notice : new in version 0.9

Hamming Loss

The Hamming loss computes the average Hamming loss, or Hamming distance, between two sets of samples

$$L_{Hamming}=\frac{1}{Population}\sum_{i=1}^{Population}1(y_i \neq \widehat{y}_i)$$

In [97]:
cm.HammingLoss
Out[97]:
0.41666666666666663
  • Notice : new in version 1.0

Zero-one Loss

$$L_{0-1}=\sum_{i=1}^{Population}1(y_i \neq \widehat{y}_i)$$

In [98]:
cm.ZeroOneLoss
Out[98]:
5
  • Notice : new in version 1.1

NIR (No Information Rate)

The no information rate is the accuracy that can be achieved without using any information from the input, i.e., by always predicting the largest class

$$NIR=\frac{1}{Population}Max(P)$$

In [99]:
cm.NIR
Out[99]:
0.4166666666666667
  • Notice : new in version 1.2

P-Value

$$x=\sum_{i=1}^{|C|}TP_{i}$$

$$p=NIR$$

$$n=Population$$

$$P-Value_{(ACC > NIR)}=1-\sum_{i=0}^{x-1}\left(\begin{array}{c}n\\ i\end{array}\right)p^{i}(1-p)^{n-i}$$

In [100]:
cm.PValue
Out[100]:
0.18926430237560654
  • Notice : new in version 1.2
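
A pure-Python sketch of this one-sided binomial test for the example above (math.comb needs Python 3.8+; here x = 7 is the sum of the TP counts, p = NIR = 5/12, and n = 12):

>>> from math import comb
>>> n, p, x = 12, 5 / 12, 7
>>> 1 - sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(x))  # ≈ 0.18926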

Overall_CEN

$$P_j=\frac{\sum_{k=1}^{|C|}(Matrix(j,k)+Matrix(k,j))}{2\sum_{k,l=1}^{|C|}Matrix(k,l)}$$

$$CEN_{Overall}=\sum_{j=1}^{|C|}(P_jCEN_j)$$

In [101]:
cm.Overall_CEN
Out[101]:
0.4638112995385119
  • Notice : new in version 1.3

Overall_MCEN

$$\alpha=\begin{cases}1 & |C| > 2\\0 & |C| = 2\end{cases}$$

$$P_j=\frac{\sum_{k=1}^{|C|}(Matrix(j,k)+Matrix(k,j))-Matrix(j,j)}{2\sum_{k,l=1}^{|C|}Matrix(k,l)-\alpha \sum_{k=1}^{|C|}Matrix(k,k)}$$

$$MCEN_{Overall}=\sum_{j=1}^{|C|}(P_jMCEN_j)$$

In [102]:
cm.Overall_MCEN
Out[102]:
0.5189369467580801
  • Notice : new in version 1.3

Print

Full

In [103]:
print(cm)
Predict          0    1    2    
Actual
0                3    0    2    
1                0    1    1    
2                0    2    3    




Overall Statistics : 

95% CI                                                           (0.30439,0.86228)
Bennett_S                                                        0.375
Chi-Squared                                                      6.6
Chi-Squared DF                                                   4
Conditional Entropy                                              0.97579
Cramer_V                                                         0.5244
Cross Entropy                                                    1.58333
Gwet_AC1                                                         0.38931
Hamming Loss                                                     0.41667
Joint Entropy                                                    2.45915
KL Divergence                                                    0.09998
Kappa                                                            0.35484
Kappa 95% CI                                                     (-0.07708,0.78675)
Kappa No Prevalence                                              0.16667
Kappa Standard Error                                             0.22036
Kappa Unbiased                                                   0.34426
Lambda A                                                         0.42857
Lambda B                                                         0.16667
Mutual Information                                               0.52421
NIR                                                              0.41667
Overall_ACC                                                      0.58333
Overall_CEN                                                      0.46381
Overall_J                                                        (1.225,0.40833)
Overall_MCEN                                                     0.51894
Overall_RACC                                                     0.35417
Overall_RACCU                                                    0.36458
P-Value                                                          0.18926
PPV_Macro                                                        0.61111
PPV_Micro                                                        0.58333
Phi-Squared                                                      0.55
Reference Entropy                                                1.48336
Response Entropy                                                 1.5
Scott_PI                                                         0.34426
Standard Error                                                   0.14232
Strength_Of_Agreement(Altman)                                    Fair
Strength_Of_Agreement(Cicchetti)                                 Poor
Strength_Of_Agreement(Fleiss)                                    Poor
Strength_Of_Agreement(Landis and Koch)                           Fair
TPR_Macro                                                        0.56667
TPR_Micro                                                        0.58333
Zero-one Loss                                                    5

Class Statistics :

Classes                                                          0                       1                       2                       
ACC(Accuracy)                                                    0.83333                 0.75                    0.58333                 
BM(Informedness or bookmaker informedness)                       0.6                     0.3                     0.17143                 
CEN(Confusion entropy)                                           0.25                    0.49658                 0.60442                 
DOR(Diagnostic odds ratio)                                       None                    4.0                     2.0                     
ERR(Error rate)                                                  0.16667                 0.25                    0.41667                 
F0.5(F0.5 score)                                                 0.88235                 0.35714                 0.51724                 
F1(F1 score - harmonic mean of precision and sensitivity)        0.75                    0.4                     0.54545                 
F2(F2 score)                                                     0.65217                 0.45455                 0.57692                 
FDR(False discovery rate)                                        0.0                     0.66667                 0.5                     
FN(False negative/miss/type 2 error)                             2                       1                       2                       
FNR(Miss rate or false negative rate)                            0.4                     0.5                     0.4                     
FOR(False omission rate)                                         0.22222                 0.11111                 0.33333                 
FP(False positive/type 1 error/false alarm)                      0                       2                       3                       
FPR(Fall-out or false positive rate)                             0.0                     0.2                     0.42857                 
G(G-measure geometric mean of precision and sensitivity)         0.7746                  0.40825                 0.54772                 
IS(Information score)                                            1.26303                 1.0                     0.26303                 
J(Jaccard index)                                                 0.6                     0.25                    0.375                   
LR+(Positive likelihood ratio)                                   None                    2.5                     1.4                     
LR-(Negative likelihood ratio)                                   0.4                     0.625                   0.7                     
MCC(Matthews correlation coefficient)                            0.68313                 0.2582                  0.16903                 
MCEN(Modified confusion entropy)                                 0.26439                 0.5                     0.6875                  
MK(Markedness)                                                   0.77778                 0.22222                 0.16667                 
N(Condition negative)                                            7                       10                      7                       
NPV(Negative predictive value)                                   0.77778                 0.88889                 0.66667                 
P(Condition positive or support)                                 5                       2                       5                       
POP(Population)                                                  12                      12                      12                      
PPV(Precision or positive predictive value)                      1.0                     0.33333                 0.5                     
PRE(Prevalence)                                                  0.41667                 0.16667                 0.41667                 
RACC(Random accuracy)                                            0.10417                 0.04167                 0.20833                 
RACCU(Random accuracy unbiased)                                  0.11111                 0.0434                  0.21007                 
TN(True negative/correct rejection)                              7                       8                       4                       
TNR(Specificity or true negative rate)                           1.0                     0.8                     0.57143                 
TON(Test outcome negative)                                       9                       9                       6                       
TOP(Test outcome positive)                                       3                       3                       6                       
TP(True positive/hit)                                            3                       1                       3                       
TPR(Sensitivity, recall, hit rate, or true positive rate)        0.6                     0.5                     0.6                     

Matrix

In [104]:
cm.matrix()
Predict          0    1    2    
Actual
0                3    0    2    
1                0    1    1    
2                0    2    3    

Normalized Matrix

In [105]:
cm.normalized_matrix()
Predict          0               1               2               
Actual
0                0.6             0.0             0.4             
1                0.0             0.5             0.5             
2                0.0             0.4             0.6             

Stat

In [106]:
cm.stat()
Overall Statistics : 

95% CI                                                           (0.30439,0.86228)
Bennett_S                                                        0.375
Chi-Squared                                                      6.6
Chi-Squared DF                                                   4
Conditional Entropy                                              0.97579
Cramer_V                                                         0.5244
Cross Entropy                                                    1.58333
Gwet_AC1                                                         0.38931
Hamming Loss                                                     0.41667
Joint Entropy                                                    2.45915
KL Divergence                                                    0.09998
Kappa                                                            0.35484
Kappa 95% CI                                                     (-0.07708,0.78675)
Kappa No Prevalence                                              0.16667
Kappa Standard Error                                             0.22036
Kappa Unbiased                                                   0.34426
Lambda A                                                         0.42857
Lambda B                                                         0.16667
Mutual Information                                               0.52421
NIR                                                              0.41667
Overall_ACC                                                      0.58333
Overall_CEN                                                      0.46381
Overall_J                                                        (1.225,0.40833)
Overall_MCEN                                                     0.51894
Overall_RACC                                                     0.35417
Overall_RACCU                                                    0.36458
P-Value                                                          0.18926
PPV_Macro                                                        0.61111
PPV_Micro                                                        0.58333
Phi-Squared                                                      0.55
Reference Entropy                                                1.48336
Response Entropy                                                 1.5
Scott_PI                                                         0.34426
Standard Error                                                   0.14232
Strength_Of_Agreement(Altman)                                    Fair
Strength_Of_Agreement(Cicchetti)                                 Poor
Strength_Of_Agreement(Fleiss)                                    Poor
Strength_Of_Agreement(Landis and Koch)                           Fair
TPR_Macro                                                        0.56667
TPR_Micro                                                        0.58333
Zero-one Loss                                                    5

Class Statistics :

Classes                                                          0                       1                       2                       
ACC(Accuracy)                                                    0.83333                 0.75                    0.58333                 
BM(Informedness or bookmaker informedness)                       0.6                     0.3                     0.17143                 
CEN(Confusion entropy)                                           0.25                    0.49658                 0.60442                 
DOR(Diagnostic odds ratio)                                       None                    4.0                     2.0                     
ERR(Error rate)                                                  0.16667                 0.25                    0.41667                 
F0.5(F0.5 score)                                                 0.88235                 0.35714                 0.51724                 
F1(F1 score - harmonic mean of precision and sensitivity)        0.75                    0.4                     0.54545                 
F2(F2 score)                                                     0.65217                 0.45455                 0.57692                 
FDR(False discovery rate)                                        0.0                     0.66667                 0.5                     
FN(False negative/miss/type 2 error)                             2                       1                       2                       
FNR(Miss rate or false negative rate)                            0.4                     0.5                     0.4                     
FOR(False omission rate)                                         0.22222                 0.11111                 0.33333                 
FP(False positive/type 1 error/false alarm)                      0                       2                       3                       
FPR(Fall-out or false positive rate)                             0.0                     0.2                     0.42857                 
G(G-measure geometric mean of precision and sensitivity)         0.7746                  0.40825                 0.54772                 
IS(Information score)                                            1.26303                 1.0                     0.26303                 
J(Jaccard index)                                                 0.6                     0.25                    0.375                   
LR+(Positive likelihood ratio)                                   None                    2.5                     1.4                     
LR-(Negative likelihood ratio)                                   0.4                     0.625                   0.7                     
MCC(Matthews correlation coefficient)                            0.68313                 0.2582                  0.16903                 
MCEN(Modified confusion entropy)                                 0.26439                 0.5                     0.6875                  
MK(Markedness)                                                   0.77778                 0.22222                 0.16667                 
N(Condition negative)                                            7                       10                      7                       
NPV(Negative predictive value)                                   0.77778                 0.88889                 0.66667                 
P(Condition positive or support)                                 5                       2                       5                       
POP(Population)                                                  12                      12                      12                      
PPV(Precision or positive predictive value)                      1.0                     0.33333                 0.5                     
PRE(Prevalence)                                                  0.41667                 0.16667                 0.41667                 
RACC(Random accuracy)                                            0.10417                 0.04167                 0.20833                 
RACCU(Random accuracy unbiased)                                  0.11111                 0.0434                  0.21007                 
TN(True negative/correct rejection)                              7                       8                       4                       
TNR(Specificity or true negative rate)                           1.0                     0.8                     0.57143                 
TON(Test outcome negative)                                       9                       9                       6                       
TOP(Test outcome positive)                                       3                       3                       6                       
TP(True positive/hit)                                            3                       1                       3                       
TPR(Sensitivity, recall, hit rate, or true positive rate)        0.6                     0.5                     0.6                     

  • Notice : cm.params() in previous versions (<0.2)

Save

.pycm file

In [107]:
cm.save_stat("cm1")
Out[107]:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\cm1.pycm',
 'Status': True}
In [108]:
cm.save_stat("cm1asdasd/")
Out[108]:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.pycm'",
 'Status': False}
  • Notice : new in version 0.4

HTML

In [109]:
cm.save_html("cm1")
Out[109]:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\cm1.html',
 'Status': True}
In [110]:
cm.save_html("cm1asdasd/")
Out[110]:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.html'",
 'Status': False}
  • Notice : new in version 0.5

CSV

In [111]:
cm.save_csv("cm1")
Out[111]:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\cm1.csv',
 'Status': True}
In [112]:
cm.save_csv("cm1asdasd/")
Out[112]:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.csv'",
 'Status': False}
  • Notice : new in version 0.6

OBJ

In [113]:
cm.save_obj("cm1")
Out[113]:
{'Message': 'D:\\For Asus Laptop\\projects\\pycm\\Document\\cm1.obj',
 'Status': True}
In [114]:
cm.save_obj("cm1asdasd/")
Out[114]:
{'Message': "[Errno 2] No such file or directory: 'cm1asdasd/.obj'",
 'Status': False}
  • Notice : new in version 0.9.5

Input Errors

In [115]:
try:
    cm2=ConfusionMatrix(y_actu, 2)
except pycmVectorError as e:
    print(str(e))
Input Vectors Must Be List
In [116]:
try:
    cm3=ConfusionMatrix(y_actu, [1,2,3])
except pycmVectorError as e:
    print(str(e))
Input Vectors Must Be The Same Length
In [117]:
try:
    cm_4 = ConfusionMatrix([], [])
except pycmVectorError as e:
    print(str(e))
Input Vectors Are Empty
In [118]:
try:
    cm_5 = ConfusionMatrix([1,1,1,], [1,1,1,1])
except pycmVectorError as e:
    print(str(e))
Input Vectors Must Be The Same Length
In [119]:
try:
    cm3=ConfusionMatrix(matrix={})
except pycmMatrixError as e:
    print(str(e))
Input Confusion Matrix Format Error
In [120]:
try:
    cm_4=ConfusionMatrix(matrix={1:{1:2,"1":2},"1":{1:2,"1":3}})
except pycmMatrixError as e:
    print(str(e))
Input Matrix Classes Must Be Same Type
In [121]:
try:
    cm_5=ConfusionMatrix(matrix={1:{1:2}})
except pycmVectorError as e:
    print(str(e))
Number Of Classes < 2
  • Notice : updated in version 0.8

Examples

Example-1 (Comparison of three different classifiers)

Example-2 (How to plot via matplotlib)

Example-3 (Activation Threshold)

Example-4 (Load From File)

Example-5 (Sample Weights)

References

1- J. R. Landis, G. G. Koch, “The measurement of observer agreement for categorical data. Biometrics,” in International Biometric Society, pp. 159–174, 1977.
2- D. M. W. Powers, “Evaluation: from precision, recall and f-measure to roc, informedness, markedness & correlation,” in Journal of Machine Learning Technologies, pp.37-63, 2011.
3- C. Sammut, G. Webb, “Encyclopedia of Machine Learning” in Springer, 2011.
4- J. L. Fleiss, “Measuring nominal scale agreement among many raters,” in Psychological Bulletin, pp. 378-382, 1971.
5- D.G. Altman, “Practical Statistics for Medical Research,” in Chapman and Hall, 1990.
6- K. L. Gwet, “Computing inter-rater reliability and its variance in the presence of high agreement,” in The British Journal of Mathematical and Statistical Psychology, pp. 29–48, 2008.
7- W. A. Scott, “Reliability of content analysis: The case of nominal scaling,” in Public Opinion Quarterly, pp. 321–325, 1955.
8- E. M. Bennett, R. Alpert, and A. C. Goldstein, “Communication through limited response questioning,” in The Public Opinion Quarterly, pp. 303–308, 1954.
9- D. V. Cicchetti, "Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology," in Psychological Assessment, pp. 284–290, 1994.
10- R.B. Davies, "Algorithm AS155: The Distributions of a Linear Combination of χ2 Random Variables," in Journal of the Royal Statistical Society, pp. 323–333, 1980.
11- S. Kullback, R. A. Leibler "On information and sufficiency," in Annals of Mathematical Statistics, pp. 79–86, 1951.
12- L. A. Goodman, W. H. Kruskal, "Measures of Association for Cross Classifications, IV: Simplification of Asymptotic Variances," in Journal of the American Statistical Association, pp. 415–421, 1972.
13- L. A. Goodman, W. H. Kruskal, "Measures of Association for Cross Classifications III: Approximate Sampling Theory," in Journal of the American Statistical Association, pp. 310–364, 1963.
14- T. Byrt, J. Bishop and J. B. Carlin, “Bias, prevalence, and kappa,” in Journal of Clinical Epidemiology pp. 423-429, 1993.
15- M. Shepperd, D. Bowes, and T. Hall, “Researcher Bias: The Use of Machine Learning in Software Defect Prediction,” in IEEE Transactions on Software Engineering, pp. 603-616, 2014.
16- X. Deng, Q. Liu, Y. Deng, and S. Mahadevan, “An improved method to construct basic probability assignment based on the confusion matrix for classification problem, ” in Information Sciences, pp.250-261, 2016.
17- J.-M. Wei, X.-Y. Yuan, Q.-H. Hu, and S.-Q. Wang, “A novel measure for evaluating classifiers,” in Expert Systems with Applications, pp. 3799–3809, 2010.
18- I. Kononenko and I. Bratko, “Information-based evaluation criterion for classifier’s performance,” in Machine Learning, pp. 67–80, 1991.
19- R. Delgado and J. D. Núñez-González, “Enhancing confusion entropy as measure for evaluating classifiers,” in International Joint Conference SOCO’18-CISIS’18-ICEUTE’18, Advances in Intelligent Systems and Computing, vol. 771, Springer, 2019.