Category:Definitions/Cohen's Kappa Statistic

From ProofWiki

This category contains definitions related to Cohen's Kappa Statistic.
Related results can be found in Category:Cohen's Kappa Statistic.


Let two observers $A$ and $B$ independently classify each of a set of observations into $2$ or more categories.

Cohen's kappa statistic $\kappa$ is a measure of agreement between $A$ and $B$.


Let there be $N$ observations.

Let $n$ denote the number of agreements over all categories.

Let $p_{\mathrm {obs} } := \dfrac n N$ be the observed proportion of agreements.

Let $p_{\mathrm {exp} }$ denote the expected proportion of agreements over all categories under random assignment, calculated as for the expected cell counts of a contingency table: if $a_i$ and $b_i$ denote the number of observations that $A$ and $B$ respectively assign to category $i$, then:

$p_{\mathrm {exp} } := \ds \sum_i \frac {a_i} N \cdot \frac {b_i} N$

Then:

$\kappa = \dfrac {p_{\mathrm {obs} } - p_{\mathrm {exp} } } {1 - p_{\mathrm {exp} } }$
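The definition above can be sketched in code. The following is an illustrative implementation, not part of the source definition; the contingency table used in the usage example is hypothetical, with $\tt {table}[i][j]$ counting observations placed in category $i$ by $A$ and category $j$ by $B$.

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table.

    table[i][j] = number of observations that observer A put in
    category i and observer B put in category j.
    """
    N = sum(sum(row) for row in table)
    k = len(table)
    # Observed proportion of agreements: diagonal counts over N.
    p_obs = sum(table[i][i] for i in range(k)) / N
    # Marginal totals for each observer.
    a = [sum(row) for row in table]                         # observer A
    b = [sum(table[i][j] for i in range(k)) for j in range(k)]  # observer B
    # Expected proportion of agreements under random assignment
    # with the same marginals.
    p_exp = sum(a[i] * b[i] for i in range(k)) / N ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 2-category example: agreement on 20 + 15 = 35 of 50
# observations, so p_obs = 0.7, p_exp = 0.5, giving kappa = 0.4.
table = [[20, 5],
         [10, 15]]
print(cohens_kappa(table))
```

A value of $\kappa = 1$ indicates perfect agreement, while $\kappa = 0$ indicates agreement no better than chance.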
