Definition:Cohen's Kappa Statistic


Definition

Let two observers $A$ and $B$ independently assign each of a set of observations to one of $2$ or more categories.

Cohen's kappa statistic $\kappa$ is a measure of agreement between $A$ and $B$.


Let there be $N$ observations.

Let $n$ denote the number of agreements over all categories.

Let $p_{\mathrm {obs} } := \dfrac n N$ be the observed proportion of agreements.

Let $p_{\mathrm {exp} }$ denote the expected proportion of agreements over all categories under random assignment, as calculated in the usual manner for a contingency table:

$p_{\mathrm {exp} } = \dfrac 1 {N^2} \sum_i r_i c_i$

where $r_i$ and $c_i$ denote the row total and column total for category $i$.

Then:

$\kappa = \dfrac {p_{\mathrm {obs} } - p_{\mathrm {exp} } } {1 - p_{\mathrm {exp} } }$

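As a minimal computational sketch (not part of the formal definition): assuming a square contingency table whose $(i, j)$ entry counts the observations placed in category $i$ by $A$ and category $j$ by $B$, the hypothetical function cohens_kappa below computes $\kappa$ using NumPy.

import numpy as np

def cohens_kappa(table):
    # table[i][j] = number of observations placed in category i by A
    # and in category j by B (a square contingency table)
    table = np.asarray(table, dtype=float)
    N = table.sum()                # total number of observations
    p_obs = np.trace(table) / N    # observed proportion of agreements
    r = table.sum(axis=1)          # A's category totals (row totals)
    c = table.sum(axis=0)          # B's category totals (column totals)
    p_exp = np.dot(r, c) / N**2    # expected agreement under random assignment
    return (p_obs - p_exp) / (1 - p_exp)

# Example: perfect agreement on a 2-category table gives kappa = 1
print(cohens_kappa([[10, 0], [0, 10]]))   # 1.0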

Examples

Medical Diagnosis

Let there be $80$ patients claiming to suffer from depression.

Let there be $2$ doctors who are to assess whether or not it is appropriate to treat each patient with a particular antidepressant drug.


In $32$ cases, both agree that treatment is appropriate.

In $35$ cases, both agree that treatment is not appropriate.

In the remaining $13$ cases, they disagree: one doctor believes treatment is appropriate, while the other does not.


Then Cohen's kappa statistic $\kappa$ evaluates to:

$\kappa = 0 \cdotp 675$
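
This value can be verified as follows. The split of the $13$ disagreements is not given above; assume they split as $8$ cases where only the first doctor considers treatment appropriate and $5$ cases where only the second does (one split consistent with the stated value). Then:

$p_{\mathrm {obs} } = \dfrac {32 + 35} {80} = 0 \cdotp 8375$

The row totals are $40$ and $40$, and the column totals are $37$ and $43$, so:

$p_{\mathrm {exp} } = \dfrac {40 \times 37 + 40 \times 43} {80^2} = \dfrac {3200} {6400} = 0 \cdotp 5$

Hence:

$\kappa = \dfrac {0 \cdotp 8375 - 0 \cdotp 5} {1 - 0 \cdotp 5} = 0 \cdotp 675$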


Also known as

Cohen's kappa statistic is also known as:

  • Cohen's kappa coefficient
  • Cohen's kappa


Also see

  • Results about Cohen's kappa statistic can be found here.


Source of Name

This entry was named for Jacob Cohen.


Historical Note

Cohen's kappa statistic was devised by Jacob Cohen in $1960$.


Linguistic Note

The kappa in the name of Cohen's kappa statistic is the Greek letter $\kappa$, which is used to denote it.

