KappaAcc: A program for assessing the adequacy of kappa

Roger Bakeman

Behav Res Methods. 2023 Feb;55(2):633-638. doi: 10.3758/s13428-022-01836-1. Epub 2022 Apr 5.
Abstract

Categorical cutpoints used to assess the adequacy of various statistics, such as small, medium, and large for correlation coefficients of .10, .30, and .50 (Cohen, 1988), are as useful as they are arbitrary, but not all statistics are suitable candidates for categorical cutpoints. One such statistic is kappa, which gauges inter-observer agreement corrected for chance (Cohen, 1960). Depending on circumstances, a specific value of kappa may be judged adequate in one case but not in another. Thus, no one value of kappa can be regarded as universally acceptable, and the question for investigators should be whether observers are accurate enough, not whether kappa is big enough. A principled way to assess whether a specific value of kappa is adequate is to estimate observer accuracy: how accurate would simulated observers need to be to have generated the specific value of kappa obtained by the actual observers, given the specific circumstances? Estimating observer accuracy based on a kappa table the user provides is what KappaAcc, the program described here, does.
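
As a rough illustration of the underlying idea only, not of KappaAcc's actual algorithm, the following Python sketch simulates two fallible observers who each record the true code with a given accuracy and err uniformly otherwise, computes Cohen's kappa from the resulting agreement table, and searches for the accuracy that would reproduce an observed kappa. The function names, the uniform-error model, and the example base rates are assumptions made for this sketch.

import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square agreement (confusion) table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                          # observed agreement
    pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def simulate_kappa(accuracy, base_rates, n_trials=100_000, rng=None):
    """Expected kappa for two simulated observers who each record the true
    code with probability `accuracy` and otherwise err uniformly."""
    rng = np.random.default_rng(rng)
    k = len(base_rates)
    truth = rng.choice(k, size=n_trials, p=base_rates)

    def observe(truth):
        codes = truth.copy()
        errs = rng.random(n_trials) > accuracy
        # on an error, pick one of the other k - 1 codes at random
        codes[errs] = (truth[errs] + rng.integers(1, k, errs.sum())) % k
        return codes

    a, b = observe(truth), observe(truth)
    table = np.zeros((k, k))
    np.add.at(table, (a, b), 1)
    return cohens_kappa(table)

def estimate_accuracy(observed_kappa, base_rates, lo=0.5, hi=1.0, tol=1e-3):
    """Bisect on accuracy until the simulated kappa matches the observed one."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if simulate_kappa(mid, base_rates, rng=0) < observed_kappa:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: how accurate would observers need to be to produce kappa = .60
# with three codes at base rates of 50%, 30%, and 20%?
print(estimate_accuracy(0.60, [0.5, 0.3, 0.2]))

The returned value is the simulated-observer accuracy consistent with the observed kappa under these assumptions, which is the kind of quantity the abstract refers to; KappaAcc itself works from the kappa table the user provides.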

Keywords: Kappa; Kappa accuracy computer program; Statistics.

References

    1. Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). Cambridge University Press.
    2. Bakeman, R., & Quera, V. (2011). Sequential analysis and observational methods for the behavioral sciences. Cambridge University Press.
    3. Bakeman, R., Quera, V., McArthur, D., & Robinson, B. F. (1997). Detecting sequential patterns and determining their reliability with fallible observers. Psychological Methods, 2(4), 357–370. https://doi.org/10.1037/1082-989X.2.4.357
    4. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104
    5. Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4), 213–220. https://doi.org/10.1037/h0026256