Psychometrika. 2023 Sep;88(3):1002-1025. doi: 10.1007/s11336-023-09919-4. Epub 2023 Jun 8.

Measuring Agreement Using Guessing Models and Knowledge Coefficients

Jonas Moss. Psychometrika. 2023 Sep.

Abstract

Several measures of agreement, such as the Perreault-Leigh coefficient, the AC1, and the recent coefficient of van Oest, are based on explicit models of how judges make their ratings. To handle such measures of agreement under a common umbrella, we propose a class of models called guessing models, which contains most models of how judges make their ratings. Every guessing model has an associated measure of agreement we call the knowledge coefficient. Under certain assumptions on the guessing models, the knowledge coefficient will be equal to the multi-rater Cohen's kappa, Fleiss' kappa, the Brennan-Prediger coefficient, or other less-established measures of agreement. We provide several sample estimators of the knowledge coefficient, valid under varying assumptions, and their asymptotic distributions. After a sensitivity analysis and a simulation study of confidence intervals, we find that the Brennan-Prediger coefficient typically outperforms the others, with much better coverage under unfavorable circumstances.
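
The coefficients named in the abstract are chance-corrected agreement measures of the common form (p_a - p_e)/(1 - p_e), differing only in how the chance-agreement term p_e is modeled. As a point of reference only, and not the paper's knowledge-coefficient estimators, the sketch below computes two of them, Fleiss' kappa and the Brennan-Prediger coefficient, from a matrix of rating counts; the function names and toy data are illustrative.

    import numpy as np

    def chance_corrected(p_a, p_e):
        # Generic chance-corrected agreement: (observed - chance) / (1 - chance).
        return (p_a - p_e) / (1.0 - p_e)

    def observed_agreement(counts):
        # counts[i, c] = number of raters assigning item i to category c;
        # every row sums to the same number of raters r.
        counts = np.asarray(counts, dtype=float)
        n, _ = counts.shape
        r = counts[0].sum()
        # Proportion of agreeing rater pairs, averaged over items.
        return (counts * (counts - 1)).sum() / (n * r * (r - 1))

    def fleiss_kappa(counts):
        # Chance agreement from the pooled category proportions.
        counts = np.asarray(counts, dtype=float)
        p_c = counts.sum(axis=0) / counts.sum()
        return chance_corrected(observed_agreement(counts), (p_c ** 2).sum())

    def brennan_prediger(counts):
        # Chance agreement fixed at 1/C for C categories.
        counts = np.asarray(counts, dtype=float)
        return chance_corrected(observed_agreement(counts), 1.0 / counts.shape[1])

    # Toy data: 5 items rated by 4 judges into 3 categories.
    ratings = [[4, 0, 0],
               [2, 2, 0],
               [0, 3, 1],
               [1, 1, 2],
               [0, 0, 4]]
    print(fleiss_kappa(ratings))      # ~0.399
    print(brennan_prediger(ratings))  # 0.400

In the guessing-model framing, the Brennan-Prediger choice p_e = 1/C corresponds to judges who, when they do not know the correct label, guess uniformly at random over the C categories.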

Keywords: AC1; Agreement; Cohen’s kappa; Interrater reliability.

Conflict of interest statement

The author does not have any conflicts of interest to disclose.

References

    1. Aickin M. Maximum likelihood estimation of agreement in the constant predictive probability model, and its relation to Cohen’s kappa. Biometrics. 1990;46(2):293–302. doi: 10.2307/2531434.
    2. Berry KJ, Mielke PW. A generalization of Cohen’s kappa agreement measure to interval measurement and multiple raters. Educational and Psychological Measurement. 1988;48(4):921–933. doi: 10.1177/0013164488484007.
    3. Brennan RL, Prediger DJ. Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement. 1981;41(3):687–699. doi: 10.1177/001316448104100307.
    4. Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 1960;20(1):37–46. doi: 10.1177/001316446002000104.
    5. Cohen J. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin. 1968;70(4):213–220. doi: 10.1037/h0026256.
