Large-Sample Variance of Fleiss Generalized Kappa
- PMID: 34267400
- PMCID: PMC8243202
- DOI: 10.1177/0013164420973080
Abstract
Cohen's kappa coefficient was originally proposed for two raters only; it was later extended to an arbitrarily large number of raters and became known as Fleiss' generalized kappa. Fleiss' generalized kappa and its large-sample variance are still widely used by researchers and have been implemented in several software packages, including, among others, SPSS and the R package "rel." The purpose of this article is to show that the large-sample variance of Fleiss' generalized kappa is systematically misused, is invalid as a measure of the precision of kappa, and cannot be used to construct confidence intervals. A general-purpose variance expression is proposed that can be used in any statistical inference procedure. A Monte Carlo experiment demonstrating the validity of the new variance estimation procedure is presented.
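For readers unfamiliar with the statistic under discussion, the following is a minimal sketch of Fleiss' generalized kappa itself (the point estimate, not the disputed variance), computed from a subjects-by-categories count matrix following Fleiss (1971). The function name and data layout are illustrative, not taken from any package named in the abstract.

```python
def fleiss_kappa(counts):
    """Fleiss' generalized kappa from an N x k matrix of category counts.

    counts[i][j] = number of raters who assigned subject i to category j;
    every row is assumed to sum to the same number of raters n.
    """
    N = len(counts)             # number of subjects
    n = sum(counts[0])          # raters per subject (assumed constant)
    k = len(counts[0])          # number of categories
    # Marginal proportion of ratings falling in each category.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # Mean per-subject agreement: observed agreement P_bar.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # Chance agreement under the marginal distribution.
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)
```

For example, `fleiss_kappa([[2, 0], [0, 2]])` (two raters agreeing perfectly on two subjects) returns 1.0, while `fleiss_kappa([[1, 1], [1, 1]])` (maximal disagreement) returns -1.0.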
Keywords: Cohen kappa; Fleiss kappa; Gwet AC1; interrater reliability.
© The Author(s) 2020.
Conflict of interest statement
Declaration of Conflicting Interests: The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
- Cohen J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46. doi:10.1177/001316446002000104
- Conger A. J. (1980). Integration and generalization of kappas for multiple raters. Psychological Bulletin, 88(2), 322-328. doi:10.1037/0033-2909.88.2.322
- Fleiss J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382. doi:10.1037/h0031619