Assessing agreement
- PMID: 2716662
- DOI: 10.5694/j.1326-5377.1989.tb136531.x
Abstract
Formal evaluation of the ability of clinicians and researchers to agree, for example in the clinical assessment of patients, is becoming increasingly important. Two measures of agreement, kappa and the intraclass correlation coefficient, are described and illustrated. The calculation of confidence intervals for these statistics by means of the "bootstrap" method is also discussed.
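As a rough illustration of the two ideas named in the abstract, the sketch below computes Cohen's kappa for two raters assigning categorical ratings and attaches a percentile bootstrap confidence interval by resampling subjects. The ratings and all function names are hypothetical and are not taken from the article, which should be consulted for the authors' own worked examples and for the intraclass correlation coefficient.

```python
# Minimal sketch (not from the article): Cohen's kappa for two raters plus a
# percentile bootstrap confidence interval. All data below are made up.
import numpy as np

def cohens_kappa(r1, r2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)  # observed proportion of exact agreement
    # Chance agreement from each rater's marginal category frequencies.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    if p_e == 1.0:           # degenerate case: both raters use a single category
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)

def bootstrap_ci(r1, r2, stat=cohens_kappa, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample subjects with replacement, recompute the statistic."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resampled subject indices
        boots.append(stat(r1[idx], r2[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

if __name__ == "__main__":
    # Two clinicians rating 20 patients into categories 0/1/2 (hypothetical data).
    rater1 = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2, 2, 1, 0, 1, 2, 0, 1, 2, 1, 0]
    rater2 = [0, 1, 2, 0, 0, 2, 1, 1, 1, 2, 2, 1, 0, 1, 1, 0, 1, 2, 1, 0]
    k = cohens_kappa(rater1, rater2)
    lo, hi = bootstrap_ci(rater1, rater2)
    print(f"kappa = {k:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```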
Comment in
- Assessing agreement. Med J Aust. 1989 Aug 21;151(4):235-6. doi: 10.5694/j.1326-5377.1989.tb116001.x. PMID: 2818736. No abstract available.