Agreement, the F-measure, and reliability in information retrieval
- PMID: 15684123
- PMCID: PMC1090460
- DOI: 10.1197/jamia.M1733
Abstract
Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement, or the equivalent F-measure, may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.
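As a minimal worked sketch of the identity claimed in the abstract, consider two experts rating the same set of cases, and let $a$ be the number of cases both mark positive, $b$ and $c$ the cases only one expert marks positive, and $d$ the cases both mark negative (this 2x2 notation is introduced here for illustration and is not taken from the abstract). Positive specific agreement is

\[ p_{\text{pos}} = \frac{2a}{2a + b + c}. \]

Treating one expert as the gold standard and the other as the system gives precision $P = a/(a+b)$ and recall $R = a/(a+c)$, so

\[ F = \frac{2PR}{P + R} = \frac{2a}{2a + b + c} = p_{\text{pos}}, \]

which is symmetric in $b$ and $c$ and therefore does not depend on which expert is taken as the gold standard. The kappa statistic, by contrast, depends on the count of negative cases $d$; in terms of the four cells it can be written

\[ \kappa = \frac{2(ad - bc)}{(a+b)(b+d) + (a+c)(c+d)}, \]

and dividing numerator and denominator by $d$ and letting $d \to \infty$ with $a$, $b$, $c$ fixed gives

\[ \kappa \to \frac{2a}{2a + b + c} = p_{\text{pos}}, \]

which is the sense in which kappa approaches the F-measure as the number of negative cases grows large.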