Inter-rater reliability of medication error classification in a voluntary patient safety incident reporting system HaiPro in Finland
- PMID: 30509853
- DOI: 10.1016/j.sapharm.2018.11.013
Abstract
Background: Medication errors are common in healthcare. Medication error reporting systems can be established for learning from medication errors and risk-prone processes, and their data can be analysed and used to improve medication processes in healthcare organisations. However, testing the reliability of the data is crucial to avoid biased interpretation and misleading findings when the data are used to inform patient safety improvement.
Objective: To assess the inter-rater reliability of medication error classifications in a voluntary patient safety incident reporting system (HaiPro) widely used in Finland, and to explore reported medication errors and their contributing factors.
Method: The data consisted of medication errors (n = 32 592), including near misses, reported by 36 Finnish healthcare organisations in 2007-2009. The reliability of the original classifications was tested by an independent researcher, who reclassified a random sample of the errors (1%, n = 288) on the basis of their narratives. The inter-rater agreement (κ) between the researcher and the original data classifiers was calculated to describe the degree of conformity of the classifications. Descriptive statistics were used to describe the medication errors.
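As a minimal illustration of the reliability check described above, the following Python sketch computes inter-rater agreement on a 1% random sample using scikit-learn's cohen_kappa_score. The file name and column names ("original_class", "researcher_class") are hypothetical assumptions, not the authors' actual data or analysis code.

```python
# Hypothetical sketch of the reliability testing described in the Method:
# draw a ~1% random sample of reports and compare the original classifier's
# error class against an independent researcher's reclassification.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

reports = pd.read_csv("haipro_reports.csv")           # assumed tabular export of the reporting system
sample = reports.sample(frac=0.01, random_state=42)   # ~1% random sample (n ≈ 288 of 32 592)

# Overall agreement between the original classifiers and the researcher
kappa_overall = cohen_kappa_score(sample["original_class"], sample["researcher_class"])
print(f"Overall agreement: kappa = {kappa_overall:.2f}")

# Per-class agreement: treat each error class as a binary "assigned / not assigned"
# label and compute kappa class by class, mirroring per-class reliability testing.
for error_class in sorted(sample["original_class"].unique()):
    orig = sample["original_class"] == error_class
    resc = sample["researcher_class"] == error_class
    print(f"{error_class}: kappa = {cohen_kappa_score(orig, resc):.2f}")
```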
Results: The inter-rater reliability between the researcher and the original data classifiers was acceptable (κ ≥ 0.41) in 11 of 42 (26%) medication error classes. Thus, these error classes could be pooled from different healthcare units for the exploration of medication errors at the level of all reporting organisations. Contributing factors were identified in 48% (n = 137) of the medication error narratives in the random sample (n = 288). The most commonly reported errors were dispensing errors (34%, n = 10 906), administration errors (25%, n = 7972), and documentation errors (17%, n = 5641).
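The κ ≥ 0.41 cut-off corresponds to at least "moderate" agreement on the widely used Landis and Koch benchmark scale. The sketch below shows how such a rule could be applied to decide which error classes are reliable enough to pool; the per-class κ values are invented for illustration and do not come from the study.

```python
# Landis & Koch (1977) benchmarks for interpreting kappa; 0.41 is the lower
# bound of "moderate" agreement, used here as the pooling criterion.
LANDIS_KOCH = [(0.81, "almost perfect"), (0.61, "substantial"),
               (0.41, "moderate"), (0.21, "fair"), (0.00, "slight")]

def interpret(kappa: float) -> str:
    """Return the Landis-Koch label for a kappa value (below 0 -> 'poor')."""
    for lower, label in LANDIS_KOCH:
        if kappa >= lower:
            return label
    return "poor"

# Invented per-class kappa values, for illustration only.
per_class_kappa = {"dispensing error": 0.55, "administration error": 0.38}
for cls, k in per_class_kappa.items():
    poolable = "poolable" if k >= 0.41 else "not poolable"
    print(f"{cls}: kappa = {k:.2f} ({interpret(k)}, {poolable})")
```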
Conclusions: The data classified by different classifiers can be pooled for some of the medication error classes. To provide high-quality information on medication errors, the consistency of the classification, the quality of the narratives, and the reporting and classification of contributing factors all need improvement.
Keywords: Adverse events; Incident reporting and analysis; Inter-rater reliability; Medication error; Medication safety.
Copyright © 2018 Elsevier Inc. All rights reserved.
Comment in
- Inter-rater reliability of medication error classification in a voluntary patient safety incident reporting system HaiPro in Finland: Methodological issue. Res Social Adm Pharm. 2019 Sep;15(9):1183-1184. doi: 10.1016/j.sapharm.2019.07.008. Epub 2019 Jul 13. PMID: 31324569. No abstract available.
- Methodological aspects when assessing the inter-rater reliability of medication error classification - A response to a letter to editor. Res Social Adm Pharm. 2019 Nov;15(11):1375. doi: 10.1016/j.sapharm.2019.07.012. Epub 2019 Jul 23. PMID: 31445984. No abstract available.