Interrater reliability of EEG-video monitoring
- PMID: 19752450
- PMCID: PMC2744280
- DOI: 10.1212/WNL.0b013e3181b78425
Abstract
Objective: The diagnosis of psychogenic nonepileptic seizures (PNES) can be challenging. In the absence of a gold standard to verify the reliability of the diagnosis by EEG-video, we sought to assess the interrater reliability of the diagnosis using EEG-video recordings.
Methods: The patient sample consisted of 22 unselected consecutive patients who underwent EEG-video monitoring and had at least one episode recorded. Other test results and histories were not provided because the goal was to assess the reliability of the EEG-video alone. Data were sent to 22 reviewers, who were board-certified neurologists and practicing epileptologists at epilepsy centers. Choices were 1) PNES, 2) epilepsy, and 3) nonepileptic but not psychogenic ("physiologic") events. Interrater agreement was measured using a kappa coefficient for each diagnostic category. We used generalized kappa coefficients, which measure the overall level of between-rater agreement beyond that which can be ascribed to chance. We also report category-specific kappa values.
Results: For the diagnosis of PNES, there was moderate agreement (kappa = 0.57, 95% confidence interval [CI] 0.39-0.76). For the diagnosis of epilepsy, there was substantial agreement (kappa = 0.69, 95% CI 0.51-0.86). For physiologic nonepileptic episodes, the agreement was low (kappa = 0.09, 95% CI 0.02-0.27). The overall kappa statistic across all 3 diagnostic categories was moderate at 0.56 (95% CI 0.41-0.73).
Conclusions: Interrater reliability for the diagnosis of psychogenic nonepileptic seizures by EEG-video monitoring was only moderate. Although this may be related to limitations of the study (diagnosis based on EEG-video alone, artificial nature of the forced choice paradigm, single episode), it highlights the difficulties and subjective components inherent to this diagnosis.
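The generalized (Fleiss') kappa used in the Methods compares observed multi-rater agreement against the agreement expected by chance alone. As a minimal illustrative sketch (the rating matrix below is hypothetical, not the study's actual data of 22 reviewers and 22 patients), the statistic can be computed from a subjects-by-categories count matrix:

```python
def fleiss_kappa(ratings):
    """Fleiss' generalized kappa for multi-rater nominal agreement.

    ratings: list of rows, one per subject; each row holds the count of
    raters assigning that subject to each category. Every row must sum
    to the same number of raters m.
    """
    n = len(ratings)              # number of subjects
    m = sum(ratings[0])           # raters per subject
    k = len(ratings[0])           # number of categories
    total = n * m

    # Overall proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / total for j in range(k)]

    # Per-subject observed agreement: fraction of rater pairs that agree
    P = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings]

    P_bar = sum(P) / n            # mean observed agreement
    P_e = sum(pj * pj for pj in p)  # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)


# Hypothetical example: 4 patients, 3 raters, 3 diagnostic categories
# (columns: PNES, epilepsy, physiologic)
example = [
    [3, 0, 0],   # all raters say PNES
    [0, 3, 0],   # all raters say epilepsy
    [2, 1, 0],   # split between PNES and epilepsy
    [1, 1, 1],   # complete disagreement
]
print(round(fleiss_kappa(example), 3))
```

Kappa is 1 under perfect agreement, 0 when agreement equals chance, and negative when agreement is worse than chance; the study's overall value of 0.56 falls in the conventional "moderate" band (0.41 to 0.60).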