Interrater reliability of EEG-video monitoring

S R Benbadis et al. Neurology. 2009 Sep 15;73(11):843-6. doi: 10.1212/WNL.0b013e3181b78425.

Abstract

Objective: The diagnosis of psychogenic nonepileptic seizures (PNES) can be challenging. In the absence of a gold standard against which to validate the EEG-video diagnosis, we sought to assess its interrater reliability.

Methods: The sample consisted of 22 consecutive, unselected patients who underwent EEG-video monitoring and had at least one episode recorded. Other test results and histories were withheld because the goal was to assess the reliability of the EEG-video recordings alone. Data were sent to 22 reviewers, all board-certified neurologists and practicing epileptologists at epilepsy centers. The diagnostic choices were 1) PNES, 2) epilepsy, and 3) nonepileptic but not psychogenic ("physiologic") events. Interrater agreement was measured with a kappa coefficient for each diagnostic category. We used generalized kappa coefficients, which measure the overall level of between-rater agreement beyond that attributable to chance, and we also report category-specific kappa values.

Results: For the diagnosis of PNES, there was moderate agreement (kappa = 0.57, 95% confidence interval [CI] 0.39-0.76). For the diagnosis of epilepsy, there was substantial agreement (kappa = 0.69, 95% CI 0.51-0.86). For physiologic nonepileptic episodes, the agreement was low (kappa = 0.09, 95% CI 0.02-0.27). The overall kappa statistic across all 3 diagnostic categories was moderate at 0.56 (95% CI 0.41-0.73).

Conclusions: Interrater reliability for the diagnosis of psychogenic nonepileptic seizures by EEG-video monitoring was only moderate. Although this may reflect limitations of the study (diagnosis based on EEG-video alone, the artificial nature of the forced-choice paradigm, a single recorded episode per patient), it highlights the difficulties and subjective components inherent in this diagnosis.
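For readers unfamiliar with the statistic, the "generalized kappa" of the Methods is plausibly Fleiss' (1971) kappa for many raters. The sketch below is a minimal illustration only, not the authors' analysis: the function fleiss_kappa and the rating counts are hypothetical.

    # Illustrative only: Fleiss' kappa for N subjects rated by n raters
    # into k categories. The counts below are invented, not study data.
    import numpy as np

    def fleiss_kappa(counts):
        """counts: (N, k) array where counts[i, j] is the number of raters
        assigning subject i to category j; every row sums to n raters."""
        counts = np.asarray(counts, dtype=float)
        n = counts[0].sum()                      # raters per subject
        p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
        # Per-subject observed agreement among the n raters:
        P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
        P_bar = P_i.mean()                       # mean observed agreement
        P_e = np.square(p_j).sum()               # agreement expected by chance
        return (P_bar - P_e) / (1 - P_e)

    # Hypothetical example: 4 recorded episodes, each classified by 22
    # reviewers as PNES, epilepsy, or physiologic nonepileptic events.
    ratings = [[18, 3, 1],
               [2, 19, 1],
               [10, 8, 4],
               [20, 1, 1]]
    print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")

By the widely cited Landis and Koch convention, kappa values of 0.41-0.60 indicate moderate agreement and 0.61-0.80 substantial agreement, which matches the qualitative labels used in the Results.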


