Measuring expertise in identifying interictal epileptiform discharges

Nitish M Harid et al. Epileptic Disord. 2022 Jun 1;24(3):496-506.
doi: 10.1684/epd.2021.1409.

Abstract

Objective: Interictal epileptiform discharges (IEDs) on EEG are integral to diagnosing epilepsy. However, EEGs are interpreted by readers both with and without specialty training, and there is no accepted method to assess skill in interpretation. We aimed to develop a test to quantify IED recognition skills.

Methods: A total of 13,262 candidate IEDs were selected from EEGs and scored by eight fellowship-trained reviewers to establish a gold standard. An online test was developed to assess how well readers with different levels of training could distinguish IEDs from other candidate waveforms. Sensitivity, false positive rate and calibration were calculated for each reader. A simple mathematical model was developed to estimate each reader's skill and threshold for identifying an IED, and to construct receiver operating characteristic (ROC) curves for each reader. We investigated the number of IEDs needed to measure skill level with acceptable precision.
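
As a concrete illustration of the per-reader metrics described above, the sketch below computes sensitivity, false positive rate and a simple calibration error from a reader's binary labels against the gold-standard annotations. The function name and the calibration definition used here (fraction of waveforms marked as IEDs minus the true IED fraction) are illustrative assumptions; the paper's exact formulas may differ.

    import numpy as np

    def reader_metrics(labels, truth):
        # labels, truth: 0/1 arrays, one entry per candidate waveform;
        # truth is the fellowship-trained reviewers' gold standard.
        labels = np.asarray(labels, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        sensitivity = labels[truth].mean()           # true positives / all true IEDs
        false_positive_rate = labels[~truth].mean()  # false positives / all non-IEDs
        # Calibration error sketched as over- or under-marking relative to truth
        # (an assumption; the paper's definition may differ).
        calibration_error = labels.mean() - truth.mean()
        return sensitivity, false_positive_rate, calibration_error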

Results: Twenty-nine raters completed the test: nine experts, seven experienced non-experts and thirteen novices. Median calibration errors for experts, experienced non-experts and novices were -0.056, 0.012 and 0.046; median sensitivities were 0.800, 0.811 and 0.715; and median false positive rates were 0.177, 0.272 and 0.396, respectively. The number of test questions needed to measure these scores with acceptable precision was 549. Our analysis showed that novices had a higher internal noise level (uncertainty) than experienced non-experts and experts. Using the estimated noise and threshold levels, ROC curves were constructed, showing increasing median area under the curve from novices (0.735) to experienced non-experts (0.852) and experts (0.891).
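
The requirement of several hundred questions to pin down these scores can be motivated with a rough calculation: for a rate near 0.8, the sketch below estimates how many questions keep a 95% confidence interval narrower than 0.1. The normal-approximation binomial interval and the assumed fraction of contributing questions are illustrative assumptions; the paper's precision analysis may use a different procedure, so the result will not exactly reproduce the reported 549, 514 or 250.

    import math

    def questions_for_ci_width(p_hat, ci_width=0.10, positive_fraction=0.5, z=1.96):
        # A normal-approximation binomial CI has full width 2*z*sqrt(p*(1-p)/m),
        # where m is the number of questions that contribute to the rate.
        m = (2 * z) ** 2 * p_hat * (1 - p_hat) / ci_width ** 2
        # Scale up if only a fraction of questions contribute
        # (e.g. sensitivity uses only the true-IED questions).
        return math.ceil(m / positive_fraction)

    # Example: questions_for_ci_width(0.8) for a sensitivity near 0.8.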

Significance: Expert and non-expert readers can be distinguished by their ability to identify IEDs. This type of assessment could also be used to identify and correct differences in the thresholds readers use to identify IEDs.


Figures

Figure 1.
(A) Performance metrics for sensitivity and specificity of clinical experts (blue), the original eight experts used for the reference standard (black), experienced non-clinical experts (green) and novices (red). (B-D) Calibration curves for experts (B), experienced non-experts (C) and novices (D).
Figure 2.
95% confidence interval (CI) for each of the performance metrics: sensitivity (A), false positive rate (B) and calibration error (C) as a function of the number of questions answered. The black vertical dashed lines show the minimum number of questions required to drive the 95% CI below 0.1, corresponding to 549 (A), 514 (B) and 250 (C).
Figure 3.
The “latent trait” framework for analyzing level of expertise in spike detection: (A) schematic of our framework for measuring a scorer’s level of expertise in recognizing epileptiform discharges; and (B) simulation of the decision process for the ideal observer, expert (including the original eight), experienced non-expert and novice (from top to bottom).
Figure 4.
(A) Estimation of each scorer's internal parameters, noise level σ and threshold θ, for experts (blue), the original eight experts used as the reference standard (black), experienced non-experts (green) and novices (red). (B) Updated ROC curves based on the estimated internal parameters (blue: experts; green: experienced non-experts; red: novices).
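
As a rough sketch of the latent-trait (signal detection) view shown in figures 3 and 4, the code below assumes unit-variance Gaussian evidence distributions for non-IED and IED waveforms, adds a reader's internal noise σ, and applies a decision threshold θ. The assumed evidence distributions and their separation are illustrative choices, not the paper's fitted values.

    import numpy as np
    from scipy.stats import norm

    def latent_trait_performance(sigma, theta, separation=1.0):
        # Evidence ("spikiness") is assumed Gaussian: non-IEDs ~ N(0, 1),
        # IEDs ~ N(separation, 1); the reader adds internal noise N(0, sigma)
        # and reports an IED when the noisy evidence exceeds theta.
        total_sd = np.sqrt(1.0 + sigma ** 2)
        sensitivity = norm.sf(theta, loc=separation, scale=total_sd)
        false_positive_rate = norm.sf(theta, loc=0.0, scale=total_sd)
        # For this equal-variance Gaussian model the ROC area has a closed form,
        # AUC = Phi(separation / (sqrt(2) * total_sd)); it shrinks as sigma grows,
        # consistent with the lower AUC observed for noisier (novice) readers.
        auc = norm.cdf(separation / (np.sqrt(2) * total_sd))
        return sensitivity, false_positive_rate, auc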
