PLoS One. 2017 Apr 26;12(4):e0176349. doi: 10.1371/journal.pone.0176349. eCollection 2017.

Pattern classification of EEG signals reveals perceptual and attentional states

Alexandra List et al. PLoS One. 2017.

Abstract

Pattern classification techniques have been widely used to differentiate neural activity associated with different perceptual, attentional, or other cognitive states, often using fMRI, but more recently with EEG as well. Although these methods have identified EEG patterns (i.e., scalp topographies of EEG signals occurring at certain latencies) that decode perceptual and attentional states on a trial-by-trial basis, they have yet to be applied to the spatial scope of attention toward global or local features of the display. Here, we initially used pattern classification to replicate and extend the findings that perceptual states could be reliably decoded from EEG. We found that visual perceptual states, including stimulus location and object category, could be decoded with high accuracy peaking between 125 and 250 ms, and that the discriminative spatiotemporal patterns mirrored and extended our (and other well-established) ERP results. Next, we used pattern classification to investigate whether spatiotemporal EEG signals could reliably predict attentional states, and particularly, the scope of attention. The EEG data were reliably differentiated for local versus global attention on a trial-by-trial basis, emerging as a specific spatiotemporal activation pattern over posterior electrode sites during the 250-750 ms interval after stimulus onset. In sum, we demonstrate that multivariate pattern analysis of EEG, which reveals unique spatiotemporal patterns of neural activity distinguishing between behavioral states, is a sensitive tool for characterizing the neural correlates of perception and attention.
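To make the decoding approach summarized above concrete, the following Python sketch shows one generic way to classify single-trial EEG topographies with cross-validation. It is an illustration under assumed data shapes (simulated trials, 64 electrodes) and an assumed scikit-learn logistic-regression classifier, not the authors' actual analysis pipeline.

    # Minimal sketch of trial-by-trial EEG pattern classification.
    # Hypothetical data and classifier choices; not the authors' actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Assumed epoch array: (n_trials, n_electrodes, n_timepoints);
    # labels 0/1 code the two states (e.g., local vs. global attention).
    n_trials, n_electrodes, n_timepoints = 200, 64, 50
    X = rng.standard_normal((n_trials, n_electrodes, n_timepoints))
    y = rng.integers(0, 2, size=n_trials)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # Decode separately at each time point, using the scalp topography
    # (one value per electrode) as the feature vector for that time point.
    accuracy = np.empty(n_timepoints)
    for t in range(n_timepoints):
        accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

    print("peak cross-validated accuracy: %.2f" % accuracy.max())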

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Stimuli.
A) In Experiment 1, stimuli were presented individually in either the right or left visual field during passive viewing. SF = spatial frequency. B) In Experiment 2, stimuli were presented centrally and participants determined whether the letter H or S was present, regardless of whether it appeared at the global or local level. Irrelevant distracter letters (E or A) were presented at the other level.
Fig 2
Fig 2. Grand average ERPs for right versus left stimulus location.
Grand average ERPs are shown for electrodes PO7 (left) and PO8 (right), for the left (blue) and right (red) stimulus locations (top). The difference wave (black) with the within-subjects standard error (gray shading) is plotted (bottom). The black bars on the horizontal axes reflect stimulus duration.
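As a point of reference for the gray error bands in these ERP figures, the sketch below computes a grand-average difference wave and one common form of within-subjects standard error from per-subject ERPs; the array shapes and the exact error definition are assumptions and may differ from the authors' procedure.

    import numpy as np

    def diff_wave_and_se(erp_a, erp_b):
        # erp_a, erp_b: (n_subjects, n_timepoints) per-subject ERPs for two
        # conditions at one electrode (e.g., left vs. right stimulus location).
        diff = erp_a - erp_b                # per-subject difference waves
        grand_diff = diff.mean(axis=0)      # grand-average difference wave
        # Standard error of the paired difference across subjects; this is one
        # reading of a "within-subjects standard error" and only an assumption.
        se = diff.std(axis=0, ddof=1) / np.sqrt(diff.shape[0])
        return grand_diff, se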
Fig 3
Fig 3. Grand average ERPs for faces versus Gabors.
Grand average ERPs are shown for electrode PO8 for face (blue) and Gabor (red) stimuli (top). The difference wave (black) with the within-subjects standard error (gray shading) is plotted (bottom). The black bars on the horizontal axes reflect stimulus duration.
Fig 4
Fig 4. Grand average ERPs for inverted versus upright faces.
Grand average ERPs are shown for electrode PO8 for inverted (blue) and upright (red) face stimuli (top). The difference wave (black) with the within-subjects standard error (gray shading) is plotted (bottom). The black bars on the horizontal axes reflect stimulus duration.
Fig 5
Fig 5. Group classification accuracy for right versus left stimulus location.
The gray line shows the group-averaged accuracy at each time point. The black line shows the time-averaged accuracy for each 125-ms time bin (areas between vertical bars), on which inferential statistics were carried out (with within-subject standard errors). For the peak accuracy time bin, the heatmap shows the group-averaged electrode weights across the scalp, also averaged over 125 ms. Chance accuracy is 50% (black horizontal line), and the black horizontal bar on the lower axis reflects stimulus duration. * p < .00625 (Bonferroni-corrected α-level).
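For readers wanting to see how the starred bins in these accuracy figures could be obtained, the sketch below averages per-subject decoding accuracy within each 125-ms bin and tests it against 50% chance with one-sample t-tests at the Bonferroni-corrected threshold of .05/8 = .00625. The bin boundaries, sampling assumptions, and use of scipy are illustrative choices, not a statement of the authors' exact code.

    import numpy as np
    from scipy import stats

    def test_time_bins(acc, bin_len=125, chance=0.5, n_bins=8):
        # acc: (n_subjects, n_timepoints) per-subject decoding accuracy,
        # assumed sampled so that bin_len samples span one 125-ms bin.
        alpha = 0.05 / n_bins               # Bonferroni-corrected alpha = .00625
        results = []
        for start in range(0, acc.shape[1] - bin_len + 1, bin_len):
            bin_acc = acc[:, start:start + bin_len].mean(axis=1)  # per-subject bin mean
            t, p = stats.ttest_1samp(bin_acc, chance)
            results.append((start, bin_acc.mean(), p, bool(p < alpha and t > 0)))
        return results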
Fig 6
Fig 6. Group classification accuracy for face versus Gabor stimuli.
The gray line shows the group-averaged accuracy at each time point. The black line shows the time-averaged accuracy for each 125-ms time bin, on which inferential statistics were carried out (with within-subject standard errors). For the peak accuracy time bin, the heatmap shows the group-averaged electrode weights across the scalp, also averaged over 125 ms. Chance accuracy is 50% (black horizontal line), and the black horizontal bar on the lower axis reflects stimulus duration. * p < .00625 (Bonferroni-corrected α-level).
Fig 7
Fig 7. Group classification accuracy for upright versus inverted face stimuli.
The gray line shows the group-averaged accuracy at each time point. The black line shows the time-averaged accuracy for each 125-ms time bin, on which inferential statistics were carried out (with within-subject standard errors). For the peak accuracy time bin, the heatmap shows the group-averaged electrode weights across the scalp, also averaged over 125 ms. Chance accuracy is 50% (black horizontal line), and the black horizontal bar on the lower axis reflects stimulus duration. * p < .00625 (Bonferroni-corrected α-level).
Fig 8
Fig 8. Grand average ERPs for right versus left responses.
Grand average ERPs are shown for electrodes C3 (left) and C4 (right), for the left (blue) and right (red) responses (top). The difference wave (black) with the within-subjects standard error (gray shading) is plotted (bottom). The black bars on the horizontal axes reflect stimulus duration.
Fig 9
Fig 9. Grand average ERPs for local versus global attention.
Grand average ERPs are shown for electrodes PO7 (left) and PO8 (right), for local (blue) and global (red) attention (top). The difference wave (black) with the within-subjects standard error (gray shading) is plotted (bottom). The black bars on the horizontal axes reflect stimulus duration.
Fig 10
Fig 10. Group classification accuracy for left versus right response.
The gray line shows the group-averaged accuracy at each time point. The black line shows the time-averaged accuracy for each 125-ms time bin, on which inferential statistics were carried out (with within-subject standard errors). For the peak accuracy time bin, the heatmap shows the group-averaged electrode weights across the scalp, also averaged over 125 ms. Chance accuracy is 50% (black horizontal line), and the black horizontal bar on the lower axis reflects stimulus duration. * p < .00625 (Bonferroni-corrected α-level).
Fig 11
Fig 11. Group classification accuracy for global versus local attention.
The gray line shows the group-averaged accuracy at each time point. The black line shows the time-averaged accuracy for each 125-ms time bin, on which inferential statistics were carried out (with within-subject standard errors). For the peak accuracy time bin, the heatmap shows the group-averaged electrode weights across the scalp, also averaged over 125 ms. Chance accuracy is 50% (black horizontal line), and the black horizontal bar on the lower axis reflects stimulus duration. * p < .00625 (Bonferroni-corrected α-level).
Fig 12
Fig 12. Individual participants’ importance maps for global versus local attention.
The importance maps show each participant’s average weights over the 500–625 ms time bin (the 125-ms time bin showing peak group classification accuracy). For each individual, the color scale maximum and minimum are set to the positive and negative absolute maximum weight value, so that the scale is symmetric about 0. The most informative electrodes appear in intense blue or red, with white as least informative.
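The symmetric color scaling described in this caption can be reproduced with a small plotting sketch like the one below, where the color limits are set to plus and minus the maximum absolute weight. The flat scatter-style layout and matplotlib calls are simplifying assumptions; a real scalp map would typically be drawn with dedicated EEG plotting tools.

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_importance_map(weights, xy):
        # weights: (n_electrodes,) classifier weights for one participant;
        # xy: (n_electrodes, 2) 2-D electrode positions (assumed available).
        vmax = np.abs(weights).max()        # symmetric limits about 0, as in Fig 12
        sc = plt.scatter(xy[:, 0], xy[:, 1], c=weights,
                         cmap="RdBu_r", vmin=-vmax, vmax=vmax, s=200)
        plt.colorbar(sc, label="classifier weight")
        plt.axis("off")
        plt.show()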
