J Neurosci. 2014 Jan 22;34(4):1158-70. doi: 10.1523/JNEUROSCI.2465-13.2014.

Decoding the dynamics of action, intention, and error detection for conscious and subliminal stimuli

Lucie Charles et al. J Neurosci. 2014.

Abstract

How do we detect our own errors, even before we receive any external feedback? One model hypothesizes that error detection results from the confrontation of two signals: a fast and unconscious motor code, based on a direct sensory-motor pathway; and a slower conscious intention code that computes the required response given the stimulus and task instructions. To test this theory and assess how the chain of cognitive processes leading to error detection is modulated by consciousness, we applied multivariate decoding methods to single-trial magnetoencephalography and electroencephalography data. Human participants performed a fast bimanual number comparison task on masked digits presented at threshold, such that about half of them remained unseen. By using both erroneous and correct trials, we designed orthogonal decoders for the actual response (left or right), the required response (left or right), and the response accuracy (correct or incorrect). While perceptual stimulus information and the actual response hand could be decoded on both conscious and non-conscious trials, the required response could only be decoded on conscious trials. Moreover, whether the current response was correct or incorrect could be decoded only when the target digits were conscious, at a time and with a certainty that varied with the amount of evidence in favor of the correct response. These results are in accordance with the proposed dual-route model of conscious versus nonconscious evidence accumulation, and suggest that explicit error detection is possible only when the brain computes a conscious representation of the desired response, distinct from the ongoing motor program.
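The decoding logic summarized above can be sketched in a few lines of code. The sketch below is a minimal illustration, not the authors' actual pipeline: the arrays X (single-trial sensor features), actual_resp, required_resp, and accuracy are hypothetical placeholders filled with random toy data, and scikit-learn is used only as a stand-in classifier. It shows how three binary decoders, one per label, can be trained on the same trials and scored with the area under the ROC curve (AUC).

# Minimal sketch (toy data, not the study's pipeline): three binary decoders
# trained on the same single-trial M/EEG feature vectors and scored with AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials, n_features = 400, 300                   # e.g., sensors x time points, flattened
X = rng.standard_normal((n_trials, n_features))   # placeholder for preprocessed sensor data
actual_resp = rng.integers(0, 2, n_trials)        # left (0) vs right (1) key press
required_resp = rng.integers(0, 2, n_trials)      # response demanded by the stimulus
accuracy = (actual_resp == required_resp).astype(int)  # correct (1) vs error (0)

def decode(labels):
    """Cross-validated decoding score (AUC) for one binary trial label."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    proba = cross_val_predict(clf, X, labels, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(labels, proba)

for name, y in [("actual response", actual_resp),
                ("required response", required_resp),
                ("accuracy", accuracy)]:
    print(f"{name}: AUC = {decode(y):.2f}")

As noted in the abstract, both erroneous and correct trials were included so that the actual response, required response, and accuracy decoders remain orthogonal to one another.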

Keywords: Action; Consciousness; Error; Intention; MEEG.

Figures

Figure 1.
Experimental design and dual-route model. A, On each trial, a number was presented for 16 ms at one of two possible locations (top or bottom). It was followed by a mask composed of a fixed array of letters presented at a varying duration after target onset (16, 33, 50, 66, or 100 ms). Participants first performed a fast forced-choice number comparison task where they decided whether the number was smaller or larger than 5. Then, they evaluated the subjective visibility of the target and their own performance in the primary number comparison task. B, Dual-route model for error detection. In this model, two routes accumulate sensory evidence in parallel. A response is emitted by whichever route first reaches its decision threshold. The first route corresponds to automatic sensory–motor association and can be triggered nonconsciously to produce fast motor responses. The second route corresponds to the slower, voluntary processing of the stimulus according to task instructions and produces a conscious representation of the required response (i.e., a conscious intention). The comparison of the outputs of these two routes allows participants to detect a discrepancy between their intended and ongoing responses, and therefore to self-evaluate their performance.
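The race architecture in B can be illustrated with a toy simulation. All parameters below (drift rates, noise levels, bound) are made up for illustration and are not fitted to the data: a fast, noisy "motor" route and a slower but more reliable "intention" route accumulate evidence in parallel, the first bound crossing of the motor route gives the overt response, and a sign mismatch between the two routes' decisions is flagged as a detected error.

# Toy race between two evidence-accumulation routes (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)

def race_trial(drift_fast=0.05, drift_slow=0.03,
               noise_fast=0.30, noise_slow=0.10,
               bound=1.0, n_steps=2000):
    """Simulate one trial; the correct response is +1 by convention."""
    fast = slow = 0.0
    response = intention = None
    for t in range(n_steps):
        fast += drift_fast + noise_fast * rng.standard_normal()
        slow += drift_slow + noise_slow * rng.standard_normal()
        if response is None and abs(fast) >= bound:
            response = np.sign(fast)     # overt motor response (fast, error-prone route)
        if intention is None and abs(slow) >= bound:
            intention = np.sign(slow)    # conscious intention (slower, reliable route)
        if response is not None and intention is not None:
            break
    mismatch = (response is not None and intention is not None
                and response != intention)
    return response, intention, mismatch

results = [race_trial() for _ in range(2000)]
print("detected action-intention mismatches:",
      round(np.mean([m for _, _, m in results]), 2))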
Figure 2.
Decoding perception, action, intention, and accuracy for conscious and nonconscious trials. Multivariate decoding was applied either to each time sample (central columns) or to the full trial time window (right columns). Results demonstrate that while stimulus position and actual response could be decoded with high accuracy in both conscious and nonconscious conditions, the required response and the accuracy could be decoded solely in conscious conditions. A, B, D, E, G, H, J, K, Central columns, AUC, a measure of decoding accuracy, is plotted after averaging across subjects, aligned on stimulus onset, separately for the stimulus position decoder (top vs bottom, A, B), actual response decoder (left vs right, D, E), required response decoder (left vs right, G, H), and accuracy decoder (error vs correct, J, K), respectively, in seen (A, D, G, J) and unseen (B, E, H, K) conditions. Gray bars below each graph indicate, for each time point, the number of subjects presenting an above-chance classification score at that instant as computed by cluster analysis. C, F, I, L, Right column, For all six subjects, the change in classification score (AUC) between seen (left points) and unseen (right points) conditions is plotted separately for the stimulus position decoder (C), actual response decoder (F), required response decoder (I), and accuracy decoder (L). In each case, decoding was applied to all sensors and time points from the full trial time window (0–800 ms after stimulus presentation).
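A rough sketch of the two analysis schemes (decoding at each time sample vs. decoding on the full trial window) is given below, assuming a hypothetical epochs array of shape trials x sensors x time points and binary trial labels; the classifier and cross-validation settings are illustrative, not those used in the study.

# Time-resolved vs. full-window decoding on a toy epochs array, scored with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 200, 60, 80
epochs = rng.standard_normal((n_trials, n_sensors, n_times))  # placeholder M/EEG epochs
labels = rng.integers(0, 2, n_trials)                         # e.g., left vs right response

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# One decoder per time sample: yields an AUC time course (central columns).
auc_per_time = np.array([
    cross_val_score(clf, epochs[:, :, t], labels, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])

# One decoder on the full trial window: sensors x times flattened (right columns).
auc_full = cross_val_score(clf, epochs.reshape(n_trials, -1), labels,
                           cv=5, scoring="roc_auc").mean()
print(auc_per_time.shape, round(auc_full, 2))

The per-subject counts shown below each graph in the figure come from a cluster-based significance analysis across time, which this sketch does not reproduce.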
Figure 3.
Decoding conscious intention independently of motor action. This figure demonstrates that, while subjects are preparing a given response (correct or erroneous), their brain activity contains decodable information about the response that they should make (the required response). The graph shows the output of the intention decoder (i.e., the estimated probability of the required response being left, averaged across subjects). The decoder was trained on all seen trials. Trials were then sorted according to the actual response and the required response. Errors are plotted in red, and correct trials in blue. Time 0 corresponds to the onset of the stimulus.
Figure 4.
Decoding perception, action, intention, and accuracy for conscious and nonconscious trials according to SOA. Multivariate decoding was applied either to each time sample (central columns) or to the full trial time window (right columns), and results were split by SOA condition. Results demonstrate that the required response and the accuracy could be decoded in conscious conditions for each SOA condition. A, B, D, E, G, H, J, K, Central columns, AUC, a measure of decoding accuracy, is plotted for each SOA condition after averaging across subjects, aligned on stimulus onset, separately for the stimulus position decoder (top vs bottom, A, B), actual response decoder (left vs right, D, E), required response decoder (left vs right, G, H), and accuracy decoder (error vs correct, J, K). Due to reduced trial numbers, only the shortest SOAs (16, 33, and 50 ms) are presented for unseen trials (B, E, H, K), while only longer SOAs (33, 50, 66, and 100 ms) are included for seen trials (A, D, G, J). Bars below each graph indicate, for each time point, the number of subjects presenting an above-chance classification score at that instant as computed by cluster analysis. C, F, I, L, Right column, For each subject and each SOA condition, individual measures of AUC are plotted for seen (left) and unseen (right) trials, separately for the stimulus position decoder (C), actual response decoder (F), required response decoder (I), and accuracy decoder (L). In each case, decoding was applied to all sensors and time points from the full trial time window (0–800 ms after stimulus presentation), and results were then split according to SOA condition.
Figure 5.
Congruity between action and intention correlates with the strength of error decoding. A, To obtain a trial-by-trial measure of the strength of internal representations of action and intention, we first transformed the output of the classifiers by subtracting the classification probability of the left response from the classification probability of the right response, thus yielding for each trial a measure ranging from −1 (i.e., certainty of a left response) to 1 (certainty of a right response). This computation was done separately for the actual response and for the required response, thus yielding two single-trial indices of the strength of internal representations: the action index and the intention index. B, The product of the intention and action indices reflects the congruity between intended and executed actions. Positive values (blue) are obtained when action and intention are congruent (the two indices have the same sign), indicating a high probability of being correct. Conversely, negative values (red) indicate a discrepancy between action and intention, and therefore a high probability of committing an error. Note that when no information is available on either the action or the intention, the product is close to 0 and does not allow distinguishing error from correct trials. C, Correlation results of the product of action and intention indices with the decoded error probability for each subject. Each dot corresponds to a single seen trial (red = errors, blue = correct). A negative correlation confirms that the internal representation of an upcoming error is stronger when the discrepancy between internal representations of action and intention is larger.
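The index arithmetic in A and B reduces to a simple transformation of the decoders' probability outputs. The sketch below uses made-up per-trial probabilities (p_right_action, p_right_intention, and p_error are hypothetical placeholders for the outputs of the actual-response, required-response, and accuracy decoders) to show the computation of the action and intention indices, their congruity product, and the trial-by-trial correlation with decoded error probability.

# Action/intention indices and their congruity product, on toy decoder outputs.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_trials = 500
p_right_action = rng.uniform(0, 1, n_trials)     # decoded P(actual response = right)
p_right_intention = rng.uniform(0, 1, n_trials)  # decoded P(required response = right)
p_error = rng.uniform(0, 1, n_trials)            # decoded P(error), toy values

# Index in [-1, 1]: -1 = certainty of a left response, +1 = certainty of a right one.
action_index = p_right_action - (1 - p_right_action)
intention_index = p_right_intention - (1 - p_right_intention)

# Congruity: positive when action and intention agree, negative when they conflict.
congruity = action_index * intention_index

# On the real data a negative correlation is reported (Figure 5C); toy data give r near 0.
r, p = pearsonr(congruity, p_error)
print(f"r = {r:.2f}")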
Figure 6.
Decoding action, intention, and accuracy before and after the actual key press. A–C, For seen trials only, the figure shows the time course of decoding the actual response (A), the required response (B), and the accuracy (C) relative to the actual key press. The curves were realigned on motor onset, and an average measure of decoding success (AUC) was computed across subjects. Gray bars below the graphs indicate, for each time point, the number of subjects presenting an above-chance classification score at that instant, as computed by cluster analysis.
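Realigning the curves on motor onset amounts to re-windowing each trial around its own key-press time before averaging. A minimal sketch, assuming a hypothetical stimulus-locked decoder-output array and per-trial key-press latencies in samples:

# Response-locked realignment of single-trial decoder outputs (toy data).
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_times = 300, 800                        # 800 samples per stimulus-locked epoch
decoder_output = rng.uniform(0, 1, (n_trials, n_times))
keypress_sample = rng.integers(300, 600, n_trials)  # key-press latency per trial (samples)

pre, post = 200, 200                                # window around the key press
locked = np.full((n_trials, pre + post), np.nan)
for i, kp in enumerate(keypress_sample):
    start, stop = kp - pre, kp + post
    if start >= 0 and stop <= n_times:              # keep only trials fully inside the epoch
        locked[i] = decoder_output[i, start:stop]

response_locked_mean = np.nanmean(locked, axis=0)   # average time course around the key press
print(response_locked_mean.shape)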
Figure 7.
The timing of error detection correlates with the slower of the two signals for action and intention. A, Example of a single-trial computation of decoding time. To improve the signal-to-noise ratio, we computed the cumulative sum, across time, of the probability values obtained from each of the three decoders for actual response, required response, and accuracy. Threshold values for each decoder were defined, and the timing of threshold crossing for each time series was taken as an index of the time when this code first became available on this trial. Thus, three values were obtained for each trial (Tact, Tint, and Tacc), corresponding respectively to the times of threshold crossing for the actual response, the required response, and the accuracy decoder. B, Correlation results of the slower (maximum) time index between Tint and Tact with the time index of error detection Tacc. Each dot corresponds to a single seen trial (red = errors, blue = correct). A positive correlation indicates that, as predicted, error information becomes available only once both action and intention codes have been computed.
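The single-trial timing analysis in A can be sketched as follows. The decoder probability traces and the threshold are made up for illustration (the study defined threshold values per decoder on the real data): the cumulative sum of each decoder's output is thresholded, the crossing times give Tact, Tint, and Tacc, and the slower of Tact and Tint is correlated with Tacc.

# Threshold crossing of cumulative decoder outputs, on toy probability traces.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_trials, n_times = 400, 800

def crossing_time(prob_series, threshold):
    """First sample at which the cumulative sum of the trace exceeds the threshold."""
    above = np.nonzero(np.cumsum(prob_series) >= threshold)[0]
    return above[0] if above.size else np.nan

# Toy per-trial probability traces for the three decoders.
p_act = rng.uniform(0, 1, (n_trials, n_times))   # actual-response decoder
p_int = rng.uniform(0, 1, (n_trials, n_times))   # required-response decoder
p_acc = rng.uniform(0, 1, (n_trials, n_times))   # accuracy (error) decoder

threshold = 200.0   # arbitrary here; the study defined one threshold per decoder
T_act = np.array([crossing_time(tr, threshold) for tr in p_act], dtype=float)
T_int = np.array([crossing_time(tr, threshold) for tr in p_int], dtype=float)
T_acc = np.array([crossing_time(tr, threshold) for tr in p_acc], dtype=float)

# Figure 7B relates the slower of the action/intention times to the accuracy time;
# on toy data the correlation is near zero, on the real data it is positive.
slowest = np.maximum(T_act, T_int)
valid = ~np.isnan(slowest) & ~np.isnan(T_acc)
r, p = pearsonr(slowest[valid], T_acc[valid])
print(f"r = {r:.2f}")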

