Psychol Rev. 2017 Jan;124(1):91-114. doi: 10.1037/rev0000045.

Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation


Stephen M Fleming et al. Psychol Rev. 2017 Jan.

Abstract

People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains.


Figures

Figure 1
Schematic graphical models of self-evaluation. Upper panels show graphical models (with variance/covariance parameters omitted for clarity). In each model, a categorical world state (e.g., stimulus = left [−1] or right [1]) gives rise to a binary action (left or right). Building on signal detection theory, we assume both stimuli give rise to internal decision variables that are Gaussian distributed along a unitary decision axis. To make an action, the observer chooses “right” if the decision variable is greater than 0, and “left” otherwise. Lower panels depict a computation of confidence on a single trial of each model, in which the observer responds “right”. (A) First-order model. The world state generates a decision variable Xact that supports both actions and confidence reports. (B) Postdecisional first-order model. As in (A), but allowing the confidence variable (Xconf) to sample additional evidence about the world state, which in this case leads to recognition of an error (confidence < 0.5). (C) Second-order model. The decision and confidence variables are represented as two correlated hidden states. A computation of decision confidence proceeds by first inferring the distribution of possible decision variables conditional on the confidence variable (shown by the probability distribution in the inset), and marginalizing conditional on the subject’s action to arrive at an appropriate confidence level.
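Under the signal-detection assumptions in this caption, the first-order and postdecisional confidence computations can be sketched in a few lines. This is our own illustrative reading, not code from the paper: the function names are ours, a flat prior over the two world states d ∈ {−1, +1} is assumed, and the logistic form follows from the ratio of the two Gaussian likelihoods.

```python
import math

def first_order_confidence(x_act, sigma=1.0):
    """First-order model (panel A): a single decision variable
    X_act ~ N(d, sigma^2), with d in {-1, +1} under a flat prior, supports
    both the action (the sign of x_act) and the confidence report.
    The Gaussian likelihood ratio reduces to a logistic function of x_act."""
    p_right = 1.0 / (1.0 + math.exp(-2.0 * x_act / sigma ** 2))
    # Confidence is the posterior probability of the chosen side,
    # so it can never fall below 0.5 in this model.
    return p_right if x_act > 0 else 1.0 - p_right

def postdecisional_confidence(x_conf, action, sigma=1.0):
    """Postdecisional variant (panel B): the same inference applied to a
    later sample x_conf, but evaluated for the action already taken
    (+1 or -1), so confidence can fall below 0.5 -- a detected error."""
    p_right = 1.0 / (1.0 + math.exp(-2.0 * x_conf / sigma ** 2))
    return p_right if action > 0 else 1.0 - p_right
```

For example, a later sample contradicting a rightward choice, `postdecisional_confidence(-1.0, action=1)`, yields confidence of about 0.12, matching the caption's case of recognizing an error (confidence < 0.5).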
Figure 2
Illustration of effects of second-order model parameters on decision and confidence variables. Each panel shows samples of the decision variable (Xact) and the confidence variable (Xconf) drawn from models with different parameter settings. The correlation coefficient ρ increases from (A) to (C). Panel (B) shows the effect of selectively increasing the variability in the confidence variable (compare the width of the marginal distributions of Xconf and Xact). The parameter settings in panel (C) mimic a first-order model in which Xact and Xconf are identical. See the online article for the color version of this figure.
Figure 3
Internal representations supporting decision confidence. Simulations of first-order (A), postdecisional (B), and second-order (C) models showing how confidence changes as a function of stimulus strength and decision accuracy. The upper panels show confidence as a function of objective stimulus strength; the lower panels show confidence as a function of the internal state of each model. See the online article for the color version of this figure.
Figure 4
Internal representations supporting error detection. (A) Confidence as a function of the decision variable and uncertainty parameter σ in the first-order model. (B, C) Confidence as a function of the confidence variable, chosen action and uncertainty parameter σconf in the postdecisional model (B) and second-order model (C). (D) Simulation of how error detection emerges from correlated samples in the second-order model. Samples are generated from a true world state d = 1 with parameter settings σact = 1, σconf = 1 and ρ = 0.6. The model makes errors when Xact falls to the left of the neutral (0) criterion. A subset of these objective errors are “detected” due to the confidence variable providing evidence that the alternative action is preferred, generating a confidence level of less than 0.5. (E) Heat map revealing how the proportion of detected errors in (D) varies according to model parameters σconf and ρ. Objective accuracy (governed by σact) is constant. See the online article for the color version of this figure.
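The error-detection simulation described in this caption can be reproduced, approximately, from the quantities it gives (d = 1, σact = σconf = 1, ρ = 0.6). The sketch below is our own reading, not the authors' code: helper names are ours, a flat prior over d = ±1 is assumed, and confidence is computed by weighting each world state by the probability that the conditional decision variable Xact | Xconf fell on the side of the chosen action, as the second-order model prescribes.

```python
import numpy as np
from math import erf

def _norm_pdf(x, mu, sigma):
    """Gaussian density N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return np.exp(-0.5 * z * z) / (sigma * np.sqrt(2.0 * np.pi))

# Standard normal CDF, vectorized via math.erf (avoids a SciPy dependency).
_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))

def second_order_confidence(x_conf, action, sigma_act, sigma_conf, rho):
    """P(chosen action correct | x_conf, action) in the second-order model:
    X_act and X_conf are bivariate Gaussian around the world state
    d in {-1, +1} (flat prior), with correlation rho."""
    weights = {}
    for d in (-1.0, 1.0):
        like = _norm_pdf(x_conf, d, sigma_conf)            # p(x_conf | d)
        # Conditional distribution X_act | x_conf, d is Gaussian:
        mu = d + rho * (sigma_act / sigma_conf) * (x_conf - d)
        sd = sigma_act * np.sqrt(1.0 - rho ** 2)
        # Probability that X_act fell on the side of the chosen action.
        weights[d] = like * _norm_cdf(action * mu / sd)
    posterior_correct = np.where(action > 0, weights[1.0], weights[-1.0])
    return posterior_correct / (weights[-1.0] + weights[1.0])

# Simulation mirroring the caption: d = 1, sigma_act = sigma_conf = 1, rho = 0.6.
rng = np.random.default_rng(0)
n = 100_000
sigma_act = sigma_conf = 1.0
rho = 0.6
d_true = 1.0
x_act = rng.normal(d_true, sigma_act, n)
# Sample X_conf from its conditional given X_act (correlation rho).
x_conf = (d_true + rho * (x_act - d_true)
          + sigma_conf * np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n))
action = np.where(x_act > 0, 1.0, -1.0)
confidence = second_order_confidence(x_conf, action, sigma_act, sigma_conf, rho)

errors = action != d_true                 # X_act left of the neutral (0) criterion
prop_detected = np.mean(confidence[errors] < 0.5)   # "detected" errors
```

As the caption describes, only a subset of objective errors are detected: `prop_detected` sits strictly between 0 and 1, because the correlated confidence sample sometimes, but not always, carries enough evidence to favor the unchosen action.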
Figure 5
Influence of choices on second-order model confidence. (A) Posterior probability of a rightward world state as a function of confidence variable Xconf and the chosen action. (B, C) The lefthand panels show the influence of actions on the posterior probability of d = 1 for a constant, uninformative sample (Xconf = 0). The righthand panels show the corresponding confidence level. In all panels gray lines show expected confidence from a first-order model for comparison. (B) As the confidence variable becomes less informative (σconf increases), actions have a greater effect on posterior beliefs. (C) As the correlation between Xact and Xconf increases, actions provide less new information about the possible values of d, and their influence on confidence reduces. Constant parameters in all panels are set at σact = 1, σconf = 1, ρ = 0.4.
Figure 6
Predicted effects of choice on confidence. (A) Graphical models for choose-rate and rate-choose experiments illustrating the influence of actions on confidence in the choose-rate condition. (B) Simulation of confidence from choose-rate and rate-choose experiments as a function of stimulus strength and decision accuracy for the second-order model (σact = 1, σconf = 1, ρ = 0.6). Overall confidence (bias) decreases relative to the rate-choose condition when choices are made before confidence ratings (choose-rate), whereas the difference in confidence between correct and error trials (metacognitive sensitivity) increases. (C) As in (B) for the first-order model (σact = 1). Here the predictions for confidence from the choose-rate and rate-choose models are identical and the dotted lines are obscured. (D) Data replotted from Siedlecka et al. (2016), with permission, in which choice and rating order were manipulated. (E) Simulations of second-order model predictions at constant stimulus strength, plotted using same conventions as (D). See the online article for the color version of this figure.
Figure 7
Effects of choice on confidence across a range of second-order model parameter settings. (A) Plots of bias as a function of model parameters σconf (left panel) and ρ (right panel). Across a range of parameter settings confidence is decreased in the choose-rate condition. In the σconf simulation, ρ = 0.6, whereas in the ρ simulation, σconf = 1. (B) Similar to (A) for metacognitive sensitivity (the difference between correct and error confidence). Across a range of parameter settings metacognitive sensitivity is increased in the choose-rate condition.
Figure 8
Modeling changes in metacognitive sensitivity in a second-order framework. (A) Simulated Type II ROCs for different levels of noise in the confidence variable, σconf. As Xconf becomes more variable, metacognitive sensitivity is reduced despite task performance remaining constant. (B) Simulated Type II ROCs for different levels of ρ. As the correlation between the confidence and decision variables is increased, metacognitive sensitivity is decreased. (C) Relationship between d′ and meta-d′ of simulated data sets color-coded by settings of model parameters σconf and σact (ρ = 0.5). Cases of “hyper”-metacognitive sensitivity in which meta-d′ > d′ are associated with parameter ratios less than 1, indicating greater variability in the decision variable compared to the confidence variable. (D) Relationship between meta-d′/d′ of simulated data sets and proportion of detected errors in each dataset. Cases of meta-d′/d′ > 1 (log(meta-d′/d′) > 0) are associated with an increase in the number of detected errors. (E) Plot of d′ against meta-d′ obtained from data pooled across a number of empirical studies (Fleming et al., 2010; Fleming, Huijgen, & Dolan, 2012; E. C. Palmer et al., 2014; L. G. Weil et al., 2013), demonstrating the substantial frequency of hyper-metacognitive sensitivity observed in these data sets. See the online article for the color version of this figure.
Figure 9
Modeling changes in metacognitive bias in a second-order framework. Simulated performance levels conditioned on 10 equally spaced confidence bins for different beliefs about parameters (A) σact, (B) σconf, or (C) ρ. In each panel we manipulated beliefs about the relevant parameter while holding the other two parameters constant. For all simulations the actual parameters used to generate samples were fixed at σact = 1.5, σconf = 1, ρ = 0.6. See the online article for the color version of this figure.

