Comparative Study

PLoS Comput Biol. 2018 Jul 30;14(7):e1006328. doi: 10.1371/journal.pcbi.1006328. eCollection 2018 Jul.

Inter-trial effects in visual pop-out search: Factorial comparison of Bayesian updating models

Fredrik Allenmark et al.

Abstract

Many previous studies on visual search have reported inter-trial effects: observers respond faster when some target property, such as a defining feature or dimension, or the response associated with the target, repeats rather than changes across consecutive trials. However, which processes drive these inter-trial effects is still controversial. Here, we investigated this question using a combination of Bayesian modeling of belief updating and evidence accumulation modeling of perceptual decision-making. In three visual singleton ('pop-out') search experiments, we explored how the probability of the response-critical states of the search display (e.g., target presence/absence) and the repetition/switch of the target-defining dimension (color/orientation) affect reaction time distributions. The results replicated the mean reaction time (RT) inter-trial and dimension repetition/switch effects that have been reported in previous studies. Going beyond this, to uncover the underlying mechanisms, we used the Drift-Diffusion Model (DDM) and the Linear Approach to Threshold with Ergodic Rate (LATER) model to explain the RT distributions in terms of decision bias (starting point) and information processing speed (evidence accumulation rate). We further investigated how these different aspects of the decision-making process are affected by different properties of stimulus history, giving rise to dissociable inter-trial effects. We approached this question by (i) combining each perceptual decision-making model (DDM or LATER) with different updating models, each specifying a plausible rule for updating either the starting point or the rate based on stimulus history, and (ii) comparing every possible combination of trial-wise updating mechanism and perceptual decision model in a factorial model comparison.
Consistently across experiments, we found that the (recent) history of the response-critical property influences the initial decision bias, while repetition/switch of the target-defining dimension affects the accumulation rate, likely reflecting an implicit 'top-down' modulation process. This provides strong evidence of a dissociation between response- and dimension-based inter-trial effects.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Illustrations of the drift diffusion model (DDM, shown in blue) and the LATER model (shown in red).
The DDM assumes that evidence accumulates, from the starting point (S0), through random diffusion in combination with a drift rate r until a boundary (i.e., threshold, θ) is reached. The LATER model makes the same assumptions, except that the rate r is considered to be constant within any individual trial, but to vary across trials (so as to explain trial-to-trial variability in RTs).
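The two accumulation schemes in the caption can be sketched in a few lines of Python. This is an illustrative simulation only, not the authors' fitting code; the parameter values (s0, theta, rates, noise) are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_rt(s0=0.2, theta=1.0, r=1.0, sigma=1.0, dt=1e-3):
    """DDM: evidence diffuses from starting point s0 with drift r plus
    within-trial Gaussian noise until one of the boundaries +/-theta is hit."""
    x, t = s0, 0.0
    while -theta < x < theta:
        x += r * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t  # first-passage (decision) time

def later_rt(s0=0.2, theta=1.0, mu_r=1.0, sd_r=0.3):
    """LATER: the rate is constant within a trial but drawn anew on each
    trial, which is what produces trial-to-trial RT variability."""
    r = max(rng.normal(mu_r, sd_r), 1e-6)  # guard against non-positive rates
    return (theta - s0) / r  # linear rise-to-threshold time
```

In both models, raising s0 (a starting-point bias toward the eventual response) shortens decision times, while lowering the rate lengthens them; these are the two levers the factorial model comparison pits against each other.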
Fig 2
Fig 2. Error rates in Experiments 1, 2, and 3, for all combinations of target frequency.
Target frequency is defined relative to the target condition, as the frequency with which that target condition occurred within a given block. This means that, for a given frequency, the data from the different target conditions do not necessarily come from the same block of the experiment. Error bars show the standard error of the mean.
Fig 3
Fig 3. Mean RTs in Experiments 1, 2, and 3, for all combinations of target condition and target frequency.
Target frequency is defined relative to the target condition, as the frequency with which that target condition occurred within a given block. This means that for a given frequency, the data from the different target conditions do not necessarily come from the same block of the experiment. Error bars show the standard error of the mean.
Fig 4
Fig 4. Inter-trial effects on mean RTs for all three experiments.
Error bars show the standard error of the mean.
Fig 5
Fig 5. Dimension repetition/switch effect in Experiment 3.
Mean RTs were significantly faster when the target-defining dimension was repeated. Error bars show the standard error of the mean.
Fig 6
Fig 6. Feature repetition/switch effects on mean RTs for all three experiments.
Error bars show the standard error of the mean.
Fig 7
Fig 7. Schematic illustration of prior updating and the resulting changes of the starting point.
The top panels show the hyperprior, i.e., the probability distribution on the frequency of target-present trials (p), and how it changes over three subsequent trials. The middle panels show the current best estimate of the frequency distribution over target-present and -absent trials (i.e., p and 1 − p). The best estimate of p is defined as the expected value of the hyperprior. The bottom panels show a sketch of the evidence accumulation process where the starting point is set to the log prior odds for the two response options (target-present vs. -absent), computed from the current best estimate of p. Tp and Ta are the decision thresholds for target-present and -absent responses, respectively, and μp and μa are the respective drift rates. The sketch of the evidence accumulation process is based on the LATER model (rather than the DDM) and is therefore shown with a single boundary (that associated with the correct response). Note that the boundary depicted for trial 2 (target absent) is not the same as those for trials 1 and 3 (target-present trials). In the equivalent figure based on the DDM, there would have been two boundaries, and on trial 2, the drift rate would have been negative and the starting point would have been closer to the upper boundary than on the first trial. Note also that this figure illustrates updating with some memory decay (see level 3). Without memory decay, the distribution on trial 3 would be exactly the same as on trial 1.
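The updating scheme in the caption can be sketched with a conjugate Beta hyperprior on p, whose expected value gives the current best estimate of p and hence the log prior odds used as the starting point. The exponential-decay form below is one plausible implementation of "memory decay", not the paper's exact rule, and the parameter values are assumptions:

```python
import numpy as np

# Beta(a, b) hyperprior on p = P(target present); Beta is conjugate to the
# Bernoulli trial outcomes, so updating reduces to incrementing counts.
a, b = 1.0, 1.0   # uniform initial hyperprior
decay = 0.9       # forgetting factor (assumed form of the memory decay)

def update(present: bool):
    """Decay the accumulated counts toward the uniform prior, then add
    the new trial's outcome."""
    global a, b
    a = 1.0 + decay * (a - 1.0) + (1.0 if present else 0.0)
    b = 1.0 + decay * (b - 1.0) + (0.0 if present else 1.0)

def starting_point():
    """Starting point = log prior odds of target-present vs. -absent."""
    p = a / (a + b)   # expected value of the Beta hyperprior
    return np.log(p / (1.0 - p))
```

With decay < 1, a run of target-present trials pushes the starting point toward the target-present boundary, but the bias fades once that history stops repeating, matching the trial-3 panel in the figure.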
Fig 8
Fig 8. Mean relative AICs as a function of the tested models in Experiment 1.
For each participant, the AIC of the best-performing model has been subtracted from the AIC for every model, before averaging across participants. Error bars indicate the standard error of the mean. The response-based updating rules are mapped onto the x-axis (RDF-based updating), while the dimension-based updating rules are indicated by different colors (TDD-based updating). The left-hand panel presents the results for the DDM, the right-hand panel for the LATER model. Only models with a non-decision time component are included in the figure. Models without a non-decision time component generally performed worse, and the best-fitting model included a non-decision time component (see also Table A in S4 Text).
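The relative-AIC computation described in the caption amounts to the following (toy AIC values for illustration; the real matrix comes from the model fits):

```python
import numpy as np

# aic[i, j] = AIC of model j fitted to participant i (toy numbers)
aic = np.array([[210.0, 205.0, 212.0],
                [180.0, 176.0, 181.0]])

# Subtract each participant's best (lowest) AIC from that participant's row...
relative_aic = aic - aic.min(axis=1, keepdims=True)

# ...then average across participants; each participant's best model sits at 0.
mean_relative_aic = relative_aic.mean(axis=0)  # → [4.5, 0.0, 6.0]
```

Working with within-participant differences before averaging removes overall RT-level differences between participants, so the mean reflects only how much worse each model is than the per-participant winner.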
Fig 9
Fig 9. Mean relative AICs as a function of the tested models in Experiment 2.
For each participant, the AIC of the best-performing model has been subtracted from the AIC for every model, before averaging across participants. Error bars indicate the standard error of the mean. The response-based updating rules are mapped onto the x-axis (RDF-based updating), while the dimension-based updating rules are indicated by different colors (TDD-based updating). The left-hand panel presents the results for the DDM, the right-hand panel for the LATER model. Only models with a non-decision time component are included in the figure. Models without a non-decision time component generally performed worse, and the best-fitting model included a non-decision time component (see also Table A in S4 Text).
Fig 10
Fig 10. Mean relative AICs as a function of the tested models in Experiment 3.
For each participant, the AIC of the best-performing model has been subtracted from the AIC for every model, before averaging across participants. Error bars indicate the standard error of the mean. The response-based updating rules are mapped onto the x-axis (RDF-based updating), while the dimension-based updating rules are indicated by different colors (TDD-based updating). The left-hand panel presents the results for the DDM, the right-hand panel for the LATER model. Only models with a non-decision time component are included in the figure. Models without a non-decision time component generally performed worse, and the best-fitting model included a non-decision time component (see also Table A in S4 Text).
Fig 11
Fig 11. Scatterplot of predicted vs. observed mean RTs for all experiments, participants, ratio conditions, and inter-trial conditions, for each experiment.
Lines show the corresponding linear fits.
Fig 12
Fig 12. Examples of the updating of the starting point (s0) and the rate.
Left panels A, C, and E show examples of starting point updating for a representative sample of trials from typical participants from Experiments 1–3. Panels B, D, and F show updating of the rate for the same trial samples (from the same participants); the dashed lines represent the baseline rates before scaling for target-absent, color target, and orientation target trials (i.e., the rate that would be used on every trial of that type if there was no updating). In each case, updating was based on the best model, in terms of average AIC, for that experiment.
Fig 13
Fig 13. Example of visual search display with an orientation target.

