Multimodal Simon Effect: A Multimodal Extension of the Diffusion Model for Conflict Tasks

Mohammad-Ali Nikouei Mahani et al. Front. Hum. Neurosci. 2019 Jan 9;12:507. doi: 10.3389/fnhum.2018.00507. eCollection 2018.

Abstract

In conflict tasks, such as the Simon task, the typical question is how task-irrelevant information affects the processing of task-relevant information. In the present experiments, we extended the Simon task to a multimodal setup in which task-irrelevant information emerged from two sensory modalities. Specifically, in Experiment 1, participants responded to the identity of letters presented at a left, right, or central position with a left- or right-hand response. Additional tactile stimulation occurred at a left, right, or central position on the horizontal body plane. Response congruency of the visual and tactile stimulation was varied orthogonally. In Experiment 2, the tactile stimulation was replaced by auditory stimulation. In both experiments, the visual task-irrelevant information produced congruency effects such that responses were slower and less accurate in incongruent than in congruent conditions. In Experiment 1, such congruency effects, albeit smaller, were also observed for the tactile task-irrelevant stimulation. In Experiment 2, the auditory task-irrelevant stimulation produced the smallest effects: the longest reaction times emerged in the neutral condition, while incongruent and congruent conditions differed only numerically. This suggests that, in the co-presence of multiple sources of task-irrelevant information, location processing is determined more strongly by visual and tactile spatial information than by auditory spatial information. An extended version of the Diffusion Model for Conflict Tasks (DMC) was fitted to the results of both experiments. This Multimodal Diffusion Model for Conflict Tasks (MDMC), and a model variant involving faster processing in the neutral visual condition (FN-MDMC), provided reasonable fits for the observed data. These model fits support the notion that multimodal task-irrelevant information superimposes across sensory modalities and automatically affects the controlled processing of task-relevant information.

Keywords: conflict processing; diffusion model for conflict tasks (DMC); multimodal congruency effect; multisensory processing; reaction time; Simon task.

Figures

Figure 1
(A) Experimental setup and vibrotactile belt. Running two vibration motors on the left/center/right side of the belt delivers a tactile stimulus to the left/center/right side of the participant's waist. (B) Time course of a trial. A left/center/right visual stimulus was presented along with a left/center/right tactile stimulus. Participants were asked to identify the visual stimulus (H or S) with a left/right key press and to ignore its location.
Figure 2
Mean reaction time (left figure) and mean percentage of response errors (right figure) in Experiment 1 as a function of visual and tactile congruency. Error bars were computed according to Morey's method (Morey, 2008).
Figure 3
Cumulative distribution functions (CDFs) and delta (Δ) functions for percentiles (5, 10, 15, …, 95%) in Experiment 1. Error bars show 95% confidence intervals and are calculated according to Morey (2008). Each of the visual (tactile) CDFs was calculated as the average over all tactile (visual) congruency conditions. For example, the visual congruent CDF is the average of CVCT, CVNT, and CVIT conditions. Delta functions show the difference between the congruent and incongruent CDFs as a function of response time.
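The delta-function construction described in the caption can be sketched as follows. This is an illustrative helper, not the authors' analysis code; `rt_congruent` and `rt_incongruent` stand for hypothetical per-condition RT samples.

```python
import numpy as np

def delta_function(rt_congruent, rt_incongruent,
                   percentiles=np.arange(5, 100, 5)):
    """Delta function: the congruency effect (incongruent minus congruent RT)
    at matched percentiles (5, 10, ..., 95%), paired with the mean of the two
    quantiles as the x-axis value for plotting."""
    q_c = np.percentile(rt_congruent, percentiles)
    q_i = np.percentile(rt_incongruent, percentiles)
    return (q_c + q_i) / 2.0, q_i - q_c
```

Plotting the second return value against the first gives a delta plot; a positive, rising curve indicates a congruency effect that grows with response time.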
Figure 4
Mean reaction time (left figure) and mean percentage of response errors (right figure) in Experiment 2 as a function of visual and auditory congruency. Error bars were computed according to Morey's method (Morey, 2008).
Figure 5
Cumulative distribution functions (CDFs) and delta (Δ) functions for percentiles (5, 10, 15, …, 95%) in Experiment 2. Error bars show 95% confidence intervals and are calculated according to Morey (2008). Each of the visual (auditory) CDFs was calculated as the average over all auditory (visual) congruency conditions. For example, the visual congruent CDF is the average of CVCA, CVNA, and CVIA conditions. Delta functions show the difference between the congruent and incongruent CDFs as a function of response time.
Figure 6
Multimodal DMC. The decision process (blue line) is a superimposition of a controlled process (red line) and two automatic processes (green and black lines). (A) Both of the automatic processes are congruent. (B) The first automatic process is congruent and the second one is neutral. (C) The first automatic process is congruent and the second one is incongruent. (D) Both of the automatic processes are incongruent.
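The superposition described in the caption can be sketched as a single-trial simulation: a constant controlled drift plus the time derivatives of two gamma-shaped automatic activations (the DMC form), accumulated with diffusion noise until a boundary is crossed. All parameter values below are illustrative, not the fitted estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_activation(t, amp, tau, a=2.0):
    # Gamma-shaped expected automatic activation as in DMC:
    # E[Xa](t) = amp * exp(-t/tau) * (t*e / ((a-1)*tau))**(a-1), peaking at t = (a-1)*tau.
    return amp * np.exp(-t / tau) * (t * np.e / ((a - 1) * tau)) ** (a - 1)

def simulate_mdmc(cong1, cong2, mu_c=0.5, amp1=20.0, amp2=10.0,
                  tau1=30.0, tau2=60.0, bound=75.0, sigma=4.0,
                  dt=1.0, t_max=1500.0):
    """Simulate one MDMC trial. cong1/cong2 code the two task-irrelevant
    modalities: +1 congruent, 0 neutral, -1 incongruent. Returns (RT in ms,
    correct?) or (None, None) if no boundary is reached within t_max."""
    t = np.arange(dt, t_max + dt, dt)
    # Superimpose the two automatic activations on the controlled drift.
    auto = (cong1 * gamma_activation(t, amp1, tau1)
            + cong2 * gamma_activation(t, amp2, tau2))
    drift = mu_c + np.gradient(auto, dt)
    x = np.cumsum(drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(t.size))
    hit = np.nonzero(np.abs(x) >= bound)[0]
    if hit.size == 0:
        return None, None
    return t[hit[0]], x[hit[0]] > 0  # upper boundary = correct response
```

Averaging RTs over many such trials reproduces the qualitative pattern in panels (A) through (D): congruent automatic processes speed responses, incongruent ones slow them and produce early errors.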
Figure 7
Experimental data and model predictions of CDFs for both experiments. Blue dots show the experimental data and red lines show the model predictions. Both models provide a reasonable fit to the experimental data; however, the FN-MDMC fits slightly better than the MDMC.
Figure 8
Observed results and model predictions of CAFs for both experiments, across all congruency conditions and for both model variants. Blue dots show the experimental data and red lines show the model predictions. The models predict the experimental data appropriately, except at small response proportions in the incongruent visual conditions.
Figure 9
Predicted delta (Δ) functions by FN-MDMC for the visual-tactile experiment. Delta functions show the difference between the congruent and incongruent CDFs as a function of response time.
Figure 10
Predicted delta (Δ) functions by FN-MDMC for the visual-auditory experiment. Delta functions show the difference between the congruent and incongruent CDFs as a function of response time.
Figure 11
Automatic activation processes of the fitted models. In both models, the peak activation of the visual automatic process is higher than the peak activation of the automatic tactile/auditory process and thus reflects the relatively strong influence of visual-spatial task-irrelevant information.
