Self-prioritization with unisensory and multisensory stimuli in a matching task

Clea Desebrock et al. Atten Percept Psychophys. 2022 Jul;84(5):1666-1688.
doi: 10.3758/s13414-022-02498-z. Epub 2022 May 10.

Abstract

A shape-label matching task is commonly used to examine the self-advantage in motor reaction-time responses (the Self-Prioritization Effect; SPE). In the present study, auditory labels were introduced, and, for the first time, responses to unisensory auditory, unisensory visual, and multisensory object-label stimuli were compared across block-type (i.e., trials blocked by sensory modality type, and intermixed trials of unisensory and multisensory stimuli). Auditory stimulus intensity was presented at either 50 dB (Group 1) or 70 dB (Group 2). The participants in Group 2 also completed a multisensory detection task, making simple speeded motor responses to the shape and sound stimuli and their multisensory combinations. In the matching task, the SPE was diminished in intermixed trials, and in responses to the unisensory auditory stimuli as compared with the multisensory (visual shape+auditory label) stimuli. In contrast, the SPE did not differ in responses to the unisensory visual and multisensory (auditory object+visual label) stimuli. The matching task was associated with multisensory 'costs' rather than gains, but response times to self- versus stranger-associated stimuli were differentially affected by the type of multisensory stimulus (auditory object+visual label or visual shape+auditory label). The SPE was thus modulated both by block-type and the combination of object and label stimulus modalities. There was no SPE in the detection task. Taken together, these findings suggest that the SPE with unisensory and multisensory stimuli is modulated by both stimulus- and task-related parameters within the matching task. The SPE does not transfer to a significant motor speed gain when the self-associations are not task-relevant.

Keywords: Auditory labels; Blocked vs. intermixed; Matching; Multisensory; Self-prioritization; Self-relevance; Simple detection.


Figures

Fig. 1
Unisensory stimuli and multisensory stimuli used in the matching task. Example using the self-associated label. Simultaneously presented: a. Unisensory visual stimulus type: Visual object + Visual label = V+VL. b. Multisensory stimulus type: Auditory object + Visual label = A+VL. c. Multisensory stimulus type: Visual object + Auditory label = V+AL. d. Unisensory auditory stimulus type: Auditory object + Auditory label = A+AL. ♪ = categorical sound. △ = geometric shape (triangle presented as example). The Self label is depicted. /Self/ denotes the spoken self-associated label
Fig. 2
Schematic overview of an experimental trial sequence for the matching task. (Displayed elements not to scale). a Fixation cross. b Visual, auditory, or audiovisual stimulus onset. Shape-label (Visual Task) stimulus shown. c Blank screen. d Written feedback displayed on screen: “Incorrect” / “Too slow” (for a ‘correct’ response, a blank slide was displayed, following Schäfer et al., 2016b). e Inter-trial intervals generated at random
Fig. 3
Bar graphs with individual data points (outliers excluded). Error bars represent SE. a Self-bias in RT index scores in the shape-label Matching trials as a function of Block type (blocked vs intermixed), Stimulus type, and Auditory stimulus intensity. b Multisensory percentage gains/costs in RT as a function of Block type, AV stimulus type, Association, and Auditory stimulus intensity. c Self-bias in sensitivity index scores (d′; d-prime) as a function of Block type, Stimulus type, and Auditory stimulus intensity. d Multisensory gains/costs in d′ as a function of Block type, AV stimulus type, Association, and Auditory stimulus intensity. V = visual shape stimulus, A = auditory stimulus, VL = visual (text) label, AL = auditory (spoken) label
Fig. 4
Bar graphs with individual data points (outliers excluded). Estimated marginal means of self-bias in RT index scores in shape-label Matching trials as a function of Block type and Stimulus type (with the Auditory stimulus intensity condition collapsed), showing the main effects of Block type and Stimulus type, and the borderline-significant interaction between Block type and Stimulus type. Error bars represent SE. V = visual shape stimulus, A = sound stimulus, VL = visual (text) label, AL = auditory (spoken) label
Fig. 5
Detection task. N = 22 (Group 2). Bar graphs (a-c) with individual data points (outliers excluded). Error bars represent SE. a Mean Percentage Accuracy (detection rate) as a function of Association and Stimulus type. b Mean RTs as a function of Association and Condition (Stimulus type). c Mean multisensory RT gains in ms as a function of Association and Multisensory Condition (left). Mean SPE RT gains in ms as a function of multisensory condition (right). SPE gains = the difference between self- and stranger-associated responses. Positive values indicate a self-advantage. d Cumulative Distribution Functions (CDFs) of percentiles of the rank-ordered RTs (PC grand RT means) as a function of Condition (uni- and multisensory). AV Match self = the sound and shape associated with the self in the preceding matching task. AV Match stranger = the sound and shape associated with the stranger in the preceding matching task. MM = mismatch. AV self MM = self-associated shape and stranger-associated sound used in the preceding matching task. AV MM stranger = stranger-associated shape and self-associated sound used in the preceding matching task
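The figure captions above report two derived measures: the sensitivity index d′ and multisensory percentage gains/costs in RT. As a minimal sketch of how such measures are conventionally computed (the standard signal-detection and race-model-style definitions; the paper's exact index formulas are not given in this excerpt, so these should be read as assumptions):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false-alarm rate),
    the standard signal-detection definition of d'."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

def multisensory_gain_pct(rt_multi: float, rt_uni_a: float, rt_uni_b: float) -> float:
    """Percentage RT gain of the multisensory condition over the
    faster unisensory condition; negative values indicate a
    multisensory 'cost', as reported for the matching task."""
    best_uni = min(rt_uni_a, rt_uni_b)
    return 100.0 * (best_uni - rt_multi) / best_uni

# Illustrative (hypothetical) values, in ms:
print(d_prime(0.84, 0.16))                 # ≈ 1.99
print(multisensory_gain_pct(450, 500, 520))  # 10.0% gain
print(multisensory_gain_pct(520, 500, 510))  # negative → cost
```

Note that extreme hit or false-alarm rates (0 or 1) make z undefined; analyses typically apply a correction (e.g., adjusting by half a trial) before computing d′.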
