Episodic encoding of voice attributes and recognition memory for spoken words

T J Palmeri et al. J Exp Psychol Learn Mem Cogn. 1993 Mar;19(2):309-28. doi: 10.1037//0278-7393.19.2.309.

Abstract

Recognition memory for spoken words was investigated with a continuous recognition memory task. Independent variables were the number of intervening words (lag) between initial and subsequent presentations of a word, the total number of talkers in the stimulus set, and whether words were repeated in the same voice or a different voice. In Experiment 1, recognition judgments were based on word identity alone. Same-voice repetitions were recognized more quickly and accurately than different-voice repetitions at all values of lag and at all levels of talker variability. In Experiment 2, recognition judgments were based on both word identity and voice identity. Subjects recognized repeated voices quite accurately. Gender of the talker affected voice recognition but not item recognition. These results suggest that detailed information about a talker's voice is retained in long-term episodic memory representations of spoken words.


Figures

Figure 1
Probability of correctly recognizing old items from all multiple-talker conditions in Experiment 1. (The upper panel displays item recognition for same-voice repetitions and different-voice repetitions as a function of talker variability, collapsed across values of lag; the lower panel displays item recognition for same- and different-voice repetitions as a function of lag, collapsed across levels of talker variability.)
Figure 2
Probability of correctly recognizing old items from a subset of the multiple-talker conditions in Experiment 1. (In both panels, item recognition for same-voice repetitions is compared with item recognition for different-voice/same-gender and different-voice/different-gender repetitions. The upper panel displays item recognition as a function of talker variability, collapsed across values of lag; the lower panel displays item recognition as a function of lag, collapsed across levels of talker variability.)
Figure 3
Response times for correctly recognizing old items from all multiple-talker conditions in Experiment 1. (The upper panel displays response times for same-voice repetitions and different-voice repetitions as a function of talker variability, collapsed across values of lag; the lower panel displays the response times for same- and different-voice repetitions as a function of lag, collapsed across levels of talker variability.)
Figure 4
Response times for correctly recognizing old items from the single-talker condition and the same-voice repetitions of multiple-talker conditions in Experiment 1. (The upper panel displays response times as a function of talker variability, collapsed across values of lag; the lower panel displays response times for the single-talker condition in comparison with the average response times for the same-voice repetitions of the multiple-talker conditions as a function of lag, collapsed across levels of talker variability.)
Figure 5
Response times for correctly recognizing old items from a subset of the conditions in Experiment 1. (In both panels, response times for same-voice repetitions are compared with response times for different-voice/same-gender and different-voice/different-gender repetitions. The upper panel displays response times as a function of talker variability, collapsed across values of lag; the lower panel displays response times as a function of lag, collapsed across levels of talker variability.)
Figure 6
Probability of correctly recognizing old items from all conditions in Experiment 2. (The upper panel displays item recognition for same-voice repetitions and different-voice repetitions as a function of talker variability, collapsed across values of lag; the lower panel displays item recognition for same- and different-voice repetitions as a function of lag, collapsed across levels of talker variability.)
Figure 7
Probability of correctly recognizing old items from a subset of the conditions in Experiment 2. (In both panels, item recognition for same-voice repetitions is compared with item recognition for different-voice/same-gender and different-voice/different-gender repetitions. The upper panel displays item recognition as a function of talker variability, collapsed across values of lag; the lower panel displays item recognition as a function of lag, collapsed across levels of talker variability.)
Figure 8
Response times for correctly recognizing old items from all conditions in Experiment 2. (The upper panel displays response times for same-voice repetitions and different-voice repetitions as a function of talker variability, collapsed across values of lag; the lower panel displays response times for same- and different-voice repetitions as a function of lag, collapsed across levels of talker variability.)
Figure 9
Response times for correctly recognizing old items from a subset of the conditions in Experiment 2. (In both panels, response times for same-voice repetitions are compared with response times for different-voice/same-gender and different-voice/different-gender repetitions. The upper panel displays response times as a function of talker variability, collapsed across values of lag; the lower panel displays response times as a function of lag, collapsed across levels of talker variability.)
Figure 10
Probability of correctly recognizing old items as a repetition in the same voice or as a repetition in a different voice from all conditions in Experiment 2. (The upper panel displays voice recognition for same- and different-voice repetitions as a function of talker variability, collapsed across values of lag; the lower panel displays voice recognition for same- and different-voice repetitions as a function of lag, collapsed across levels of talker variability.)
Figure 11
Probability of correctly recognizing old items as a repetition in the same voice or as a repetition in a different voice from a subset of the conditions in Experiment 2. (In both panels, voice recognition for same-voice repetitions is compared with voice recognition for different-voice/same-gender and different-voice/different-gender repetitions. The upper panel displays voice recognition as a function of talker variability, collapsed across values of lag; the lower panel displays voice recognition as a function of lag, collapsed across levels of talker variability.)
Figure 12
Response times for correctly recognizing old items as a repetition in the same voice or as a repetition in a different voice from all conditions in Experiment 2. (The upper panel displays voice recognition response times for same- and different-voice repetitions as a function of talker variability, collapsed across values of lag; the lower panel displays response times for same- and different-voice repetitions as a function of lag, collapsed across levels of talker variability.)
Figure 13
Response times for correctly recognizing old items as a repetition in the same voice or as a repetition in a different voice from a subset of the conditions in Experiment 2. (In both panels, the response times for same-voice repetitions are compared with response times for different-voice/same-gender and different-voice/different-gender repetitions. The upper panel displays voice recognition response times as a function of talker variability, collapsed across values of lag; the lower panel displays response times as a function of lag, collapsed across levels of talker variability.)
