Clinical Trial
Lang Speech. 2020 Jun;63(2):264–291. doi: 10.1177/0023830919842353. Epub 2019 Apr 19.

Finding Phrases: The Interplay of Word Frequency, Phrasal Prosody and Co-speech Visual Information in Chunking Speech by Monolingual and Bilingual Adults

Irene de la Cruz-Pavía et al. Lang Speech. 2020 Jun.

Abstract

The audiovisual speech signal contains multimodal cues to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants, we investigated whether English monolinguals and bilingual speakers of English and a language with the opposite basic word order (i.e., one in which objects precede verbs) can use word frequency, phrasal prosody, and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We show that both monolinguals and bilinguals used the auditory and visual sources of information to chunk "phrases" from the input. These results suggest that speech segmentation is a bimodal process, although the influence of co-speech facial gestures is rather limited and tied to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, appears to determine the bilinguals' segmentation, overriding the auditory and visual cues and revealing a factor that begs further exploration.

Keywords: artificial grammar learning; bilingualism; co-speech visual information; frequency-based information; phrase segmentation; prosody.


Figures

Figure 1.
Shared structure of the artificial languages. The table represents the shared basic structure of the ambiguous artificial languages: (a) the lexical categories and tokens of the languages; (b) the two possible structures of the ambiguous stream; (c) three examples of the 36 test pairs. On the right, a picture of the animated line drawing used in the languages containing visual information.
Figure 2.
Graphical depiction of the artificial languages in Experiments 1, 2, and 3. The brackets depict the duration of the head nods, and the arrows mark the location of their peaks.
Figure 3.
Word order preferences of the participants in Experiment 1. Bar graphs (top) and boxplots (bottom) with standard error depicting the number and distribution of frequent-initial responses out of the 36 test trials by the monolingual (dark gray columns) and bilingual (light gray columns) participants.
Figure 4.
Word order preferences of the participants in Experiment 2. Bar graphs (top) and boxplots (bottom) with standard error depicting the number and distribution of frequent-initial responses out of the 36 test trials by the monolingual (dark gray columns) and bilingual (light gray columns) participants. The patterned columns in the top figure depict Experiment 1's groups exposed to frequency-only information, that is, the baseline groups. Experiments 1 and 2's artificial languages share the same tokens and test items.
Figure 5.
Word order preferences of the participants in Experiment 3. Bar graphs (top) and boxplots (bottom) with standard error depicting the number and distribution of frequent-initial responses out of the 36 test trials by the monolingual (dark gray columns) and bilingual (light gray columns) participants. The patterned columns in the top figure depict Experiment 1's groups exposed to frequency-only information, that is, the baseline groups. Experiments 1, 2, and 3's artificial languages share the same tokens and test items.
