Review

Front Psychol. 2014 Sep 25;5:1084. doi: 10.3389/fpsyg.2014.01084. eCollection 2014.

Does complexity matter? Meta-analysis of learner performance in artificial grammar tasks

Rachel Schiff et al.

Abstract

Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies.

Keywords: artificial grammar learning; complexity; grammar system; topological entropy.
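The complexity measure at the center of the meta-analysis, topological entropy (TE), is computed from a grammar chart's transition structure. The sketch below is not code from the study; it is a minimal illustration of the underlying formula, assuming the grammar chart has already been encoded as a 0/1 symbol-level transition matrix, with TE taken as the natural log of that matrix's largest eigenvalue, in the spirit of Bollt and Jones (2000). The automated matrix-lift-action step the paper describes for building such a matrix from a labeled chart is not reproduced here, and the example matrix is purely hypothetical.

    import numpy as np

    def topological_entropy(transition_matrix):
        """Return TE = ln(lambda_max), where lambda_max is the largest
        (Perron) eigenvalue of a 0/1 transition matrix.

        The matrix is assumed to already encode the grammar at the symbol
        level: entry (i, j) is 1 if the grammar permits a move from
        element i to element j, and 0 otherwise.
        """
        A = np.asarray(transition_matrix, dtype=float)
        lambda_max = max(abs(np.linalg.eigvals(A)))  # spectral radius
        return float(np.log(lambda_max))

    # Hypothetical 3-node chart allowing every transition except self-loops.
    # Its eigenvalues are 2, -1, -1, so TE = ln(2) ~ 0.693 (illustrative only).
    if __name__ == "__main__":
        A = [[0, 1, 1],
             [1, 0, 1],
             [1, 1, 0]]
        print(f"TE = {topological_entropy(A):.3f}")

Note that the base of the logarithm (natural log here) only rescales TE values; for correlating grammar complexity with task performance across studies, any consistent base yields the same ordering.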


Figures

Figure 1. Charts of the 10 artificial grammars appearing in Table 1.
Figure 2. A simple grammar chart and a more complex grammar chart.

References

    1. Andrade J., Baddeley A. (2011). The contribution of phonological short-term memory to artificial grammar learning. Q. J. Exp. Psychol. 64, 960–974. doi: 10.1080/17470218.2010.533440
    2. Aslin R. N., Newport E. L. (2008). What statistical learning can and can't tell us about language acquisition, in Infant Pathways to Language: Methods, Models, and Research Directions, eds Colombo J., McCardle P., Freund L. (Mahwah, NJ: Erlbaum), 15–29.
    3. Bailey T. M., Pothos E. M. (2008). AGL StimSelect: Software for automated selection of stimuli for artificial grammar learning. Behav. Res. Methods 40, 164–176. doi: 10.3758/BRM.40.1.164
    4. Bollt E. M., Jones M. A. (2000). The complexity of artificial grammars. Nonlin. Dyn. Psychol. Life Sci. 4, 153–168. doi: 10.1023/A:1009524428448
    5. Brooks L. R., Vokey J. R. (1991). Abstract analogies and abstracted grammars: Comments on Reber (1989). J. Exp. Psychol. 120, 316–323. doi: 10.1037/0096-3445.120.3.316
