A language learning model for finite parameter spaces
- PMID: 8990971
- DOI: 10.1016/s0010-0277(96)00718-4
Abstract
This paper shows how to formally characterize language learning in a finite parameter space, for instance in the principles-and-parameters approach to language, as a Markov structure. New language learning results follow directly: we can explicitly calculate how many positive examples, on average, it will take for a learner to correctly identify a target language with high probability (the "sample complexity"). We show how sample complexity varies with input distributions and learning regimes. In particular, we find that the average time to converge under reasonable language input distributions for the simple three-parameter system first described by Gibson and Wexler (1994) is psychologically plausible, in the range of 100-150 positive examples. We further find that a simple random-step algorithm, one that simply jumps from one language hypothesis to another rather than changing one parameter at a time, converges faster and always reaches the correct target language, in contrast to the single-step, local parameter-setting method advocated in some recent work.
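To illustrate the kind of calculation the abstract describes, here is a minimal sketch of the Markov-chain view of parameter setting. The learner's hypothesis space is a finite set of grammars, each a state in an absorbing Markov chain whose transition probabilities depend on the input distribution; the expected number of positive examples to convergence then falls out of the standard fundamental-matrix formula for absorbing chains. The 3-state transition matrix below is purely hypothetical and is not the one the paper derives from Gibson and Wexler's three-parameter system.

```python
import numpy as np

# Hypothetical 3-state absorbing Markov chain over language hypotheses.
# State 2 is the target grammar (absorbing); states 0 and 1 are
# non-target hypotheses. The probabilities are illustrative only,
# not those computed in the paper.
P = np.array([
    [0.6, 0.2, 0.2],   # from hypothesis 0
    [0.1, 0.7, 0.2],   # from hypothesis 1
    [0.0, 0.0, 1.0],   # target grammar: once reached, the learner stays
])

# Q is the chain restricted to the transient (non-target) states.
# For an absorbing chain, the fundamental matrix N = (I - Q)^-1 gives
# expected visit counts, and N @ 1 gives the expected number of steps
# (positive examples) before absorption, i.e. the sample complexity.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)
expected_examples = N @ np.ones(2)

for state, t in enumerate(expected_examples):
    print(f"expected examples to converge from hypothesis {state}: {t:.1f}")
```

Re-deriving the entries of P under different input distributions or learning regimes (single-parameter steps versus random jumps between hypotheses) is, in miniature, how the paper's comparisons of sample complexity proceed.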
Comment in
- Advances in the computational study of language acquisition. Cognition. 1996 Oct-Nov;61(1-2):1-38. doi: 10.1016/s0010-0277(96)00779-2. PMID: 8990967. Review.
