Choice as a function of reinforcer "hold": from probability learning to concurrent reinforcement
- PMID: 18954229
- PMCID: PMC2673116
- DOI: 10.1037/0097-7403.34.4.437
Abstract
Two procedures commonly used to study choice are concurrent reinforcement and probability learning. Under concurrent-reinforcement procedures, once a reinforcer is scheduled, it remains available indefinitely until collected. Therefore, reinforcement becomes increasingly likely with the passage of time or with responses on other operanda. Under probability learning, reinforcer probabilities are constant and independent of the passage of time or responses. Therefore, a particular reinforcer is gained or not on the basis of a single response, and potential reinforcers are not retained, as when betting at a roulette wheel. In the "real" world, continued availability of reinforcers often lies between these two extremes, with potential reinforcers being lost owing to competition, maturation, decay, and random scatter. The authors parametrically manipulated the likelihood of continued reinforcer availability, defined as hold, and examined the effects on pigeons' choices. Choices varied as power functions of obtained reinforcers under all values of hold. Stochastic models provided generally good descriptions of choice emissions, with deviations from stochasticity systematically related to hold. Thus, a single set of principles accounted for choices across hold values that represent a wide range of real-world conditions.
(c) 2008 APA, all rights reserved.
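The hold continuum described in the abstract can be illustrated with a small simulation. The sketch below is a hypothetical toy model, not the authors' procedure: each trial, a reinforcer may be scheduled on one alternative; once scheduled, an uncollected reinforcer persists to the next trial with probability `hold`. Setting `hold = 1` approximates a concurrent-reinforcement schedule (the reinforcer is held until collected), while `hold = 0` approximates probability learning (the reinforcer is gained on a single response or lost). The names, parameter values, and the 50% response rule are all illustrative assumptions.

```python
import random

def simulate_alternative(trials, setup_prob, hold, seed=0):
    """Toy model of one response alternative under a reinforcer-'hold' procedure.

    Each trial, a reinforcer is scheduled with probability `setup_prob`
    (if none is already pending). An uncollected reinforcer carries over
    to the next trial with probability `hold`. The simulated subject
    responds on this alternative on a random half of trials. Returns the
    proportion of responses that were reinforced.
    """
    rng = random.Random(seed)
    armed = False      # is a reinforcer currently available?
    responses = 0
    reinforced = 0
    for _ in range(trials):
        # Schedule a new reinforcer if none is pending.
        if not armed and rng.random() < setup_prob:
            armed = True
        if rng.random() < 0.5:       # subject responds here this trial
            responses += 1
            if armed:
                reinforced += 1
                armed = False        # reinforcer collected
        elif armed and rng.random() >= hold:
            armed = False            # uncollected reinforcer is lost

    return reinforced / responses

# hold = 0: probability learning (use it or lose it).
low = simulate_alternative(100_000, setup_prob=0.2, hold=0.0)
# hold = 1: concurrent reinforcement (held until collected).
high = simulate_alternative(100_000, setup_prob=0.2, hold=1.0)
```

Because a held reinforcer keeps accumulating availability while the subject responds elsewhere, the obtained reinforcement rate per response rises with `hold`, which is the gradient the study manipulates parametrically.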