Modeling Rapid Guessing Behaviors in Computer-Based Testlet Items

Kuan-Yu Jin et al. Appl Psychol Meas. 2023 Jan;47(1):19-33. doi: 10.1177/01466216221125177. Epub 2022 Sep 9.

Abstract

In traditional test models, test items are assumed to be independent, and test-takers are assumed to respond to each item slowly and thoughtfully. However, some test items share a common stimulus (dependent items forming a testlet), and test-takers sometimes lack motivation, knowledge, or time (speededness), so they engage in rapid guessing (RG). Ignoring the dependence among responses to testlet items can negatively bias standard errors of measurement, and ignoring RG by fitting a simpler item response theory (IRT) model can bias the results. Because computer-based testing captures response times on testlet items, we propose a mixture testlet IRT model that combines item responses and response times to model RG behavior in computer-based testlet items. Two simulation studies with Markov chain Monte Carlo estimation using the JAGS program showed (a) good recovery of the item and person parameters in the new model and (b) the harmful consequences of ignoring RG (biased parameter estimates: overestimated item difficulties, underestimated time intensities, underestimated respondent latent speed parameters, and overestimated precision of respondent latent trait estimates). The application of IRT models with and without RG to data from a computer-based language test showed parameter differences resembling those in the simulations.
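
The abstract describes the mixture model only at a high level, so the sketch below shows one plausible form of the marginal likelihood for a single scored response and its response time under a two-class mixture: a rapid-guessing class with chance-level accuracy and its own response-time distribution, and a solution class governed by a Rasch-type testlet model for accuracy plus a lognormal response-time model. The specific choices here (Rasch rather than 2PL accuracy, a fixed chance probability g, the lognormal forms, and the parameter names theta, gamma, b, tau, beta, pi_rg) are illustrative assumptions, not the paper's exact MTM-RG specification.

import numpy as np

def lognormal_logpdf(t, mu, sigma):
    # Log density of a lognormal response time with log-mean mu and log-SD sigma.
    return -np.log(t * sigma * np.sqrt(2 * np.pi)) - (np.log(t) - mu) ** 2 / (2 * sigma ** 2)

def mixture_loglik(x, t, theta, gamma, b, tau, beta, pi_rg, g=0.25,
                   sigma_sol=0.4, mu_rg=1.0, sigma_rg=0.5):
    # Marginal log-likelihood of one (response, RT) pair under a two-class mixture:
    # rapid guessing (RG) vs. solution behavior. All parameterizations are assumptions.
    #   x      : scored response (0/1)
    #   t      : response time in seconds (> 0)
    #   theta  : person ability
    #   gamma  : person-specific testlet effect for this item's testlet
    #   b      : item difficulty
    #   tau    : person speed (lognormal RT model, van der Linden style)
    #   beta   : item time intensity
    #   pi_rg  : probability of rapid guessing on this response
    #   g      : chance success probability under RG (e.g., 1 / number of options)
    # Solution class: Rasch-type testlet model for accuracy, lognormal RT.
    p_sol = 1.0 / (1.0 + np.exp(-(theta + gamma - b)))
    ll_sol = (x * np.log(p_sol) + (1 - x) * np.log(1 - p_sol)
              + lognormal_logpdf(t, beta - tau, sigma_sol))
    # Rapid-guessing class: chance-level accuracy, fast RT distribution.
    ll_rg = (x * np.log(g) + (1 - x) * np.log(1 - g)
             + lognormal_logpdf(t, mu_rg, sigma_rg))
    # Combine the two classes on the log scale (log-sum-exp).
    return np.logaddexp(np.log(pi_rg) + ll_rg, np.log(1 - pi_rg) + ll_sol)

In an MCMC implementation such as the JAGS analysis mentioned above, this marginal likelihood would more commonly be handled by sampling a latent class indicator for each response rather than evaluating the mixture directly.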

Keywords: mixture model; multidimensional item response theory; rapid guessing; testlet; response time.


Conflict of interest statement

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figures

Figure 1. Correlation, bias, and standard error of person estimates for the TM-RT and MTM-RG in Simulation 2.

Figure 2. Histogram of RTs in the reading test. Note. The mean RT on the 3rd all/page testlet was used to draw Figure 2d.

Figure 3. Item parameter estimates for the TM-RT and MTM-RG in the reading test. Note. The 1st, 2nd, and 14th testlets are presented in the one/page design. They are items 1–6, 7–12, and 45–50, respectively.

Figure 4. Marginal probability of RG and the probability density functions of RT under the MTM-RG in the reading test. Note. The three all/page testlets are items 1–6, 7–12, and 45–50. The upper and lower bounds define the 95% probability interval. The all/page testlets are excluded in Figure 4b.

