Front Psychol. 2020 Feb 14;11:197. doi: 10.3389/fpsyg.2020.00197. eCollection 2020.

The Impact of Test and Sample Characteristics on Model Selection and Classification Accuracy in the Multilevel Mixture IRT Model


Sedat Sen et al. Front Psychol.

Abstract

The standard item response theory (IRT) model assumption of a single homogeneous population may be violated in real data. Mixture extensions of IRT models have been proposed to account for latent heterogeneous populations, but these models are not designed to handle multilevel data structures. Ignoring the multilevel structure is problematic: aggregating lower-level units with higher-level units disregards dependencies in the data and yields less accurate results. Multilevel data structures cause such dependencies between levels, but these can be modeled in a straightforward way in multilevel mixture IRT models. An important step in the use of multilevel mixture IRT models is assessing the fit of the model to the data, which is often done with relative fit indices. Previous research on mixture IRT models has shown that the performance of these indices and the classification accuracy of these models can be affected by several factors, including the percentage of class-variant items, the number of items, the number and size of clusters, and the mixing proportions of the latent classes. As yet, no studies appear to have examined these issues for multilevel extensions of mixture IRT models. The current study investigates the effects of several features of the data on the accuracy of model selection and parameter recovery. Results are reported on a simulation study designed to examine the following features of the data: percentage of class-variant items (30, 60, and 90%), number of latent classes in the data (1 to 3 latent classes at level 1 and 1 or 2 latent classes at level 2), number of items (10, 30, and 50), number of clusters (50 and 100), cluster size (10 and 50), and mixing proportions [equal (0.5 and 0.5) vs. unequal (0.25 and 0.75)]. Simulation results indicated that multilevel mixture IRT models produced less accurate estimates when the number of clusters and the cluster size were small.
In addition, mean root mean square error (RMSE) values increased as the percentage of class-variant items increased, and parameters were recovered most accurately under the 30% class-variant item conditions. Mixing proportion type (i.e., equal vs. unequal latent class sizes) and number of items (10, 30, and 50), however, showed no clear pattern. The sample-size-dependent fit indices (BIC, CAIC, and SABIC) performed poorly for the smaller level-1 sample size. Under the remaining conditions, the SABIC index outperformed the other fit indices.
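The relative fit indices compared in the study are all penalized transformations of the maximized log-likelihood, differing only in how the penalty scales with sample size. As a minimal sketch (the formulas are the standard definitions of these criteria; the log-likelihood and parameter counts below are illustrative values, not results from the study):

```python
import math

def information_criteria(log_lik, n_params, n_obs):
    """Common sample-size-dependent fit indices for model selection.

    log_lik  : maximized log-likelihood of the fitted model
    n_params : number of free parameters in the model
    n_obs    : sample size used in the penalty term
    """
    neg2ll = -2.0 * log_lik
    return {
        "AIC": neg2ll + 2.0 * n_params,
        "BIC": neg2ll + n_params * math.log(n_obs),
        # CAIC adds a constant 1 to the log(N) penalty weight.
        "CAIC": neg2ll + n_params * (math.log(n_obs) + 1.0),
        # Sample-size-adjusted BIC replaces N with (N + 2) / 24.
        "SABIC": neg2ll + n_params * math.log((n_obs + 2.0) / 24.0),
    }

# Illustrative comparison of a 1-class vs. a 2-class solution fitted
# to the same (hypothetical) data; the model with the smaller index
# value is preferred.
one_class = information_criteria(log_lik=-5210.4, n_params=10, n_obs=500)
two_class = information_criteria(log_lik=-5150.9, n_params=21, n_obs=500)
best = "2-class" if two_class["SABIC"] < one_class["SABIC"] else "1-class"
print(best)
```

Because the SABIC penalty grows much more slowly than the BIC and CAIC penalties, it is less prone to over-penalizing complex models in small samples, which is consistent with the abstract's finding that SABIC performed best outside the smallest level-1 sample-size conditions.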

Keywords: classification accuracy; item response theory; mixture item response model; model selection; multilevel data.

