J Clin Exp Neuropsychol. 2018 Oct;40(8):745-760.
doi: 10.1080/13803395.2018.1427699. Epub 2018 Feb 5.

A signal detection-item response theory model for evaluating neuropsychological measures

Michael L Thomas et al. J Clin Exp Neuropsychol. 2018 Oct.

Abstract

Introduction: Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias.
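To make the stated link concrete, the equal-variance signal detection model and the normal-ogive item response model share the same functional form. The sketch below uses textbook notation (Φ is the standard normal cumulative distribution function) and illustrates the general correspondence only; it is not necessarily the exact SD-IRT parameterization fitted in the article:

P(\text{``old''} \mid \text{target}) = \Phi\!\left(\tfrac{d'}{2} - C_{\text{center}}\right), \qquad P(\text{``old''} \mid \text{foil}) = \Phi\!\left(-\tfrac{d'}{2} - C_{\text{center}}\right)

P(y_{ij} = 1 \mid \theta_i) = \Phi\big(a_j(\theta_i - b_j)\big)

Reading the person parameter θ_i as memory discrimination, with item parameters a_j and b_j rescaling and shifting the decision criterion, shows one way the two frameworks can be aligned.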

Method: Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness.

Results: SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education.

Conclusions: SD-IRT models benefit from the measurement rigor of item response theory, which permits the modeling of item difficulty and examinee ability, and from signal detection theory, which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. Future work might include the development of computerized adaptive tests and integration with mixture and random-effects models.

Keywords: Assessment; item response theory; neuropsychology; recognition memory; signal detection theory; traumatic brain injury.


Figures

Figure 1
Equal variance, signal detection theory model. μT = mean of the distribution of familiarity for targets; μF = mean of the distribution of familiarity for foils; d′ = μT − μF (memory discrimination); C = criterion; Ccenter = value of the criterion relative to the midpoint between μT and μF (bias).
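For reference, the quantities named in Figure 1 have standard equal-variance SDT estimators based on raw hit and false-alarm rates; the article estimates the analogous person parameters via the SD-IRT model rather than from raw rates, so the minimal sketch below is only the textbook calculation for comparison.

# Minimal sketch: textbook equal-variance SDT estimates of d' and Ccenter
# from raw hit and false-alarm rates (illustrative, not the SD-IRT estimator).
from scipy.stats import norm

def sdt_equal_variance(hit_rate: float, fa_rate: float):
    """Return (d_prime, c_center) under the equal-variance SDT model."""
    z_hit = norm.ppf(hit_rate)        # z-transform of the hit rate
    z_fa = norm.ppf(fa_rate)          # z-transform of the false-alarm rate
    d_prime = z_hit - z_fa            # memory discrimination (muT - muF)
    c_center = -0.5 * (z_hit + z_fa)  # criterion relative to the midpoint (bias)
    return d_prime, c_center

# Example: 80% hits and 20% false alarms give d' ~ 1.68 and c_center ~ 0.
print(sdt_equal_variance(0.80, 0.20))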
Figure 2
Principal factor analysis scree plot for all recognition memory tests.
Figure 3
Standard error of estimate functions for the signal detection-item response theory models. Face = Penn Face Memory Test. Word = Penn Word Memory Test. Object = Visual Object Learning Test.
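As context for this figure, under maximum-likelihood scoring the IRT standard error of estimate is the reciprocal square root of the test information function, a standard IRT result rather than anything specific to these tests:

\mathrm{SE}(\hat{\theta}) \approx \frac{1}{\sqrt{I(\theta)}}, \qquad I(\theta) = \sum_{j} I_j(\theta)

Error is therefore smallest where the items are most informative about ability, consistent with error varying systematically with ability estimates as reported in the Results.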
Figure 4
Regression of estimates of memory discrimination (θd′) onto self-report of head injury with loss of consciousness with distributions of residuals. SEθ = standard error of estimate. Face = Penn Face Memory Test. Word = Penn Word Memory Test. Object = Visual Object Learning Test.
Figure 5
Test response functions allowing item parameters to vary by groups defined by college versus high school education. Face = Penn Face Memory Test. Word = Penn Word Memory Test. Object = Visual Object Learning Test.
