The effect of item screeners on the quality of patient survey data: a randomized experiment of ambulatory care experience measures
- PMID: 22273089
- DOI: 10.2165/01312067-200902020-00009
Abstract
Background: The use of item screeners is viewed as an essential feature of quality survey design because only respondents who are 'qualified' to answer questions that apply to a subset of the sample are directed to answer. However, empirical evidence supporting this view is scant.
Objective: This study compares data quality resulting from the administration of ambulatory care experience measures that use item screeners versus tailored 'not applicable' options in response scales.
Methods: Patients from the practices of 367 primary care physicians in 65 medical groups were randomly assigned to receive one of two versions of a well-validated ambulatory care experience survey. Respondents (n = 2240) represent random samples of active established patients from participating physicians' panels. The 'screener' version included item screeners for five test items; the 'no screener' version instead included tailored 'not applicable' options in the response scales. The main outcome measures were the data quality produced by the two versions, including mean item scores, the level of missing values, the outgoing patient sample sizes needed to achieve adequate medical group-level reliability, and the relative ranking of medical groups.
Results: Mean survey item scores generally did not differ by version. There were consistently fewer respondents to the 'screener' versions than 'no screener' versions. However, because the 'screener' versions improved measurement precision, smaller outgoing patient samples were needed to achieve adequate medical group-level reliability for four of the five items than for the 'no screener' version. The relative ranking of medical groups did not differ by item version.
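The abstract does not state the reliability model used, but the link between measurement precision and required sample size in the Results can be illustrated with the standard variance-components formula for group-level reliability of a mean score. The sketch below is a hypothetical illustration (the variance values and function names are assumptions, not from the study): reducing within-group noise, as screeners appear to do, lowers the per-group sample size needed to hit a reliability target.

```python
import math

def group_level_reliability(n_respondents, var_between, var_within):
    """Reliability of a group mean based on n respondents per group.

    Standard variance-components form: R = s_b^2 / (s_b^2 + s_w^2 / n),
    where s_b^2 is between-group variance and s_w^2 is within-group
    (respondent-level) variance.
    """
    return var_between / (var_between + var_within / n_respondents)

def respondents_needed(target_reliability, var_between, var_within):
    """Smallest per-group n achieving the target reliability.

    Solving R = n / (n + lam) for n, with lam = s_w^2 / s_b^2,
    gives n = R / (1 - R) * lam. A tiny epsilon guards against
    floating-point artifacts before rounding up.
    """
    lam = var_within / var_between
    n = target_reliability / (1.0 - target_reliability) * lam
    return math.ceil(n - 1e-9)

# Illustrative (made-up) variances: screeners cut within-group noise.
n_noisy = respondents_needed(0.70, 1.0, 9.0)   # 'no screener' item -> 21
n_clean = respondents_needed(0.70, 1.0, 6.0)   # 'screener' item -> 14
```

Under these assumed variances, the less noisy item reaches the same 0.70 group-level reliability with a third fewer respondents, which mirrors the paper's finding that increased precision offsets the higher item non-response of the screener version.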
Conclusion: Screeners appear to reduce noise by ensuring that respondents who are not 'qualified' to answer a question are screened out instead of providing unreliable responses. The increased precision resulting from 'screener' versions appears to more than offset the higher item non-response rates compared with 'no screener' versions.
Similar articles
- Development and Validation of a Single-Item Screener for Self-Reporting Sexual Problems in U.S. Adults. J Gen Intern Med. 2015 Oct;30(10):1468-75. doi: 10.1007/s11606-015-3333-3. Epub 2015 Apr 18. PMID: 25893421. Free PMC article.
- Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study. J Med Internet Res. 2023 Nov 1;25:e48236. doi: 10.2196/48236. PMID: 37910163. Free PMC article.
- Screening for more with less: Validation of the Global Appraisal of Individual Needs Quick v3 (GAIN-Q3) screeners. J Subst Abuse Treat. 2021 Jul;126:108414. doi: 10.1016/j.jsat.2021.108414. Epub 2021 Apr 15. PMID: 34116811. Free PMC article.
- Autism Screening in Early Childhood: Discriminating Autism From Other Developmental Concerns. Front Neurol. 2020 Dec 10;11:594381. doi: 10.3389/fneur.2020.594381. eCollection 2020. PMID: 33362696. Free PMC article. Review.
- Novel Augmentation Strategies in Major Depression. Dan Med J. 2017 Apr;64(4):B5338. PMID: 28385173. Review.
Cited by
- Examining multiple sources of differential item functioning on the Clinician & Group CAHPS® survey. Health Serv Res. 2011 Dec;46(6pt1):1778-802. doi: 10.1111/j.1475-6773.2011.01299.x. Epub 2011 Aug 11. PMID: 22092021. Free PMC article.