Interformat reliability of digital psychiatric self-report questionnaires: a systematic review
- PMID: 25472463
- PMCID: PMC4275488
- DOI: 10.2196/jmir.3395
Abstract
Background: Research on Internet-based interventions typically uses digital versions of pen-and-paper self-report symptom scales. However, adaptation into a digital format could affect the psychometric properties of established self-report scales. Several studies have investigated differences between digital and pen-and-paper versions of individual instruments, but no systematic review of the results has yet been conducted.
Objective: This review aims to assess the interformat reliability of self-report symptom scales used in digital or online psychotherapy research.
Methods: Three databases (MEDLINE, Embase, and PsycINFO) were systematically searched for studies investigating the reliability between digital and pen-and-paper versions of psychiatric symptom scales.
Results: From a total of 1504 publications, 33 were included in the review, and the interformat reliability of 40 different symptom scales was assessed. Significant differences in mean total scores between formats were found in 10 of 62 analyses. These differences clustered in just a few studies, indicating that they reflected study and sample effects rather than unreliable instruments. Interformat reliability ranged from r=.35 to r=.99, but the majority of instruments showed a strong correlation between format scores. The quality of the included studies varied, and several had insufficient statistical power to detect small differences between formats.
Conclusions: When digital versions of self-report symptom scales are compared to pen-and-paper versions, most scales show high interformat reliability. This supports the reliability of results obtained in Internet-based psychotherapy research and their comparability to traditional psychotherapy research. Some instruments, however, consistently show low interformat reliability, so these conclusions cannot be generalized to all questionnaires. Most studies had at least some methodological issues, insufficient statistical power being the most common. Future studies should describe the transformation of the instrument into digital format and the data collection procedure in more detail.
Keywords: Internet; computer; psychometric; psychotherapy; questionnaire; reliability.
Conflict of interest statement
Conflicts of Interest: None declared.
