"Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis": Correction to Oberlader et al. (2016)
- PMID: 30883181
- DOI: 10.1037/lhb0000324
"Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis": Correction to Oberlader et al. (2016)
Abstract
Reports an error in "Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis" by Verena A. Oberlader, Christoph Naefgen, Judith Koppehele-Gossel, Laura Quinten, Rainer Banse and Alexander F. Schmidt (Law and Human Behavior, 2016[Aug], Vol 40[4], 440-457). During an update of this meta-analysis, it became apparent that one study had erroneously been entered twice. The reduced data set of k = 55 studies was reanalyzed after excluding the unpublished study by Scheinberger (1993). The corrected overall effect size changed at the second decimal: d = 1.01 (95% CI [0.77, 1.25], Q = 409.73, p < .001, I² = 92.21%) and g = 0.98 (95% CI [0.75, 1.22], Q = 395.49, p < .001, I² = 91.71%), k = 55, N = 3,399. This small numerical deviation is negligible and does not change the interpretation of the results. Similarly, results for categorical moderators changed only numerically, not in their statistical significance or direction (see revised Table 4). In the original meta-analysis based on k = 56 studies, unpublished studies had a larger effect size than published studies; based on k = 55 studies, this difference vanished. Results for continuous moderators also changed only numerically: Q-tests with mixed-effects models still revealed that neither year of publication (Q = 0.06, p = .807, k = 55) nor gender ratio in the sample (Q = 1.28, p = .259, k = 43) had a statistically significant influence on effect size. In sum, based on the numerically corrected values, our implications and practical advice regarding boundary conditions for the use of content-based techniques in credibility assessment remain valid. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-21973-001.) Within the scope of judicial decisions, approaches to distinguish between true and fabricated statements have been of particular importance since ancient times.
Although methods focusing on "prototypical" deceptive behavior (e.g., psychophysiological phenomena, nonverbal cues) have largely been rejected with regard to validity, content-based techniques constitute a promising approach and are well established within the applied forensic context. The basic idea of this approach is that experience-based and nonexperience-based statements differ in their content-related quality. To test the validity of the most prominent content-based techniques, criteria-based content analysis (CBCA) and reality monitoring (RM), we conducted a comprehensive meta-analysis of English- and German-language studies. Based on a variety of decision criteria, 55 studies were included, revealing an overall effect size of g = 0.98 (95% confidence interval [0.75, 1.22], Q = 395.49, p < .001, I² = 91.71%, N = 3,399). There was no significant difference in the effectiveness of CBCA and RM. Additionally, we investigated a number of moderator variables, such as characteristics of participants, statements, and judgment procedures, as well as general study characteristics. Results showed that applying the complete set of CBCA criteria outperformed any incomplete CBCA criteria set. Furthermore, statement classification based on discriminant functions yielded higher discrimination rates than decisions based on sum scores. All results are discussed in terms of their significance for future research (e.g., developing standardized decision rules) and practical application (e.g., user training, applying the complete criteria set). (PsycINFO Database Record (c) 2019 APA, all rights reserved).
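For readers unfamiliar with the heterogeneity statistics quoted above: I² is conventionally derived from Cochran's Q via the Higgins & Thompson formula, I² = max(0, (Q − df)/Q) × 100. The sketch below illustrates that formula only; the choice df = k − 1 is an assumption for illustration, and the degrees of freedom in the published analysis may differ (e.g., depending on how effect sizes were aggregated), so this is not expected to reproduce the reported I² values exactly.

```python
def i_squared(q: float, df: int) -> float:
    """Higgins & Thompson I^2: percentage of total variation across
    studies attributable to heterogeneity rather than sampling error.
    Truncated at 0 when Q falls below its degrees of freedom."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Illustrative only: corrected Q for Hedges' g (Q = 395.49) with the
# *assumed* df = k - 1 = 54; the paper's df may differ.
print(round(i_squared(395.49, 54), 2))  # 86.35 under this assumption

# When Q <= df, heterogeneity is estimated as zero.
print(i_squared(10.0, 20))  # 0.0
```

Note how the same Q yields different I² under different df, which is why the formula alone cannot be checked against the abstract's figures without the analysis details.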
Erratum for
-
Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis. Law Hum Behav. 2016 Aug;40(4):440-457. doi: 10.1037/lhb0000193. Epub 2016 May 5. PMID: 27149290
Similar articles
-
Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis. Law Hum Behav. 2016 Aug;40(4):440-457. doi: 10.1037/lhb0000193. Epub 2016 May 5. PMID: 27149290
-
Can credibility criteria be assessed reliably? A meta-analysis of criteria-based content analysis. Psychol Assess. 2017 Jun;29(6):819-834. doi: 10.1037/pas0000426. PMID: 28594222
-
Criteria-Based Content Analysis (CBCA) reality criteria in adults: A meta-analytic review. Int J Clin Health Psychol. 2016 May-Aug;16(2):201-210. doi: 10.1016/j.ijchp.2016.01.002. Epub 2016 Mar 16. PMID: 30487863. Free PMC article.
-
Proposing immersive virtual reality scenarios for validating verbal content analysis methods in adult samples. Front Psychol. 2024 Feb 19;15:1352091. doi: 10.3389/fpsyg.2024.1352091. eCollection 2024. PMID: 38440246. Free PMC article. Review.
-
"Changes in alcohol use during COVID-19 and associations with contextual and individual difference variables: A systematic review and meta-analysis." Correction to Acuff et al. (2022).Psychol Addict Behav. 2022 Jun;36(4):386. doi: 10.1037/adb0000852. Psychol Addict Behav. 2022. PMID: 35758978