"Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis": Correction to Oberlader et al. (2016).

Reports an error in "Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis" by Verena A. Oberlader, Christoph Naefgen, Judith Koppehele-Gossel, Laura Quinten, Rainer Banse and Alexander F. Schmidt (Law and Human Behavior, 2016[Aug], Vol 40[4], 440-457). During an update of this meta-analysis, it became apparent that one study had erroneously been entered twice. The reduced data set of k = 55 studies was reanalyzed after excluding the unpublished study by Scheinberger (1993). The corrected overall effect size changed only at the second decimal: d = 1.01 (95% CI [0.77, 1.25], Q = 409.73, p < .001, I² = 92.21%) and g = 0.98 (95% CI [0.75, 1.22], Q = 395.49, p < .001, I² = 91.71%), k = 55, N = 3,399. This small numerical deviation is negligible and does not change the interpretation of the results. Similarly, results for categorical moderators changed only numerically, not in their statistical significance or direction (see revised Table 4). In the original meta-analysis based on k = 56 studies, unpublished studies had a larger effect size than published studies; based on k = 55 studies, this difference vanished. Results for continuous moderators also changed only numerically: Q-tests with mixed-effects models still revealed that neither year of publication (Q = 0.06, p = .807, k = 55) nor gender ratio in the sample (Q = 1.28, p = .259, k = 43) had a statistically significant influence on effect size. In sum, based on the numerically corrected values, our implications for practical advice and the boundary conditions for the use of content-based techniques in credibility assessment remain valid. The online version of this article has been corrected.

(The following abstract of the original article appeared in record 2016-21973-001.) Within the scope of judicial decisions, approaches to distinguish between true and fabricated statements have been of particular importance since ancient times.
Although methods focusing on "prototypical" deceptive behavior (e.g., psychophysiological phenomena, nonverbal cues) have largely been rejected with regard to validity, content-based techniques constitute a promising approach and are well established within the applied forensic context. The basic idea of this approach is that experience-based and nonexperience-based statements differ in their content-related quality. To test the validity of the most prominent content-based techniques, criteria-based content analysis (CBCA) and reality monitoring (RM), we conducted a comprehensive meta-analysis of English- and German-language studies. Based on a variety of decision criteria, 55 studies were included, revealing an overall effect size of g = 0.98 (95% confidence interval [0.75, 1.22], Q = 395.49, p < .001, I² = 91.71%, N = 3,399). There was no significant difference in the effectiveness of CBCA and RM. Additionally, we investigated a number of moderator variables, such as characteristics of participants, statements, and judgment procedures, as well as general study characteristics. Results showed that the application of all CBCA criteria outperformed any incomplete CBCA criteria set. Furthermore, statement classification based on discriminant functions revealed higher discrimination rates than decisions based on sum scores. All results are discussed in terms of their significance for future research (e.g., developing standardized decision rules) and practical application (e.g., user training, applying the complete criteria set). (PsycINFO Database Record (c) 2019 APA, all rights reserved)