Posted on 2019-12-01. Authored by Emily Midori Nakamoto.
Pattern evidence forensic scientists use particular evaluative phrases to convey to triers of fact the confidence levels associated with their evidential findings. It is therefore critical to know whether a jury-eligible lay audience interprets these phrases in alignment with the scientist’s intended meaning. This study assessed lay interpretation of the Questioned Document Examiner (QDE) nine-term conclusion scale by polling participants (n = 592) across the United States via Amazon Mechanical Turk. Participants read a brief QDE testimony excerpt containing one randomly assigned verbal term, and their interpretations were then recorded on a 0–8 Likert scale. Results showed that participants tended to undervalue the strongly conclusive terms at both ends of the conclusion scale (i.e., identified, strong probability, strong probability did not, eliminated) and that the lower end of the scale was poorly resolved (i.e., not indicated, probably did not, strong probability did not). Additionally, exposing participants to the full QDE conclusion scale significantly improved their accuracy for 8 of the 9 terms. Despite this improvement, participants’ term interpretations, both with and without this context, remained significantly inaccurate for 8 of the 9 terms. No weak-evidence effects were observed. These findings agree with previous literature indicating that pattern evidence verbal terms are not well understood by a lay audience. However, the significant improvement in participants’ answers following exposure to the QDE scale leads us to recommend further research on these contextual effects. Specifically, research assessing neutral-party educators would be beneficial for improving communication between expert witnesses and triers of fact.