Posted on 2017-11-01, authored by Carlos Roman Salas
Dual-process theories provide a useful framework for exploring the potential constraints on, and sources of error in, reasoning tasks such as the classic base rate reasoning task. Three competing accounts of base rate neglect have been offered in the literature (i.e., knowledge-deficit, monitoring-failure, and inhibition-failure). However, efforts to test the underlying processes and sources of error proposed by these accounts have been limited by 1) a lack of proper problem-type comparisons, 2) a lack of individual difference measures, and 3) the use of binary selection paradigms, which lack the sensitivity to detect more nuanced cases of base rate utilization (e.g., when base rate use does not simply manifest in which group is selected as “more probable”). The current study addressed these issues by independently manipulating the utility of the base rate probabilities (i.e., “Bayesian priors”) and the diagnosticity of the feature probabilities. The study also measured deviations from Bayesian inference more directly by collecting subjective posterior probability judgments for each problem and comparing them with objective Bayesian posterior probability estimates. Furthermore, individual difference measures related to each of the three prevailing accounts of base rate neglect were used to predict performance. Findings indicate that base rate neglect is partly due to a lack of knowledge of how to apply base rates, even in the absence of a prepotent heuristic response. The results support a multifaceted account of why people neglect base rate information, indicate that forced-choice response paradigms inflate the incidence of base rate neglect, and suggest that base rate application approximates a simple averaging norm more than Bayesian norms.
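For concreteness, the sketch below illustrates the two benchmarks the abstract contrasts: the objective Bayesian posterior computed from a base rate (prior) and a feature's diagnosticity (its likelihood under each group), versus a simple average of the prior and the feature probability. This is a minimal illustration under assumed values; the numbers and the exact form of the averaging benchmark are hypothetical, not taken from the study's materials.

```python
# Illustrative comparison of the Bayesian norm and a simple averaging norm.
# All parameter values below are made up for demonstration.

def bayesian_posterior(base_rate: float,
                       p_feature_given_a: float,
                       p_feature_given_b: float) -> float:
    """P(A | feature) via Bayes' rule, where base_rate = P(A)."""
    prior_a = base_rate
    prior_b = 1.0 - base_rate
    numerator = p_feature_given_a * prior_a
    denominator = numerator + p_feature_given_b * prior_b
    return numerator / denominator

def averaging_norm(base_rate: float, p_feature_given_a: float) -> float:
    """One simple averaging rule: the mean of the prior and the
    feature probability, ignoring their proper Bayesian combination."""
    return (base_rate + p_feature_given_a) / 2.0

# Example: a 10% base rate paired with a highly diagnostic feature
# (80% likely in group A, 20% likely in group B).
print(bayesian_posterior(0.10, 0.80, 0.20))  # ~0.31 (Bayesian norm)
print(averaging_norm(0.10, 0.80))            # 0.45 (averaging norm)
```

On problems like this, a judgment near 0.45 rather than 0.31 would look like partial base rate use that a binary “which group is more probable?” response could never reveal, which is the motivation for collecting continuous posterior probability judgments.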