Journal Article

  • Tables and charts are efficient tools for organizing numbers, but many people give little consideration to the order in which they present the data. This article illustrates the strengths and weaknesses of four criteria for organizing data: empirical, theoretical, alphabetical, and a standardized reporting scheme.
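
    As a toy illustration of two of these criteria, the sketch below (made-up figures, not the article's examples) orders the same small table alphabetically and then empirically by magnitude; the theoretical and standardized-reporting criteria depend on outside structure and are not shown.

    ```python
    # Toy example (made-up figures): the same table under two orderings.
    # Alphabetical order makes individual rows easy to look up; empirical
    # order (by magnitude) makes ranks and relative sizes easy to read off.
    data = {"Denmark": 5.9, "Brazil": 212.6, "Canada": 38.2, "Australia": 26.0}

    alphabetical = sorted(data.items())                                   # by label
    empirical = sorted(data.items(), key=lambda kv: kv[1], reverse=True)  # by value

    for title, rows in [("Alphabetical", alphabetical), ("Empirical", empirical)]:
        print(title)
        for country, value in rows:
            print(f"  {country:<10} {value:>7.1f}")
    ```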

  • This article advocates the use of simple data sets to help students gain a good intuitive grasp of ANOVA concepts.
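
    In that spirit, the sketch below (a minimal example with made-up numbers, not the article's data sets) computes a one-way ANOVA from first principles, so the between-group and within-group sums of squares stay visible rather than disappearing into a software call.

    ```python
    # One-way ANOVA on a deliberately simple, made-up data set: three groups
    # with identical spread but shifted means.
    groups = [[3, 4, 5], [6, 7, 8], [9, 10, 11]]

    grand_mean = sum(x for g in groups for x in g) / sum(len(g) for g in groups)

    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)

    F = (ss_between / df_between) / (ss_within / df_within)
    print(f"SS_between={ss_between:.1f}  SS_within={ss_within:.1f}  F={F:.2f}")

    # Cross-check (optional): from scipy.stats import f_oneway; f_oneway(*groups)
    ```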

  • We have been oversold on the base rate fallacy in probabilistic judgment from an empirical, normative, and methodological standpoint. First, contrary to the conventional wisdom, a thorough examination of the literature reveals that base rates are almost always used and that their degree of use depends on task structure and internal task representation. Second, few tasks map unambiguously into the simple, narrow framework that is held up as the standard of good decision making. Third, the current approach is criticized for its failure to consider how the ambiguous, unreliable and unstable base rates of the real world should be used in the informationally rich and criterion-complex natural environment. A more ecologically valid research program is called for.

  • Evolutionary approaches to judgment under uncertainty have led to new data showing that untutored subjects reliably produce judgments that conform to many principles of probability theory when (a) they are asked to compute a frequency instead of the probability of a single event and (b) the relevant information is expressed as frequencies. But are the frequency-computation systems implicated in these experiments better at operating over some kinds of input than others? Principles of object perception and principles of adaptive design led us to propose the individuation hypothesis: that these systems are designed to produce well-calibrated statistical inferences when they operate over representations of "whole" objects, events, and locations. In a series of experiments on Bayesian reasoning, we show that human performance can be systematically improved or degraded by varying whether a correct solution requires one to compute hit and false-alarm rates over "natural" units, such as whole objects, as opposed to inseparable aspects, views, and other parsings that violate evolved principles of object construal.
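
    To make the single-event versus frequency contrast concrete, the sketch below works one screening-style problem (illustrative numbers, not the authors' stimuli) both ways: applying Bayes' rule to rates, and counting whole individuals in a reference class.

    ```python
    # Probability format: apply Bayes' rule directly to the rates.
    base_rate, hit_rate, false_alarm_rate = 0.01, 0.80, 0.096

    p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    posterior = base_rate * hit_rate / p_positive
    print(f"Probability format: P(condition | positive) = {posterior:.3f}")

    # Natural-frequency format: count whole individuals out of 1,000.
    n = 1000
    affected = round(n * base_rate)                       # 10 people have the condition
    true_pos = round(affected * hit_rate)                 # 8 of them test positive
    false_pos = round((n - affected) * false_alarm_rate)  # 95 unaffected people also test positive
    print(f"Frequency format: {true_pos} of {true_pos + false_pos} positives are true "
          f"({true_pos / (true_pos + false_pos):.3f})")
    ```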

  • The use of significance tests in science has been debated from the invention of these tests until the present time. Apart from theoretical critiques on their appropriateness for evaluating scientific hypotheses, significance tests also receive criticism for inviting misinterpretations. We presented six common misinterpretations to psychologists who work in German universities and found out that they are still surprisingly widespread - even among instructors who teach statistics to psychology students. Although these misinterpretations are well documented among students, until now there has been little research on pedagogical methods to remove them. Rather, they are considered "hard facts" that are impervious to correction. We discuss the roots of these misinterpretations and propose a pedagogical concept to teach significance tests, which involves explaining the meaning of statistical significance in an appropriate way.
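
    One way to make the gap between p-values and these intuitive readings concrete is a small simulation (ours, not part of the article's materials): when the null hypothesis is exactly true, p-values are roughly uniform, so around 5% of experiments still come out "significant", which is hard to square with reading p as the probability that the null hypothesis is true.

    ```python
    # Simulation sketch: two-sample p-values when H0 is exactly true.
    import math, random, statistics

    def two_sample_p(n=20, delta=0.0):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(delta, 1) for _ in range(n)]
        se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        z = (statistics.mean(b) - statistics.mean(a)) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal approximation

    pvals = [two_sample_p() for _ in range(5000)]
    print("share with p < .05 when H0 is true:",
          sum(p < 0.05 for p in pvals) / len(pvals))
    ```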

  • Simulation is simpler intellectually than the formulaic method because it does not require that one calculate the number of points in the entire sample space and the number of points in some subset. Instead, one directly samples the ratio. This article presents probabilistic problems that confound even skilled statisticians when attacking the problems deductively, yet are easy to handle correctly, and become clear intuitively, with physical simulation. This analogy demonstrates the usefulness of simulation in the form of resampling methods.
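
    The article's own problems are not reproduced here, but the flavor of the method can be sketched on a classically counterintuitive question, the birthday problem: instead of counting points in the sample space, sample the ratio directly.

    ```python
    # Simulation sketch: chance that at least two of 23 people share a birthday.
    import random

    def shared_birthday(n_people=23):
        days = [random.randrange(365) for _ in range(n_people)]
        return len(set(days)) < n_people   # a repeated day collapses the set

    trials = 100_000
    estimate = sum(shared_birthday() for _ in range(trials)) / trials
    print(f"Simulated P(shared birthday among 23) = {estimate:.3f}")  # exact answer is about 0.507
    ```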

  • The use of frequentist Null Hypothesis Significance Testing (NHST) is such an integral part of scientists' behavior that its use cannot be discontinued by flinging it out of the window. Faced with this situation, our teaching strategy involves a smooth transition towards the Bayesian paradigm. Its general outlines are as follows. (1) To present natural Bayesian interpretations of NHST outcomes to draw attention to their shortcomings. (2) To create as a result of this the need for a change of emphasis in the presentation and interpretation of results. (3) Finally, to equip students with a real possibility of thinking sensibly about statistical inference problems and behaving in a more reasonable manner. Our conclusion is that teaching the Bayesian approach in the context of experimental data analysis appears both desirable and feasible.
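
    A minimal sketch of step (1), with illustrative numbers rather than anything from the article: for a normal mean with known standard deviation and a flat prior, the one-sided p-value is numerically equal to the posterior probability that the true effect is at most zero, which is the reading students usually want to give it.

    ```python
    # NHST outcome and a natural Bayesian reading for a normal mean,
    # known sigma, flat prior (illustrative numbers).
    import math

    def phi(x):  # standard normal CDF
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    xbar, sigma, n = 0.4, 1.0, 25         # observed mean effect, known sd, sample size
    z = xbar / (sigma / math.sqrt(n))     # test statistic for H0: mu = 0

    p_one_sided = 1 - phi(z)              # frequentist: P(result this extreme | mu = 0)
    post_mu_leq_0 = phi(-z)               # Bayesian, flat prior: P(mu <= 0 | data)

    print(f"one-sided p = {p_one_sided:.4f}   P(mu <= 0 | data) = {post_mu_leq_0:.4f}")
    ```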

  • People tend to approach agreeable propositions with a bias toward confirmation and disagreeable propositions with a bias toward disconfirmation. Because the appropriate strategy for solving the four-card Wason selection task is to seek disconfirmation, the authors predicted that people motivated to reject a task rule should be more likely to solve the task than those without such motivation. In two studies, participants who considered a Wason task rule that implied their own early death (Study 1) or the validity of a threatening stereotype (Study 2) vastly outperformed participants who considered nonthreatening or agreeable rules. Discussion focuses on how a skeptical mindset may help people avoid confirmation bias both in the context of the Wason task and in everyday reasoning.
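
    The normative standard behind that prediction can be made explicit with a small brute-force check, sketched here for the standard letters-and-numbers version of the task rather than the threatening rules used in the studies: a card is worth turning only if some hidden face could disconfirm the rule.

    ```python
    # Rule: "if a card has a vowel on one side, it has an even number on the
    # other." Each card has a letter on one side and a number on the other;
    # the visible faces are E (P), K (not-P), 4 (Q), 7 (not-Q).
    VOWELS = set("AEIOU")
    LETTERS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

    def violates(letter, number):
        return letter in VOWELS and number % 2 == 1   # vowel paired with an odd number

    def worth_turning(visible):
        if visible in LETTERS:     # hidden side is some digit
            return any(violates(visible, d) for d in range(10))
        return any(violates(ch, int(visible)) for ch in LETTERS)   # hidden side is some letter

    for face in ["E", "K", "4", "7"]:
        print(face, "turn" if worth_turning(face) else "skip")
    # Only E (P) and 7 (not-Q) can reveal a violation; turning 4 (Q) cannot falsify the rule.
    ```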

  • We use functional magnetic resonance imaging (fMRI) and behavioral analyses to study the neural roots of biases in causal reasoning. Fourteen participants were given a task requiring them to interpret data relative to plausible and implausible causal theories. Encountering covariation-based data during the evaluation of a plausible theory as opposed to an implausible theory selectively recruited neural tissue in the prefrontal and occipital cortices. In addition, the plausibility of a causal theory modulated the recruitment of distinct neural tissue depending on the extent to which the data were consistent versus inconsistent with the theory provided. Specifically, evaluation of data consistent with a plausible causal theory recruited neural tissue in the parahippocampal gyrus, whereas evaluating data inconsistent with a plausible theory recruited neural tissue in the anterior cingulate, left dorsolateral prefrontal cortex, and precuneus. We suggest that these findings provide a neural instantiation of the mechanisms by which working hypotheses and evidence are integrated in the brain.
