Journal Articles

  • The present study used both judgments of strength of relationship and measures of the ability to predict one variable from another to assess subjects' sensitivity to the covariation of two continuous variables. In addition, one group of subjects judged strength of relationship after merely observing the presentation of 60 pairs of two-digit numbers, and a second group made strength judgments after being actively engaged in predicting one member of a pair when given the other. The prediction and judgment data provide different pictures of subjects' sensitivity to covariation. The subjects were quite poor at estimating strength of relationship but, by some measures, good at predicting one variable from another. Judgments were not strongly influenced by whether subjects had previously engaged in overt prediction. The implications of these results for the literature on covariation estimation are discussed.

  • These notes attempt (a) to summarise the development of ideas about probability, and (b) to supply relevant quotations from the probabilists whose theories we consider.

  • All too often mathematics is considered to be the study of certainties: certain truth and certain falsity. We need to overcome this misleading impression and to show that mathematics can describe the uncertainties of everyday life. Much of our daily life is unpredictable and uncertain. A branch of mathematics, probability theory, can help us cope with this aspect of life. However, if the theory of probability is taught formally, then students may not make the connection between the theory and its usefulness in daily life. The following introduction helps ensure that this connection is made.

  • In the first three experiments, we attempted to learn more about subjects' understanding of the importance of sample size by systematically changing aspects of the problems we gave to subjects. In a fourth study, understanding of the effects of sample size was tested as subjects went through a computer-assisted training procedure that dealt with random sampling and the sampling distribution of the mean. Subjects used sample size information more appropriately for problems that were stated in terms of the accuracy of the sample average or the center of the sampling distribution than for problems stated in terms of the tails of the sampling distribution. Apparently, people understand that the means of larger samples are more likely to resemble the population mean but not the implications of this fact for the variability of the mean. The fourth experiment showed that although instruction about the sampling distribution of the mean led to better understanding of the effects of sample size, subjects were still unable to make correct inferences about the variability of the mean. The appreciation that people have for some aspects of the law of large numbers does not seem to result from an in-depth understanding of the relation between sample size and variability.
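The relation the abstract describes, that means of larger samples vary less around the population mean, can be made concrete with a small simulation. The sketch below is purely illustrative (it is not part of the study's computer-assisted training procedure, and the uniform population and function name are assumptions): it draws many samples from a uniform(0, 100) population and reports how the standard deviation of the sample mean shrinks as sample size grows.

```python
import random
import statistics

random.seed(0)

def sd_of_sample_means(sample_size, trials=2000):
    """Standard deviation of the sample mean across many samples
    drawn from a uniform(0, 100) population."""
    means = [
        statistics.mean(random.uniform(0, 100) for _ in range(sample_size))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

# Larger samples -> less variable sample means (roughly as 1/sqrt(n))
for n in (5, 20, 80):
    print(n, round(sd_of_sample_means(n), 2))
```

Quadrupling the sample size roughly halves the variability of the sample mean, which is exactly the tail-of-the-distribution implication that subjects in the fourth experiment failed to infer.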

  • Judgments about relationships or covariation between events are central to several areas of research and theory in social psychology. In this article the normative, or statistically correct, model for making covariation judgments is outlined in detail. Six steps of the normative model, from deciding what data are relevant to the judgment to using the judgment as a basis for predictions and decisions, are specified. Potential sources of error in social perceivers' covariation judgments are identified at each step, and research on social perceivers' ability to follow each step in the normative model is reviewed. It is concluded that statistically naive individuals have a tenuous grasp of the concept of covariation, and circumstances under which covariation judgments tend to be accurate or inaccurate are considered. Finally, implications for research on attribution theory, implicit personality theory, stereotyping, and perceived control are discussed.
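For the simplest case of two binary events, a standard normative index of covariation compares conditional probabilities across a 2×2 contingency table: Δp = P(outcome | event) − P(outcome | no event). The sketch below is a minimal illustration of that comparison (the function name, cell labels, and example counts are hypothetical, not taken from the article).

```python
def delta_p(a, b, c, d):
    """Normative covariation index for a 2x2 contingency table:
        a = event & outcome       b = event & no outcome
        c = no event & outcome    d = no event & no outcome
    Returns P(outcome | event) - P(outcome | no event)."""
    return a / (a + b) - c / (c + d)

# Judging only by the event-and-outcome cell (a common error in
# intuitive judgment) overstates the relationship when the outcome
# is frequent regardless of the event:
print(delta_p(20, 5, 20, 5))   # many joint occurrences, zero covariation
print(delta_p(20, 5, 5, 20))   # genuine positive covariation
```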

  • In this study, 48 subjects who had no previous exposure to probability or statistics read one of three texts that varied in the degree of explanation of basic concepts of elementary probability. All texts contained six formulas, each accompanied by an example as well as definitions and information logically required to solve all problems. The high-explanatory text differed from the low-explanatory and standard texts in that it emphasized the logical basis underlying the construction of the formulas, the relations among formulas, and the relations of variables to real-world objects and events. On both immediate and delayed performance tests, subjects in the low-explanatory and standard-text conditions performed better on formula than on story problems, whereas the subjects in the high-explanatory text condition did equally well on both types of problems. It was concluded that explanation did not improve the learning of formulas but rather facilitated the application of what was learned to story problems.

  • Subjects were given an experimental task in which they had to play the role of a quality-control researcher for a company. They had to consider a hypothetical experiment that involved testing a sample of batteries from a truckload, which might or might not be substandard. In the main experiment, subjects were given information about the prior probability of substandard truckloads (base rate), the degree of variability of battery life, and the mean difference between standard and substandard batteries, all of which are formally relevant to the decision; they were also told the number of batteries in the truck (population size), which is formally irrelevant. The task was to decide (intuitively) how many batteries to test to achieve a specified error rate using a specified decision rule. In a second study, subjects were given a similar scenario but were asked to rate which pieces of information would be relevant to the decision. Subjects proved sensitive to the effects of sample variability and base rate when making intuitive design decisions, though an odd effect of the mean-difference factor was observed. There was also clear confirmation of a bias to weight sample size by population size, as reported in earlier research using different kinds of judgment tasks.
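The formal irrelevance of population size can be seen in the textbook sample-size calculation for this kind of test. The sketch below is a hedged illustration under an assumed normal model with a one-sided z-test (the function, parameter names, and numbers are hypothetical, not the study's materials): the required sample size depends on variability and the mean difference, and the number of batteries in the truck never enters the formula.

```python
import math
from statistics import NormalDist

def required_sample_size(sigma, delta, alpha=0.05, beta=0.05):
    """Smallest n such that a one-sided z-test on the sample mean
    distinguishes standard batteries from substandard ones (mean
    difference delta) with type I error alpha and type II error beta.

    sigma: standard deviation of battery life
    delta: mean difference between standard and substandard batteries
    """
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha) + z(1 - beta)) * sigma / delta) ** 2
    return math.ceil(n)

print(required_sample_size(sigma=10, delta=5))
print(required_sample_size(sigma=20, delta=5))  # more variability -> larger n;
                                                # population size appears nowhere
```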

  • This study reviews research on cultural differences in "probabilistic thinking" and presents some intra- and inter-cultural findings. Strong differences are shown to exist between people raised under Asian and British cultures on measures of this ability. These differences were found to outweigh any influence of subculture, religion, occupation, arts/science orientation, and sex. Generally, Asians were found to adopt a less finely differentiated view of uncertainty, both numerically and verbally, than did the British sample. Possible antecedents of these differences are outlined, and cultural differences in probabilistic thinking are shown to be compatible with descriptions of cultural differences in business decision making. It is argued that there are qualitative cultural differences in ways of dealing with uncertainty.

  • Computers and software can be used not only to analyze data, but also to illustrate essential statistical topics. Methods are shown for using software, particularly with graphics, to teach fundamental topics in linear regression, including underlying model, random error, influence, outliers, interpretation of multiple regression coefficients, and problems with nearly collinear variables. Systat 5.2 for Macintosh, a popular package, is used as the primary vehicle, although the methods shown can be accomplished with many other packages.
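Several of the topics listed, influence and outliers in particular, can be demonstrated without any specific package. The following sketch (plain Python rather than Systat; the simulated data and function name are assumptions for illustration) fits an ordinary least-squares line and shows how a single high-leverage outlier pulls the fitted slope away from the true one.

```python
import random
import statistics

random.seed(2)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Simulate the underlying model y = 2x + 1 + random error
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]
slope, intercept = fit_line(xs, ys)

# One high-leverage outlier, far below the true line at x = 10,
# drags the fitted slope well away from 2
slope_out, _ = fit_line(xs + [10.0], ys + [0.0])
print(round(slope, 2), round(slope_out, 2))
```

Plotting both fitted lines over the scatter of points turns this into exactly the kind of graphical classroom demonstration the article describes.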

  • Graphical, computational, interactive, and simulation capabilities of computers can be successfully employed in the teaching of elementary probability, either as classroom demonstrations or as exploratory exercises in a computer laboratory. In this first paper of a contemplated series, two programs for EGA-equipped IBM-PC compatible machines are included with indications of their pedagogical uses. Concepts illustrated include the law of large numbers, the frequentist definition of a probability, the Poisson distribution and process, and intuitive approaches to independence and randomness. (Commands for rough equivalents to the programs using Minitab are shown in the Appendix.)
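The frequentist ideas such programs animate on screen can be reproduced in a few lines in any modern language. The sketch below (Python, not the EGA programs or Minitab commands the paper provides) tracks the running relative frequency of successes in simulated Bernoulli trials, showing the law of large numbers at work.

```python
import random

random.seed(1)

def running_relative_frequency(n_tosses, p=0.5):
    """Running relative frequency of successes across n_tosses
    Bernoulli(p) trials -- the frequentist definition of probability."""
    successes = 0
    freqs = []
    for i in range(1, n_tosses + 1):
        successes += random.random() < p
        freqs.append(successes / i)
    return freqs

freqs = running_relative_frequency(10_000)
# The relative frequency settles near p as the number of trials grows
for n in (10, 100, 1000, 10_000):
    print(n, round(freqs[n - 1], 3))
```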
