Theory

  • The article attempts to sketch a conceptual and experimental history of the base rate issue. The review distinguishes between a social judgment paradigm and a textbook paradigm. Theoretical explanations of the base rate phenomenon, i.e. the representativeness heuristic, the confusion or inversion of conditional probabilities, the specificity factor, the causality factor, the vividness factor, etc., are discussed with respect to these paradigms. On the basis of a report on various studies, the author hypothesizes that the different paradigms elicit different response tendencies: matching base rates on the one hand and judging representativeness on the other. She argues that responses depend on the causal structures accepted, illustrating this with several examples. The consequences of this phenomenon for the role of normativeness and its (in-)determinacy are discussed.
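
The "confusion or inversion of conditional probabilities" explanation mentioned above can be made concrete with a standard Bayes'-rule illustration. The following worked example is not taken from the article; the screening scenario and all numbers are hypothetical, chosen only to show where the base rate enters.

```latex
% Hypothetical screening example: base rate P(H) = 0.01, hit rate
% P(D|H) = 0.8, false-positive rate P(D|not H) = 0.1. "Inverting the
% conditionals" means answering with P(D|H) = 0.8, whereas the normative
% answer P(H|D) weighs in the base rate:
\[
P(H \mid D)
  = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}
  = \frac{0.8 \cdot 0.01}{0.8 \cdot 0.01 + 0.1 \cdot 0.99}
  \approx 0.075 .
\]
```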

  • This article provides a critical review of psychological theories and research approaches on the ontogenesis of the probability concept. An analysis of the conceptualization and interpretation of probability within developmental research reveals that, with only one exception, an objectivistic interpretation of probability has been adopted. The reviewed (theoretical) research approaches, the cognitive developmental theory of PIAGET & INHELDER, FISCHBEIN's learning-developmental approach, and various information processing models, differ in two main aspects: firstly, on the question of whether the development should be considered a continuous or a discontinuous process, and secondly, on the role of conceptual versus strategic knowledge in coping with probability problems. The discussion tries to point out what progress could be gained by abandoning a one-sided objectivistic interpretation of the probability concept and turning to a conceptualization of probability that encompasses both the objectivistic and the subjectivistic view. This integration might also lead to a deeper understanding of the individual's conceptualization of uncertainty.

  • By means of historical investigations, epistemological reflections, and didactical analysis with respect to the notion of independence, we shall try to provide insights into the problem of a theoretical term and its applications. This will be the starting point for stating some didactical theses about treating the notion of independence in the curriculum of Sekundarstufe I (lower secondary level) and will yield examples of their realization. The difference between the intuitive notion and the mathematical definition reflects the insoluble tension between mathematics and reality. This should not be seen as a shortcoming; rather, this tension has been one of the productive sources for the development of mathematics, and it ought to be the same for mathematics instruction.
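
For reference, the mathematical definition that the abstract contrasts with the intuitive notion is the usual textbook product rule; the die example below is standard illustrative material, not a quotation from the article.

```latex
% Events A and B are (stochastically) independent iff
\[
P(A \cap B) = P(A)\,P(B),
\qquad\text{equivalently}\qquad
P(A \mid B) = P(A) \text{ whenever } P(B) > 0 .
\]
% Tension with intuition: for one roll of a fair die, A = "even" and
% B = "at most 4" satisfy P(A \cap B) = 2/6 = (1/2)(4/6) = P(A)P(B),
% although nothing about the two events feels intuitively "unrelated".
```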

  • No! Statistics is no more a branch of mathematics than is economics, and should no more be taught by mathematicians. It is a separate discipline that makes heavy and essential use of mathematical tools, but has origins, subject matter, foundational questions and standards that are distinct from those of mathematics. It is true that many advanced texts and research papers in statistics use formidable mathematics, but this is misleading. After all, many a graduate microeconomics text cites the Kuhn-Tucker theorem on the first page, and many research papers in physics are intensely mathematical. Statistics is as much a distinct discipline as are economics and physics. Its subject matter is data and inference from data. It is unprofessional for mathematicians who lack training and experience in working with data to teach statistics.

  • In this paper we will introduce a new and powerful algorithm which trivializes an extensive class of discrete stochastic processes. The algorithm was discovered by the author in March 1974 when he tried to teach some nontrivial probability to a below-average 4th-grade class in Carbondale, Illinois.

  • The Woods Hole conference of September 1959 was an outstanding meeting of its kind, bringing together about 35 people interested in education - educationists, psychologists, medical men, and mathematicians. The results of the meeting were summarised by J. S. Bruner in a chairman's report which, impregnated by his own ideas, evolved into his booklet The Process of Education. In the last decade this work has strongly influenced curriculum development, in particular in mathematics. Bruner's contribution seems to me of particular interest for instruction in stochastics, which is now entering our schools. On the one hand, advocates of this subject matter are advancing its fundamental (or central) ideas; on the other hand, stress is laid on the importance of tying instruction in stochastics to intuitive experiences. Both points, however, are rarely elaborated or made more concrete. In particular the following questions deserve attention: (a) What would a list of fundamental ideas of stochastics look like? (b) Why should intuition mean so much for stochastics? (c) What does "(stochastic) intuition" mean? (d) How does it develop, and how can it be improved? In the following we will advance some ideas on (a) and will also touch on the other points.

  • These notes attempt (a) to summarise the development of ideas about probability, and (b) to supply relevant quotations from the probabilists whose theories we consider.

  • Judgments about relationships or covariation between events are central to several areas of research and theory in social psychology. In this article the normative, or statistically correct, model for making covariation judgments is outlined in detail. Six steps of the normative model, from deciding what data are relevant to the judgment to using the judgment as a basis for predictions and decisions, are specified. Potential sources of error in social perceivers' covariation judgments are identified at each step, and research on social perceivers' ability to follow each step in the normative model is reviewed. It is concluded that statistically naive individuals have a tenuous grasp of the concept of covariation, and circumstances under which covariation judgments tend to be accurate or inaccurate are considered. Finally, implications for research on attribution theory, implicit personality theory, stereotyping, and perceived control are discussed.
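
The abstract does not spell out the normative statistic itself. One common choice in the covariation-judgment literature is the Delta-P contrast between conditional probabilities; the sketch below assumes that choice, and the function name and cell counts are made up for illustration.

```python
# Minimal sketch of a normative covariation statistic over a 2x2 table.
# Delta-P = P(outcome | event) - P(outcome | no event); this is one
# standard "statistically correct" contrast, assumed here for illustration.

def delta_p(a: int, b: int, c: int, d: int) -> float:
    """a: event present & outcome present, b: event present & outcome absent,
    c: event absent & outcome present,  d: event absent & outcome absent."""
    p_outcome_given_event = a / (a + b)
    p_outcome_given_no_event = c / (c + d)
    return p_outcome_given_event - p_outcome_given_no_event

# Hypothetical cell counts; a positive value indicates positive covariation.
print(delta_p(a=20, b=5, c=10, d=15))  # 0.8 - 0.4 = 0.4
```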

  • Few would question the assertion that the computer has changed the practice of statistics. Fewer still would argue with the claim that the computer, so far, has had little influence on the practice of teaching statistics; in fact, many still claim that the computer should play no significant role in introductory statistics courses. In this article, I describe principles that influenced the design of data analysis software we recently developed specifically for teaching students with little or no prior data analysis experience. I focus on software capabilities that should provide compelling reasons to abandon the argument, still heard, that introductions to statistics are only made more difficult by simultaneously introducing students to the computer and a new software tool. The microcomputer opens up to beginning students means of viewing and querying data that have the potential to get them quickly beyond the technical aspects of data analysis to the meat of the activity: asking interesting questions and constructing plausible answers. The software I describe, DataScope, was designed as part of a 4-year project funded by the National Science Foundation to create materials and software for teaching data analysis at the high-school and introductory college level. We designed DataScope to fill a gap we perceived between professional analysis tools (e.g., StatView and Data Desk), which were too complex for use in introductory courses, and existing educational software (e.g., Statistics Workshop, Data Insights), which was not quite powerful enough to support the kind of analyses we had in mind. Certainly, DataScope is not the educational tool we might dream of having (see Biehler, 1994), but it does give students easy access to considerable analysis power.

  • In this article I discuss the fundamental relation between people's ability to do induction and their beliefs about randomness or noise, and I illustrate the special difficulties that psychologists face when they try to evaluate the rationality of these beliefs. The presentation is divided into four sections. The first describes the traditional experimental approach to evaluating people's conceptions of randomness and summarizes the data that have been taken to support the conclusion that people have a very poor conception of randomness. The second contrasts this relatively narrow conception with the treatment of randomness one finds in philosophical and mathematical discussions of the topic. The third outlines some benefits for psychologists to be gained from thinking about induction as a problem in signal detection, and the fourth presents the argument that any adequate evaluation of ordinary people's conceptions of randomness must consider the role that these conceptions play in inductive inference, that is, in distinguishing between random and nonrandom events. Originally appeared in the Journal of Experimental Psychology: Learning, Memory, and Cognition, 1982, 8(6), 626-636.
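
The signal-detection framing in the third section is only alluded to above. As a rough illustration of the standard quantities involved, the sketch below treats "detecting nonrandomness" as the signal-detection task and computes the usual sensitivity index d'; the rates are invented, and nothing here is quoted from the article.

```python
# Rough signal-detection sketch: hits = nonrandom sequences correctly
# flagged as nonrandom, false alarms = random sequences flagged as
# nonrandom. d' = z(hit rate) - z(false-alarm rate).
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical observer: flags 80% of nonrandom sequences, but also 30%
# of genuinely random ones.
print(round(d_prime(0.80, 0.30), 2))  # ~1.37
```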
