Journal Article

  • The inclusion of a double issue devoted to statistical thinking and learning at the start of the second volume of "Mathematical Thinking and Learning" reflects major developments within statistics education in recent years. Statistics has entered, or gained increased prominence in, mainstream mathematics curricula in many countries.

  • Describes a study of high school students that investigated the use of microcomputers to teach principles relating to the design and interpretation of graphs. Results related to student achievement, higher order thinking skills, and behavioral protocols are discussed. A sample assignment sheet and test questions are appended.

  • Discussion of graphing software for educational purposes focuses on a study in which three versions of an original computer graphing program were used by inner-city high school students to solve scientific data analysis problems. Variations in degrees of flexibility and feedback in the software are explored.

  • Describes a project where computer-assisted graphical data analyses were introduced to inner-city high school students with weak math and science backgrounds. Provides examples of performance of students on open-ended problem-solving tasks.

  • Describes the role of an instructional sequence and two accompanying computer-based tools in supporting students' developing understandings of statistical data analysis. Documents the emergence of the sociomathematical norm of what counts as a mathematical argument in the context of data analysis.

  • Provides an overview of the structure of data analysis, the interrelationship between data analysis and probability, and the connection between data analysis and other components of the mathematics curriculum. Presents a possible ordering of topics that is consistent with modern statistical practice and allows the topics to grow as students move through the grade levels.

  • This paper focuses on how notions of inference can be fostered in middle school through the use of carefully designed tasks, open-ended software simulation tools, and social activity that focuses on making data-based arguments. We analyzed interactions between two sixth-grade students who used software tools to formulate and evaluate inferences during a 12-day instructional program that utilized Probability Explorer software as a primary investigation tool. A variety of the software tools enabled students to understand the interplay between empirical and theoretical probability, recognize the importance of using larger samples to make inferences, and justify their claims with data-based evidence.
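The relationship the abstract above describes, that larger samples bring empirical results closer to theoretical probability, can be illustrated with a minimal simulation. This sketch is not from the paper or the Probability Explorer software; it simply demonstrates the underlying principle with a fair coin:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def empirical_heads_proportion(n_flips):
    """Simulate n fair-coin flips and return the observed proportion of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# As the sample grows, the empirical proportion tends to settle
# near the theoretical probability of 0.5.
proportions = {n: empirical_heads_proportion(n) for n in (10, 100, 10_000)}
```

With only 10 flips the observed proportion can stray far from 0.5, while 10,000 flips will typically land very close to it, which is the intuition about sample size the students were developing.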

  • The development of the Probability Explorer is based on pedagogical implications of constructivist learning theory. The tools available in the environment have provided children with opportunities to extend their understandings of random phenomena and support their development of graphical and numerical data analysis skills. The open-ended design allows users to explore a variety of simple and compound experiments and to extend the limitations of physical devices with dynamic digital experiences.

  • Since the mid-1980s, confidence intervals (CIs) have been standard in medical journals. We sought lessons for psychology from medicine's experience with statistical reform by investigating two attempts by Kenneth Rothman to change statistical practices. We examined 594 American Journal of Public Health (AJPH) articles published between 1982 and 2000 and 110 Epidemiology articles published in 1990 and 2000. Rothman's editorial instruction to report CIs and not p values was largely effective: In AJPH, sole reliance on p values dropped from 63% to 5%, and CI reporting rose from 10% to 54%; Epidemiology showed even stronger compliance. However, compliance was superficial: Very few authors referred to CIs when discussing results. The results of our survey support what other research has indicated: Editorial policy alone is not a sufficient mechanism for statistical reform. Achieving substantial, desirable change will require further guidance regarding use and interpretation of CIs and appropriate effect size measures. Necessary steps will include studying researchers' understanding of CIs, improving education, and developing empirically justified recommendations for improved statistical practice.
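For readers unfamiliar with the CI reporting the abstract above discusses, a minimal sketch of a normal-approximation 95% confidence interval for a mean, computed from summary statistics, looks like this. The numbers are illustrative, not taken from the surveyed articles:

```python
import math

def mean_ci(mean, sd, n, z=1.96):
    """Normal-approximation confidence interval for a mean.

    z = 1.96 gives roughly 95% coverage; the half-width shrinks
    with the square root of the sample size n.
    """
    half_width = z * sd / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# Hypothetical summary statistics: mean 12.0, SD 3.0, n = 36.
lo, hi = mean_ci(mean=12.0, sd=3.0, n=36)
```

Unlike a bare p value, the interval (lo, hi) conveys both the effect estimate and its precision, which is the interpretive advantage the reform movement emphasized.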

  • Different studies on how well people take sample size into account have found a wide range of solution rates. In a recent review, Sedlmeier and Gigerenzer (1997) suggested that a substantial part of the variation in results can be explained by the fact that experimenters have used two different types of sample-size tasks, one involving frequency distributions and the other sampling distributions. This suggestion rested on an analysis of studies that, with one exception, did not systematically manipulate type of distribution. In Study 1, a substantial difference between solution rates for the two types of tasks was found. Study 2 replicated this finding and ruled out an alternative explanation for it, namely, that the solution rate for sampling distribution tasks was lower because the information they contained was harder to extract than that in frequency distribution tasks. Finally, in Study 3 an attempt was made to reduce the gap between the solution rates for the two types of tasks by giving participants a series of hints; nevertheless, the gap in performance remained. A new computational model of statistical reasoning specifies cognitive processes that might explain why people are better at solving frequency than sampling distribution tasks.
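The distinction the abstract above turns on, frequency distributions of raw observations versus sampling distributions of sample statistics, can be made concrete with a short simulation. This is a generic sketch of the statistical concept, not a reconstruction of the tasks used in the studies:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: normally distributed values, mean 50, SD 10.
population = [random.gauss(50, 10) for _ in range(100_000)]

# Frequency distribution: the spread of individual observations.
raw_values = random.sample(population, 1000)
raw_sd = statistics.stdev(raw_values)

# Sampling distribution: the spread of MEANS of repeated samples of size n.
# Its standard deviation shrinks roughly like population SD / sqrt(n).
n = 25
sample_means = [statistics.mean(random.sample(population, n)) for _ in range(1000)]
means_sd = statistics.stdev(sample_means)
```

Here `means_sd` comes out roughly a fifth of `raw_sd` (since sqrt(25) = 5), illustrating why reasoning about sampling distributions requires an extra inferential step beyond reading a frequency distribution.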
