Unpublished Manuscript

  • We present a grading paradigm for student projects that statistics instructors may find useful for assessing written work in a timely manner. This grading system may also help students and the instructor feel more confident that papers were graded objectively. The paper also discusses particular conceptual errors that the author has encountered when grading projects. We include several other class-tested suggestions for ways in which projects can be used to reinforce statistical concepts.

  • Two recent developments in statistical education provide the opportunity for significant advances in helping non-statisticians judge statistical claims. (1) Research in statistical thinking has begun to yield models of people's conceptions that are detailed enough to have practical, pedagogical implications. (2) Powerful new software tools designed explicitly for statistical education provide new visualizations with enormous potential for making statistical thinking accessible, for the first time, to the wide range of people who need to use it. While these two developments are exciting in and of themselves, a collaboration between researchers and software designers would accelerate the development of both research and software in important ways. Currently, researchers and developers benefit only in limited ways from each other's expertise and ongoing work; taking full advantage of their respective contributions requires an explicit, organized effort with careful planning. We propose to create a paradigmatic illustration of such a collaboration, one which will have an impact on both software developers and researchers and, ultimately, on the productivity of the statistical education field.

  • We present an instructional sequence on analyzing one- and two-dimensional data sets. This instructional sequence is based on teaching experiments in 7th- and 8th-grade classrooms in Nashville, TN. The sequence we present here, however, is not just a smoothed version of the sequence that we tried out. What we learned during these teaching experiments made us reconsider many of the original instructional activities. The aim of this paper is not to present the result of these considerations as a ready-made instructional sequence. Instead, our main objective is to offer a rationale for a revised instructional sequence. This rationale is constructed from the following ingredients: the original considerations that formed the point of departure for our teaching experiments, experiences in the classroom, reconsiderations, and new insights.

  • Research on misconceptions of probability indicates that students' conceptions are difficult to change. A recent review of concept learning in science points to the role of contradiction in achieving conceptual change. A software program and evaluation activity were developed to challenge students' misconceptions of probability. Support was found for the effectiveness of the intervention, but results also indicate that some misconceptions are highly resistant to change.

  • We depend on data to make intelligent decisions, yet the data we see is often "tainted." An old saying on the use and misuse of computers was "garbage in, garbage out," but this has become "garbage in, gospel out" as more and more people get into the numbers game. So, what can we do? Part of the answer lies in education. Consumers and producers of data with a serious, unbiased objective to get at the "truth" must be educated in how surveys and experiments work, how good surveys and experiments can be designed, and how data can be properly analyzed. Every high school graduate must be educated to be an intelligent consumer of data and to know enough about the production of data to at least judge the value of data produced by others. This education must be built into the K-12 curriculum, primarily in mathematics and science, but with consistent support and application from the social sciences, health, and other academic subjects.

  • Few would question the assertion that the computer has changed the practice of statistics. Fewer still would argue with the claim that the computer, so far, has had little influence on the practice of teaching statistics; in fact, many still claim that the computer should play no significant role in introductory statistics courses. In this article, I describe principles that influenced the design of data analysis software we recently developed specifically for teaching students with little or no prior data analysis experience. I focus on software capabilities that should provide compelling reasons to abandon the argument, still heard, that introductions to statistics are only made more difficult by simultaneously introducing students to the computer and a new software tool. The microcomputer opens up to beginning students means of viewing and querying data that have the potential to get them quickly beyond technical aspects of data analysis to the meat of the activity: asking interesting questions and constructing plausible answers. The software I describe, DataScope, was designed as part of a 4-year project funded by the National Science Foundation to create materials and software for teaching data analysis at the high-school and introductory college level. We designed DataScope to fill a gap we perceived between professional analysis tools (e.g., StatView and Data Desk), which were too complex for use in introductory courses, and existing educational software (e.g., Statistics Workshop, Data Insights), which was not quite powerful enough to support the kind of analyses we had in mind. Certainly, DataScope is not the educational tool we might dream of having (see Biehler, 1994), but it does give students easy access to considerable analysis power.

  • This paper reports a pilot study for a major investigation into children's understanding of statistical graphics, using a written test paper incorporating a hierarchy of assumed question difficulty. Four provisional levels of understanding of statistical graphics are established from the test results. These might permit the formulation of recommendations for developing approaches that encourage better understanding of statistical graphics in pupils. The four provisional levels would then serve as a basis for a main investigation of wider scope, subject to the adjustments to the sample, test paper, and administration recommended in the conclusions of this pilot study. It seems particularly important to put an unambiguous, carefully structured test paper to a more stringently selected and representative sample, with more computer support for data analysis.

  • We used a clinical methodology to explore elementary students' reasoning about data modeling and inference in the context of long-term design tasks. Design tasks provide a framework for student-centered inquiry (Perkins, 1986). In one context (Study 1), a class of fifth-grade students worked in six different design teams to develop hypermedia documents about Colonial America. In a second context (Study 2), a class of fifth-grade students designed science experiments to answer questions of personal interest. In the first, a hypermedia design context, students compared the lifestyles of colonists to their own lifestyles. To this end, ten "data analysts" developed a survey, collected and coded data, and used the dynamic notations of a computer-based tool, Tabletop (Hancock, Kaput, & Goldsmith, 1992), to develop and examine patterns of interest in their data. Tabletop's visual displays were an important cornerstone to students' reasoning about patterns and prediction. Analysis of student conversations, including their dialog with the teacher-researcher, indicated that the construction of data was an important preamble to description and inference, as suggested by Hancock et al. (1992). We probed students' ideas about the nature of chance and prediction, and noted close ties between forms of notation and reasoning about chance. In the second study involving the context of experimental design, we consulted with two children and their classroom teacher about the use of a simple randomization distribution to test hypotheses about the nature of extra-sensory perception (ESP). Here, experimentation afforded a framework for teaching about inference grounded by the creation of a randomization distribution of the students' data. We conclude that design contexts may provide fruitful grounds for meaningful data modeling.
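The randomization-distribution idea described in Study 2 can be sketched in a few lines of Python. This is not the authors' classroom procedure; it is a minimal illustration of the general technique, assuming a hypothetical ESP card task in which a student guesses one of several equally likely cards on each trial. The function name and parameters (`randomization_p_value`, `observed_hits`, `n_choices`) are invented for this sketch.

```python
import random

def randomization_p_value(observed_hits, n_trials, n_choices, n_sims=10000, seed=42):
    """Simulate the null hypothesis of pure chance guessing many times,
    building a distribution of hit counts, and return the fraction of
    simulated counts at least as large as the observed count."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_sims):
        # One simulated session: each trial is a hit with probability 1/n_choices.
        hits = sum(1 for _ in range(n_trials) if rng.randrange(n_choices) == 0)
        if hits >= observed_hits:
            extreme += 1
    return extreme / n_sims

# Example: 12 correct guesses in 40 trials of a 4-choice task
# (chance predicts about 10 hits on average).
p = randomization_p_value(12, 40, 4)
```

A large p-value here means the observed score is unremarkable under chance guessing, which is the kind of conclusion the students' randomization distribution lets them reach without formal probability theory.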

  • In this work, problem-solving procedures concerning statistical association using microcomputers are analyzed. The effect of several didactical variables on these procedures is also studied. The sample consisted of 18 trainee teachers who received prior training over a seven-week period. The study of the arguments expressed by the students reveals the scope and meaning they give to the concept of association and suggests criteria for designing new didactical situations.

  • There is a growing body of evidence indicating that people often overestimate the similarity between characteristics of random samples and those of the populations from which they are drawn. In the first section of the paper, we review some studies that have attempted to determine whether the basic heuristic employed in thinking about random samples is passive and descriptive or whether it is deducible from a belief in active balancing. In the second section, we discuss the influence of sample size on judgments about the characteristics of random samples.
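The sample-size effect this abstract refers to can be demonstrated with a short simulation. This sketch is not from the paper; it is a generic illustration, assuming a normal population with an arbitrary mean and standard deviation, showing that means of small samples vary far more around the population mean than means of large samples do. The helper name `sample_mean_spread` is invented here.

```python
import random
import statistics

def sample_mean_spread(pop_mean, pop_sd, sample_size, n_samples=2000, seed=1):
    """Draw many samples of a given size from a normal population and
    return the standard deviation of the resulting sample means."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(pop_mean, pop_sd) for _ in range(sample_size))
        for _ in range(n_samples)
    ]
    return statistics.stdev(means)

small = sample_mean_spread(100, 15, sample_size=5)
large = sample_mean_spread(100, 15, sample_size=50)
# Means from samples of 50 cluster much more tightly around 100
# than means from samples of 5.
```

The intuition people tend to miss, and which this simulation makes visible, is exactly the one the abstract highlights: small samples are much less reliable stand-ins for the population than large samples, even though both are "random."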
