Conference Paper

  • This paper reports a comparison of two separate studies that used the same task and simulation software but involved students of different ages, abilities, and curricular experiences. One study examined how middle school students used computer simulation tools to reason about the relationship between empirical data and theoretical probability. The second study replicated the first with secondary school students who had just completed an Advanced Placement statistics course. The comparison covers the similarities and differences in how each group approached the task and used the simulation software, given their backgrounds and prior knowledge.

  • The last decade has seen a rapid increase in the use of Geographical Information Systems (GIS), and the analysis of spatial data is an important component of this development. Spatial statistics is a relatively young subject and, although there are useful textbooks on spatial statistics theory, there is virtually no literature on how to teach spatial statistical concepts and techniques. This paper suggests ways of teaching some aspects of spatial statistical analysis without recourse to matrix algebra and vectors. By using the graphical features in Excel it is possible to illustrate and explain the concepts behind the statistical techniques in GIS. The interactive and dynamic features of Excel enable students to investigate the effects of changing the spatial location of the data and to develop an understanding of spatial dependence and its impact on Kriging and regression techniques.
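
    Since the paper itself works in Excel, the following is only a minimal Python/NumPy sketch of the underlying idea: spatial dependence encoded by a covariance that decays with distance, and a simple kriging prediction built from it. The variance, range and point locations are illustrative assumptions, not values from the paper.

    ```python
    # Spatial dependence: values at nearby locations are more alike than values
    # far apart, which an exponential covariance encodes.
    import numpy as np

    rng = np.random.default_rng(0)

    # Random point locations in the unit square.
    n = 50
    coords = rng.uniform(0, 1, size=(n, 2))

    # Exponential covariance: C(h) = sigma^2 * exp(-h / range_), so correlation
    # decays smoothly with the distance h between locations.
    sigma2, range_ = 1.0, 0.3
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-dists / range_)

    # Simulate one spatially dependent field by drawing from N(0, cov).
    z = rng.multivariate_normal(np.zeros(n), cov)

    # Simple kriging prediction at a new location: a weighted average of the
    # observed values, with weights solved from the covariance system.
    target = np.array([0.5, 0.5])
    c0 = sigma2 * np.exp(-np.linalg.norm(coords - target, axis=1) / range_)
    weights = np.linalg.solve(cov + 1e-9 * np.eye(n), c0)
    prediction = weights @ z
    print(f"Kriging prediction at {target}: {prediction:.3f}")
    ```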

  • The paper builds on design-research studies in the domain of probability and statistics. The integration of computers into classroom practice has been established as a complex process involving instrumental genesis (Verillon and Rabardel, 1995), whereby students and teachers need to construct potentialities for the tools as well as techniques for using those tools efficiently (Artigue, 2002). The difficulties of instrumental genesis can perhaps be eased by design methodologies that build the needs of the learner into the fabric of the product. We discuss our interpretation of design research methodology, which has over the last decade guided our own research agenda. Through reference to previous and ongoing studies, we argue that design research allows a sensitive phenomenalisation of a mathematical domain that can capture learners' needs by transforming powerful ideas into situated, meaningful and manipulable phenomena.

  • Designers of educational software tools inevitably struggle with the issue of complexity. In general, a simple tool minimizes the time needed to learn it at the expense of its range of applications. On the other hand, designing a tool to handle a wide range of applications risks overwhelming students. I contrast the decisions we made regarding complexity when we developed DataScope 15 years ago with those we recently made in designing TinkerPlots, and describe how our more recent tack has served to increase student engagement while helping students see critical connections among display types. More generally, I suggest that in attempting not to overwhelm students, too many educational environments have managed instead to underwhelm them, and thus serve to stifle rather than foster learning.

  • The paper builds on design-research studies in the domain of probability and statistics conducted in middle-school classrooms. The design, ProbLab (Probability Laboratory), which is part of Wilensky's 'Connected Probability' project, integrates: constructionist projects in traditional media; individual work in the NetLogo modeling-and-simulation environment; and networked participatory simulations in HubNet. An emergent theoretical model, 'learning axes and bridging tools,' frames both the design and the data analysis. The model is explicated by discussing a sample episode, in which a student reinvents sampling by connecting 'local' and 'global' perceptions of a population.

  • This paper reports on an attempt to involve mathematics teachers, with limited previous experience of exploring statistical concepts, in the collaborative design of computational tools that can be used for simulating data sets. It explores the constructionist conjecture that designing such tools will encourage designers, as learners, to reflect upon the statistical concepts incorporated in the tools under development, since generating data sets on the basis of characteristics such as average, spread, or skewness requires making explicit one's thinking about these notions and constructing some sense of random processes. It describes how engagement in the design process led participants to see distributions as statistical entities, with aggregate properties that indicate how their data are centred and spread.
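
    As a rough illustration of the kind of data-generating tool described, here is a hypothetical sketch of a function that simulates a data set with a chosen centre, spread and degree of skewness. The function name, parameters and skewing trick are assumptions for illustration, not the participants' actual designs.

    ```python
    import numpy as np

    def simulate_dataset(n, centre, spread, skew=0.0, seed=None):
        """Draw n values whose sample mean and standard deviation match the
        requested centre and spread; skew > 0 stretches the right tail."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n)
        if skew != 0.0:
            # Simple skewing trick: add a scaled half-normal component.
            z = z + skew * np.abs(rng.standard_normal(n))
        z = (z - z.mean()) / z.std()   # re-standardise so centre and spread are exact
        return centre + spread * z

    data = simulate_dataset(200, centre=50, spread=8, skew=1.5, seed=1)
    print(round(data.mean(), 2), round(data.std(), 2))
    ```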

  • Many statistics educators use simulation to help students better understand inference. Simulations make the link between statistics and probability explicit by simulating the conditions of the null hypothesis and then examining the sampling distribution of an appropriate measure. In this paper we review how we use simulation to help students understand hypothesis testing, and lay out the relevant steps. We illustrate how using simulation and technology can make these difficult ideas more visible and understandable, by making processes more concrete, by unifying apparently disparate tests, and by letting learners construct their own measures to study phenomena.
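
    A minimal sketch of the simulation approach outlined above, using a randomization test for a difference in group means: the null hypothesis is simulated by re-shuffling group labels, and the observed difference is located in the resulting sampling distribution. The data values below are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    group_a = np.array([12.1, 9.8, 11.4, 13.0, 10.6, 12.7])
    group_b = np.array([ 9.5, 8.9, 10.2,  9.1, 10.8,  9.9])
    observed = group_a.mean() - group_b.mean()

    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    sims = 10_000
    null_diffs = np.empty(sims)
    for i in range(sims):
        shuffled = rng.permutation(pooled)  # null hypothesis: labels are exchangeable
        null_diffs[i] = shuffled[:n_a].mean() - shuffled[n_a:].mean()

    # Two-sided p-value: how often a shuffled difference is at least as extreme.
    p_value = np.mean(np.abs(null_diffs) >= abs(observed))
    print(f"observed diff = {observed:.2f}, simulated p-value = {p_value:.3f}")
    ```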

  • We consider the role of technology in learning concepts of modeling univariate functional dependencies. It is argued that simple scatter plot smoothers for univariate regression problems are intuitive concepts that, beyond their intended usefulness in providing a possible answer to more intricate regression problems, may serve as a paradigm for statistical thinking: detecting structure in noisy data. Simulation may play a decisive role in understanding the underlying concepts and in acquiring insight into the relationship between structural and random variation.
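
    As one possible illustration of these ideas, the sketch below applies a very simple scatter-plot smoother (a running mean over a window of x-values) to noisy data generated from a known function, so the structural and random components can be compared. The signal, noise level and window width are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 200))
    signal = np.sin(x)                       # structural variation
    y = signal + rng.normal(0, 0.4, x.size)  # plus random variation

    def running_mean_smoother(x, y, width=1.0):
        """At each x[i], average the y-values whose x lies within +/- width/2."""
        fitted = np.empty_like(y)
        for i, xi in enumerate(x):
            in_window = np.abs(x - xi) <= width / 2
            fitted[i] = y[in_window].mean()
        return fitted

    smooth = running_mean_smoother(x, y, width=1.0)
    print("RMSE of raw data vs signal:   ", round(np.sqrt(np.mean((y - signal) ** 2)), 3))
    print("RMSE of smoothed fit vs signal:", round(np.sqrt(np.mean((smooth - signal) ** 2)), 3))
    ```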

  • The ability to design experiments in an appropriate and efficient way is an important skill, but students typically have little opportunity to gain such experience. Most textbooks introduce standard general-purpose designs, and then proceed with the analysis of data already collected. In this paper we explore a tool for gaining design experience: computer-based virtual experiments. These are software environments which mimic a real situation of interest and invite the user to collect data to answer a research question. The following prototype environments are described: an industrial process that must be optimized, a greenhouse experiment to compare the effects of different treatments on plant growth, and an arcade-style applet that illustrates the use of t-tests, regression and analysis of variance. These environments are part of a collection called env2exp and can be freely used over the web. They have been used in several courses over the last two years.
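
    A toy illustration (not part of env2exp itself) of what a virtual-experiment environment does: the true treatment effects are hidden inside a simulator, the student chooses a design, "runs" the experiment, and receives noisy data to analyse. Treatment names and effect sizes are invented for this sketch.

    ```python
    import numpy as np

    # Hidden "truth" that the student is meant to discover from the data.
    _TRUE_MEANS = {"control": 10.0, "fertiliser": 12.5, "compost": 11.2}

    def run_greenhouse_experiment(allocation, seed=None):
        """allocation: dict mapping treatment name -> number of plants.
        Returns simulated plant heights (cm) for each treatment."""
        rng = np.random.default_rng(seed)
        return {t: rng.normal(_TRUE_MEANS[t], 1.5, n) for t, n in allocation.items()}

    # A student's balanced design with 8 plants per treatment:
    data = run_greenhouse_experiment({"control": 8, "fertiliser": 8, "compost": 8}, seed=2)
    for treatment, heights in data.items():
        print(treatment, round(heights.mean(), 2))
    ```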

  • Almost every day we come across statistics in our newspapers. Interpreting these figures correctly not only gives us a better understanding of our social environment but also helps to prevent us from being taken in by misleading advertisements. This project investigates the teaching and learning of statistics through the use of statistical figures commonly found in newspapers and other mass media. These statistics have been used in specially designed courses such as "How to Read Figures in the Newspapers," a general education course offered to undergraduate students with or without a statistics background. Students find the topics interesting and appreciate the wide-ranging applications of statistics in different areas.
