Proceedings

  • This paper reports on a preliminary study conducted to gain better insight into the complexity of students' misconceptions of representativeness. Data from 156 students (112 high school graduates and 44 students with a university degree) are presented. The overall outcome indicates a lack of ability to refer problems about specific experiments to their correct context. Some results seem to contradict part of the representativeness heuristic described by Kahneman and Tversky (1972). They might also indicate that multiple-choice tests, even with two-part questions, are unable to fully capture the deep complexity of students' misunderstandings.

  • Statistical Science is concerned with the twin aspects of the theory of design of experiments and sample surveys, and of drawing valid inferences therefrom using various statistical techniques/methods. The art of drawing valid conclusions depends on how the data have been collected and analysed. Depending upon the objective of the study, one has to choose an appropriate statistical procedure to test the hypothesis. When the number of observations is large, or when the researcher is interested in multifarious aspects or in a time-series study, such calculations are very tedious and time-consuming on a desk calculator. In this context, it is essential that personnel engaged in teaching and research be trained in the application of various statistical techniques/methods through the use of computers. An attempt has been made to cover computer-aided analysis (using various statistical packages) related to descriptive statistics, tests of significance, design and analysis of experiments, non-parametric methods, forecasting through time-series models, and some financial analysis. A healthy group discussion (through practical exercises) can also be held on the most commonly used statistical techniques. The computing platform will involve both environments, i.e. DOS as well as Windows 2000.

  • This paper illustrates how Excel can be used by students to develop their statistical understanding. The student can vary data values by simply dragging data points on graphs and charts and seeing how this affects statistical estimates; thus, by visually exploring the effects of changing data values, students can get a feel for statistical concepts. Excel spreadsheets have been developed to explore univariate, bivariate, and inferential statistical topics. It is important when teaching statistics to non-statisticians that new statistical ideas are presented in a familiar and relevant context. The flexibility of Excel spreadsheets means that tutors can download relevant examples into the spreadsheet. The spreadsheets and some sample data sets are available on the World Wide Web.

  • For the last decade, internationally, there have been calls for reform in statistics education. These calls for reform have emphasised that teaching should use real data, active learning, technological tools and statistical thinking. A way of incorporating all these aspects into a statistics course is through the use of projects. This paper will summarise the calls for reform and the use of projects by others along with projects that have been used by the author in courses that he teaches in experimental design and multiple regression. The emphasis here has been to include a full problem solving cycle, from problem definition to communicating findings and reflecting on the process. Feedback from the students will be included.

  • Many college students have difficulties in understanding and making connections among the main concepts of statistics. Compounding the difficulties are the misconceptions of a variety of statistical concepts that students hold even before taking any statistics course. It is, thus, crucial to investigate how the understanding of statistical concepts is constructed and at which stage students stop making connections among various concepts. This article reports some findings from our study investigating the path of learning statistical concepts, specifically how students learn the concept of variation. We focus on investigating the missing connections in their understanding of variation. The framework of statistical thinking, the PPDAC investigative cycle, is used as our guideline for analyzing our interview data.

  • The Sampling Distributions program and ancillary instructional materials were developed to guide student exploration and discovery. The program provides graphical, visual feedback which allows students to construct their own understanding of sampling distribution behavior. Diagnostic, graphics-based test items were developed to capture students' conceptual understanding before and after use of the program. An activity which asked students to test their predictions and confront their misconceptions was found to be more effective than one based on guided discovery. Our findings demonstrate that while software can provide the means for a rich classroom experience, computer simulations alone do not guarantee conceptual change.
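
The original Sampling Distributions software is not reproduced here, but the behavior it lets students discover can be sketched in a few lines of Python (a hypothetical illustration, not the authors' program): repeatedly drawing samples from a fixed population and summarising the sample means shows the sampling distribution centring on the population mean and narrowing as sample size grows.

```python
import random
import statistics

def sampling_distribution_of_mean(population, sample_size, num_samples, rng):
    """Draw num_samples samples (without replacement) and return the sample means."""
    return [
        statistics.mean(rng.sample(population, sample_size))
        for _ in range(num_samples)
    ]

rng = random.Random(0)
# An illustrative skewed population: mostly small values, a few large ones.
population = [1] * 60 + [2] * 25 + [10] * 15  # population mean = 2.6

for n in (2, 10, 30):
    means = sampling_distribution_of_mean(population, n, 2000, rng)
    # The centre stays near 2.6; the spread shrinks as n increases.
    print(n, round(statistics.mean(means), 2), round(statistics.stdev(means), 2))
```

Plotting `means` as a histogram for each `n` gives the kind of visual feedback the abstract describes.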

  • Teachers often come to our professional development programs thinking that statistics is about mean, median, and mode. They know how to calculate these statistics, though they don't often have robust images about what they mean or how they're used. When they begin working with computer data analysis tools to explore real data sets, they are able to see and manipulate the data in a way that hasn't really been possible for them before. They identify particular points on graphs and can interpret what their positions mean. They notice clumps and gaps in the data and generally find that there's a lot more to see in the data than they ever imagined before. In addition, those exploring data in this way often ground their interpretations in the range of things they know from the contexts surrounding the data. They discover the richness and complexity of data. Yet all this detail and complexity can be overwhelming. Without some method of focusing or organizing or otherwise reducing the complexity, it can be impossible to say anything at all about a data set. How do people decide what aspects of a data set are important to attend to? What methods do they use to reduce the cognitive load of trying to attend to all the points? This paper will begin to describe some of the ways that people do this while using new, software-driven representational tools. In the process, we will begin to sketch out a framework for the techniques that people develop as they reason in the presence of variability.

  • Responses of 73 students to an interview protocol based on selecting 10 lollies from a container with 50 red, 20 yellow, and 30 green are categorised with respect to the centre and spread of numerical answers and to the reasoning expressed in justification of the answers. Results are compared to earlier survey research and small-scale interview studies.
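
The interview task above has a natural simulation analogue. Assuming the 10 lollies are drawn without replacement from the 100 in the container (an assumption of this sketch, not stated in the abstract), a short Python simulation shows where answers about the number of red lollies should centre (around 5) and how much they typically spread:

```python
import random
from collections import Counter

def draw_reds(rng, trials=5000):
    """Count red lollies in repeated handfuls of 10 drawn from the container."""
    container = ["red"] * 50 + ["yellow"] * 20 + ["green"] * 30
    return [rng.sample(container, 10).count("red") for _ in range(trials)]

rng = random.Random(0)
reds = draw_reds(rng)
mean = sum(reds) / len(reds)
print("mean reds per handful:", round(mean, 2))       # centres near 5
print("most common outcomes:", Counter(reds).most_common(3))
```

Outcomes cluster around 5 but vary by a few either way, which is exactly the combination of centre and spread the interview answers are categorised against.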

  • Pfannkuch (1997) contends that variation is a critical issue throughout the statistical inquiry process, from posing a question to drawing conclusions. This is particularly true for K-6 teachers when they attempt to use the process of statistical investigation as a means of teaching and learning across the spectrum of the K-6 curricula. In this context statistical concepts and ideas are taught and learned in conjunction with the important content area ideas and concepts. For a K-6 teacher, this means that the investigation must not only be planned in advance, but also aimed at being responsive to students. The potential for surprise questions, unanticipated responses and unintended outcomes is high, and teachers need to "think on their feet" statistically and react immediately in ways that accomplish content objectives, as well as convey correct statistical principles and reasoning. The intellectual demands in this context are no different from those in other instances where teachers are trying to teach for understanding (e.g., Cohen, McLaughlin, & Talbert, 1993; Ma, 1999).

  • As part of a research project on students' understanding of variability in statistics, 272 students (84 middle school and 188 secondary school, grades 6-12) were surveyed on a series of tasks involving repeated sampling. Students' reasoning on the tasks predominantly fell into three types: additive, proportional, or distributional, depending on whether their explanations were driven by frequencies, by relative frequencies, or by both expected proportions and spreads. For a high percentage of students, the predominant form of reasoning on these tasks was additive. When secondary students were presented with a second series of sampling tasks involving a larger mixture and a larger sample size, they were more likely to predict extreme values than for the smaller mixture and sample size. In order for students to develop their intuition for what to expect in dichotomous sampling experiments, teachers and curriculum developers need to draw explicit attention to the power of proportional reasoning in sampling tasks. Likewise, in order for students to develop their sense of expected variation in a sampling experiment, they need substantial experience in predicting outcomes and then comparing their predictions to actual data.
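
The contrast between additive and proportional reasoning can be made concrete with a small simulation (a hypothetical sketch assuming a 50/50 dichotomous mixture, not the study's actual tasks): as the sample size grows, raw counts of successes spread more widely while the proportions concentrate around 0.5, which is what proportional reasoning predicts and additive reasoning misses.

```python
import random
import statistics

def sample_proportions(p, sample_size, trials, rng):
    """Proportion of 'successes' in repeated dichotomous samples of a given size."""
    return [
        sum(rng.random() < p for _ in range(sample_size)) / sample_size
        for _ in range(trials)
    ]

rng = random.Random(0)
for n in (10, 100):
    props = sample_proportions(0.5, n, 3000, rng)
    # The spread of the proportions shrinks for the larger sample,
    # even though the spread of the raw counts grows.
    print(n, round(statistics.stdev(props), 3))
```

Having students predict these spreads first and then run such a simulation mirrors the predict-then-compare experience the abstract recommends.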
