Proceedings

  • The objective of this research was to determine whether there were patterns in elementary teachers' development of statistical ideas. The two concepts studied were the center of the data and what is typical of the data. Teachers' responses before and after instruction were compared to identify areas of fixation and ideas about measures of center. Before instruction, teachers tended to fixate on large graphical features. After instruction, teachers explained their ideas of center using measures of center, particularly the median, rather than graphical features. More teachers focused on data intervals after instruction to explain what was typical in the histogram, but these ideas were not stable across the two graphs. We conjecture that fixation and stability are two factors in determining the statistical conceptual development of elementary teachers.

  • This study examined 56 elementary school teachers' knowledge of statistics before and after a three-week statistics workshop. Content knowledge was assessed with a paper-and-pencil instrument of twelve open-ended statistics questions; responses were scored holistically.

  • The research literature offers discrepant views concerning what understanding of chance entails, how it relates to thinking probabilistically, and the nature of alternative interpretations. This study capitalized on videotape technology to closely examine children's interpretations within tasks involving randomness and a qualitative level of differential probabilities.

  • Much has been written about methods of teaching statistics and about how to assess students' knowledge of statistics, but almost nothing about the extent to which assessment procedures measure whether students understand statistical concepts, or whether they understand what is involved in applying statistical techniques. There has in fact been very little research on the development of instruments designed specifically to measure statistical understanding. There is, however, work in related areas that has some bearing on the assessment of understanding of statistical concepts. In reviewing this work, this paper discusses the extent to which understanding is covered by some classification schemes developed for use in mathematics, and looks at ways in which attitude scales investigate understanding. Some alternatives to traditional methods of examination brought about by changes in teaching methods are also considered.

  • Novices and experts rated 18 phenomena as random or non-random and gave justifications for their decisions. Experts rated more of the situations as random than novices did. Roughly 90% of the novice justifications were based on reasoning via a) equal likelihood, b) possibility, c) uncertainty, and d) causality.

  • Statistical power is the probability that a test will correctly reject a false null hypothesis. This paper describes an ongoing research program focused on the teaching, learning, and understanding of ideas related to statistical power. The research described includes investigations of the effectiveness of instruction using a specially designed interactive software program (the Power Simulator), and the development and use of assessment instruments to measure students' informal understandings of power prior to instruction.
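    To make the opening definition concrete, the following is a minimal Monte Carlo sketch of power estimation for a one-sample z-test. It illustrates the general simulation idea only; it is not the Power Simulator described above, and the function name and parameter values are illustrative assumptions.

        import random
        from statistics import NormalDist, mean

        def estimate_power(mu_true, mu_null=0.0, sigma=1.0, n=30,
                           alpha=0.05, trials=10_000):
            """Estimate power = P(reject H0 | H0 is false) by simulation."""
            # Two-sided critical value for the chosen significance level.
            z_crit = NormalDist().inv_cdf(1 - alpha / 2)
            rejections = 0
            for _ in range(trials):
                # Draw a sample from the *true* (alternative) distribution.
                sample = [random.gauss(mu_true, sigma) for _ in range(n)]
                # z-statistic for testing H0: mu = mu_null with known sigma.
                z = (mean(sample) - mu_null) / (sigma / n ** 0.5)
                if abs(z) > z_crit:
                    rejections += 1
            # The rejection rate approximates the test's power.
            return rejections / trials

        # With a true mean 0.5 sigma away from the null, the estimated
        # power is well above alpha (roughly 0.77 for n = 30).
        print(estimate_power(mu_true=0.5))

    Repeatedly sampling under the alternative hypothesis and counting rejections is exactly the kind of informal, simulation-based view of power that such instruction aims to build.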

  • Some of you may not be familiar with the Quantitative Literacy Project (QLP) with which I have been involved, and that is to be expected. Several copies of the 62-page QLP Sampler are available for you to look at. This publication gives some idea of the instructional materials we have developed. It might be helpful to give you an overview of the QLP first, and then have a discussion and exchange of ideas.

  • For introductory statistics education, several types of software are relevant and in use: custom-designed educational programs for a specific educational goal; statistical systems for data analysis (in full professional versions, in student versions, or as tools specifically designed for students); statistical programming environments; spreadsheets; and general-purpose programming languages. We perceive a double dilemma, on a practical and on a theoretical level, which grows worse the lower the educational level we have in mind. On the one hand, professional statistical systems are very complex and carry high cognitive entry costs, although they flexibly assist experts. On the other hand, custom-designed educational software is of necessity constrained, so that students can concentrate on essential aspects of a learning situation and so that certain intended cognitive processes become likely. Nevertheless, because these microworlds, as we will call them here for short, are often not adaptable to teachers' needs, they are often criticized as being too constrained. Their support for flexible data analysis is limited, and to satisfy the variety of demands one would need a collection of them; yet coping with uncoordinated interfaces, notations, and ideas in one course would overtax the average teacher and student. This practical dilemma is reflected on a theoretical level: it is not yet clear enough what kind of software is required and helpful for statistics education. We need a critical evaluation and analysis of the design and use of existing educational and professional programs. Identifying the key elements of software that are likely to survive the next quantum leap of technological development, and that are fundamental for introductory statistics, is an important research topic. The results should guide new "home grown" developments of educational programs or, given the difficulty of such developments, should influence the adaptation and elaboration of existing statistical systems toward systems that are also more adequate for the purposes of introducing and learning statistics. We will give some ideas and directions, partly based on the results of two projects.

  • Individual thinking is driven by intuitions that have nearly no counterpart in the concepts one learns from theory. This is especially true for the teaching of stochastics, and it is the main reason that teaching is not very effective. The author reports on powerful strategies that might bridge the gap between individual intuitions and formal concepts. These give clear insight into difficult concepts and change the behaviour of learners.

  • Rather than elaborate Simon's argument here, I briefly describe two software tools we've developed, highlighting aspects that emphasize the relation between probability and data analysis. I also report some results from our primary test site, a high school in Holyoke, Massachusetts.
