Journal Article

  • The results are taken from a much larger study on the strategies that pupils in the 2nd, 3rd and 4th stages of secondary school (ages 14-16) use for solving problems concerning the mean. In this paper the solutions of three problems are analysed. These problems were formulated so as to distinguish between pupils' ability to calculate a mean and their ability to recognise the effect on the mean of a change in the number of observations or in the value of an observation. The problems were also designed to test the influence of a value equal to zero on the mean, an issue highlighted in earlier research studies. The results of the current study show, in the chosen context, which types of modification influence pupils' manipulations, and that inadequate conceptions or a change of meaning appeared in certain situations and not in others. Note: An extended summary in English is provided at the beginning of this paper, which is written in French.
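The properties this study probes can be sketched numerically. The following is an illustration of the general arithmetic (the values are invented, not taken from the paper): changing one observation shifts the mean by the change divided by n, and appending a zero is not neutral, because it enlarges the denominator.

```python
# Invented example values illustrating the mean properties the study tests.
def mean(xs):
    return sum(xs) / len(xs)

scores = [4, 6, 8, 10]
print(mean(scores))              # 7.0

# Changing one observation by +4 shifts the mean by 4 / n = 1.
scores_changed = [4, 6, 8, 14]
print(mean(scores_changed))      # 8.0

# Appending a zero adds nothing to the sum but increases n,
# so the mean drops - the effect earlier research drew attention to.
print(mean(scores + [0]))        # 5.6
```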

  • This article describes aspects of the statistical content knowledge of 46 preservice elementary school teachers. The preservice teachers responded to a written item designed to assess their knowledge of mean, median, and mode. The data produced in response to the written item were examined in light of the Structure of the Observed Learning Outcome (SOLO) taxonomy (Biggs & Collis, 1982, 1991) and Ma's (1999) conception of Profound Understanding of Fundamental Mathematics (PUFM). The article describes 4 levels of thinking in regard to comparing and contrasting mean, median, and mode. Several different categories of written definitions for each measure of central tendency are also described. Connections to previous statistical thinking literature are discussed, implications for teacher education are given, and directions for further research are suggested.

  • The study describes students' patterns of thinking for statistical problems set in two different contexts. Fifteen students representing a wide range of experiences with high school mathematics participated in problem-solving clinical interview sessions. At one point during the interviews, each solved a problem that involved determining the typical value within a set of incomes. At another point, they solved a problem set in a signal-versus-noise context [Konold, C., & Pollatsek, A. (2002). Data analysis as the search for signals in noisy processes. Journal for Research in Mathematics Education, 33, 259-289]. Several patterns of thinking emerged in the responses to each task. In responding to the two tasks, some students attempted to incorporate formal measures, while others used informal estimating strategies. The different types of thinking employed in using formal measures and informal estimates are described. The types of thinking exhibited in the signal-versus-noise context are then compared against those in the typical value context. Students displayed varying amounts of attention to both data and context in formulating responses to both problems. Suggestions for teachers in regard to helping students attend to both data and context when analyzing statistical data are given.

  • Professional probabilists have long argued over what probability means, with, for example, Bayesians arguing that probabilities refer to subjective degrees of confidence and frequentists arguing that probabilities refer to the frequencies of events in the world. Recently, Gigerenzer and his colleagues have argued that these same distinctions are made by untutored subjects, and that, for many domains, the human mind represents probabilistic information as frequencies. We analyze several reasons why, from an ecological and evolutionary perspective, certain classes of problem-solving mechanism in the human mind should be expected to represent probabilistic information as frequencies. Then, using a problem famous in the "heuristics and biases" literature for eliciting base rate neglect, we show that correct Bayesian reasoning can be elicited in 76% of subjects - indeed, 92% in the most ecologically valid condition - simply by expressing the problem in frequentist terms. This result adds to the growing body of literature showing that frequentist representations cause various cognitive biases to disappear, including overconfidence, the conjunction fallacy, and base-rate neglect. Taken together, these new findings indicate that the conclusion most common in the literature on judgment under uncertainty - that our inductive reasoning mechanisms do not embody a calculus of probability - will have to be re-examined. From an ecological and evolutionary perspective, humans may turn out to be good intuitive statisticians after all.
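The contrast between the two formats can be made concrete with the classic medical-test base-rate problem (the numbers below are the standard textbook version, not figures from the article): the probability format applies Bayes' rule to rates, while the frequency format runs the same computation as counts of people, which is the representation the authors argue the mind handles well.

```python
# Classic textbook base-rate problem (illustrative numbers, not from the article).

# Probability format: Bayes' rule on rates.
prevalence = 0.001        # P(disease)
sensitivity = 1.0         # P(positive | disease), simplified as in the classic problem
false_pos = 0.05          # P(positive | no disease)

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

# Frequency format: the same computation as counts out of 1000 people.
sick_positives = 1               # 1 person in 1000 has the disease and tests positive
false_alarms = 999 * 0.05        # about 50 healthy people also test positive
posterior_freq = sick_positives / (sick_positives + false_alarms)

print(round(posterior, 4), round(posterior_freq, 4))   # both ~0.0196, i.e. about 1 in 51
```

The two answers agree exactly; the claim in the article is about which representation lets untutored subjects reach that answer, not about the mathematics differing.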

  • With the growth in the availability of inexpensive computing, the emphasis in teaching introductory statistics must shift from the mechanics of performing statistical procedures on a calculator or by hand to the interpretation of results easily obtained by computer. Textbooks in statistics provide ample instruction on using technology to perform statistical analysis but provide little in the way of hands-on activities or simulations to teach abstract concepts. Many sites on the World Wide Web have Java applets that allow users to simulate sampling, but these are subject to change both in how they work and where they are located, which makes planning instruction with these programs difficult. Several software programs have been specially developed to allow for simulation of sampling distributions, but some colleges may be unwilling or unable to purchase or support such specialized packages. In spite of lacking the graphics of commercial packages and web-based simulations, simulation exercises in Excel have the advantage of utilizing a program that is available and supported on most campuses.
In this paper I describe two exercises which use repeated simulated sampling in Microsoft Excel to teach about sampling distributions, particularly the Central Limit Theorem and the interpretation of confidence intervals. First, I give a brief overview of the key concepts in the Central Limit Theorem and confidence interval estimation, describe the difficulties students have in learning these concepts, and describe the potential of simulation to add clarity to them. Next, I describe how Excel is used to simulate sampling in my courses, and then the assignments I use in my classes to teach the Central Limit Theorem and confidence interval estimation. Finally, I discuss possible exercises in repeated simulated sampling to teach other concepts in statistical inference, as well as ways in which the effectiveness of these simulations in promoting student learning can be formally evaluated.
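The repeated-sampling technique the abstract describes can be sketched outside Excel as well. Below is a minimal Python analogue (a sketch of the general approach, not the author's actual worksheets): draw many samples from a skewed population, watch the sample means cluster near the population mean as the Central Limit Theorem predicts, and count how often a nominal 95% confidence interval actually covers the true mean.

```python
# Sketch of repeated simulated sampling for the CLT and CI coverage
# (a Python analogue of the Excel exercises; details are illustrative).
import random
import statistics

random.seed(1)
# A deliberately skewed population (exponential), mean approximately 1.
population = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.mean(population)

n, reps = 30, 1000
sample_means, covered = [], 0
for _ in range(reps):
    sample = random.sample(population, n)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    sample_means.append(m)
    # Nominal 95% interval for the mean; count how often it covers mu.
    if m - 1.96 * se <= mu <= m + 1.96 * se:
        covered += 1

# CLT: the distribution of sample means is centered on mu and roughly normal
# even though the population is skewed; coverage should be near 95%.
print(round(statistics.mean(sample_means), 3), covered / reps)
```

In a classroom version, students would vary `n` and watch both the spread of the sample means shrink and the empirical coverage approach the nominal level.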

  • Activities that promote student invention can appear inefficient, because students do not generate canonical solutions, and therefore the students may perform badly on standard assessments. Two studies on teaching descriptive statistics to 9th-grade students examined whether invention activities may prepare students to learn. Study 1 found that invention activities, when coupled with subsequent learning resources like lectures, led to strong gains in procedural skills, insight into formulas, and abilities to evaluate data from an argument. Additionally, an embedded assessment experiment crossed the factors of instructional method by type of transfer test, with 1 test including resources for learning and 1 not. A "tell-and-practice" instructional condition led to the same transfer results as an invention condition when there was no learning resource, but the invention condition did better than the tell-and-practice condition when there was a learning resource. This demonstrates the value of invention activities for future learning from resources, and the value of assessments that include opportunities to learn during a test. In Study 2, classroom teachers implemented the instruction and replicated the results. The studies demonstrate that intuitively compelling student-centered activities can be both pedagogically tractable and effective at preparing students to learn.

  • Explores the need to teach undergraduate sociology majors about statistical methods. Identifies student-based obstacles to the learning of statistics. Offers an instructional model that includes (1) warm-up sessions; (2) organizational models; (3) application exercises; (4) pattern recognition; and (5) sociological meaning. Recommends the model as a basic design for the introductory statistics course.

  • The teaching of mathematics is not confined to a single approach. Because of the increasing diversity of knowledge and the quantitative data involved, students must make sense of the numerical data they are constantly facing. Seventeen people offer their ideas on how to teach numbers to students. Some espouse relating science teaching to mathematics. Others propose math education in relation to the workplace. The value of logical thinking and analysis is also upheld as essential to the learning of numbers.

  • The need for higher standards of quantitative literacy, or numeracy, in American schools is discussed. Topics include the value of numeracy in political, economic, and business life, differences between standard math and quantitative literacy, and suggestions for implementing numeracy in schools.

  • All sectors of society must have a basic knowledge of statistically sound concepts in order to make optimal use of research data and statistically significant information. The encouragement of statistical thinking can be facilitated through the federal statistical system, schools and universities, and the media. Lastly, professional statisticians must strive to elucidate the whole statistical process whenever an appropriate opportunity arises.
