Journal Article

  • Using Freudenthal's method of historical phenomenology, the history of statistics was investigated as a source of inspiration for instructional design. Based on systematically selected historical examples, hypotheses were formulated about how students could be supported in learning to reason with particular statistical concepts and graphs. Such a historical study helped to distinguish different aspects and levels of understanding of concepts and helped us as instructional designers to look through the eyes of students. In this paper, we focus on a historical phenomenology of mean and median, and give examples of how hypotheses stemming from the historical phenomenology led to the design of instructional activities used in teaching experiments in grades 7 and 8 (12- to 14-year-olds).

  • We identify the student characteristics most associated with success in an introductory business statistics class, placing special focus on the relationship between students' math skills and course performance, as measured by the student's grade in the course. To determine which math skills are important for student success, we examine (1) whether the student has taken calculus or business calculus, (2) whether the student has been required to take remedial mathematics, (3) the student's score on a test of very basic mathematical concepts, (4) the student's score on the mathematics portion of the ACT exam, and (5) the student's score on the science/reasoning portion of the ACT exam. The score on the science portion of the ACT exam and the math-quiz score are significantly related to performance in an introductory statistics course, as are student GPA and gender. This result is robust across course formats and instructors (a sketch of this kind of grade regression appears after this list). These results have implications for curriculum development, course content, and course prerequisites.

  • Least squares regression is the most common method of fitting a straight line to a set of bivariate data. Another, less well-known, method that is available on Texas Instruments graphing calculators is median-median regression. It is proposed as a simple method that may be used with middle and high school students to motivate the idea of fitting a straight line to data. The median-median line may also be viewed as a method that is not greatly affected by outliers, i.e., one that is robust to outliers. Our paper briefly reviews the median-median regression method, considers various examples to compare the median-median line to the least squares line, and investigates the properties of the median-median line versus the least squares line using a simulation study (a computational sketch of the median-median line appears after this list).

  • Discussions of quantitative literacy have become increasingly important, and statistics educators are well aware of the link between statistics education and quantitative literacy. Both the statistics education and quantitative literacy movements have emphasized the importance of students practicing skills in multiple contexts, a goal also consistent with a quantitative-reasoning-across-the-curriculum approach. In this paper, we consider two sources of information: (1) our data from statistics courses and other quantitative-intensive courses at Lawrence University and (2) a review of the research literature on the transfer of quantitative concepts across contexts. Through analysis of these sources, we further explore the link between statistics education and quantitative literacy, and argue for an across-the-curriculum approach to teaching quantitative reasoning. Moreover, we make specific suggestions to statistics educators on their role in the quantitative literacy movement.

  • Statistical terms are accurate and powerful but can sometimes lead to misleading impressions among beginning students. Discrepancies between the popular and statistical meanings of "conditional" are discussed, and suggestions are made for the use of different vocabulary when teaching beginners in applied introductory courses.

  • Classical regression models, ANOVA models, and linear mixed models are just three examples (out of many) in which a normal distribution of the response is an essential assumption of the model. In this paper, we use a dataset of 2000 euro coins containing information (to the milligram) about the weight of each coin to illustrate that the normality assumption might be incorrect. As the physical coin production process is subject to a multitude of (very small) sources of variability, it seems reasonable to expect that the empirical distribution of the weight of euro coins agrees with a normal distribution. Goodness-of-fit tests, however, show that this is not the case. Moreover, some outliers complicate the analysis. As alternative approaches, mixtures of normal distributions and skew-normal distributions are fitted to the data, and they reveal that the distribution of the weight of euro coins is not as normal as expected (an illustrative analysis sketch appears after this list).

  • The dataset presented here illustrates to students the utility of logistic regression. Its analysis results in a fit that explains much of how senators vote on a particular bill, and it allows for quantification of the effects of ideology and money on the vote. A number of interesting quantitative interpretations follow from a good fit. A successful analysis makes use of a number of ideas discussed in applied courses: descriptive statistics, inferential methods, transformation of variables, and the handling of outliers and special cases. All these issues arise in the context of data on variables that require no specialized knowledge from students. Students have strong qualitative preconceptions about the relationships among the variables. The final results quantify, and nicely confirm, many of those conceptions (a hedged logistic-regression sketch appears after this list).

  • This article discusses a real-life example of statistics in gambling.

  • This article presents an activity that simulates the linear regression model in order to verify, in practice, the probabilistic behaviour of the resulting least-squares statistics (a minimal simulation sketch appears after this list).

  • This article critically explores several issues related to requirements for probability teaching in various national curricula.
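
For the business-statistics abstract above, the following is a minimal sketch of the kind of grade regression it describes. The variable names (act_science, math_quiz, gpa, female, format), the 0-100 grade scale, and the simulated data are assumptions made purely for illustration; this is not the authors' model or dataset.

```python
"""Hedged sketch of a grade regression with math-skill predictors (illustrative only)."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "act_science": rng.normal(24, 4, n),          # ACT science/reasoning score
    "math_quiz":   rng.normal(7, 2, n),           # score on a basic-math quiz
    "gpa":         np.clip(rng.normal(3.0, 0.5, n), 0, 4),
    "female":      rng.integers(0, 2, n),
    "format":      rng.choice(["online", "in_person"], n),
})
# Simulated course grade (0-100 scale) driven by the predictors above.
df["grade"] = (40 + 0.8 * df.act_science + 1.5 * df.math_quiz
               + 8 * df.gpa + 2 * df.female + rng.normal(0, 5, n))

# Main model, with course format included as a control to probe robustness.
fit = smf.ols("grade ~ act_science + math_quiz + gpa + female + C(format)",
              data=df).fit()
print(fit.summary())
```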
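
The median-median (Tukey) line mentioned in the regression abstract can be computed in a few lines. The sketch below is an assumed implementation that follows the usual three-group summary-point construction (the grouping rule for n not divisible by 3 is one common convention, reportedly the one TI calculators use) and compares it with an ordinary least-squares fit on data containing a single outlier. It is not code from the paper.

```python
"""Sketch of the median-median (resistant) line versus least squares."""
import numpy as np

def median_median_line(x, y):
    """Return (slope, intercept) of the median-median line."""
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    n = len(x)

    # Split the x-sorted data into three groups of (nearly) equal size.
    # Assumed convention: remainder 1 -> extra point in the middle group,
    # remainder 2 -> an extra point in each outer group.
    k, r = divmod(n, 3)
    sizes = (k, k, k) if r == 0 else (k, k + 1, k) if r == 1 else (k + 1, k, k + 1)
    i1, i2 = sizes[0], sizes[0] + sizes[1]
    groups = [(x[:i1], y[:i1]), (x[i1:i2], y[i1:i2]), (x[i2:], y[i2:])]

    # Summary point of each group: (median x, median y).
    mx = [np.median(g[0]) for g in groups]
    my = [np.median(g[1]) for g in groups]

    # Slope from the two outer summary points; intercept chosen so the line
    # passes through the mean of the three summary points.
    slope = (my[2] - my[0]) / (mx[2] - mx[0])
    intercept = (sum(my) - slope * sum(mx)) / 3
    return slope, intercept

# Quick comparison on data with one large outlier.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2 + 0.5 * x + rng.normal(0, 0.3, 30)
y[-1] += 8                                          # a single large outlier
print("median-median (slope, intercept):", median_median_line(x, y))
print("least squares (slope, intercept):", tuple(np.polyfit(x, y, 1)))
```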
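
For the euro-coin abstract, the sketch below illustrates one possible analysis route of the kind described: test the weights for normality, then fit a skew-normal distribution and a two-component normal mixture as alternatives. The weights here are simulated stand-ins, not the real dataset of 2000 coins, and the code is not the authors'.

```python
"""Illustrative normality check and alternative fits for coin-weight data (simulated)."""
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated "weights": two production batches with slightly different means.
weights = np.concatenate([rng.normal(7.52, 0.03, 1200),
                          rng.normal(7.54, 0.03, 800)])

# Goodness-of-fit: D'Agostino-Pearson and Shapiro-Wilk tests of normality.
print("D'Agostino-Pearson p-value:", stats.normaltest(weights).pvalue)
print("Shapiro-Wilk p-value:      ", stats.shapiro(weights).pvalue)

# Alternative 1: skew-normal distribution fitted by maximum likelihood.
shape, loc, scale = stats.skewnorm.fit(weights)
print("skew-normal (shape, loc, scale):", (shape, loc, scale))

# Alternative 2: two-component normal mixture.
gm = GaussianMixture(n_components=2, random_state=0).fit(weights.reshape(-1, 1))
print("mixture means:  ", gm.means_.ravel())
print("mixture weights:", gm.weights_)
```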
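
For the senate-vote abstract, the following is a hedged sketch of a logistic-regression fit of the kind it describes. The predictors ideology and log_contributions, the simulated votes, and the chosen coefficients are hypothetical stand-ins for the paper's actual variables and data.

```python
"""Sketch of a logistic regression of votes on ideology and (log) money (simulated)."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100                                      # one row per senator
ideology = rng.uniform(-1, 1, n)             # e.g. -1 = liberal, +1 = conservative
log_contributions = np.log1p(rng.gamma(2.0, 50_000, n))   # transformed money variable

# Simulate yes/no votes from a known logistic model, then recover it.
linpred = -8 + 2.5 * ideology + 0.6 * log_contributions
vote = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([ideology, log_contributions]))
fit = sm.Logit(vote, X).fit(disp=False)
print(fit.summary(xname=["const", "ideology", "log_contributions"]))

# Interpretation: exp(coef) is the multiplicative change in the odds of a
# 'yes' vote per one-unit change in that predictor.
print("odds ratios:", np.exp(fit.params))
```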
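
For the regression-simulation activity, here is a minimal sketch of the underlying idea: draw many samples from a known linear model and check that the least-squares slope estimates match their theoretical mean and standard deviation. The parameter values, design points, and sample size are arbitrary illustrative choices, not those of the article.

```python
"""Simulation check of the sampling distribution of the least-squares slope."""
import numpy as np

rng = np.random.default_rng(7)
b0, b1, sigma = 1.0, 2.0, 1.5              # true intercept, slope, error sd
x = np.linspace(0, 10, 25)                 # fixed design points
sxx = np.sum((x - x.mean()) ** 2)

slopes = []
for _ in range(5000):
    y = b0 + b1 * x + rng.normal(0, sigma, x.size)
    slope, _ = np.polyfit(x, y, 1)         # least-squares fit for this sample
    slopes.append(slope)
slopes = np.array(slopes)

# Theory: E(b1_hat) = b1 and SD(b1_hat) = sigma / sqrt(Sxx).
print("simulated mean of slope:", slopes.mean(), " (theory:", b1, ")")
print("simulated sd of slope:  ", slopes.std(ddof=1),
      " (theory:", sigma / np.sqrt(sxx), ")")
```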
