This article describes a probability situation that arose naturally in a high school and analyzes two technological approaches to its solution.
We often look for a best-fit function to a set of data. This article describes how a "pretty good" fit might be better than a "best" fit when it comes to promoting conceptual understanding of functions. In a pretty good fit, students design the function themselves rather than choosing it from a menu; they use appropriate variable names; and they vary parameter values by hand - sometimes with the aid of residuals and other tools - to help their function fit the data.
Fantasy baseball serves as a vehicle for students to perform various data-gathering and statistical analyses.
To create an environment in which all students have opportunities to notice, describe, and wonder about variability, this article takes a context familiar to many teachers - sampling colored chips from a jar - and shows how this context was used to explicitly focus on variation in the classroom. The sampling activity includes physical as well as computer simulations and has proven to generate lively discussion that highlights the tension between expectation on one hand and variation on the other.
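The tension between expectation and variation described above can be illustrated with a short simulation. This is a minimal sketch, not the authors' activity: the jar composition (60 red, 40 blue chips) and the sample size of 10 are assumed for demonstration.

```python
import random

# Hypothetical jar: 60 red and 40 blue chips (composition assumed for illustration).
jar = ["red"] * 60 + ["blue"] * 40

random.seed(1)

# Draw 10 chips (with replacement) 1000 times, recording the number of reds.
# The expected count per sample is 6, but individual samples vary around it.
counts = [sum(1 for _ in range(10) if random.choice(jar) == "red")
          for _ in range(1000)]

mean_count = sum(counts) / len(counts)
print(f"expected reds per sample: {10 * 0.6}")
print(f"average observed reds:    {mean_count:.2f}")
print(f"observed range:           {min(counts)} to {max(counts)}")
```

Running the simulation shows that while the long-run average lands near the expected value of 6, individual samples routinely stray from it, which is exactly the contrast the classroom discussion is meant to surface.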
This article describes an activity for developing the notion of association between two quantitative variables. By exploring a collection of scatter plots, the authors propose a nonstandard "intuitive" measure of association; and by examining properties of this measure, they develop the more standard measure, Pearson's Correlation Coefficient. The activity is designed to help students better understand how statistical measures are "invented" and why certain measures are preferred.
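The standard measure the activity builds toward can be computed directly from its definition as the average product of z-scores. The following sketch uses a small made-up data set (the variable names and values are assumptions for illustration, not data from the activity):

```python
import math

# Small illustrative data set (values assumed for demonstration).
heights = [150, 160, 165, 170, 180]
weights = [50, 58, 63, 66, 74]

def pearson_r(x, y):
    """Pearson's correlation coefficient, computed from its definition:
    the mean product of z-scores (population standard deviations)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sd_x = math.sqrt(sum((v - mean_x) ** 2 for v in x) / n)
    sd_y = math.sqrt(sum((v - mean_y) ** 2 for v in y) / n)
    return sum((a - mean_x) * (b - mean_y)
               for a, b in zip(x, y)) / (n * sd_x * sd_y)

print(f"r = {pearson_r(heights, weights):.3f}")
```

Because the measure is an average of products of standardized values, it is bounded between -1 and 1, one of the properties that makes it preferable to more ad hoc "intuitive" measures.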
This paper explores the use of the dynamic software package TinkerPlots as a research tool to assist in assessing students' understanding of aspects of beginning inference. Two interview protocols used previously with middle school students in printed format without computer software were introduced to a new sample of students through data sets entered in TinkerPlots. The latter group of students had experienced a series of lessons using TinkerPlots, but the activities were based on different data sets. Of interest in this exploratory study is an analysis of the affordances provided by TinkerPlots to researchers in their quest to assist students in explaining their thinking about the data sets. These are considered in relation to those provided by the format of the earlier interviews.
StatCrunch (www.statcrunch.com) is an online data analysis package that can be used as a low-cost alternative to traditional statistical software for introductory statistics courses. StatCrunch offers a wide array of numerical and graphical routines for analyzing data, along with several features, such as interactive graphics, that can be used for pedagogical purposes. StatCrunch has a number of new features related to social data analysis, where users may share data sets and associated analysis results via the StatCrunch site. Users may also interact via online discussions related to shared items. This manuscript provides a brief description of the mechanics of uploading and sharing information via the StatCrunch site and then discusses some of the potential benefits that these social data analysis capabilities offer to both students and instructors.
Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
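The LLN behavior the applet demonstrates can be sketched in a few lines. This is not the SOCR applet itself, only an illustrative simulation of the same principle; the sample size and seed are assumptions:

```python
import random

random.seed(42)

# Toss a fair coin n times, tracking the running proportion of heads.
# The LLN says the proportion converges to 0.5 as n grows, even though
# the raw count of heads need not settle near n/2 - a common misconception.
n = 10_000
heads = 0
proportions = []
for i in range(1, n + 1):
    heads += random.random() < 0.5
    proportions.append(heads / i)

print(f"proportion of heads after {n} tosses: {proportions[-1]:.4f}")
```

Plotting the running proportion against the toss count produces the kind of graphical representation of the LLN principle that the applet and activity are built around.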
Nearly all introductory statistics textbooks include a chapter on data collection methods that includes a detailed discussion of both random sampling methods and randomized experiments. But when statistical inference is introduced in subsequent chapters, its justification is nearly always based on principles of random sampling methods. From the language and notation that is used to the conditions that students are told to check, there is usually no mention of randomized experiments until an example that is a randomized experiment is encountered, at which point the author(s) may offer a statement to the effect of "the randomization allows us to view the groups as independent random samples." But a good student (or even an average one) should ask, "Why?"

This paper shows, in a way easily accessible to students, why the usual inference procedures that are taught in an introductory course are often an appropriate approximation for randomized experiments even though the justification (the Central Limit Theorem) is based entirely on a random sampling model.
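The distinction at issue can be made concrete by computing a randomization distribution directly. The following sketch is not from the paper; the two groups' outcome values are invented for illustration, and the relabeling approach shown is the standard permutation procedure whose normal approximation the paper discusses:

```python
import random
import statistics

random.seed(0)

# Hypothetical outcomes from a small two-group randomized experiment
# (values assumed for illustration).
group_a = [12.1, 9.8, 11.4, 10.7, 12.9, 11.0]
group_b = [9.5, 10.2, 8.8, 9.9, 10.6, 9.1]
observed = statistics.mean(group_a) - statistics.mean(group_b)

# Randomization distribution: reshuffle the treatment labels many times,
# recomputing the difference in group means each time. Under the null
# hypothesis of no treatment effect, relabeling is the only source of variation.
pooled = group_a + group_b
diffs = []
for _ in range(5000):
    random.shuffle(pooled)
    diffs.append(statistics.mean(pooled[:6]) - statistics.mean(pooled[6:]))

# Two-sided randomization p-value: how often does relabeling alone produce
# a difference at least as extreme as the observed one?
p = sum(abs(d) >= abs(observed) for d in diffs) / len(diffs)
print(f"observed difference: {observed:.2f}, randomization p-value: {p:.4f}")
```

Comparing this randomization p-value with the one from a CLT-based two-sample procedure on the same data shows why the sampling-model approximation is often adequate for randomized experiments.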
Psychologists have discovered a phenomenon called "Belief Bias," in which subjects rate the strength of arguments based on the believability of the conclusions. This paper reports the results of a small qualitative pilot study of undergraduate students who had previously taken an algebra-based introduction to statistics class. The subjects in this study exhibited a form of Belief Bias when reasoning about statistical inference. In particular, the subjects were more likely to question the experimental design of a study when they did not believe the conclusions reached by the study. While these results are based on a small sample, if replicated, they have implications for the teaching of statistics. Specifically, when teaching hypothesis testing, statistics instructors should be mindful of the context of example problems used in class, make explicit links between inference and experimental design, and actively engage students in discussions of both the believability of conclusions and the types of arguments they find convincing.