Journal Article

  • Statistics in the Community (STATCOM) is a student-run statistical consulting program that has been serving its local community since 2001. Directed and staffed by graduate students from Purdue University's Department of Statistics, it provides professional consulting services to governmental and nonprofit groups free of charge. Students work in teams to help community clients address specific problems and needs. Past clients include school corporations, libraries, community assistance programs, and the city of West Lafayette. Participation in STATCOM allows students to apply statistical concepts and classroom material to solve real problems. It also develops skills in leadership, management, and written and oral communication of results to the general public. Though important for any future career in statistics, these skills are not typically emphasized in graduate courses, research, or the on-campus academic consulting service. The university and academic department also benefit through increased interaction and visibility in the local community. STATCOM can serve as a model for integrating service learning into graduate statistical education at other colleges and universities.

  • Service-learning projects are a useful way for students to learn both the practice and the value of statistical methods. Effective service learning, however, depends on several factors and can be implemented according to a variety of models. In this article, different models for incorporating service learning into statistics courses are presented, along with example statistics courses. Principles of good service-learning practice are also presented as a means of assessing the quality of a service-learning course component.

  • Introductory statistics textbooks rarely discuss the concept of variability for a categorical variable and consequently provide no measure of it, giving the impression that variability cannot be measured for categorical data. Any measure of variability, however, depends on an underlying concept of variability, and variability in categorical data is different from variability in quantitative data. Research has shown that "unalikeability" is a more natural concept than "variation about the mean" for many students, and a "coefficient of unalikeability" can be used to measure this type of variability. This paper develops the coefficient of unalikeability as a measure of categorical variability.
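As a sketch of the measure named in the abstract above: one common formulation of the coefficient of unalikeability is the proportion of ordered pairs of observations whose values differ, which simplifies to 1 minus the sum of squared category proportions. The function below is an illustrative assumption on my part, not necessarily the paper's exact definition (a variant excludes the i = j pairs and divides by n(n-1) instead of n²):

```python
from collections import Counter

def unalikeability(values):
    # One common formulation: the proportion of all ordered pairs (i, j)
    # of observations whose values differ, which simplifies to
    # 1 - sum of squared category proportions.
    # (A variant excludes i == j pairs and divides by n*(n-1) instead.)
    n = len(values)
    if n == 0:
        return 0.0
    counts = Counter(values)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(unalikeability(["red", "red", "blue", "blue"]))  # 0.5
print(unalikeability(["red"] * 4))                     # 0.0
```

Note that the measure is 0 when all observations fall in one category (no variability) and grows as observations spread more evenly across categories, with no reference to a mean.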

  • Kalamazoo College is a selective liberal arts college located in Kalamazoo, Michigan, with a total enrollment of approximately 1200 students. The academic calendar comprises three 10-week quarters, each followed by one week of final examinations. Kalamazoo College is distinguished by its four-fold academic program known as the "K-Plan": (1) rigorous liberal arts coursework, (2) study abroad, (3) career development, and (4) the senior individualized project. Since the inception of the K-Plan over 40 years ago, experiential education has characterized the College student experience, especially with respect to the last three components listed above. Over the past ten years, the on-campus experience of Kalamazoo College students has also become more experiential in nature, as a substantial proportion of courses now have significant service-learning components.

  • This article reports on a subset of results from a larger study that examined middle and high school students' probabilistic reasoning. Students in grades 5, 7, 9, and 11 at a boys' school (n=173) completed a Probability Inventory, which required students to answer and justify their responses to ten items. Supplemental clinical interviews were conducted with 33 of the students. This article describes students' specific reasoning strategies on a task familiar from the literature (Tversky and Kahneman, 1973). The results call into question the dominance of the availability heuristic among school students and present other frameworks of student reasoning.

  • This paper provides an overview of current research on teaching and learning statistics, summarizing studies conducted by researchers from different disciplines and focused on students at all levels. The review is organized by the general research questions addressed, and suggests what can be learned from the results for each of these questions. The implications of the research are described in terms of eight principles for learning statistics from Garfield (1995), which are revisited in light of results from current studies.

  • The ability to design experiments in an appropriate and efficient way is an important skill, but students typically have little opportunity to gain that experience. Most textbooks introduce standard general-purpose designs and then proceed with the analysis of data already collected. In this paper we explore a tool for gaining design experience: computer-based virtual experiments. These are software environments that mimic a real situation of interest and invite the user to collect data to answer a research question. Two prototype environments are described: the first is suitable for a course that deals with screening or response surface designs; the second allows experimenting with block and row-column designs. They are part of a collection we developed called ENV2EXP, and can be freely used over the web. We also describe our experience using them in several courses over the last few years.
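The core idea in the abstract above can be illustrated with a toy sketch: a hidden response function the student can only learn about by spending experimental runs. The surface, noise level, and design below are my own invented example, not anything from ENV2EXP:

```python
import random

random.seed(1)

def run_experiment(x1, x2):
    # Hidden "truth" the student never sees directly: a quadratic surface
    # with its optimum near (2, -1), observed with measurement noise.
    true_response = 50 - (x1 - 2) ** 2 - 2 * (x2 + 1) ** 2
    return true_response + random.gauss(0, 1)

# A student tries a small 2^2 factorial design around the origin,
# mimicking the cost of real data collection one run at a time.
design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
for x1, x2 in design:
    print(f"x1={x1:+d} x2={x2:+d} -> y={run_experiment(x1, x2):.1f}")
```

The pedagogical point is that the student must choose design points, spend runs, and reason from noisy responses, rather than analyzing a dataset someone else already collected.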

  • Darius et al. (2007) and Nolan & Temple Lang (2007) give examples of virtual environments that can, for specific purposes, substitute for the real world. We are in the early stages of developments that could revolutionize statistics education by making it possible to capture efficiently important aspects of the thinking and practice of professional statisticians previously learned only from long years of experience. The ability of virtual environments to automate processes provides a potent weapon for tackling the tyranny that Time exercises over such modes of learning. We discuss the many new possibilities that are opened up by virtual environments together with cognitive and pedagogical imperatives to be addressed to ensure that environments actually do teach the lessons they were designed to teach. We echo Nolan and Temple Lang's call for the development of environments to be modular and open source. Taking the R-project as a model, this can lead to a growing repository of building blocks that make the construction of future environments less costly, thus facilitating the realization of more and more ambitious conceptions.

  • I argue that teaching statistical thinking is harder than teaching mathematics, that experimental design is particularly well suited to teaching statistical thinking and that in teaching statistics, variation is good. We need a mix of archival data, simulations and activities, of varying degrees of complexity. Within this context, I applaud the important contributions to our profession represented by Darius et al. (2007), and Nolan & Temple Lang (2007), the first for showing us how to make simulation-based learning simultaneously more flexible and more realistic than ever before, and the second for showing us a path-breaking technology that can make archival data the basis for active learning at an impressively high level of sophistication, embedding statistical thinking within real scientific and practical investigations.

  • Significant efforts have been made to overhaul introductory statistics courses by placing greater emphasis on statistical thinking and literacy and less on rules, methods and procedures. We advocate broadening and increasing this effort to all levels of students and, importantly, using topical, interesting, substantive problems that come from the actual practice of statistics. We want students to understand the thought process of the "masters" in context, seeing their choices, different approaches and explorations. As with open-source software, we think it is vital that the work of the community of researchers is accessible to the community of educators, so that students can experience statistical applications and learn how to approach analyses themselves. We describe a mechanism by which one can collect all aspects or fragments of an analysis or simulation into a "document" so that the computations and results are reproducible, reusable and amenable to extension. These documents contain various pieces of information (e.g. text, code, data, exploration paths) and can be processed to create regular descriptive papers in various formats (e.g. PDF, HTML), as well as acting as a database of the analysis that we can explore in rich new ways. Researchers, instructors and readers can control the various steps in the processing and rendering of the document. For example, this type of document supports interactive components with which a student can easily control and alter the inputs to the computations in a semi-guided fashion, gradually delve deeper into the details, and go on to her own free-form analysis. Our implementation of this system is based on widely used and standardized frameworks and readily supports multiple programming languages. It is also highly extensible, allowing adaptation and future development.
