Teaching

  • We demonstrate that one can teach conditional probability in a manner consistent with many features of the statistics education reform movement. Presenting a variety of applications of conditional probability to realistic problems, we propose that interactive activities and the use of technology make conditional probability understandable, engaging, and interesting for students at a wide range of levels of mathematical ability. Along with specific examples, we provide guidelines for implementing the activities in the classroom and instructional cues for promoting curiosity and discussion among students.
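The abstract stops short of a worked example; the following minimal sketch shows the kind of realistic conditional-probability problem and technology use it describes. The diagnostic-testing scenario and its prevalence, sensitivity, and specificity values are invented for illustration, not taken from the paper.

```python
import random

# Hypothetical diagnostic-test scenario: estimate P(disease | positive test)
# by simulation and compare with the exact Bayes' rule value.
prevalence, sensitivity, specificity = 0.01, 0.95, 0.90

random.seed(1)
positives = diseased_positives = 0
for _ in range(1_000_000):
    diseased = random.random() < prevalence
    positive = random.random() < (sensitivity if diseased else 1 - specificity)
    if positive:
        positives += 1
        diseased_positives += diseased  # True counts as 1

p_exact = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
print(f"simulated P(disease|+): {diseased_positives / positives:.3f}")
print(f"exact     P(disease|+): {p_exact:.3f}")  # about 0.088
```

The surprisingly small exact answer (under 9% despite a seemingly accurate test) is the sort of result that tends to spark the classroom discussion the abstract mentions.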

  • The process of statistical investigation may be conceptualized as having four components: posing the question, collecting the data, analyzing the data, and interpreting the data. Graphical representations of data are a critical part of the analysis phase, since different representations communicate information in different ways. This chapter discusses instructional strategies for moving between three pairs of representations: bar graphs showing ungrouped data and standard bar graphs, line plots and bar graphs, and stem-and-leaf plots and histograms. These strategies are designed to improve the accuracy of interpretations and to avoid common pitfalls in making sense of the data; they focus on reading the representations rather than on making them. Students' attempts to translate between representations are discussed within the framework of the instructional suggestions.
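As a small illustration of one of these pairs, here is a sketch (with invented exam scores) that prints a stem-and-leaf display alongside the bin counts a histogram of the same data would show, making the correspondence between the two representations explicit:

```python
from collections import defaultdict

# Hypothetical exam scores; a stem-and-leaf display keeps every raw value,
# while a histogram with width-10 bins would keep only the counts.
scores = [52, 55, 61, 63, 63, 67, 70, 71, 74, 78, 78, 79, 82, 85, 91]

stems = defaultdict(list)
for s in sorted(scores):
    stems[s // 10].append(s % 10)   # stem = tens digit, leaf = ones digit

for stem in sorted(stems):
    leaves = "".join(str(leaf) for leaf in stems[stem])
    print(f"{stem} | {leaves:<10} (histogram bin count: {len(leaves)})")
```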

  • How can ideas, techniques, and applications taken from Exploratory Data Analysis (EDA) enrich mathematics instruction? How do students and teachers respond to ideas of EDA? How must EDA be transformed in order to reach a pedagogically useful position in the mathematics curriculum within general education? The paper describes some results of a teaching experiment concerning ideas of EDA. It explored how basic new displays such as stem-and-leaf plots and boxplots can be taught and learned, and how they should be regarded in the context of more traditional statistical displays and newer computer-supported displays. A new structuring of the cognitive tool kit for elementary data analysis is sketched. EDA is communicated to teachers and students as detective work. The paper describes ways of doing this, the problems that arise, and how such ideas were transformed in the classroom. Difficulties with using open material and complex data sets in the classroom are discussed with an example concerning deaths in traffic accidents in West Germany from 1953 to 1987.
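To make one item of that tool kit concrete, here is a minimal sketch of the quantities behind a boxplot. The data values are invented, and quartile conventions vary across textbooks and software; NumPy's default interpolation is used here.

```python
import numpy as np

# Five-number summary and 1.5*IQR outlier fences behind a boxplot.
data = np.array([3, 7, 8, 5, 12, 14, 21, 13, 18, 45])  # invented values

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower_fence) | (data > upper_fence)]

print(f"min={data.min()}, Q1={q1}, median={median}, Q3={q3}, max={data.max()}")
print(f"fences: [{lower_fence}, {upper_fence}], flagged outliers: {outliers}")
```

The "detective work" framing fits naturally here: the fences flag the value 45 as a case worth investigating rather than a number to discard.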

  • A dataset concerning the relationship between respiratory function (measured by forced expiratory volume, FEV) and smoking provides a powerful tool for investigating a wide variety of statistical matters. This paper gives a brief description of the problem, the data, and several issues and analyses suggested by the problem of quantifying the relationship between FEV and smoking.
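A hedged sketch of two analyses this problem suggests, using the statsmodels formula API: a naive comparison of FEV by smoking status, and one adjusting for age, a likely confounder (smokers in such data tend to be older children with larger lungs). The file name and column names (fev, smoke, age) are assumptions about the dataset, not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file and columns: fev (litres), smoke (0/1 indicator), age (years).
df = pd.read_csv("fev.csv")

naive = smf.ols("fev ~ smoke", data=df).fit()       # unadjusted comparison
adjusted = smf.ols("fev ~ smoke + age", data=df).fit()  # controls for age

print(naive.params["smoke"])     # smoking coefficient, unadjusted
print(adjusted.params["smoke"])  # smoking coefficient after adjusting for age
```

Comparing the two coefficients is one way to open the discussion of confounding that quantifying the FEV-smoking relationship invites.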

  • Meta-analysis (MA) is the quantitative integration of empirical studies that address the same or similar issues. It provides overall estimates of effect size, and can thus guide practical application of research findings. It can also identify moderating variables, and thus contribute to theory-building and research planning. It overcomes many of the disadvantages of null hypothesis significance testing. MA is a highly valuable way to review and summarise a research literature, and is now widely used in medicine and the social sciences. It is scarcely mentioned, however, in introductory statistics textbooks. I argue that MA should appear in the introductory statistics course, and I explain how software that provides diagrams based on confidence intervals can make many of the key concepts of MA readily accessible to beginning students.
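One key concept of MA fits in a few lines: fixed-effect, inverse-variance pooling of study estimates into an overall estimate with a confidence interval. The effect sizes and standard errors below are invented, and this is a sketch of the standard textbook formula, not of the software the abstract refers to.

```python
import numpy as np

# Fixed-effect (inverse-variance) meta-analysis of four invented studies.
effects = np.array([0.42, 0.30, 0.55, 0.25])  # per-study effect estimates
ses = np.array([0.15, 0.10, 0.20, 0.12])      # per-study standard errors

w = 1 / ses**2                                 # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)       # weighted overall estimate
pooled_se = np.sqrt(1 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Plotting each study's interval above the pooled interval gives exactly the kind of confidence-interval diagram the abstract argues makes MA accessible to beginners.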

  • Once upon a time there was only one t distribution, the familiar central t, and it was monopolised by the Null Hypotheses (the Nulls), the high priests in Significance Land. The Alternative Hypotheses (the Alts) felt unjustly neglected, so they developed the noncentral t distribution to break the monopoly and provide useful functions for researchers: calculation of statistical power, and confidence intervals on the standardised effect size Cohen's d. I present pictures from interactive software to explain how the noncentral t distribution arises in simple sampling, and how and why it differs from the familiar central t. Noncentral t deserves to be more widely appreciated, and such pictures and software should help make it more accessible to teachers and students.
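A minimal sketch of the same ideas using SciPy (not the interactive software described in the paper): for a one-sample t test with true standardised effect size d and sample size n, the t statistic follows a noncentral t with df = n - 1 and noncentrality d·sqrt(n), which yields power directly; a simulation confirms it.

```python
import numpy as np
from scipy import stats

n, d, alpha = 20, 0.5, 0.05          # sample size, Cohen's d, one-sided alpha
df, delta = n - 1, d * np.sqrt(n)    # degrees of freedom, noncentrality

t_crit = stats.t.ppf(1 - alpha, df)        # critical value from central t
power = stats.nct.sf(t_crit, df, delta)    # P(reject | true effect size d)
print(f"power = {power:.3f}")

# Quick check by simulating many one-sample t tests under effect size d.
rng = np.random.default_rng(0)
samples = rng.normal(d, 1, size=(100_000, n))
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))
print(f"simulated power = {(t_stats > t_crit).mean():.3f}")
```

Overlaying the central t (the Nulls' distribution) and the noncentral t (the Alts') with the critical value marked reproduces the picture the abstract describes.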

  • For teaching purposes it is sometimes useful to be able to provide the students in a class with different sets of regression data which nevertheless give exactly the same estimated regression functions. In this paper we describe a method for doing this and illustrate it with a simple example. We also note that the method can be generalized to situations where the regression errors are not independently distributed with a constant covariance matrix.
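The paper's own construction is not reproduced here, but one simple way to achieve the effect follows from least-squares algebra: with the design matrix X fixed, adding to y any vector in the orthogonal complement of the column space of X leaves the OLS estimates unchanged, since X'(y + v) = X'y whenever X'v = 0. A sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + slope
y = X @ np.array([2.0, 0.7]) + rng.normal(0, 1, n)

H = X @ np.linalg.solve(X.T @ X, X.T)        # hat (projection) matrix
v = (np.eye(n) - H) @ rng.normal(0, 3, n)    # perturbation in residual space
y2 = y + v                                   # a visibly different dataset

b1, *_ = np.linalg.lstsq(X, y, rcond=None)
b2, *_ = np.linalg.lstsq(X, y2, rcond=None)
print(b1, b2)                                # identical coefficient estimates
```

Because the perturbation lies entirely in the residual space, the two datasets share the same fitted line while looking quite different when plotted.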

  • The GAISE College Report recommends that introductory applied statistics courses place greater stress on statistical concepts and less on definitions, computations, and procedures. The report also urges instructors to align assessments with learning goals. In this paper the authors explain how instructors can implement these two recommendations. They first review the extent to which questions directly related to concepts are found in popular texts and on websites created by the publishers of those texts. Following this review they provide examples of such questions in a variety of formats (multiple choice, fill-in-the-blank, open-ended, etc.), classified by the approximate level of educational objective in Bloom's Taxonomy. Finally, the paper discusses the advantages and disadvantages of having students answer such questions electronically.

  • Nowadays we cannot ignore the use of computers in statistical calculations, chiefly because they make computations faster and more trustworthy. Almost all statistical software reports p-values, so students and researchers can base their decisions solely on the "usual" threshold: if the p-value is below 0.05, the null hypothesis of a statistical test is "simply" rejected. Do users of statistical tests ask what this software output means? When testing statistical hypotheses, a null hypothesis is tested against an alternative hypothesis. Do users of statistical tests think about these hypotheses? Since the decisions are based on samples, they involve uncertainty, and so two types of error can be made. Do users think about those as well? A questionnaire was constructed and administered to students and researchers as a first investigation of these questions.
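As a complement to the questions raised here, a short simulation sketch showing what the 0.05 threshold actually controls: when the null hypothesis is true, rejecting at p < 0.05 commits a Type I error in about 5% of tests.

```python
import numpy as np
from scipy import stats

# Simulate many one-sample t tests where the null hypothesis is TRUE
# (population mean really is 0) and count how often p < 0.05.
rng = np.random.default_rng(0)
n_tests, n = 10_000, 25

samples = rng.normal(0, 1, size=(n_tests, n))
pvals = stats.ttest_1samp(samples, 0.0, axis=1).pvalue
print(f"Type I error rate: {(pvals < 0.05).mean():.3f}")  # close to 0.05
```

The companion error, failing to reject when the alternative is true (Type II), depends on effect size and sample size, which is exactly what users who stop at "p < 0.05" tend to overlook.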

  • In college courses that use group work to aid learning and evaluation, class groups are often selected randomly or by allowing students to organize groups themselves. This article describes how to control some aspect of the group structure, such as increasing schedule compatibility within groups, by forming the groups using multidimensional scaling. Applying this method in an undergraduate statistics course has resulted in groups that have been more homogeneous with respect to student schedules than groups selected randomly. For example, correlations between student schedules increased from a mean of 0.29 before grouping to a within-group mean of 0.50. Further, the exercise motivates class discussion of a number of statistical concepts, including surveys, association measures, multidimensional scaling, and statistical graphics.
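The following sketch illustrates the general idea with scikit-learn; the schedule encoding, the group size, and the crude neighbour-grouping step are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
# Hypothetical availability data: 20 students x 40 time slots (1 = free).
schedules = rng.integers(0, 2, size=(20, 40))

corr = np.corrcoef(schedules)        # pairwise schedule correlations
dissim = 1 - corr                    # similar schedules -> small distance

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Crude grouping: take consecutive students along the first MDS axis,
# so each group of four has relatively compatible schedules.
order = np.argsort(coords[:, 0])
groups = [order[i:i + 4] for i in range(0, 20, 4)]
print(groups)
```

Checking the mean within-group schedule correlation before and after grouping mirrors the improvement the article reports (0.29 to 0.50) and gives students a concrete association measure to discuss.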
