Journal Article

  • Counterbalanced designs are ubiquitous in cognitive psychology. Researchers, however, rarely perform optimal analyses of these designs and, as a result, reduce the power of their experiments. In the context of a simple priming experiment, several idealized data sets are used to illustrate the possible costs of ignoring counterbalancing, and recommendations are made for more appropriate analyses. These recommendations apply to assessment of both reliability of effects over subjects and reliability of effects over stimulus items.

  • Simulation data are used to test a student's beliefs about the relative probabilities of two sequences obtained by flipping a fair coin. The episode is used to illustrate general issues in using simulations instructionally.
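
    The coin-flip comparison in this abstract lends itself to a short demonstration. Below is a minimal sketch of such a simulation, assuming two illustrative five-flip target sequences (HHHHH and HTHHT) that are not taken from the article itself; it estimates each sequence's probability empirically and shows both converging to (1/2)^5.

    ```python
    import random

    def flip_sequence(n):
        """Return a string of n fair coin flips, e.g. 'HTHHT'."""
        return "".join(random.choice("HT") for _ in range(n))

    def estimate_probability(target, trials=100_000):
        """Estimate the chance of seeing `target` in len(target) flips."""
        n = len(target)
        hits = sum(flip_sequence(n) == target for _ in range(trials))
        return hits / trials

    if __name__ == "__main__":
        # Both sequences have probability (1/2)**5 = 0.03125, even though
        # HTHHT "looks" more random than HHHHH.
        for seq in ("HHHHH", "HTHHT"):
            print(seq, estimate_probability(seq))
    ```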

  • Twenty-two university students who did not initially know the quantitative rule for predicting whether a configuration of weights placed on a balance beam would cause the beam to balance, tip left, or tip right were asked to induce the rule in a training procedure adapted from Siegler (1976). For each of a series of balance beam problems, subjects predicted the action of the beam and explained how they arrived at their prediction. Protocols revealed that although all subjects realized early on that both weight and distance were relevant to their predictions, they used a variety of heuristics prior to inducing the correct quantitative rule. These heuristics included instance-based reasoning, qualitative estimation of distance, and the use of quantitative rules of limited generality. The common use of instance-based reasoning suggests that learning to understand the balance beam cannot be described completely in terms of a simple rule acquisition theory. Also, the variability in the use of heuristics across subjects suggests that no simple theory that depicts subjects as linearly progressing through a hierarchy of levels can adequately describe the development of balance understanding.
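
    The quantitative rule the subjects were asked to induce is the standard torque rule: the beam tips toward the side with the larger weight times distance product. Here is a minimal sketch of that rule; the function name and example values are illustrative, not taken from the study.

    ```python
    def predict_beam(left_weight, left_distance, right_weight, right_distance):
        """Predict the beam's action by comparing weight x distance (torque)."""
        left_torque = left_weight * left_distance
        right_torque = right_weight * right_distance
        if left_torque == right_torque:
            return "balance"
        return "tip left" if left_torque > right_torque else "tip right"

    # Example: 3 weights at distance 2 on the left vs. 2 weights at distance 4
    # on the right; the right torque (8) exceeds the left (6).
    print(predict_beam(3, 2, 2, 4))  # -> "tip right"
    ```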

  • Measures of biologic and behavioural variables on a patient often estimate longer term latent values, with the two connected by a simple response error model. For example, a subject's measured total cholesterol is an estimate (equal to the best linear unbiased estimate (BLUE)) of a subject's latent total cholesterol. With known (or estimated) variances, an alternative estimate is the best linear unbiased predictor (BLUP). We illustrate and discuss when the BLUE or BLUP will be a better estimate of a subject's latent value given a single measure on a subject, concluding that the BLUP estimator should be routinely used for total cholesterol and per cent kcal from fat, with a modified BLUP estimator used for large observed values of leisure time activity. Data from a large longitudinal study of seasonal variation in serum cholesterol form the backdrop for the illustrations. Simulations that mimic the empirical and response error distributions are used to guide the choice of an estimator. We use the simulations to describe criteria for estimator choice, to identify parameter ranges where BLUE or BLUP estimates are superior, and to discuss key ideas that underlie the results.
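
    Under the simple response error model the abstract describes (observed = latent value + error), the contrast between the two estimators can be made concrete in a few lines: given a single measurement, the BLUE is the measurement itself, while the BLUP shrinks the measurement toward the population mean by the reliability ratio. The sketch below uses hypothetical cholesterol numbers; none of the values come from the study.

    ```python
    def blue(x):
        """BLUE of the latent value from a single measurement: the measurement."""
        return x

    def blup(x, pop_mean, var_between, var_error):
        """BLUP: shrink the measurement toward the population mean by the
        reliability ratio var_between / (var_between + var_error)."""
        reliability = var_between / (var_between + var_error)
        return pop_mean + reliability * (x - pop_mean)

    # Hypothetical total-cholesterol example: population mean 200 mg/dL,
    # between-subject variance 900, response-error variance 400.
    x = 260
    print(blue(x))                 # 260
    print(blup(x, 200, 900, 400))  # 200 + (900/1300) * 60 ~= 241.5
    ```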

  • Higher education faces an environment of financial constraints, changing customer demands, and loss of public confidence. Technological advances may at last bring widespread change to college teaching. The movement for education reform also urges widespread change. What will be the state of statistics teaching at the university level at the end of the century? This article attempts to imagine plausible futures as stimuli to discussion. It takes the form of provocations by the first author, with responses from the others on three themes: the impact of technology, the reform of teaching, and challenges to the internal culture of higher education.

  • This article begins with some context setting on new views of statistics and statistical education. These views are reflected, in particular, in the introduction of exploratory data analysis (EDA) into the statistics curriculum. Then, a detailed example of an EDA learning activity in the middle school is introduced, which makes use of the power of the spreadsheet to mediate students' construction of meanings for statistical conceptions. Through this example, I endeavor to illustrate how an attempt at serious integration of computers in teaching and learning statistics brings about a cascade of changes in curriculum materials, classroom praxis, and students' ways of learning. A theoretical discussion follows that accounts for the impact of technological tools on teaching and learning statistics, emphasizing how the computer lends itself to supporting cognitive and sociocultural processes. Subsequently, I present a sample of educational technologies, which represents the sorts of software that have typically been used in statistics instruction: statistical packages (tools), microworlds, tutorials, resources (including Internet resources), and teachers' metatools. Finally, certain implications and recommendations for the use of computers in the statistical educational milieu are suggested.

  • The community of statisticians and statistics educators should take responsibility for the evaluation and improvement of software quality from the perspective of education. The paper will develop a perspective, an ideal system of requirements to critically evaluate existing software and to produce future software more adequate both for learning and doing statistics in introductory courses. Different kinds of tools and microworlds are needed. After discussing general requirements for such programs, a prototypical ideal software system will be presented in detail. It will be illustrated how such a system could be used to construct learning environments and to support elementary data analysis with an exploratory working style.

  • Based on a review of research and a cognitive development model (Biggs & Collis, 1991), we formulated a framework for characterizing elementary children's statistical thinking and refined it through a validation process. The 4 constructs in this framework were describing, organizing, representing, and analyzing and interpreting data. For each construct, we hypothesized 4 thinking levels, which represent a continuum from idiosyncratic to analytic reasoning. We developed statistical thinking descriptors for each level and construct and used these to design an interview protocol. We refined and validated the framework using data from protocols of 20 target students in Grades 1 through 5. Results of the study confirm that children's statistical thinking can be described according to the 4 framework levels and that the framework provides a coherent picture of children's thinking, in that 80% of them exhibited thinking that was stable on at least 3 constructs. The framework contributes domain-specific theory for characterizing children's statistical thinking and for planning instruction in data handling.

  • Statistical literacy is a key ability expected of citizens in information-laden societies, and is often touted as an expected outcome of schooling and as a necessary component of adults' numeracy and literacy. Yet, its meaning and building blocks have received little explicit attention. This chapter proposes a conceptualization of statistical literacy and describes its key components. Statistical literacy is portrayed as the ability to interpret, critically evaluate, and communicate about statistical information and messages. It is argued that statistically literate behavior is predicated on the joint activation of five interrelated knowledge bases (literacy, statistical, mathematical, context, and critical), together with a cluster of supporting dispositions and enabling beliefs. Educational and research implications are discussed, and responsibilities facing educators, statisticians, and other stakeholders are outlined.

  • Statistical questions suffuse the fabric of our society at almost all folds. When Bill Kruskal and I offered that observation in an Amstat News article just over a decade ago, the universe of our immediate concern was the federal statistical system--a universe that to some may have seemed rather parochial. Our principal intent in sharing our views with the members of ASA was to underscore the pervasiveness of statistics produced by the federal government in our professional and personal lives. The urgency in our voices stemmed from what we perceived to be "penny-wise but pound-foolish decisions" that would undermine the quality of data for research, program planning, allocation of resources, and policy evaluation--by academics, business leaders, government officials, and citizens--for years to come (Kruskal and Wallman 1982).

    It is not my mission tonight to revisit either historic or recent tragedies and triumphs of the federal statistical system. Many have written and spoken on these matters; several of my predecessors have discussed these issues, and how the ASA might respond to them, in their presidential addresses to our membership. I will, however, use the milieu of federal statistics as the opening scene for elaborating my hope that by enhancing statistical literacy we may succeed in enriching our society. My aims for the remarks I will share with you this evening are three:

    --to underscore the importance of strengthening understanding of statistics and statistical thinking among all sectors of our population;
    --to highlight some avenues we can pursue to enhance our citizens' statistical literacy; and
    --to suggest some ways that individual statisticians and the American Statistical Association can enrich our society.
