Research

  • Teachers often come to our professional development programs thinking that statistics is about mean, median, and mode. They know how to calculate these statistics, though they don't often have robust images about what they mean or how they're used. When they begin working with computer data analysis tools to explore real data sets, they are able to see and manipulate the data in a way that hasn't really been possible for them before. They identify particular points on graphs and can interpret what their positions mean. They notice clumps and gaps in the data and generally find that there's a lot more to see in the data than they ever imagined before. In addition, those exploring data in this way often ground their interpretations in the range of things they know from the contexts surrounding the data. They discover the richness and complexity of data. Yet all this detail and complexity can be overwhelming. Without some method of focusing or organizing or otherwise reducing the complexity, it can be impossible to say anything at all about a data set. How do people decide what aspects of a data set are important to attend to? What methods do they use to reduce the cognitive load of trying to attend to all the points? This paper will begin to describe some of the ways that people do this while using new, software-driven representational tools. In the process, we will begin to sketch out a framework for the techniques that people develop as they reason in the presence of variability.
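As a concrete illustration of the kind of exploration described above, the sketch below plots a small data set as a dot plot so that individual points, clumps, and gaps become directly visible. The data values and the use of matplotlib are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the exploration described above: plotting a small
# data set so that clumps, gaps, and individual points become visible.
# The values are invented for illustration only.
import matplotlib.pyplot as plt

# Hypothetical class heights in cm; note the clump in the low 150s and
# the gap before the two tall values.
heights = [148, 150, 151, 151, 152, 153, 153, 154, 156, 157, 163, 171, 172]

fig, ax = plt.subplots(figsize=(6, 1.5))
ax.plot(heights, [0] * len(heights), "o", alpha=0.5)  # overlap reveals clumps
ax.set_yticks([])
ax.set_xlabel("height (cm)")
ax.set_title("Dot plot: clumps, gaps, and outliers are directly visible")
plt.tight_layout()
plt.show()
```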

  • Two studies investigated upper elementary school students' informal understanding of sampling issues in the context of interpreting and evaluating survey results. The specific focus was on the children's evaluation of sampling methods and means of drawing conclusions from multiple surveys. In Study 1, 17 children were individually interviewed to categorize children's conceptions. In Study 2, 110 children completed paper-and-pencil tasks to confirm the response categories identified in Study 1 and to determine the prevalence of the response categories in a larger sample. Children evaluated sampling methods focusing on potential for bias, fairness, practical issues, or results. All children used multiple types of evaluation rationales, and the focus of their evaluations varied somewhat by context and type of sampling method (restricted, self-selected, or random). Children used affective (fairness) rationales more often in school contexts and rationales focused on results more often in out-of-school contexts. Children had more difficulty detecting bias with self-selected sampling methods than with restricted sampling methods because they initially saw self-selection as the most fair method (i.e., everyone had a chance to participate). Children preferred stratified random sampling to simple random sampling because they wanted to ensure that all types of individuals were included. When drawing conclusions from multiple surveys, children: (1) considered survey quality; (2) aggregated all surveys regardless of quality; (3) used their own opinions and ignored all survey data; or (4) refused to draw conclusions. Even when children were able to identify potential bias, they often ignored survey quality when drawing conclusions from multiple surveys.
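The contrast the children drew between simple random and stratified random sampling can be made concrete in code. The sketch below, with a hypothetical school population and group sizes of my own choosing (not from the study), shows why a small simple random sample can miss a group entirely while a stratified sample cannot.

```python
# A sketch contrasting the two sampling methods the children compared.
# With simple random sampling, a small sample can miss a group entirely;
# stratified sampling guarantees every group is represented.
import random

random.seed(1)

# Hypothetical school population: 60 sixth-graders, 30 seventh-graders,
# 10 eighth-graders (sizes invented for illustration).
population = ([("grade6", i) for i in range(60)]
              + [("grade7", i) for i in range(30)]
              + [("grade8", i) for i in range(10)])

def simple_random_sample(pop, n):
    return random.sample(pop, n)

def stratified_sample(pop, per_stratum):
    strata = {}
    for group, member in pop:
        strata.setdefault(group, []).append((group, member))
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, per_stratum))
    return sample

srs = simple_random_sample(population, 6)
strat = stratified_sample(population, 2)
print("simple random:", sorted({g for g, _ in srs}))    # may omit grade8
print("stratified:   ", sorted({g for g, _ in strat}))  # always all grades
```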

  • Responses of 73 students to an interview protocol based on selecting 10 lollies from a container with 50 red, 20 yellow, and 30 green are categorised with respect to centre and spread of numerical answers and to reasoning expressed in justification of the answers. Results are compared to earlier survey research and small-scale interview studies.
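The interview task itself is easy to simulate. The sketch below repeatedly draws 10 lollies from the described container (50 red, 20 yellow, 30 green) and reports the centre and spread of the number of red lollies per handful; the repetition count of 1,000 is an arbitrary choice, not part of the study.

```python
# Simulating the lollies task: draw 10 lollies from a container of
# 50 red, 20 yellow, and 30 green, many times over, and look at the
# centre and spread of the number of reds per handful.
import random
from statistics import mean, stdev

random.seed(0)
container = ["red"] * 50 + ["yellow"] * 20 + ["green"] * 30

reds = [random.sample(container, 10).count("red") for _ in range(1000)]

print("mean reds per handful:", mean(reds))  # centre: close to 5 (50% red)
print("standard deviation:   ", round(stdev(reds), 2))
print("range observed:       ", min(reds), "to", max(reds))
```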

  • In December 1995, the Organisation for Economic Co-Operation and Development (OECD) and Statistics Canada jointly published the results of the first International Adult Literacy Survey (IALS). For this survey, representative samples of adults aged 16 to 65 were interviewed and tested in their homes in Canada, France, Germany, the Netherlands, Poland, Sweden, Switzerland, and the United States. This report describes how the survey was conducted in each country and presents all available evidence on the extent of bias in each country's data. Potential sources of bias, including sampling error, non-sampling error, and the cultural appropriateness and construct validity of the assessment instruments, are discussed. The chapters are: (1) "Introduction" (Irwin S. Kirsch and T. Scott Murray); (2) "Sample Design" (Nancy Darcovich); (3) "Survey Response and Weighting" (Nancy Darcovich); (4) "Non-Response Bias" (Nancy Darcovich, Marilyn Binkley, Jon Cohen, Mats Myrberg, and Stefan Persson); (5) "Data Collection and Processing" (Nancy Darcovich and T. Scott Murray); (6) "Incentives and the Motivation To Perform Well" (Stan Jones); (7) "The Measurement of Adult Literacy" (Irwin S. Kirsch, Ann Jungeblut, and Peter B. Mosenthal); (8) "Validity Generalization of the Assessment across Countries" (Don Rock); (9) "An Analysis of Items with Different Parameters across Countries" (Marilyn R. Binkley and Jean R. Pignal); (10) "Scaling and Scale Linking" (Kentaro Yamamoto); (11) "Proficiency Estimation" (Kentaro Yamamoto and Irwin S. Kirsch); (12) "Plausibility of Proficiency Estimates" (Richard Shillington); and (13) "Nested-Factor Models for the Swedish IALS Data" (Bo Palaszewski). Fourteen appendixes contain supplemental information, some survey questionnaires, and additional documentation for various chapters.

  • There are two parts to this literature review. The first part includes bibliography directly focusing on variation: the meaning of variation, the role of variation in statistical reasoning, research on conceptions of variation, as well as literature discussing the neglect of variation. The second part lists references belonging to four bodies of literature which, although not having the study of intuitions about variation as their main object of study, do offer rich insights into people's thinking about variation: literature on sampling and centers, on intuitions about the stochastic, on the role of technology, and on the effect of the formalist mathematics tradition on statistics education.

  • The conjecture driving this study is that if statistics curricula were to put more emphasis on helping students improve their intuitions about variation and its relevance to statistics, we would be able to witness improved comprehension of statistical concepts (Ballman, 1997). Both the research literature and previously conducted research by the author indicate that variation is often neglected, and its critical role in statistical reasoning is under-recognized. A nontraditional approach to statistics instruction that has variation as its central tenet, and perceives learning as a dynamic process subject to development for a long period of time and through a variety of contexts and tools, is laid out in this thesis. The experiences and insights gained from adopting such an approach in a college level, introductory statistics classroom are reported.

  • People make use of quantitative information on a daily basis. Professional education organizations for mathematics, science, social studies, and geography recommend that students, as early as middle school, have experience collecting, organizing, representing, and interpreting data. However, research on middle school students' statistical thinking is sparse. A cohesive picture of middle school students' statistical thinking is needed to better inform curriculum developers and classroom teachers. The purpose of this study was to develop and validate a framework for characterizing middle school students' thinking across 4 processes: describing data, organizing and reducing data, representing data, and analyzing and interpreting data. The validation process involved interviewing, individually, 12 students across Grades 6 through 8. Results of the study indicate that students progress through 4 levels of thinking within each statistical process. These levels of thinking were consistent with the cognitive levels postulated in a general developmental model by Biggs and Collis (1991).

  • Graph comprehension is considered a basic skill in the curriculum, and essential for statistical literacy in an information society. How do students interpret a graph in an authentic context? Are misleading features apparent? Responses to questions about a graph-based advertisement suggest that students commonly did not appreciate scaling difficulties, recognise the graph as relevant in the context of a standard interpretation task, or apply numeracy skills for calculations based on data in graphical representations.
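The scaling difficulty this abstract refers to can be demonstrated with two versions of the same bar chart. The advertisement used in the study is not reproduced here; the sales figures below are invented, and the point is only that a truncated axis exaggerates a small difference.

```python
# A hedged illustration of a misleading graph: truncating the y-axis
# makes a roughly 4% difference look dramatic. Values are invented.
import matplotlib.pyplot as plt

brands = ["Brand A", "Brand B"]
sales = [102, 106]  # only a small real difference

fig, (misleading, honest) = plt.subplots(1, 2, figsize=(8, 3))

misleading.bar(brands, sales)
misleading.set_ylim(100, 107)   # truncated axis exaggerates the gap
misleading.set_title("Truncated axis (misleading)")

honest.bar(brands, sales)
honest.set_ylim(0, 120)         # axis from zero shows the true proportion
honest.set_title("Axis from zero")

plt.tight_layout()
plt.show()
```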

  • Advancing computer technology is allowing us to downplay instruction in mechanical procedures and shift emphasis towards teaching the "art" of statistics. This paper is based upon interviews with six professional statisticians about statistical thinking and statistical practice. It presents themes emerging from their professional experience, emphasizing dimensions that were surprising to them and were not part of their statistical training. Emerging themes included components of statistical thinking, pointers to good statistical practices and the subtleties of interacting with the thinking of others, particularly coworkers and clients. The main purpose of the research is to uncover basic elements of applied statistical practice and statistical thinking for the use of teachers of statistics.

  • Variation is a key concept in the study of statistics and the understanding of variation is a crucial aspect of most statistically related tasks. To express this understanding students need to be able to describe variation. Students aged 13 to 17 engaged in an inference task set in a real world context that necessitated the description of both rainfall and temperature data. This research qualitatively analysed the student responses with respect to the descriptions of variation that were incorporated. A Data Description hierarchy, previously developed for describing variation in a sampling task, was found to be appropriate to code the better responses and was extended to accommodate a range of less statistically sophisticated responses identified. The SOLO Taxonomy was used as a framework for the hierarchy. Two cycles of U-M-R levels, one for more qualitative descriptions and the other for more quantitative descriptions, were identified in the responses. Task and implementation issues that may have affected the descriptions, as well as implications for research, teaching and assessment, are outlined.
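The quantitative end of the description hierarchy can be illustrated with a few standard measures of variation applied to a rainfall series. The rainfall figures below are invented for illustration and are not the data used in the task.

```python
# A sketch of progressively more formal quantitative descriptions of
# variation for one (hypothetical) monthly rainfall series in mm.
from statistics import mean, stdev, quantiles

rainfall_mm = [112, 98, 85, 60, 41, 33, 30, 36, 48, 70, 95, 108]

q1, q2, q3 = quantiles(rainfall_mm, n=4)  # quartile cut points

print("range:              ", max(rainfall_mm) - min(rainfall_mm))
print("interquartile range:", q3 - q1)
print("mean +/- sd:        ", round(mean(rainfall_mm), 1), "+/-",
      round(stdev(rainfall_mm), 1))
```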
