Literature Index

Displaying 501 - 510 of 3326
  • Author(s):
    Sanchez, Juana
    Year:
    2007
    Abstract:
    Statistical literacy assessment tools developed in one part of the world or for a particular constituency may not be the best tools for others. The wording, the level, the context, the objects mentioned may be foreign and thus render the assessment tool useless. This is a predicament that the countries involved in the CensusAtSchool and other international projects know very well. Statistical literacy instruments must be customized and therefore the tools to assess statistical literacy must be customized too. The International Statistical Literacy Project of the IASE contains a variety of learning and assessment tools developed by many different international sources for a variety of groups. In this paper and related documents, we illustrate with examples how to take advantage of the numerous resources in the ISLP web page to build tools to assess statistical literacy suitable for different models and constituencies in the statistics spectrum.
  • Author(s):
    Batanero, C., Godino, J. D., &amp; Estepa, A.
    Year:
    1999
    Abstract:
    In this research forum we present results from a research project concerning students' understanding of statistical association and its evolution after teaching experiments using computers. This research has been carried out at the Universities of Granada and Jaen over the years 1991-98. We have identified different incorrect preconceptions and strategies to assess statistical association and performed two different teaching experiments designed to overcome these difficulties and to identify the critical points that arise in attempting to do this.
  • Author(s):
    Boger, P.
    Editors:
    Stephenson, W. R.
    Year:
    2005
    Abstract:
    This paper describes a project with the goal of exposing both elementary school and undergraduate students to the concepts associated with the experimental method, from the formulation of a researchable question to the analysis and interpretation of the results. Under the guidance of their university mentors, fourth and fifth grade students formulated a research question, designed an experiment to answer that inquiry, recorded the appropriate measurements, calculated the necessary statistics, created visual displays of their results, and interpreted their findings at a student-centered Numeracy Conference.
  • Author(s):
    Boger, P.
    Editors:
    Phillips, B.
    Year:
    2002
    Abstract:
    This paper will describe a project designed to enhance the numeracy skills of students at two educational levels - elementary and undergraduate. Under the guidance of the university students, students in grades four through six will formulate a research question, gather the appropriate data and summarize the data using graphs. The graphs along with a written summary of the project will be displayed in a poster, which will be sent to the national poster competition sponsored by the American Statistical Association.
  • Author(s):
    Tyrrell, S.
    Editors:
    Goodall, G.
    Year:
    2003
    Abstract:
    The Family Expenditure Survey provides details of household incomes. This article takes a fresh look at income distribution and at what is meant by the mean.
  • Author(s):
    Sanders, W. V.
    Year:
    1988
    Abstract:
    COMPENSTAT, a menu-driven statistical program for IBM-compatible microcomputers, has two distinct versions: instructional and computational. The instructional version can be used by instructors as a classroom resource, and the computational version is used directly by students to calculate answers to problems. The software package is primarily used as an assignment-generating and problem-solving tool. Each student in a class is assigned unique data for a problem type. Since all data sets generate different answers, students can help each other learn but cannot simply copy answers. The instructor is not burdened with extra work, since each student's assignment is followed by a personalized answer key on which the student's answer is computed. An answer sheet is even provided to organize students' responses for easy checking. This paper provides instructions for using the menu-driven features of COMPENSTAT in a business statistics course, including diagrams of the menu options, which facilitate building, viewing, or modifying data sets; generating individualized assignments for students in a class; and performing statistical calculations (e.g., frequency distributions, descriptive statistics, probability distributions, confidence intervals, hypothesis testing, chi-square and ANOVA, index numbers, regression and correlation, and nonparametric statistics). Examples of a COMPENSTAT answer sheet and statistical problems on regression, correlation, and ANOVA are also provided.
  • Author(s):
    Varnhagen, C. K., &amp; Zumbo, B. D.
    Year:
    1990
    Abstract:
    Evaluates the effectiveness of 2 different computer-assisted instruction (CAI) formats compared with traditional in-class instruction as a laboratory supplement to lectures in introductory statistics. 134 students enrolled in an introductory statistics course in psychology were assigned to 1 of the 3 lab sessions. The evaluation consisted of affective responses to a questionnaire concerning the lab session, as well as student performance on 3 homework assignments and on a midterm examination. Lab format had a significant effect on attitude toward the lab; it did not, however, significantly influence performance. Path analysis revealed relations between lab format, student attitude toward the lab, and performance on the midterm exam.
  • Author(s):
    Felicity B. Enders, Sarah Jenkins, and Verna Hoverman
    Year:
    2010
    Abstract:
    Biostatistics is traditionally a difficult subject for students to learn. While the mathematical aspects are challenging, it can also be demanding for students to learn the exact language to use to correctly interpret statistical results. In particular, correctly interpreting the parameters from linear regression is both a vital tool and a potentially taxing topic. We have developed a Calibrated Peer Review (CPR) module to aid in learning the intricacies of correct interpretation for continuous, binary, and categorical predictors. Student results in interpreting regression parameters for a continuous predictor on midterm exams were compared between students who had used CPR and historical controls from the prior course offering. The risk of mistakenly interpreting a regression parameter was 6.2 times greater before the introduction of the CPR module (p = 0.04). We also assessed when learning took place for a specific item for three students of differing capabilities at the start of the assignment. All three demonstrated achievement of the goal of this assignment: that they learn to correctly evaluate their written work to identify mistakes, though one did so without understanding the concept. For each student, we were able to qualitatively identify a time during their CPR assignment in which they demonstrated this understanding.
  • Author(s):
    Lichtenstein, S., Fischhoff, B., &amp; Phillips, L. D.
    Editors:
    Kahneman, D., Slovic, P., &amp; Tversky, A.
    Year:
    1982
    Abstract:
    The experimental literature on the calibration of assessors making probability judgments about discrete propositions is reviewed in the first section of this chapter. The second section looks at the calibration of probability density functions assessed for uncertain numerical quantities. Although calibration is essentially a property of individuals, most of the studies reviewed here have reported data grouped across assessors in order to secure the large quantities of data needed for stable estimates of calibration.
  • Author(s):
    Whitney A. Zimmerman and Deborah D. Goins
    Year:
    2015
    Abstract:
    Self-efficacy and knowledge, both concerning the chi-squared test of independence, were examined in education graduate students. Participants rated statements concerning self-efficacy and completed a related knowledge assessment. After completing a demographic survey, participants completed the self-efficacy and knowledge scales a second time. Individuals with and without prior experience with the topic were compared; those with prior experience gave significantly higher self-efficacy ratings and had higher demonstrated knowledge scores, although the latter difference was not statistically significant. While self-efficacy and knowledge scores did not differ significantly between the two administrations, individuals without prior topic experience saw greater improvements in self-efficacy calibration. Findings suggest that self-efficacy calibration may be improved through completing an assessment.
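Aside: the entry above (Zimmerman & Goins, 2015) concerns the chi-squared test of independence. The following is a minimal Python sketch of that procedure using scipy.stats.chi2_contingency; the contingency table counts are invented for illustration only and are not taken from the cited study.

    # Minimal sketch of a chi-squared test of independence (illustrative only;
    # the counts below are invented and do not come from Zimmerman & Goins, 2015).
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: prior experience with the topic (yes / no)
    # Columns: correct on the knowledge item (yes / no)
    observed = np.array([[30, 10],
                         [18, 22]])

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
    print("expected counts under independence:")
    print(expected)

Note that chi2_contingency applies Yates' continuity correction by default for 2x2 tables; pass correction=False to obtain the uncorrected statistic.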

The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education.