Literature Index

Displaying 391 - 400 of 3326
  • Author(s):
    Sabbag, A. G., & Zieffler, A.
    Year:
    2015
    Abstract:
    The test instrument GOALS-2 was designed primarily to evaluate the effectiveness of the CATALST curriculum. The purpose of this study was to perform a psychometric analysis of this instrument. Undergraduate students from six universities in the United States (n=289) were administered the instrument. Three measurement models were fit and compared: the two-parameter logistic model, the mixed model (comprising both the two-parameter logistic and the graded-response model), and the bi-factor model. The mixed model was found to most appropriately model students’ responses. The results suggested the revision of some items and the addition of more discriminating items to improve total test information.
  • Author(s):
    Jolliffe, F.
    Editors:
    Jones, G. A.
    Year:
    2005
    Abstract:
    This chapter considers the assessment of probabilistic thinking and reasoning via informal monitoring as well as through formal tasks. Such assessments are considered in the context of the purposes of, and frameworks for, assessment. Specific tasks are examined as to whether they assess thinking and reasoning. Suggestions for improving the quality of the assessment instruments are made in light of research studies on the understanding of probability concepts. Alternative assessment strategies are suggested.
  • Author(s):
    Holmes, P.
    Editors:
    Gal, I., & Garfield, J. B.
    Year:
    1997
    Abstract:
    Over a number of years the external examiners of a regional statistics course for 18-year-old students in schools and colleges in the United Kingdom became aware that the method of assessment was distorting the teaching and learning process: the things being assessed were not those the examiners thought most important for the students to know. This chapter shows how the assessment methods were changed by adding a compulsory project and reflects on the impact of this change on the teaching and learning of statistics.
  • Author(s):
    Díaz, C., & de la Fuente, I.
    Editors:
    Rossman, A., & Chance, B.
    Year:
    2006
    Abstract:
    Conditional probability and Bayesian reasoning are important to psychology students because they are involved in the understanding of classical and Bayesian inference, regression and correlation, linear models, multivariate analysis and other statistical procedures that are often used in psychological research. A study of previous literature showed that there is considerable research on this topic, but no comprehensive questionnaires have been developed to globally assess students' understanding and misconceptions on these topics. At the University of Granada we started building a questionnaire, which takes into account the content of conditional probability taught in the Spanish universities to psychology students, as well as the biases and misconceptions described in the literature. In this work we will describe the process of developing the questionnaire and will report the results from a sample of 206 psychology students.
  • Author(s):
    Gal, I.
    Editors:
    Lajoie, S. P.
    Year:
    1998
    Abstract:
    The chapter examines the nature of interpretive skills that students need to acquire in statistics education, with a special focus on the role of students' opinions about data. Issues in the elicitation and evaluation of students' opinions are examined, and implications for assessment practices and teacher training are discussed.
  • Author(s):
    Hawkins, A., Jolliffe, F., & Glickman, L.
    Year:
    1992
    Abstract:
    This chapter describes various methods for assessing statistical knowledge, understanding and skills. It covers traditional written forms of assessment, multiple-choice vs. open-ended questions, essays, practical work and projects, and oral assessments. It also discusses issues pertaining to the monitoring of ongoing progress, groups that may be disadvantaged as a result of particular assessment methods, experiences of changing assessment methods, and the assessment of teachers.
  • Author(s):
    Callingham, R.
    Editors:
    Rossman, A., & Chance, B.
    Year:
    2006
    Abstract:
    Current school curriculum documents stress the need for assessment to support learning. Teachers use assessment information to infer students' development and plan appropriate intervention. In order to do this, a framework is needed within which the assessment can be developed and interpreted, and a suitable task is required to obtain the necessary information about students' performances. The responses of 586 students to performance assessment tasks developed for the purpose of assessing a numeracy construct, rather than statistical understanding, were analysed against a previously identified hierarchy of Statistical Literacy. The findings suggest that the tasks provided reliable and interpretable evidence of performance in Statistical Literacy, using a classroom-based process rather than a traditional test.
  • Author(s):
    Gibson, L., Marriott, J., & Davies, N.
    Year:
    2007
    Abstract:
    In this paper we report the results from a major UK government-funded project, started in 2005, to review statistics within the school mathematics curriculum for students up to age 16. New teaching materials that explicitly use a problem-solving approach through other subjects have been developed. We will report extensive trialling of these, the development of corresponding new assessment regimes and how these work in the classroom. The new ways of assessing are particularly pertinent since, in September 2006, the UK government announced that coursework is to be dropped from mathematics exams sat by 16-year-olds. As a consequence, areas of the curriculum previously assessed by coursework are now being ignored. We will provide some new and useful ways of assessing this content. Our findings have implications for teaching, learning and assessing statistics for students of the subject at all ages.
  • Author(s):
    Garfield, J. B.
    Editors:
    Batanero, C., & Jolliffe, F.
    Year:
    2003
    Abstract:
    This paper begins with a discussion of the nature of statistical reasoning, and then describes the development and validation of the Statistical Reasoning Assessment (SRA), an instrument consisting of 20 multiple-choice items involving probability and statistics concepts. Each item offers several choices of responses, both correct and incorrect, which include statements of reasoning explaining the rationale for a particular choice. Students are instructed to select the response that best matches their own thinking about each problem. The SRA provides 16 scores which indicate the level of students' correct reasoning in eight different areas and the extent of their incorrect reasoning in eight related areas. Results are presented of a cross-cultural study using the SRA to compare the reasoning of males and females in two countries.
  • Author(s):
    Gal, I., & Ebby, C. B.
    Year:
    1995
    Abstract:
    This brief guide provides some practical guidelines for the assessment of students' statistical knowledge and reasoning about data. It is intended for teachers who are just beginning to teach statistics (usually as part of the mathematics curriculum) and who have relatively little experience in this area.


The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education.