Literature Index

Displaying 401 - 410 of 3326
  • Author(s):
    Pimenta, R.
    Editors:
    Rossman, A., & Chance, B.
    Year:
    2006
    Abstract:
    New technologies involve a reformulation of statistical teaching contents and methodology and, at the same time, reinforce the need for a deeper training of students, including developing students' statistical reasoning. In this work we evaluate the statistical reasoning ability acquired by health sciences students from the analysis of their final undergraduate projects.
  • Author(s):
    Watson, J. M.
    Editors:
    Gal, I., & Garfield, J. B.
    Year:
    1997
    Abstract:
    The goals of this chapter are (a) to address the need to assess statistical thinking as it occurs in social settings outside the classroom, (b) to suggest a hierarchy for judging outcomes, (c) to provide examples of viable assessment based on items from the media, and (d) to discuss the implications for classroom practice.
  • Author(s):
    Callingham, Rosemary
    Year:
    2011
    Abstract:
The increased importance of developing statistical understanding in school education is recognised in curriculum documents across the world. The role of technology in enhancing the teaching of statistics is emphasised in these documents, and the emergence of quality computer software and websites provides teachers with access to unprecedented resources for teaching statistics to young students. Assessment processes, however, have not kept pace with the advances in technology. This paper highlights some emerging and existing issues in the assessment of statistical understanding at the school level, and includes discussion of the implications for teachers and researchers.
  • Author(s):
    Forbes, S.
    Year:
    1994
    Abstract:
The traditional assessment of students' learning in statistics courses has followed the model used for mathematics and many other subject areas; that is, time-constrained written examinations. In New Zealand a large proportion of statistics assessment is still of this form. In order to determine whether this is the most appropriate form of assessing statistics learning, consideration needs to be given to the following: fundamental differences between the content of statistics and other courses; the skills required of statistics learners; the purpose(s) of assessment; and whether particular forms of assessment advantage or disadvantage some groups of learners. While this paper raises issues related to the first three points above, the major focus is on the last. Performance in the national examinations sat by secondary school students in New Zealand is analysed for gender and ethnic differences in two different forms of assessment: project-based internal assessment and traditional written examination.
  • Author(s):
McKenzie, John D., Jr., & Rybolt, William H.
    Year:
    2007
    Abstract:
This paper reports on the implementation of the experiments we designed to assess how well our first-year students learn statistics and mathematics with electronic quizzes. It describes many issues that arose in our attempts to study the impact of such quizzes. An explanation of these issues should assist others who wish to assess the impact of technology in the classroom. The paper concludes with a preliminary analysis of the data from our experiments in the fall semester of 2006 and a description of similar experiments planned for the spring semester of 2007.
  • Author(s):
Rybolt, William H., & McKenzie, John D., Jr.
    Year:
    2007
    Abstract:
From 2001 to 2006 we used a number of approaches to assess how well our first-year students learn statistics and mathematics when introduced to different teaching methods. The topics introduced in their two courses include those found in a standard applied probability and statistics course, for example, descriptive linear regression. Most of these assessments have been based upon analyses of opinions and examination results from the students. This paper reports on designing experiments to determine whether electronic quizzes enhance student learning. A second paper presents the implementation of these experiments and a preliminary analysis of the resulting data.
  • Author(s):
    Starkings, S.
    Editors:
Gal, I., & Garfield, J. B.
    Year:
    1997
    Abstract:
    The main purpose of this chapter is to offer practical advice to teachers who want to use projects in their courses. In this chapter some examples of projects are given, two assessment models are explained, and teachers' experiences are described. The project models and examples described have been used with students of 14 to 18 years of age, but can be adapted for younger or older students as well.
  • Author(s):
    Berenson, Mark L.; Utts, Jessica; Kinard, Karen A.; Rumsey, Deborah J.; Jones, Albyn; Gaines, Leonard M.
    Year:
    2008
    Abstract:
    Assessment has become the "buzzword" in academia; a demonstration of criteria used for the assessment of retention of what was learned is now mandated by various accrediting agencies. Whether we want our students to be good users of statistics who make better decisions, or good consumers of statistics who are better informed citizens, we must reflect on how key statistical concepts can be ingrained in the students' knowledge base. This article seeks to address the overall issue of assessing the retention of essential statistical ideas that transcend various disciplines.
  • Author(s):
Curcio, F. R., & Artzt, A. F.
    Year:
    1996
    Abstract:
    The intent of this article is to suggest ways to enrich typical data-related assessment tasks for upper-middle and secondary school students. Enriching such tasks supports the notion of assessment as an essential, ongoing component of instruction, affording students the opportunity to engage in higher-order thinking that may lead to making discoveries about data that are of interest to them. Furthermore, presenting students with such tasks may contribute to their appreciating and valuing the graphical representation of data.
  • Author(s):
delMas, R., Garfield, J., Ooms, A., & Chance, B.
    Editors:
Gal, I., & Short, T.
    Year:
    2007
    Abstract:
This paper describes the development of the CAOS test, designed to measure students' conceptual understanding of important statistical ideas, across three years of revision and testing, content validation, and reliability analysis. Results are reported from a large-scale class testing, and item responses are compared from pretest to posttest in order to learn more about areas in which students demonstrated improved performance from beginning to end of the course, as well as areas that showed no improvement or decreased performance. Items that showed an increase in students' misconceptions about particular statistical concepts were also examined. The paper concludes with a discussion of implications for students' understanding of different statistical topics, followed by suggestions for further research.
The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education