Proceedings

  • Alternative assessment methods are becoming increasingly common in higher education, with the aim of enhancing students' learning. This paper presents an application of one such method: peer assessment of oral presentations for postgraduate students within a statistics department. Although the assessment of peers is a valuable workplace skill, such an activity is rarely an integrated part of university education. With a new emphasis in universities on the development of generic skills, it is appropriate to explore means of assessment that are valued in the marketplace. The aim of the peer assessment intervention reported here was to increase students' critical thinking skills and to develop their ability as independent decision makers. The advantages and disadvantages of the intervention, and of peer assessment in general, are discussed, and suggestions are made for possible improvements.

  • The theory of statistics is composed of highly abstract propositions that are linked in multiple ways. Both the level of abstraction and the cumulative nature of the subject make statistics difficult to learn. A variety of didactic methods has been devised to help students master statistics, one of which is the method of propositional manipulation (MPM). Based on this didactic method, a corresponding assessment method has been developed. In essence, when MPM is used for assessment, the student is instructed to construct arguments using subsets of elementary propositions. In effect, the assessment procedure requires the student to display knowledge of the interrelationships between the propositions in a particular subset. Analysis of student responses allows purely propositional knowledge to be scored, as well as conceptual understanding. In this paper we discuss research on the effectiveness of this assessment method relative to the assessment of conceptual understanding using concept mapping.

  • This paper describes the methods and challenges involved in assessing an undergraduate course entitled Lies, Damned Lies, and Statistics. The course has three main aims: to instill in students the ability to "think statistically", to enable students to critically evaluate statistically based reports, and to teach students to construct statistically sound reports. The assessment is examined in terms of these aims and the criteria developed by Gal (2002) for statistical literacy.

  • Providing tasks that enable teachers to understand how students are processing concepts allows teachers to shape instruction, plan, adapt, and differentiate, depending on what students need to learn. What does this look like when teaching statistics? This paper presents background on formative assessment and describes a framework for thinking about how it can be enacted in practice. The framework is illustrated by focusing on the nature of statistical tasks that can elicit information about student thinking and on instructional strategies that deliberately provoke such information. The discussion is grounded in work with middle school students, pre-service mathematics education students, and in-service elementary teachers, describing the challenges and dilemmas that arose and the strategies employed to overcome them.

  • This paper presents a case study of the assessment of an applied statistics course. The assessment includes ten computer lab activities that students may voluntarily complete during the fall semester. An individual voluntary project is also incorporated as part of the assessment. At the end of the semester there is a final examination on the course content. These assessment methods, and the difficulties of implementing them, are discussed. Students have heterogeneous backgrounds, and some show fear and anxiety towards the subject; this interferes with the implementation of the assessment and leads to discipline problems.

  • One way of examining forecasting methods via assignments is to give each student a real or simulated set of data, with a requirement to forecast future values. However, checking the accuracy of the calculations for the host of possible methods can be onerous. One solution is to make part, or all, of the assessment depend on the accuracy of the forecasts obtained; an illustrative sketch of such a marking scheme is given below. This mirrors real life, where forecasts are judged not by the method used but by how accurate the predictions turn out to be. This paper investigates how this might work with an actual example. Using data simulated from a model that incorporates trend, seasonality, an Easter effect, and randomness, we use a function of the mean square error of the forecasts to determine the final mark for a variety of methods. Results indicate that students who put in more work, and/or fitted better models, would obtain better marks.
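As a rough illustration of accuracy-based marking (a sketch only, not the authors' actual scheme), the Python snippet below simulates monthly data with trend, seasonality, and noise (omitting the Easter effect for brevity), then converts each submission's mean square error into a mark. The simulation parameters, the two "student" forecasting strategies, and the exponential MSE-to-mark function are all illustrative assumptions.

```python
# A minimal sketch of marking forecasts by accuracy: simulate data with
# trend, seasonality, and noise, then map each student's mean square
# error (MSE) onto a mark. All parameters here are assumptions.
import numpy as np

rng = np.random.default_rng(42)

# --- Simulate 5 years of monthly history plus a 12-month holdout ---
n_hist, n_fore = 60, 12
t = np.arange(n_hist + n_fore)
seasonal = 8 * np.sin(2 * np.pi * t / 12)          # annual cycle
series = 100 + 0.5 * t + seasonal + rng.normal(0, 3, t.size)
history, future = series[:n_hist], series[n_hist:]

# --- Two hypothetical student submissions ---
# Naive student: simply repeats the last observed year.
naive_forecast = history[-12:]
# Careful student: fits a linear trend plus monthly means to the history.
trend = np.polyfit(np.arange(n_hist), history, 1)
detrended = history - np.polyval(trend, np.arange(n_hist))
monthly_mean = detrended.reshape(-1, 12).mean(axis=0)
careful_forecast = (np.polyval(trend, np.arange(n_hist, n_hist + n_fore))
                    + monthly_mean)

def mark(forecast, actual, full_marks=20, scale=25.0):
    """Map forecast MSE to a mark; the exponential decay is an assumption."""
    mse = np.mean((forecast - actual) ** 2)
    return full_marks * np.exp(-mse / scale), mse

for name, f in [("naive", naive_forecast), ("careful", careful_forecast)]:
    m, mse = mark(f, future)
    print(f"{name:8s} MSE = {mse:6.1f}  mark = {m:4.1f} / 20")
```

Under this kind of scheme the better-fitting model earns the higher mark automatically, so the grader never needs to check the student's intermediate calculations, which is the labour saving the abstract describes.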

  • We believe that effective learning will not be achieved without participation from students. Accordingly, in the 2005/2006 academic year we tried to improve students' learning in statistics using different assessment tools. Here we present the proposal we made to the students and discuss the results of one of the tasks we set them. Finally, we also discuss the effect of our proposal on improving students' learning in statistics.

  • This paper describes three original assessments developed for use in undergraduate- and graduate-level mathematics and statistics courses: (a) the context-dependent item set (undergraduate); (b) the visual data display project (undergraduate); and (c) the statistics notebook (graduate). The goal of each assessment was to measure student learning of statistical concepts and methodology, gauge the student's ability to apply those concepts in context, and provide an opportunity for students to appreciate statistics as a way to investigate, summarize, and explain phenomena of interest. Examples of all three assessments and rubrics are available for presentation.

  • Creating assessments for introductory statistics courses is not easy, particularly when the goal is to evaluate students' conceptual understanding of statistical concepts. "Understanding" is difficult to measure, but we do know it involves more than memorizing facts or blindly carrying out mechanical data analysis procedures. This paper presents a framework for developing an assessment system in introductory statistics, culminating in a series of comprehensive writing assessments that evaluate students' understanding of larger statistical concepts such as distribution and variability. The purpose of this paper is to help current and future instructors evaluate the assessment systems of their courses, the part of a course with which students are typically most concerned. I discuss essays on two topics (distribution and variability) from my own introductory statistics course. In these essays, students reflect upon what they have learned, explain it to someone else, and generate examples to support their explanations. The discussion includes how I developed the assessment, how I incorporate it into the overall course structure, and student reactions to it. Excerpts from over 300 student essays highlight (a) how students reveal their conceptual understanding through writing, (b) common misunderstandings that emerge, and (c) ways I have adapted my course to better develop students' understanding of these concepts.

  • The guiding principle in the assessment of any course is that we must assess what we teach. We begin by outlining the assessment instruments we use and discussing how we use them to assess what we teach. We then consider the following: two specific types of questions we use and why we use them; equity for students across semesters; the time and cost associated with assessment; some strategies and administrative tools; and finally, one of the biggest challenges, finding enough suitable data sets to use.
