P1-12: Impact of Quiz Assignment and Scoring Methods on Course Completion Rates and Exam Scores in an Online Introductory Statistics Course


By Stefanie R. Austin and Whitney A. Zimmerman (The Pennsylvania State University)


Information

Data were collected from students enrolled in 12 sections of a 12-week online undergraduate introductory statistics course taught through Penn State World Campus during Summer 2016. The course notes and lessons were the same for all sections. The course included weekly quizzes, which were identical across sections except for the scoring method. Instructors used one of three methods: (1) two attempts, keep the highest score; (2) two attempts, average both scores; (3) one unscored practice attempt followed by one scored attempt. In all cases, quizzes presented questions randomly drawn from a question bank designed to cover the objectives of the week's lesson, so students never received the same quiz twice.
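
To make the three scoring rules concrete, the following is a minimal sketch in Python; the recorded_score helper is hypothetical, for illustration only, and is not taken from the course's grading software.

    def recorded_score(method, attempt1, attempt2=None):
        """Recorded quiz score under each scoring method (1, 2, or 3).

        Method 1: two attempts, keep the highest score.
        Method 2: two attempts, average both scores.
        Method 3: attempt1 is unscored practice; only attempt2 counts.
        """
        if method == 1:
            return attempt1 if attempt2 is None else max(attempt1, attempt2)
        if method == 2:
            return attempt1 if attempt2 is None else (attempt1 + attempt2) / 2
        if method == 3:
            return attempt2  # the practice attempt carries no credit
        raise ValueError("method must be 1, 2, or 3")

    # A student scoring 70 and then 90 records 90 under method 1,
    # 80 under method 2, and 90 under method 3.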

The primary goal of the analysis was to determine whether student performance (as measured by final exam score) varied significantly across the quiz settings, and whether the correlation between final exam score and average quiz score differed across settings. A secondary goal was to determine whether completion rates differed across quiz settings. After controlling for demographic and enrollment variables, we found no significant differences in any of these analyses. As an exploratory measure, we did find significant differences in average quiz scores across settings, but given the nature of the scoring methods, this is to be expected.
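
The abstract does not give the exact model specifications, but one plausible way to frame these comparisons is sketched below in Python with pandas and statsmodels. The file name (quiz_study.csv) and the column names (setting, final_exam, avg_quiz, completed, gender, gpa, credits) are assumptions for illustration, not the study's actual variables.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("quiz_study.csv")  # hypothetical file of student records

    # Exam scores: OLS with quiz setting as a categorical predictor,
    # controlling for (assumed) demographic and enrollment covariates.
    exam_model = smf.ols("final_exam ~ C(setting) + gender + gpa + credits",
                         data=df).fit()
    print(exam_model.summary())

    # Correlation between average quiz score and final exam, by setting.
    print(df.groupby("setting")[["avg_quiz", "final_exam"]].corr())

    # Completion rates: logistic regression of completion (0/1) on setting.
    completion_model = smf.logit("completed ~ C(setting)", data=df).fit()
    print(completion_model.summary())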

The findings raise questions about the purpose and impact of online quizzes as an aspect of pedagogy because they contradict the expectations of many. Instructors often select a scoring method for assessments based on their beliefs about the fundamentals of student learning, but our results suggest that the difference in learning across these three methods is minimal for this course. We hope this encourages a broader discussion of how best to engage online learners and promote their understanding via web-based assessment tools. Does this suggest that students study the same way for these quizzes, regardless of scoring method? Do students use these weekly assessments as a gauge of their individual performance, independent of scoring method? Do students use the experience, results, and feedback from the quizzes in the same way when preparing for exams? What should the role of online quizzes be with regard to scoring method, final grade weighting, exam preparation, and overall student learning? How can this research be extended to other assessments, subjects, and audiences?

To fuel these discussions we provide a brief overview of the literature on this topic, including Bangert-Drowns et al. (1991), Luebben (2010), and Schneider & Preckel (2016). Because of the narrow focus of this research, we also provide suggestions for further research, including an expansion or modification of scope, a list of other possible metrics to explore, and suggestions for methodological improvements, such as reducing sample bias and identifying other lurking variables. This area offers great opportunity for further exploration, both in research and in personal implementation.

