Creating a standardized assessment to measure learning in introductory data science courses


Evan Dragich (Duke University), Matt Beckman (Penn State), Mine Çetinkaya-Rundel (Duke), Mine Dogucu (University College London & UC Irvine), Chelsey Legacy (University of Minnesota), Maria Tackett (Duke), Andy Zieffler (University of Minnesota)


Abstract

Background. An essential component of any educational research is a validated, relevant instrument to measure students’ learning outcomes. In 2023, as more high schools and universities continue to support the emerging field of data science (DS) via specific courses, concentrations, or even majors, it is becoming crucial to successfully monitor learning in introductory courses. Doing so requires an assessment that could additionally be used to evaluate pedagogical techniques or curricular interventions in DS. In the field of statistics, previous work on measuring students’ reasoning skills led to the development of the Comprehensive Assessment of Outcomes in Statistics (CAOS). The revised CAOS 4, comprising 40 multiple-choice items on a variety of commonly taught first-semester introductory concepts, was first administered in 2005. However, DS lacks a clearly defined scope, and the wide variety of topics covered in introductory courses motivates the need for a language-agnostic, broad-scope DS assessment that can be tailored further to best meet the needs of specific programs.

Methods. To develop a blueprint for the assessment, a multi-institutional team of statistics and data science education researchers identified common DS content (e.g., data wrangling, interpreting visualizations), drawing from published guidelines and recommendations as well as introductory DS syllabi. A draft of the assessment was written and used to conduct three think-aloud interviews with field-relevant faculty members and, subsequently, with introductory DS teaching assistants. Each interview consisted of an item-by-item examination for relevance, clarity, and efficacy in measuring the desired learning objective; faculty members were also asked to brainstorm freely about the assessment's scope. With the interviews complete, a 26-item prototype assessment is now ready for large-scale distribution. After several rounds of refinement, we are confident in the scope and pacing (~45-60 minutes) for those with DS experience, and we hope to corroborate these findings with results from introductory students. We are working with the IRB and the instructors of Duke’s introductory DS course to finalize a study design, and the assessment will likely be released to the course’s 200+ students in Spring 2023 as a summative assessment. With this pilot, we hope to arrive at a final set of items and begin exploring subscales.

Implications. By sharing this work, we hope that participants will become familiar with an assessment they may use when designing introductory DS curricula or researching classroom innovations. We also hope that this instrument can serve as an inspiration or a starting point to be tailored by future researchers to their own courses or to another discipline (e.g., by adding more programming concepts in place of data visualization to better serve a computing-focused introductory data science course).

