By Joshua Sparks (University of California, Los Angeles)
With the arrival of OpenAI's GPT and other Large Language Model (LLM) software in public access, universities continue to navigate a roadmap for student use (and misuse). While this development raises hurdles for academic integrity and education, numerous disciplines (many outlined in Wang et al., 2024) have begun to implement activities and resources that teach students to use these models responsibly, ethically, and effectively. Survey results show that students also want to learn how to incorporate this technology ethically into their education, as many learners have yet to receive proper training in this landscape. While the author was at a medium-sized private research university, training and assessment were integrated into its writing-intensive, second-year undergraduate course in statistical computing. Topics such as the LLM generation process, hallucination detection and correction, ethical impacts, prompt engineering, and reverse outlining are addressed to refine coding tasks, enhance statistical writing and narrative-building, and shape best-use practices. Drawing on pre-/post-course surveys of n=30 students, end-of-semester institutional evaluations, comprehension gains measured via rubric assessment, and student feedback, the results offer insight into student understanding and use of LLMs in statistical coding as well as in statistical writing and assessment (with data and examples available for interested readers). Aligned with the pre-coding framework, these tasks shift students toward viewing themselves as "product managers rather than software engineers," as emphasized in Tu et al. (2024).