Julia Polak (University of Melbourne) & Di Cook (Monash University)
Tuesday, November 16, 2021 - 5:00pm ET
In the November CAUSE/Journal of Statistics and Data Science Education webinar series, we have invited the authors of this recently published paper to share their experiences in running data competitions as part of classes on statistical learning. Kaggle is a data modeling competition service in which participants compete to build the model with the lowest predictive error. Several years ago Kaggle released a simplified service that is ideal for instructors to run competitions in a classroom setting. This webinar describes the results of an experiment to determine if participating in a predictive modeling competition enhances learning. The evidence suggests it does. In addition, students were surveyed to examine if the competition improved engagement and interest in the class. The authors will also discuss the main issues to consider when setting up a data competition in a class, including the technical aspects of using the Kaggle InClass platform.
Julia Polak is a lecturer in Statistics at the University of Melbourne. She has a broad range of research interests, including nonparametric methods, forecasting, and data visualisation. In addition, Julia has many years of experience teaching statistics and data science to diverse audiences.
Di Cook is a Professor in Econometrics and Business Statistics at Monash University in Melbourne. Her research is in the area of data visualisation, especially the visualisation of high-dimensional data using tours with low-dimensional projections, and projection pursuit. A current focus is on bridging the gap between exploratory graphics and statistical inference.
Tim Arnold (SAS Institute); Joan Garfield (Professor Emeritus of University of Minnesota); Jeff Witmer (Oberlin College)
Tuesday, October 19, 2021 - 4:00pm ET
In the October CAUSE/Journal of Statistics and Data Science Education webinar series, we will take a step back in time to talk with some of the founders of what was initially the "Journal of Statistics Education," which will publish its 30th volume in 2022.
In 1992, Daniel Solomon and colleagues organized a conference at North Carolina State University to explore the idea of an “Electronic Journal: Journal of Statistics Education”. Many ideas and considerable enthusiasm flowed.
The first issue of JSE was published in 1993 under the editorship of the late Jackie (E. Jacquelin) Dietz and managing editorship of J. Tim Arnold. Other contributing editors included Joan Garfield, Robin Lock, and Jeff Witmer. The inaugural issue included, among other things, an interview with Fred Mosteller, the structure and philosophy of the journal, and Joan Garfield’s widely cited paper “Teaching statistics using small-group cooperative learning”.
In this webinar, we will have a chance to hear from some of the founders about their vision for the journal from three decades ago, their reflections on what has transpired since then, and their prognostications for the future.
Tim Arnold is a Principal Software Developer at the SAS Institute. He served as the founding managing editor of JSE.
Joan Garfield is Professor Emeritus of the Department of Educational Psychology at the University of Minnesota. Joan served alongside the late J. Laurie Snell as co-editor of JSE’s “Teaching Bits, a Resource for Teachers of Statistics”.
Jeff Witmer is Professor of Mathematics at Oberlin College and is the current editor of the Journal of Statistics and Data Science Education. Jeff was a founding Associate Editor for JSE.
Useful (but not required) background reading includes:
Arnold: Structure and philosophy of the Journal of Statistics Education, https://www.tandfonline.com/doi/full/10.1080/10691898.1993.11910456
Rossman and Dietz: Interview with Jackie Dietz, https://www.tandfonline.com/doi/abs/10.1080/10691898.2011.11889616
Emily Griffith (North Carolina State University), Megan Higgs (Critical Inference LLC), and Julia Sharp (Colorado State University)
Tuesday, September 21, 2021 - 4:00pm ET
In the September CAUSE/Journal of Statistics and Data Science Education webinar series, we talk with Julia Sharp, Emily Griffith, and Megan Higgs, the co-authors of a forthcoming JSDSE paper entitled "Setting the stage: Statistical collaboration videos for training the next generation of applied statisticians" (https://www.tandfonline.com/doi/full/10.1080/26939169.2021.1934202).
Collaborative work is inherent to being a statistician or data scientist, yet opportunities for training and exposure to real-world scenarios are often only a small part of a student’s academic program. Resources to facilitate effective and meaningful instruction in communication and collaboration are limited, particularly when compared to the abundant resources available to support traditional statistical training in theory and methods. This paper helps fill the need for resources by providing ten modern, freely-available videos of mock collaborative interactions, with supporting discussion questions, scripts, and other resources. Videos are particularly helpful for teaching communication dynamics. These videos are set in the context of academic research discussions, though the scenarios are broad enough to facilitate discussions for other collaborative contexts as well. The videos and associated resources are designed to be incorporated into existing curricula related to collaboration.
Julia Sharp is an associate professor of statistics and the Director of the Graybill Statistics and Data Science Laboratory at Colorado State University. Julia is a widely recognized expert in statistical collaboration and recently was awarded the Outstanding Mentor Award from ASA's Section on Statistical Consulting. When she is not working, Julia enjoys baking, hiking, and the company of family and friends.
Emily Griffith is an associate research professor of statistics at North Carolina State University. She is also a Fellow in the Office of Research Innovation working on development and strategy to further innovation in the university’s data sciences initiatives. In her free time, Emily enjoys running (even in the summer in NC), cooking, and hanging out with her family.
Megan Higgs has worked as a collaborative statistician in academia and private industry, and is now working independently as Critical Inference LLC and writing posts for a blog of the same name. She currently volunteers as editor of the International Statistical Institute’s “Statisticians React to the News” blog and serves on the ASA’s Climate Change Committee. Megan loves spending time with her family and pets in Montana.
Pip Arnold (New Zealand) & Chris Franklin (ASA K-12 Statistics Ambassador/ASA Fellow/UGA Emerita)
Tuesday, May 25, 2021 - 4:00pm ET
In the May CAUSE/Journal of Statistics and Data Science Education webinar series, we discuss "What Makes a Good Statistical Question?" with Pip Arnold & Christine Franklin, the co-authors of a forthcoming paper in JSDSE (https://www.tandfonline.com/doi/full/10.1080/26939169.2021.1877582).
The statistical problem-solving process is key to the statistics curriculum at the school level, post-secondary, and in statistical practice. The process has four main components: formulate questions, collect data, analyze data, and interpret results. The Pre-K-12 Guidelines for Assessment and Instruction in Statistics Education (GAISE) emphasizes the importance of distinguishing between a question that anticipates a deterministic answer and a question that anticipates an answer based on data that will vary, referred to as a statistical question. This paper expands upon the Pre-K-12 GAISE distinction of a statistical question by addressing and identifying the different types of statistical questions used across the four components of the statistical problem-solving process and the importance of interrogating these different statistical question types. Since the publication of the original Pre-K-12 GAISE document, research has helped to clarify the purposes of questioning at each component of the process, to clarify the language of questioning, and to develop criteria for answering the question, "What makes a good statistical question?"
Pip Arnold is a statistics educator who also sometimes masquerades as a mathematics educator. Her continuing interests include statistical questions, working with K-10 teachers to support them in developing their statistical content knowledge, and looking at ways to authentically integrate statistics across the curriculum. Pip has been developing a teacher's resource to support the teaching of statistics from K-10 for New Zealand teachers, based on the PPDAC statistical enquiry cycle that is the basis of statistical problem-solving in New Zealand.
Christine (Chris) Franklin is the ASA K-12 Statistics Ambassador, an ASA Fellow, and UGA Emerita Statistics faculty. She is the co-author of two introductory statistics textbooks, chair for the ASA policy documents Pre-K-12 GAISE (2005) and Statistical Education of Teachers (2015), and co-chair for the recently published Pre-K-12 GAISE II. She is a former AP Statistics Chief Reader and a past Fulbright scholar to NZ, where she and Pip began having many conversations about the role of questioning in the statistical problem-solving process.
Andrew Zieffler (University of Minnesota) & Nicola Justice (Pacific Lutheran University)
Tuesday, April 27, 2021 - 4:00pm ET
Classification trees and other algorithmic models are an increasingly important part of statistics and data science education. In the April CAUSE/Journal of Statistics and Data Science Education webinar series, we will talk with Andrew Zieffler and Nicola Justice, two of the co-authors of the forthcoming JSDSE paper entitled “The Use of Algorithmic Models to Develop Secondary Teachers' Understanding of the Statistical Modeling Process”: https://www.tandfonline.com/doi/full/10.1080/26939169.2021.1900759
Statistical modeling continues to gain prominence in the secondary curriculum, and recent recommendations to emphasize data science and computational thinking may soon position algorithmic models into the school curriculum. Many teachers’ preparation for and experiences teaching statistical modeling have focused on probabilistic models. Subsequently, much of the research literature related to teachers’ understanding has focused on probabilistic models. This study explores the extent to which secondary statistics teachers appear to understand ideas of statistical modeling, specifically the processes of model building and evaluation, when introduced using classification trees, a type of algorithmic model. Results of this study suggest that while teachers were able to read and build classification tree models, they experienced more difficulty when evaluating models. Further research could continue to explore possible learning trajectories, technology tools, and pedagogical approaches for using classification trees to introduce ideas of statistical modeling.
Andrew Zieffler is a Senior Lecturer and researcher in the Quantitative Methods in Education program within the Department of Educational Psychology at the University of Minnesota. His scholarship focuses on statistics education. His research interests have recently focused on teacher education and on how data science is transforming the statistics curriculum. You can read more about his work and interests at https://www.datadreaming.org/.
Nicola Justice studies how students and teachers learn statistics. As an assistant professor at Pacific Lutheran University, her passion is to help students develop into skillful and ethical data storytellers. When not teaching or learning, she likes to get outside with her family: hiking, exploring, and throwing rocks in water.
Mine Dogucu (UC Irvine) & Albert Y. Kim (Smith College)
Tuesday, March 23, 2021 - 4:00pm ET
In the March CAUSE/Journal of Statistics and Data Science Education webinar series, we will discuss two related papers on data ingestion, data collection, and data analysis.
Mine Dogucu (UC Irvine) will discuss her paper "Web Scraping in the Statistics and Data Science Curriculum: Challenges and Opportunities" (https://github.com/mdogucu/web-scrape).
Albert Y. Kim (Smith College) will discuss his paper "'Playing the Whole Game': A Data Collection and Analysis Exercise With Google Calendar" (https://smithcollege-sds.github.io/sds-www/JSE_calendar.html).
Mine Dogucu is an Assistant Professor of Teaching in the Department of Statistics at University of California Irvine. Her work focuses on modern pedagogical approaches in the statistics curriculum, making data science education accessible, and undergraduate Bayesian education. She is the coauthor of the upcoming book Bayes Rules! An Introduction to Bayesian Modeling with R. She co-chairs the Undergraduate Statistics Project Competition and the Electronic Undergraduate Statistics Research Conference (USPROC+eUSR). She shares her thoughts about data science education on her Data Pedagogy blog.
Albert Kim is an Assistant Professor of Statistical & Data Sciences at Smith College as well as a Visiting Scholar at the ForestGEO network's Smithsonian Conservation Biology Institute (SCBI) large forest dynamics plot. His research centers on forest ecology, in particular modeling the impact of climate change on the growth of trees as well as ecological forecasting. He is a co-author of "Statistical Inference via Data Science: A ModernDive into R and the Tidyverse" (see moderndive.com).
Jingchen (Monika) Hu (Vassar College), Kevin Ross (Cal Poly - San Luis Obispo), & Colin Rundel (University of Edinburgh/Duke)
Tuesday, February 23, 2021 - 4:00pm ET
The Journal of Statistics and Data Science Education recently published a cluster of papers on Bayesian methods (https://www.tandfonline.com/toc/ujse20/28/3?nav=tocList). The Bayesian cluster includes a presentation of how and why Bayesian ideas should be added to the curriculum; guidance on how to structure a semester-long Bayesian course for undergraduates; a discussion of the ever-evolving nature of Bayesian computing; a book review; and a panel interview of several Bayesian educators.
For the February CAUSE/JSDSE webinar series, we’ve invited several authors of these provocative and informative articles to describe their work and its implications for statistics and data science education.
Jingchen (Monika) Hu is an Assistant Professor of Mathematics and Statistics at Vassar College. She teaches an undergraduate-level Bayesian Statistics course at Vassar, which is shared online across several liberal arts colleges. Her research focuses on dealing with data privacy issues by releasing synthetic data.
Kevin Ross is an Associate Professor of Statistics at Cal Poly San Luis Obispo. His research interests include probability, stochastic processes and applications as well as probability and statistics education.
Colin Rundel is a lecturer at the University of Edinburgh and an assistant professor of the practice in Statistical Science at Duke University. His research interests include applied spatial statistics with an emphasis on Bayesian statistics and computational methods.
Mine Çetinkaya-Rundel (University of Edinburgh/RStudio) & Alex Reinhart (Carnegie Mellon University)
Tuesday, January 26, 2021 - 4:00pm ET
The Journal of Statistics and Data Science Education special issue on “Computing in the Statistics and Data Science Curriculum” features a set of papers that provide a mosaic of curricular innovations and approaches that embrace computing. The collected papers (1) suggest creative structures to integrate computing, (2) describe novel data science skills and habits, and (3) propose ways to teach computational thinking.
In this webinar, we've invited two authors of papers in the special issue to talk about their work and to answer questions originally posed by Nolan and Temple Lang in their 2010 TAS paper "Computing in the Statistics Curriculum":
When they graduate, what ought our students be able to do computationally, and are we preparing them adequately in this regard?
Do we provide students the essential skills needed to engage in statistical problem solving and keep abreast of new technologies as they evolve?
Do our students build the confidence needed to overcome computational challenges to, for example, reliably design and run a synthetic experiment or carry out a comprehensive data analysis?
Overall, are we doing a good job preparing students who are ready to engage in and succeed at statistical inquiry?
Amy Nowacki, Cleveland Clinic and Cleveland Clinic Lerner College of Medicine
Wednesday, November 18, 2015 - 12:00pm ET
Statistics courses that focus on data analysis in isolation, discounting the scientific inquiry process, may not motivate students to learn the subject. By involving students in other steps of the inquiry process, such as generating hypotheses and data, students may become more interested and invested in the analysis step. Additionally, such an approach might better prepare students to tackle real research questions outside of the statistics classroom. Presented here is a classroom activity utilizing the popular Hasbro board game Operation, which requires student involvement in the entire research process. Highlighted are ways this activity uncovers a number of research issues. A number of categorical and continuous variables are collected, making the activity amenable to a variety of statistical investigations and thus easy to embed into any curriculum. Designed to mimic a real-world research scenario, this fun activity provides a guided yet flexible research experience from start to finish.
Leigh M. Harrell-Williams, University of Memphis and Rebecca L. Pierce, Ball State University
Wednesday, October 21, 2015 - 12:00pm ET
Based on our March 2015 JSE paper "Identifying Statistical Concepts Associated with High and Low Levels of Self-Efficacy to Teach Statistics in Middle Grades,” we discuss the results of a Rasch modeling analysis of pre-service mathematics teacher responses to the middle grades Self-Efficacy to Teach Statistics (SETS) instrument. We share how we used Rasch measurement theory to develop the middle grades SETS instrument to measure pre-service teachers’ self-efficacy to teach topics at GAISE levels A and B as well as K–8 CCSSM statistics topics. SETS items ask teachers to rate their self-efficacy to teach a particular concept on a Likert scale from 1 (“not confident at all”) to 6 (“completely confident”). From data collected at four public institutions of higher education in the United States, we discuss what statistics topics pre-service teachers felt the most (or least) efficacious about and how that informs our continuing work.