Adam Loy (Carleton College)
Tuesday, February 15, 2022 - 4:00pm ET
This month, we highlight the article Bringing Visual Inference to the Classroom by Adam Loy in our Journal of Statistics and Data Science Education webinar series. In the classroom, educators traditionally visualize inferential concepts using static graphics or interactive apps; for example, there is a long history of using apps to visualize sampling distributions. The lineup protocol for visual inference is a recent development in statistical graphics that creates an opportunity to build student understanding. Lineups are created by embedding a plot of the observed data in a field of null (noise) plots. This arrangement facilitates comparison and helps build student intuition about the difference between signal and noise. Lineups can be used to visualize randomization/permutation tests, diagnose models, and even conduct valid inference when distributional assumptions break down. In this webinar, Adam will introduce lineups and discuss how he uses them in his introductory statistics class.
https://aloy.github.io/classroom-vizinf/
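As a rough illustration of the protocol described above, here is a minimal Python sketch (the webinar's own materials use R; the data, the 20-panel layout, and the permutation-based nulls here are illustrative assumptions). It hides one scatterplot of "observed" data among 19 null plots in which y has been permuted to break any association with x:

```python
# Minimal lineup sketch: one plot of the observed data hidden among
# 19 null plots generated by permuting y (breaking any x-y signal).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2022)

# "Observed" data with a weak linear signal.
x = rng.uniform(0, 10, size=40)
y = 0.4 * x + rng.normal(0, 2, size=40)

n_panels = 20
true_panel = int(rng.integers(n_panels))  # position of the real data, kept secret

fig, axes = plt.subplots(4, 5, figsize=(10, 8), sharex=True, sharey=True)
for i, ax in enumerate(axes.flat):
    # Real data in one panel; permuted (null) data everywhere else.
    y_panel = y if i == true_panel else rng.permutation(y)
    ax.scatter(x, y_panel, s=10)
    ax.set_title(str(i + 1), fontsize=8)

plt.tight_layout()
plt.show()

# Reveal only after viewers pick the panel that looks "different".
print("The observed data are in panel", true_panel + 1)
```

If viewers can reliably pick out the real plot, the visible structure is unlikely to be noise, which is the intuition the lineup protocol builds on.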
Eric Vance (University of Colorado Boulder)
Tuesday, January 25, 2022 - 4:00pm ET
This month, we highlight the article Using Team-Based Learning to Teach Data Science by Eric Vance in our Journal of Statistics and Data Science Education webinar series. Team-Based Learning (TBL) is a pedagogical strategy that can help educators teach data science better by flipping the classroom and using small-group collaborative learning to actively engage students in doing data science. This approach also helps students achieve the workforce-relevant learning goals of effective communication, teamwork, and collaboration. In this webinar, Eric will describe the essential elements of TBL and answer questions about this appealing pedagogical strategy.
Eric A. Vance is an Associate Professor of Applied Mathematics, the Director of the Laboratory for Interdisciplinary Statistical Analysis (LISA) at the University of Colorado Boulder, and the Global Director of the LISA 2020 Network, which comprises 35 statistics and data science collaboration laboratories in 10 developing countries. He is a Fellow of the American Statistical Association (ASA) and winner of the 2020 ASA Jackie Dietz Award for the best paper in the (then) Journal of Statistics Education for "The ASCCR Frame for Learning Essential Collaboration Skills."
Dr. Philip M. Sedgwick, St. George’s, University of London, London UK
Wednesday, November 17, 2021 - 1:00pm ET
Null hypothesis significance testing (NHST) with a critical level of significance of 5% (P<0.05) has become the cornerstone of research in the health sciences, underpinning decision making. However, considerable debate exists about its value, with claims that it is misused and misunderstood. It has been suggested that this is because NHST and P-values are too difficult to teach and encourage dichotomous thinking in students. Consequently, as part of statistics reform, it has been proposed that NHST should no longer be taught in introductory courses. This presentation will instead consider whether the misuse of NHST principally results from it being taught in a mechanistic way, along with claims to knowledge in teaching and an erosion of good practice. Whilst hypothesis testing has shortcomings, this presentation advocates that it remain an essential component of the undergraduate curriculum. Students' understanding can be enhanced by providing philosophical perspectives on statistics, supplemented by overviews of Fisher's and Neyman-Pearson's theories. This helps students appreciate the underlying principles of statistics based on uncertainty and probability, as well as the contrast between statistical and contextual significance. Moreover, students need to appreciate when to use NHST rather than being taught it as the definitive approach to drawing inferences from data.
Julia Polak (University of Melbourne) & Di Cook (Monash University)
Tuesday, November 16, 2021 - 5:00pm ET
In the November CAUSE/Journal of Statistics and Data Science Education webinar series, we have invited the authors of a recently published JSDSE paper to share their experiences in running data competitions as part of classes on statistical learning. Kaggle is a data modeling competition service, where participants compete to build a model with lower predictive error than other participants. Several years ago, Kaggle released a simplified service, Kaggle InClass, that is ideal for instructors running competitions in a classroom setting. This webinar describes the results of an experiment to determine whether participating in a predictive modeling competition enhances learning; the evidence suggests it does. In addition, students were surveyed to examine whether the competition improved engagement and interest in the class. The authors will also discuss the main issues to consider when setting up a data competition in a class, including the technical aspects of using the Kaggle InClass platform.
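To make the competition mechanics concrete, here is a minimal Python sketch of how a Kaggle-style leaderboard score might be computed. The RMSE metric, the 30% public/70% private split, and all column names are illustrative assumptions for a regression task, not Kaggle InClass internals:

```python
# Sketch of leaderboard scoring: the instructor holds the true test
# labels; each student submission is scored on a hidden public/private split.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Instructor's hidden truth and a (simulated) student submission.
truth = pd.DataFrame({"id": range(100), "y": rng.normal(size=100)})
submission = truth.assign(y=truth["y"] + rng.normal(0, 0.5, size=100))

# Public scores are visible during the competition; private scores
# decide the final standings and discourage overfitting the leaderboard.
is_public = rng.random(len(truth)) < 0.3

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

merged = truth.merge(submission, on="id", suffixes=("_true", "_pred"))
print("public RMSE: ", rmse(merged.loc[is_public, "y_true"], merged.loc[is_public, "y_pred"]))
print("private RMSE:", rmse(merged.loc[~is_public, "y_true"], merged.loc[~is_public, "y_pred"]))
```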
Julia Polak is a lecturer in Statistics at the University of Melbourne. She has a broad range of research interests, including nonparametric methods, forecasting, and data visualisation. In addition, Julia has many years of experience teaching statistics and data science to a range of audiences.
Di Cook is a Professor in Econometrics and Business Statistics at Monash University in Melbourne. Her research is in the area of data visualisation, especially the visualisation of high-dimensional data using tours with low-dimensional projections, and projection pursuit. A current focus is on bridging the gap between exploratory graphics and statistical inference.
Syllabus: https://handbook.unimelb.edu.au/2017/subjects/mast90083
Tim Arnold (SAS Institute); Joan Garfield (Professor Emeritus of University of Minnesota); Jeff Witmer (Oberlin College)
Tuesday, October 19, 2021 - 4:00pm ET
In the October CAUSE/Journal of Statistics and Data Science Education webinar series, we will take a step back in time to talk with some of the founders of what was initially the "Journal of Statistics Education," which will publish its 30th volume in 2022.
In 1992, Daniel Solomon and colleagues organized a conference at North Carolina State University to explore the idea of an “Electronic Journal: Journal of Statistics Education”. Many ideas and considerable enthusiasm flowed.
The first issue of JSE was published in 1993 under the editorship of the late Jackie (E. Jacquelin) Dietz and the managing editorship of J. Tim Arnold. Other contributing editors included Joan Garfield, Robin Lock, and Jeff Witmer. The inaugural issue included, among other things, an interview with Fred Mosteller, an article on the structure and philosophy of the journal, and Joan Garfield’s widely cited paper “Teaching statistics using small-group cooperative learning”.
In this webinar, we will have a chance to hear from some of the founders about their vision for the journal from three decades ago, their reflections on what has transpired since then, and their prognostications for the future.
Tim Arnold is a Principal Software Developer at the SAS Institute. He served as the founding managing editor of JSE.
Joan Garfield is Professor Emeritus of the Department of Educational Psychology at the University of Minnesota. Joan served alongside the late J. Laurie Snell as co-editor of JSE’s “Teaching Bits, a Resource for Teachers of Statistics”.
Jeff Witmer is Professor of Mathematics at Oberlin College and is the current editor of the Journal of Statistics and Data Science Education. Jeff was a founding Associate Editor for JSE.
Useful (but not required) background reading includes:
Arnold: Structure and philosophy of the Journal of Statistics Education, https://www.tandfonline.com/doi/full/10.1080/10691898.1993.11910456
Rossman and Dietz: Interview with Jackie Dietz, https://www.tandfonline.com/doi/abs/10.1080/10691898.2011.11889616
Emily Griffith (North Carolina State University), Megan Higgs (Critical Inference LLC), and Julia Sharp (Colorado State University)
Tuesday, September 21, 2021 - 4:00pm ET
In the September CAUSE/Journal of Statistics and Data Science Education webinar series, we talk with Julia Sharp, Emily Griffith, and Megan Higgs, the co-authors of a forthcoming JSDSE paper entitled "Setting the stage: Statistical collaboration videos for training the next generation of applied statisticians" (https://www.tandfonline.com/doi/full/10.1080/26939169.2021.1934202).
Collaborative work is inherent to being a statistician or data scientist, yet opportunities for training and exposure to real-world scenarios are often only a small part of a student’s academic program. Resources to facilitate effective and meaningful instruction in communication and collaboration are limited, particularly when compared to the abundant resources available to support traditional statistical training in theory and methods. This paper helps fill the need for resources by providing ten modern, freely-available videos of mock collaborative interactions, with supporting discussion questions, scripts, and other resources. Videos are particularly helpful for teaching communication dynamics. These videos are set in the context of academic research discussions, though the scenarios are broad enough to facilitate discussions for other collaborative contexts as well. The videos and associated resources are designed to be incorporated into existing curricula related to collaboration.
Julia Sharp is an associate professor of statistics and the Director of the Graybill Statistics and Data Science Laboratory at Colorado State University. Julia is a widely recognized expert in statistical collaboration and recently was awarded the Outstanding Mentor Award from ASA's Section on Statistical Consulting. When she is not working, Julia enjoys baking, hiking, and spending time with family and friends.
Emily Griffith is an associate research professor of statistics at North Carolina State University. She is also a Fellow in the Office of Research Innovation working on development and strategy to further innovation in the university’s data sciences initiatives. In her free time, Emily enjoys running (even in the summer in NC), cooking, and hanging out with her family.
Megan Higgs has worked as a collaborative statistician in academia and private industry, and is now working independently as Critical Inference LLC and writing posts for a blog of the same name. She currently volunteers as editor of the International Statistical Institute’s “Statisticians React to the News” blog and serves on the ASA’s Climate Change Committee. Megan loves spending time with her family and pets in Montana.
Richelle Blair (Lakeland Community College); Ellen Kirkman (Wake Forest University); Dennis Pearl (Pennsylvania State University)
Tuesday, August 24, 2021 - 2:00pm ET
Every five years since 1965, on behalf of the Conference Board of the Mathematical Sciences (CBMS), the American Mathematical Society (AMS) has conducted a national survey of undergraduate mathematics and statistics programs and published reports detailing characteristics of curricula, course delivery, enrollments, instructional staff, student outcomes, and more. The planned 2020 Survey has turned out to be a departure from the past, taking place in two parts: a mid-pandemic survey late last year focused on departments' experiences with the effects of COVID-19, and a continuation later in 2021 of the larger longitudinal study begun decades ago. The panelists will discuss the objectives of the study, relate a few data stories emanating from prior iterations, share some of the COVID survey findings, and provide a look forward to the upcoming Survey and its follow-up.
Michael D. Swartz, PhD – Department of Biostatistics and Data Science at the University of Texas Health Science Center at Houston
Friday, June 11, 2021 - 2:00pm ET
The idea of developing a rubric for assessments or flipping lectures in an Applied Biostatistics (or even Applied Statistics) classroom can be overwhelming, but it does not have to be. I will lead a discussion introducing several ideas for building a rubric for statistics assignments and exams, and for flipping parts of a lecture to combine traditional lecture with interactive components. Using polling software such as PollEverywhere, these components can fully engage students in the classroom or in live synchronous sessions (such as teaching through Webex or Zoom), while also giving instructors real-time feedback on students' current comprehension of the material. One of the techniques can also be modified to increase engagement in an online-only format (pre-recorded lectures). Attendees at any level of experience with rubrics or flipped classrooms are welcome.
Pip Arnold (New Zealand) & Chris Franklin (ASA K-12 Statistics Ambassador/ASA Fellow/UGA Emerita)
Tuesday, May 25, 2021 - 4:00pm ET
In the May CAUSE/Journal of Statistics and Data Science Education webinar series, we discuss "What Makes a Good Statistical Question?" with Pip Arnold and Christine Franklin, the co-authors of a forthcoming paper in JSDSE (https://www.tandfonline.com/doi/full/10.1080/26939169.2021.1877582).
The statistical problem-solving process is key to the statistics curriculum at the school and post-secondary levels and in statistical practice. The process has four main components: formulate questions, collect data, analyze data, and interpret results. The Pre-K-12 Guidelines for Assessment and Instruction in Statistics Education (GAISE) emphasizes the importance of distinguishing between a question that anticipates a deterministic answer and a question that anticipates an answer based on data that will vary, referred to as a statistical question. This paper expands upon the Pre-K-12 GAISE distinction by addressing and identifying the different types of statistical questions used across the four components of the statistical problem-solving process and the importance of interrogating these different question types. Since the publication of the original Pre-K-12 GAISE document, research has helped to clarify the purposes of questioning at each component of the process, to clarify the language of questioning, and to develop criteria for answering the question, “What makes a good statistical question?”
Pip Arnold is a statistics educator who also sometimes masquerades as a mathematics educator. Her continuing interests include statistical questions, supporting K-10 teachers in developing their statistical content knowledge, and looking at ways to authentically integrate statistics across the curriculum. Pip has been developing a teacher's resource to support the teaching of statistics from K-10 for New Zealand teachers, based on the PPDAC statistical enquiry cycle that underlies statistical problem-solving in New Zealand.
Christine (Chris) Franklin is the ASA K-12 Statistics Ambassador, an ASA Fellow, and UGA Emerita Statistics faculty. She is the co-author of two introductory statistics textbooks, chair for the ASA policy documents Pre-K-12 GAISE (2005) and Statistical Education of Teachers (2015), and co-chair for the recently published Pre-K-12 GAISE II. She is a former AP Statistics Chief Reader and a past Fulbright scholar to NZ, where she and Pip began having many conversations about the role of questioning in the statistical problem-solving process.
Andrew Zieffler (University of Minnesota) & Nicola Justice (Pacific Lutheran University)
Tuesday, April 27, 2021 - 4:00pm ET
Classification trees and other algorithmic models are an increasingly important part of statistics and data science education. In the April CAUSE/Journal of Statistics and Data Science Education webinar series, we will talk with Andrew Zieffler and Nicola Justice, two of the co-authors of the forthcoming JSDSE paper entitled “The Use of Algorithmic Models to Develop Secondary Teachers' Understanding of the Statistical Modeling Process”: https://www.tandfonline.com/doi/full/10.1080/26939169.2021.1900759
Statistical modeling continues to gain prominence in the secondary curriculum, and recent recommendations to emphasize data science and computational thinking may soon bring algorithmic models into the school curriculum. Many teachers' preparation for and experiences teaching statistical modeling have focused on probabilistic models, and consequently much of the research literature on teachers' understanding has focused on probabilistic models as well. This study explores the extent to which secondary statistics teachers appear to understand ideas of statistical modeling, specifically the processes of model building and evaluation, when introduced using classification trees, a type of algorithmic model. The results suggest that while teachers were able to read and build classification tree models, they experienced more difficulty when evaluating models. Further research could continue to explore possible learning trajectories, technology tools, and pedagogical approaches for using classification trees to introduce ideas of statistical modeling.
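For readers unfamiliar with classification trees, the following is a minimal Python sketch of the building, reading, and evaluating steps the study examines, using scikit-learn on synthetic data; the features, labels, and tree depth are illustrative assumptions, not the study's actual materials:

```python
# Build, read, and evaluate a classification tree on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic data: a binary class driven by two numeric features plus noise.
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, size=300) > 0).astype(int)

# Building: fit a shallow tree on training data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# Reading: the fitted decision rules can be printed and inspected.
print(export_text(tree, feature_names=["feature_1", "feature_2"]))

# Evaluating: accuracy on held-out data, the step the study found hardest.
print("test accuracy:", tree.score(X_test, y_test))
```

The split between fitting on training data and scoring on held-out data mirrors the model-evaluation step that, per the abstract above, gave teachers the most difficulty.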
Andrew Zieffler is a Senior Lecturer and researcher in the Quantitative Methods in Education program within the Department of Educational Psychology at the University of Minnesota. His scholarship focuses on statistics education. His research interests have recently focused on teacher education and on how data science is transforming the statistics curriculum. You can read more about his work and interests at https://www.datadreaming.org/.
Nicola Justice studies how students and teachers learn statistics. As an assistant professor at Pacific Lutheran University, her passion is to help students develop into skillful and ethical data storytellers. When not teaching or learning, she likes to get outside with her family: hiking, exploring, and throwing rocks in water.