eCOTS 2014 - Virtual Poster #16

"Comparing Active Learning and Traditional Lecture Introductory Statistics Classes at Montana State University (Fall 2013)"
Katie Banner & Jim Robison-Cox, Montana State University


Two versions of an introductory statistics course are available to students at MSU. One takes a lecture approach based on De Veaux's Stats: Data and Models, while the other is built around group activities and computer simulation with minimal lecture and is a modified version of the CATALST curriculum (Garfield, J. and delMas, R. et al., 2008-2012). Learning objectives are the same for both delivery modes: students should understand the basics of statistical inference. Our study was designed to assess and compare content knowledge across these two curricula. The traditional introductory statistics class covers basic probability, the concept of sampling distributions, and z and t tests for means, proportions, and comparisons of means and proportions. Students in the traditional class attend lecture, where they sit at individual desks and spend the majority of class time taking notes. In contrast, MSU has recently remodeled two classrooms as Technology Enhanced, Active Learning (TEAL) environments that seat nine students (three groups of three) at seven-foot circular tables, each with a connector to a wall-mounted flat-screen monitor. TEAL and traditional class sizes are very similar, running between 35 and 40 students. Students in the TEAL version of the course use computer simulation to explore random events, simulate null distributions for hypothesis testing, and build confidence intervals with bootstrapping software. Two instructors present the activity and do a wrap-up, but spend most of their time circulating through the room checking on how groups are problem solving. We have modified the CATALST curriculum to use different software and to prepare students for a subsequent statistics course. To assess differences in content knowledge between the two versions of the course, we asked all students similar questions on their finals and compared success rates. The questions were adapted from the GOALS assessment (delMas, 2012).
We briefly summarize the questions chosen and categorize them into content knowledge groups (scope of inference, power, variability, interpretation of confidence intervals, understanding of a hypothesis test, and interpretation of a p-value). We compare success rates for each question and discuss our inferences about differences in content knowledge between the two versions of introductory statistics offered at MSU in the Fall 2013 semester.
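The percentile bootstrap used in the TEAL sections can be sketched in a few lines. This is a generic illustration, not the software the course actually used; the data values and number of resamples below are made up for the example.

```python
import random

random.seed(42)  # for a reproducible illustration

# Hypothetical sample of 20 measurements (values are illustrative only)
sample = [2.1, 1.8, 2.5, 2.0, 1.9, 2.3, 2.7, 1.6, 2.2, 2.4,
          1.7, 2.0, 2.6, 1.9, 2.1, 2.3, 1.8, 2.2, 2.5, 2.0]

def bootstrap_ci(data, n_boot=10_000, level=0.95):
    """Percentile bootstrap confidence interval for the mean:
    resample with replacement, record each resample's mean,
    then take the middle `level` fraction of the sorted means."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

Simulating a null distribution for a hypothesis test works the same way, except the resampling is done under the null (e.g., shuffling group labels) and the observed statistic is compared against the simulated distribution.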





Nicholas Horton:

This is a nice comparison of a traditional vs. active learning intro stats course. It would have been nice to have some additional qualitative data from the students about their impressions of the course (though the comparison of the percent correct on the MOST assessment is useful). @askdrstats

Katharine Banner:

Thank you, Nicholas. That is a very good suggestion; it may be nice to do an attitude survey both before and after to see if, on average, attitude changes more in one version of the course. We will definitely consider it for the fall.

Travis Weiland:

Did you do any kind of pre-assessment for both classes to determine if there were any significant differences to begin with?

Katharine Banner:

We did not do any pre-assessment per se, but we do have access to pre-requisite data for all students (such as SAT, ACT, MPLEX (our math placement exam) scores, pre-requisite courses, all previous math courses and their grades, GPA, age, year in school, etc.), and we could build some of these summary measures into a model or look at some of these variables across both courses using exploratory techniques. Thank you for your question.

Dennis Pearl:

Nice presentation. Thanks. Continuing to follow these courses from semester to semester and assessing each change made (as mentioned at the end) will make this work increasingly interesting as time rolls on.

I was curious about the question on power where the TEAL students did much worse. It seems to me that the question required both a knowledge of what Type I and Type II errors mean as well as knowledge of the idea that a larger sample size can help reduce both. It is very possible that students in the TEAL classes are doing worse just on the definitional part of the problem and not necessarily on the statistical concept regarding the effect of sample size.

Jim Robison-Cox:

Thanks, Dennis,

We think that neither curriculum provided enough time and practice to absorb the concepts of the types of error. An instructor this spring pointed out that we don't reduce the probability of Type I error with a larger sample size, because we set it to our favorite significance level. It might be better to ask how we can reduce p-values and the probability of Type II error simultaneously.

Dennis Pearl:

Good point. If you are going to teach hypothesis testing, I vote for teaching how to interpret the p-value rather than fixing alpha and then using it so strictly that 0.049 is interpreted differently than 0.051.

Katharine Banner: