Chocolate: tactile simulations without data collection

Jo Hardin – Pomona College, Claremont, CA

Many of us will agree that using tactile demonstrations is super fun and can also be an excellent way to teach a particular concept.  Students engage with the material differently when they can touch, smell, or taste the objects, as opposed to only seeing or listening to a demonstration.  The SBI blog has had many excellent articles describing in-class tactile simulations; see here, here, and here.

However, sometimes the logistics of setting up the demonstration take away too much from an already packed 50-minute class session.  And those details get even harder with large classes.  One of the biggest challenges is collecting data, or getting results back from the students.  Although some classes have sophisticated clickers that make data collection easier, setting up and using clickers is its own logistical challenge (well worth it for a tool used all semester, but not for a one-day class demonstration).

My view on classroom demonstrations is that doing most of the tactile demonstration can communicate the vast majority of the pedagogical ideas.  I will demonstrate what I mean with examples using chocolate chips.  I have used chocolate chips in class many times to teach two different concepts: (1) censored data analyzed with survival analysis (example taken from Practicing Statistics by Kuiper and Sklar) and (2) paired tests (example taken from Investigating Statistical Concepts, Applications, and Methods by Chance and Rossman).

In both classroom activities, I provide small Dixie cups with two visibly different types of chocolate chips (typically two of white chocolate, milk chocolate, semi-sweet chocolate, or peanut butter chips).

Then we spend some time as a class talking about the experiment and its goal.  For both of the statistical methods I cover in class, the goal is to compare the length of time it takes the two different types of chips to melt (summarized by the mean if doing a t-test, by the distribution if doing a signed-rank test, or by the survival curve if doing survival analysis).   I will give some details of the class interactions below, but many of these ideas are also discussed in the texts mentioned above and in the references.  For simplicity, I will describe using a paired t-test to determine whether milk or white chocolate chips melt faster, on average.
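Before getting to the classroom discussion, here is a minimal sketch of what that paired analysis boils down to in code.  The software choice (Python with scipy) is mine, and the melt times below are invented purely for illustration:

```python
# Minimal sketch of the paired analysis: melt times (in seconds) are invented,
# and each position corresponds to one student, who melts both chip types.
import numpy as np
from scipy import stats

milk  = np.array([48, 62, 55, 71, 50, 66, 58, 45, 60, 53])
white = np.array([52, 60, 63, 75, 49, 70, 61, 50, 64, 59])

diffs = white - milk                        # within-student differences

t_stat, t_p = stats.ttest_rel(white, milk)  # paired t-test on the mean difference
w_stat, w_p = stats.wilcoxon(diffs)         # signed-rank test on the differences

print(f"paired t-test:    t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"signed-rank test: W = {w_stat:.1f}, p = {w_p:.3f}")
```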

I always start with a basic question: What will we do to determine which chocolate chip melts faster on average?

Inevitably, I get a basic answer in return: Put chocolate chips in mouth, see how long it takes to melt, record data, decide which one melts faster.

And then I stop and ask them how in the world they’ll know how to do what they just suggested.  There is a tremendous amount of additional information necessary before running a viable experiment.  Among the important decisions to be made (by class consensus) are:

  • How is the chocolate chip going to reside in the mouth? Can you chew?  Can you use your tongue?  Can you move the chip?
  • Who gets which color? (Everyone gets both if paired!)
  • In what order do the chips get melted? Does everyone melt the same chip first?
  • How will the melting be timed?
  • How long do we wait between chips?

The conversation that ensues about the experimental design is incredibly valuable for understanding paired design (and the motivation for the pairing) or survival analysis (and the need for tools to analyze censored data).  The process of coming to a class consensus about the chip experiment requires the students to justify their decisions to their classmates.  For example, the class notes that if you hold the chip against the roof of your mouth, it’ll melt faster.  Oh yeah, and that brings up the fact that some people have mouth environments where chips melt faster (which is why we pair)!

I almost never collect the data that my students generate: partly because melting times for chips aren’t substantially different (so power is pretty low), and partly because collecting the data, entering it into the computer, and running the appropriate test does not provide the same pedagogical-idea-per-class-minute value that the earlier discussion of design did.  I can make up data, use the textbook’s data, or use student data from a previous year (if I have it).  The students can then quickly see the analysis done on screen.
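When I use the chips to teach survival analysis instead, the made-up data also need a censoring indicator (for example, for a chip that had not finished melting when timing stopped).  Here is a minimal sketch of that on-screen analysis, again with invented numbers and assuming the Python lifelines package (one software choice among many):

```python
# Minimal sketch of the survival-analysis version: melt times are invented,
# and event = 0 marks a censored observation (melting not observed by 90 s).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

time_milk   = np.array([48, 62, 55, 90, 50, 66, 58, 45, 90, 53])
event_milk  = np.array([ 1,  1,  1,  0,  1,  1,  1,  1,  0,  1])
time_white  = np.array([52, 60, 63, 90, 49, 70, 61, 50, 64, 90])
event_white = np.array([ 1,  1,  1,  0,  1,  1,  1,  1,  1,  0])

# Kaplan-Meier survival curves for each chip type, drawn on the same axes.
kmf = KaplanMeierFitter()
kmf.fit(time_milk, event_observed=event_milk, label="milk chocolate")
ax = kmf.plot_survival_function()
kmf.fit(time_white, event_observed=event_white, label="white chocolate")
kmf.plot_survival_function(ax=ax)

# Log-rank test comparing the two survival curves.
result = logrank_test(time_milk, time_white,
                      event_observed_A=event_milk, event_observed_B=event_white)
print(f"log-rank test: p = {result.p_value:.3f}")
```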

Don’t get me wrong, there are large benefits to collecting class data so that students can understand firsthand how variability plays a role.  I believe that collecting student data is particularly valuable when each student gets a different statistic (for example, the number of successes in a binomial trial), which can be put together as a class-generated sampling distribution, as in the sketch below.  But in the chip example, the vast majority of the learning happens in understanding the experimental design.  And because we must continually make choices in our classes, we should not be afraid to cut out the parts of a demonstration (here, collecting the data) we believe are less important to the learning.
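As a minimal sketch of what such a class-generated sampling distribution can look like (the class size, number of trials, and success probability below are made up, since they depend on the activity):

```python
# Minimal sketch: simulate a class-generated sampling distribution where each
# "student" contributes one statistic, their number of successes in 10 trials.
# Class size, number of trials, and success probability are made-up values.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
class_size, n_trials, p_success = 30, 10, 0.5

successes = rng.binomial(n=n_trials, p=p_success, size=class_size)

plt.hist(successes, bins=range(n_trials + 2), align="left", rwidth=0.8)
plt.xlabel("number of successes out of 10 trials")
plt.ylabel("number of students")
plt.title("Class-generated sampling distribution (simulated)")
plt.show()
```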

References:

Investigating Statistical Concepts, Applications, and Methods by Beth Chance and Allan Rossman, http://www.rossmanchance.com/iscam3/

Practicing Statistics by Shonda Kuiper and Jeff Sklar, http://web.grinnell.edu/individuals/kuipers/stat2labs/
