The Introductory Statistics Course: A Ptolemaic Curriculum?


Authors: George W. Cobb
Editors: Robert Gould
Volume: 1(1)
Pages: online
Year: 2007
Publisher: Technology Innovations in Statistics Education (TISE)
URL: http://repositories.cdlib.org/uclastat/cts/tise/
Abstract: 

As we begin the 21st century, the introductory statistics course appears healthy, with its emphasis on real examples, data production, and graphics for exploration and assumption-checking. Without doubt this emphasis marks a major improvement over introductory courses of the 1960s, an improvement made possible by the vaunted "computer revolution." Nevertheless, I argue that despite broad acceptance and rapid growth in enrollments, the consensus curriculum is still an unwitting prisoner of history. What we teach is largely the technical machinery of numerical approximations based on the normal distribution and its many subsidiary cogs. This machinery was once necessary, because the conceptually simpler alternative based on permutations was computationally beyond our reach. Before computers statisticians had no choice. These days we have no excuse. Randomization-based inference makes a direct connection between data production and the logic of inference that deserves to be at the core of every introductory course. Technology allows us to do more with less: more ideas, less technique. We need to recognize that the computer revolution in statistics education is far from over.
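The article itself contains no code, but a minimal sketch may help make the abstract's point concrete: a two-sample permutation test directly re-enacts the random assignment used in data production, which is the "conceptually simpler alternative based on permutations" the abstract describes. The Python example below is illustrative only; the data values and the choice of a difference in means are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes for two randomly assigned groups (made-up data).
treatment = np.array([24.0, 30.5, 28.1, 35.2, 27.4])
control = np.array([21.3, 25.0, 22.8, 26.9, 23.5])

observed_diff = treatment.mean() - control.mean()

# Re-randomize the group labels many times, recomputing the statistic each time.
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)
n_reps = 10_000
perm_diffs = np.empty(n_reps)
for i in range(n_reps):
    shuffled = rng.permutation(pooled)
    perm_diffs[i] = shuffled[:n_treat].mean() - shuffled[n_treat:].mean()

# Two-sided p-value: proportion of re-randomizations at least as extreme as observed.
p_value = np.mean(np.abs(perm_diffs) >= abs(observed_diff))
print(f"observed difference: {observed_diff:.2f}, permutation p-value: {p_value:.4f}")
```

No normal approximation or derived sampling distribution is needed here; the null distribution is built directly from the randomization itself, which is the pedagogical advantage the abstract argues for.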

