Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong


Authors: 
Patrick White and Stephen Gorard
Year: 
2017
URL: 
http://iase-web.org/documents/SERJ/SERJ16(1)_White.pdf
Abstract: 

Recent concerns about a shortage of capacity for statistical and numerical
analysis skills among social science students and researchers have prompted a range
of initiatives aiming to improve teaching in this area. However, these projects have
rarely re-evaluated the content of what is taught to students and have instead
focussed primarily on delivery. The emphasis has generally been on increased use of
complex techniques, specialist software and, most importantly in the context of this
paper, a continued focus on inferential statistical tests, often at the expense of other
types of analysis. We argue that this ‘business as usual’ approach to the content of
statistics teaching is problematic for several reasons. First, the assumptions
underlying inferential statistical tests are rarely met, meaning that students are being
taught analyses that should only be used very rarely. Second, all of the most
common outputs of inferential statistical tests – p-values, standard errors and
confidence intervals – suffer from a similar logical problem that renders them at best
useless and at worst misleading. Eliminating inferential statistical tests from statistics
teaching (and practice) would avoid the creation of another generation of
researchers who either do not understand, or knowingly misuse, these techniques. It
would also have the benefit of removing one of the key barriers to students’
understanding of statistical analysis.
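
As an illustrative aside (not part of the paper): one form of the "logical problem" the authors describe is that a p-value is the probability of data at least this extreme assuming the null hypothesis is true, not the probability that the null hypothesis is true given the data. A minimal simulation sketch, assuming NumPy and SciPy are available (all parameter names and settings here are hypothetical choices for illustration only), shows that the rate at which "significant" results come from a true null depends on how often the null is true in the first place, which the p-value alone cannot tell us:

```python
# Illustrative sketch only (not from the paper): a p-value is not the
# probability that the null hypothesis is true given the data.
# Assumes NumPy and SciPy are installed; all settings are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 20_000   # number of simulated studies
n_per_group = 30         # sample size per group
prop_null_true = 0.8     # assumed share of studies with no real effect
effect_size = 0.5        # standardised mean difference when an effect exists
alpha = 0.05             # conventional significance threshold

null_true_and_significant = 0
significant = 0

for _ in range(n_experiments):
    null_is_true = rng.random() < prop_null_true
    a = rng.normal(0.0, 1.0, n_per_group)
    shift = 0.0 if null_is_true else effect_size
    b = rng.normal(shift, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)   # two-sample t-test
    if p < alpha:
        significant += 1
        if null_is_true:
            null_true_and_significant += 1

# Among results with p < alpha, how often was the null actually true?
print(f"P(null true | p < {alpha}): {null_true_and_significant / significant:.2f}")
```

Under these assumed settings the printed conditional rate comes out well above 5%; the point is only that a significance threshold by itself does not supply the probability that the null hypothesis is true.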
 
