Journal Article

  • In their response to our paper, Nicholson and Ridgway agree with the majority of
    what we wrote. They echo our concerns about the misuse of inferential statistics and
    null hypothesis significance testing (NHST) in particular. Very little of their response
    explicitly challenges the points we made, but where it does, their defence of the use
    of inferential techniques does not stand up to scrutiny. Their statements are either
    contradictory, agreement ‘dressed up’ as disagreement, appeals to authority, semantic
    sleights of hand, or irrelevant to our original claims. It is not clear why such a
    response was needed.

  • White and Gorard make important and relevant criticisms of some of the methods
    commonly used in social science research, but go further by criticising the logical
    basis for inferential statistical tests. This paper comments briefly on matters on
    which we broadly agree with them and more fully on matters where we disagree. We
    agree that too little attention is paid to the assumptions underlying inferential
    statistical tests and to the design of studies, and that p-values are often
    misinterpreted. We show why we believe their argument concerning the logic of
    inferential statistical tests is flawed and how White and Gorard misrepresent the
    protocols of these tests, and we make brief suggestions for rebalancing the statistics
    curriculum.

  • Recent concerns about a shortage of capacity for statistical and numerical
    analysis skills among social science students and researchers have prompted a range
    of initiatives aiming to improve teaching in this area. However, these projects have
    rarely re-evaluated the content of what is taught to students and have instead
    focussed primarily on delivery. The emphasis has generally been on increased use of
    complex techniques, specialist software and, most importantly in the context of this
    paper, a continued focus on inferential statistical tests, often at the expense of other
    types of analysis. We argue that this ‘business as usual’ approach to the content of
    statistics teaching is problematic for several reasons. First, the assumptions
    underlying inferential statistical tests are rarely met, meaning that students are being
    taught analyses that should only be used very rarely. Second, all of the most
    common outputs of inferential statistical tests – p-values, standard errors and
    confidence intervals – suffer from a similar logical problem that renders them at best
    useless and at worst misleading. Eliminating inferential statistical tests from statistics
    teaching (and practice) would avoid the creation of another generation of
    researchers who either do not understand, or knowingly misuse, these techniques. It
    would also have the benefit of removing one of the key barriers to students’
    understanding of statistical analysis.

  • Data are abundant, and quantitative information about the state of society and the
    wider world surrounds us more than ever. Paradoxically, recent trends in public
    discourse point towards a post-factual world that seems content to ignore or
    misrepresent empirical evidence. As statistics educators we are challenged to
    promote understanding of statistics about society. In order to re-root public debate
    in facts rather than emotions and to promote evidence-based policy decisions,
    statistics education needs to embrace two areas widely neglected in secondary and
    tertiary education: understanding of multivariate phenomena and thinking with and
    learning from complex data.

  • The data revolution has given citizens access to enormous large-scale open
    databases. In order to take into account the full complexity of data, we have to
    change the way we think about the nature of data and its availability, the ways in
    which it is displayed and used, and the skills that are required for its interpretation.
    Substantial changes in the content and processes involved in statistics education are
    needed. This paper calls for the introduction of new pedagogical constructs and
    principles needed in the age of the data revolution. The paper deals with a new
    construct of statistical literacy. We describe principles and dispositions that will
    become the building blocks of our pedagogical model. Our model suggests that
    effective engagement with large-scale data, modelling and interpretation situations
    requires the presence of knowledge-bases as well as supporting dispositions.

  • “The Times They Are a-Changin’” says the old Bob Dylan song. But it is not just
    the times that are a-changin’. For statistical literacy, the very earth is moving under
    our feet (apologies to Carole King). The seismic forces are (i) new forms of
    communication and discourse and (ii) new forms of data, data display and human
    interaction with data. These upheavals in the worlds of communication and data are
    ongoing. If anything, the pace of change is accelerating. And with it, what it means to
    be statistically literate is also changing. So how can we tell what is important? We
    will air some enduring themes and guiding principles.

  • Statistical literacy involves engagement with the data one encounters. New forms
    of data and new ways to engage with data – notably via interactive data
    visualisations – are emerging. Some of the skills required to work effectively with
    these new visualisation tools are described. We argue that interactive data
    visualisations will have as profound an effect on statistical literacy as the
    introduction of statistics packages had on statistics in social science in the 1960s.
    Current conceptualisations of statistical literacy are too passive, lacking the
    exploratory element of data analysis. Statistical literacy should be conceived of as
    empowerment to engage effectively with evidence, and educators should seek to move
    students along a pathway from using interactive data visualisations to building them
    and interpreting what they see.

  • Mnemonics (memory aids) are often viewed as useful in helping students recall information and thereby possibly reducing stress and freeing up more cognitive resources for higher-order thinking. However, there has been little research on statistics mnemonics, especially for large classes. This article reports on the results of a study conducted during two consecutive fall semesters at a large U.S. university. In 2014, a large sample (n = 1487) of college students was asked about the usefulness of a set of 19 published statistics mnemonics presented in class, and in 2015, the students (n = 1468) were presented with 12 mnemonics related to inference and then asked whether or not they used mnemonics on that exam. This article discusses how students assess the usefulness of mnemonics and evaluates the relationship between using mnemonics and reducing anxiety. The relationship between mnemonic usage and the achievement of learning outcomes is also discussed, along with the study's limitations and implications for teaching.

  • Technology plays a critical role in supporting statistics education, and student comprehension is improved when simulations accompanied by dynamic visualizations are employed. Many web-based teaching tool applets programmed in Java/Javascript are publicly available (e.g., www.rossmanchance.com, www.socr.ucla.edu). These provide a user-friendly interface which is accessible and appealing to students in introductory statistics courses. However, not all statistics educators are fluent in Java/Javascript and may not be able to tailor these apps or develop their own. Shiny, a web application framework for R created by RStudio, facilitates applet development for educators who are familiar with R. We illustrate the utility, convenience, and versatility of Shiny through our collection of 17 freely available apps covering a range of topics and levels (found at www.statistics.calpoly.edu/shiny). Our Shiny source code is publicly available so that anyone may tailor our apps as desired. We provide feedback on how our apps have been used in statistics classes, including some challenges that were encountered. We also discuss the feasibility of building, launching, and deploying Shiny apps. A brief tutorial on installing and using Shiny is provided in the appendix, along with some teaching materials based on our Shiny apps. (A minimal, hypothetical sketch of a Shiny app appears after this list of abstracts.)

  • Statistical literacy skills and technological literacy skills are becoming increasingly entwined as the practice of statistics shifts toward more reliance on the power of technology. More and more, statistics educators suggest reforming introductory college statistics courses to include more emphasis on technology and modeling. But what is the impact of such a focus on student learning? This research examines a small sample of students who received a reform-oriented curriculum focused on modeling and simulation using TinkerPlots™ technology. The data reported here are from students' written work at the end of the term on their final assessment. They had access to TinkerPlots™ for the assessment, and we share the ways they used the technology to create statistical models. This work provides insights into the ways students construct models and how they interpret the models they construct within the context of the original statistical problem they were given. We describe how the technology used in this reform class appeared to frame students' ways of constructing a statistical model. We also discuss challenges of this approach for student thinking and share implications for teaching and future research.
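
A note on the Shiny abstract above: Shiny lets R users build interactive teaching
applets without writing Java/Javascript. As a concrete illustration, the following is a
minimal sketch of such an app. It is a hypothetical example written for this summary,
not one of the 17 apps in the authors' collection, and it assumes only the shiny
package and base R; it simulates repeated samples and plots the distribution of the
resulting sample means.

    # Minimal Shiny app (hypothetical illustration, not from the authors' collection).
    # It simulates the sampling distribution of the mean of an exponential population.
    library(shiny)

    ui <- fluidPage(
      titlePanel("Sampling distribution of the sample mean"),
      sidebarLayout(
        sidebarPanel(
          sliderInput("n", "Sample size", min = 2, max = 200, value = 30),
          numericInput("reps", "Number of samples", value = 1000, min = 100)
        ),
        mainPanel(plotOutput("hist"))
      )
    )

    server <- function(input, output) {
      output$hist <- renderPlot({
        # Draw 'reps' samples of size 'n' and compute each sample's mean.
        means <- replicate(input$reps, mean(rexp(input$n, rate = 1)))
        hist(means, breaks = 40, col = "steelblue",
             main = paste(input$reps, "sample means, n =", input$n),
             xlab = "Sample mean")
      })
    }

    shinyApp(ui = ui, server = server)

Saved as app.R, the sketch can be launched in a browser with shiny::runApp(); moving
the sliders re-runs the simulation interactively, which is the kind of dynamic
visualisation the abstract argues Shiny makes accessible to R-literate educators.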
