Literature Index

Displaying 2861 - 2870 of 3326
  • Author(s):
    Noether, G. E.
    Year:
    1980
    Abstract:
It is argued that a nonparametric framework for the introductory statistics course is both mathematically and conceptually simpler than the customary normal-theory framework. Two examples are considered: the Kendall rank correlation coefficient versus the Pearson product-moment correlation coefficient, and the confidence interval for the median versus the confidence interval based on the one-sample t statistic.
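Noether's first example can be illustrated with a short, self-contained Python sketch (a hypothetical illustration, not code from the paper): the Kendall coefficient requires only counting concordant and discordant pairs, while the Pearson coefficient requires means, deviations, and square roots.

```python
import itertools
import math

def kendall_tau(x, y):
    # Kendall rank correlation: (concordant - discordant) / total pairs.
    concordant = discordant = 0
    for i, j in itertools.combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

def pearson_r(x, y):
    # Pearson product-moment correlation from means and deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
print(kendall_tau(x, y))
print(pearson_r(x, y))
```

The pair-counting form makes Noether's point concrete: the rank-based statistic can be explained with elementary combinatorics alone.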
  • Author(s):
    Lee, A. J., & Seber, G. A. F.
    Editors:
    Vere-Jones, D., Carlyle, S., & Dawkins, B. P.
    Year:
    1991
    Abstract:
In 1974, one of the authors (GAFS) introduced a terminating first-year service course in statistics, paper 26.181, for non-mathematics students. In the following years we observed that the service course 26.181 catered not only for students majoring in other subjects, but also for a substantial number of mathematics students who preferred a more practical approach than the traditional one. The numbers were also steadily growing (about 1500 in 1990). At that stage we realised that 26.181 provided a potential source of advancing statistics students who might be interested in taking a second-year follow-up course. In the 1980s Alan Lee (and probably most of us of that vintage) had been strongly influenced by works on exploratory data analysis by such people as Tukey, McNeil, Velleman and Hoaglin. Under Lee's direction a second-year data analysis course 26.281 was launched in 1981. His aim was to provide further training in practical statistics and data analysis without requiring too much mathematical knowledge or statistical theory. He realised that the students needed easy computer access for a realistic approach to data analysis. Suitable access to a mainframe was out of the question at the time, but micros seemed a viable alternative. A suitable statistical package was therefore developed called STATCALC (Lee et al., 1984), which had partly evolved from several programs on exploratory data analysis adapted from McNeil (1977) by Ross Ihaka. The package runs on IBM and Macintosh personal computers, the latter being currently used in our department. Lee and Peter Mullins also wrote a manual to go with the package. The manual, with its extensive tutorial section, also serves as the text for the course.
  • Author(s):
Dempster, M., & McCorry, N. K.
    Year:
    2009
    Abstract:
    Previous research has demonstrated that students' cognitions about statistics are related to their performance in statistics assessments. The purpose of this research is to examine the nature of the relationships between undergraduate psychology students' previous experiences of maths, statistics and computing; their attitudes toward statistics; and assessment on a statistics course. Of the variables examined, the strongest predictor of assessment outcome was students' attitude about their intellectual knowledge and skills in relation to statistics at the end of the statistics curriculum. This attitude was related to students' perceptions of their maths ability at the beginning of the statistics curriculum. Interventions could be designed to change such attitudes with the aim of improving students' learning of statistics.
  • Author(s):
Lee, H. S., Doerr, H. M., Tran, D., & Lovett, J. N.
    Year:
    2016
    Abstract:
    Repeated sampling approaches to inference that rely on simulations have recently gained prominence in statistics education, and probabilistic concepts are at the core of this approach. In this approach, learners need to develop a mapping among the problem situation, a physical enactment, computer representations, and the underlying randomization and sampling processes. We explicate the role of probability in this approach and draw upon a models and modeling perspective to support the development of teachers’ models for using a repeated sampling approach for inference. We explicate the model development task sequence and examine the teachers’ representations of their conceptualizations of a repeated sampling approach for inference. We propose key conceptualizations that can guide instruction when using simulations and repeated sampling for drawing inferences.
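As a minimal illustration of what a repeated sampling approach to inference looks like computationally (a generic sketch, not the authors' model development task sequence), a null model such as a fair coin can be simulated many times and the observed result compared against the simulated sampling distribution:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def repeated_sampling_pvalue(observed_heads, n_flips, n_reps=10_000):
    # Enact the null "fair coin" model n_reps times and count how often
    # a simulated sample is at least as extreme as the observed one.
    extreme = 0
    for _ in range(n_reps):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads >= observed_heads:
            extreme += 1
    return extreme / n_reps

# e.g. 15 heads in 20 flips under the fair-coin null
print(repeated_sampling_pvalue(15, 20))
```

The mapping the abstract describes is visible in the code: the problem situation (15 heads in 20 flips), the enactment (simulated flips), and the underlying sampling process (repeating the simulation) are each an explicit step.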
  • Author(s):
    Bar-Hillel, M.
    Year:
    1979
    Abstract:
This study presents a series of experiments showing that judgments can be elicited in which perceived sample accuracy increases with sample size.
  • Author(s):
    Sedlmeier, P.
    Editors:
    Rossman, A., & Chance, B.
    Year:
    2006
    Abstract:
Statistical analysis for social scientists very often means statistical analysis of some questionnaire data. The meaning of the numbers obtained in such analyses depends very much on the kind of scales used. In this paper it is shown that the meaning of numbers can also depend on how exactly the scales are constructed. First, some background information is given about how scales of the same type (e.g., interval scales) can differ considerably in meaning, and then a series of study results with interval, ordinal, and nominal scales that demonstrate these differential effects is reported. It is argued that such results can easily be replicated in statistics classes. It is further argued that due to the preponderance of scales in social science research, statistics courses should put an emphasis on teaching the correct use and interpretation of scales. Here, demonstrations such as the ones described in this paper can play a helpful role.
  • Author(s):
    Wood, M.
    Editors:
    Stephenson, W. R.
    Year:
    2005
    Abstract:
This article explores the uses of a simulation model (the two bucket story), implemented by a stand-alone computer program or an Excel workbook (both on the web), that can be used for deriving bootstrap confidence intervals and simulating various probability distributions. The strengths of the model are its generality, the fact that it provides a powerful approach that can be fully understood with very little technical background, and the fact that it encourages an active approach to statistics: the user can see the method being acted out either physically, or in imagination, or by a computer. The article argues that this model and other similar models provide an alternative to conventional approaches to deriving probabilities and making statistical inferences. These simulation approaches have a number of advantages compared with conventional approaches: their generality and robustness; the amount of technical background knowledge is much reduced; and, because the methods are essentially sequences of physical actions, it is likely to be easier to understand their interpretation and limitations.
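The bootstrap idea the article builds on can be sketched in a few lines of Python (a generic percentile bootstrap, not Wood's two bucket program or STATCALC): resample the data with replacement, collect the statistic of interest, and read the interval off the empirical quantiles.

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

def bootstrap_ci(data, stat=statistics.mean, n_reps=5000, level=0.95):
    # Percentile bootstrap: resample with replacement, collect the
    # statistic, and take empirical quantiles as the interval endpoints.
    reps = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_reps)
    )
    lo_idx = int((1 - level) / 2 * n_reps)
    hi_idx = int((1 + level) / 2 * n_reps) - 1
    return reps[lo_idx], reps[hi_idx]

sample = [23, 19, 31, 25, 22, 27, 24, 20, 29, 26]
lo, hi = bootstrap_ci(sample)
print(lo, hi)
```

The method is literally a sequence of physical actions (draw, record, repeat), which is the property the article credits for making its interpretation and limitations easier to grasp.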
  • Author(s):
    Garfield, J. B.
    Year:
    2000
    Abstract:
    This paper defines statistical reasoning, provides a model of statistical reasoning, and discusses the assessment of statistical reasoning.
  • Author(s):
    Jordan, J., & Haines, B.
    Editors:
    Stephenson, W. R.
    Year:
    2006
    Abstract:
    Discussions of quantitative literacy have become increasingly important, and statistics educators are well aware of the link between statistics education and quantitative literacy. Both the statistics education and quantitative literacy movements have emphasized the importance of students practicing skills in multiple contexts - a goal also consistent with a quantitative reasoning across-the-curriculum approach. In this paper, we consider two sources of information: 1) Our data from statistics courses and other quantitative-intensive courses at Lawrence University and 2) a review of the research literature on transfer of quantitative concepts across contexts. Through analysis of these sources, we further explore the link between statistics education and quantitative literacy, and argue for an across-the-curriculum approach to teaching quantitative reasoning. Moreover, we make specific suggestions to statistics educators on their role in the quantitative literacy movement.
  • Author(s):
    Thompson, C. J.
    Editors:
    Vere-Jones, D., Carlyle, S., & Dawkins, B. P.
    Year:
    1991
    Abstract:
    Statistics is the collection, arrangement and interpretation of numerical facts or data. Here we have the ideal vehicle for this transformation, the means by which we can demonstrate the relevance of numeracy skills instead of just calling for them. Note here that I am not talking about theoretical statistics, but about the sensible use of numbers, the use of display techniques such as graphs and charts, and the extraction of information from numbers. These ideas can and should be applied in all subject areas.

The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education