Literature Index

Displaying 1221 - 1230 of 3326
  • Author(s):
    Marcin Kozak
    Year:
    2009
    Abstract:
    This article suggests how to explain the problem of small sample sizes when considering the correlation between two normal variables. Two techniques are shown: one based on graphs and the other on simulation.
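A minimal sketch of the simulation technique the abstract mentions, assuming a true correlation of 0.6, a sample size of 10, and numpy; all of these choices are illustrative, not details from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

true_rho = 0.6   # hypothetical population correlation
n = 10           # deliberately small sample size
n_sims = 10_000  # number of simulated samples

cov = [[1.0, true_rho], [true_rho, 1.0]]
r_values = np.empty(n_sims)
for i in range(n_sims):
    # Draw one small sample from a bivariate normal distribution
    sample = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Record the sample Pearson correlation
    r_values[i] = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]

# The wide spread of sample correlations shows how unstable r is at n = 10
print(f"true rho: {true_rho}")
print(f"mean sample r: {r_values.mean():.3f}")
print(f"5th and 95th percentiles: {np.percentile(r_values, [5, 95]).round(3)}")
```

A histogram of r_values supplies the graphical companion to the simulation: at n = 10 the sample correlation routinely lands far from 0.6.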
  • Author(s):
    Abele, A.
    Editors:
    Davidson, R., & Swift, J.
    Year:
    1986
    Abstract:
    The so-called operative method has its roots in the work of J. Piaget, in particular his concept of the "operation" - both concrete and formal - and the associated psychological approach. We have learned from psychology that the learning of operational concepts has to be linked with the attributes of the operative method mentioned above. It is the experience of many teachers that pupils accept and make use of the operative attributes when problems are posed according to the operative principles, that pupils sometimes deal consciously with these attributes of tasks or problems, and that a special sort of active behavior develops during a long-term learning process following this didactical design. Pupils between 12 and 15 years of age - especially the "slow learners" - have to solve a large number of assignments before they can begin and carry out a process of generalization. One of the goals is to concretize learning through discovery; formal solutions therefore remain somewhat in the background, while more emphasis is placed upon the activities of the pupils and their dealing with variation in situations and tasks.
  • Author(s):
    Anna E. Bargagliotti
    Year:
    2012
    Abstract:
    Statistics and probability have become an integral part of mathematics education. It is therefore important to understand whether curricular materials adequately represent statistical ideas. The Guidelines for Assessment and Instruction in Statistics Education (GAISE) report (Franklin, Kader, Mewborn, Moreno, Peck, Perry, & Scheaffer, 2007), endorsed by the American Statistical Association, provides a two-dimensional (process and level) framework for statistical learning. This paper examines whether the statistics content contained in the NSF-funded elementary curricula Investigations in Number, Data, and Space; Math Trailblazers; and Everyday Mathematics aligns with the GAISE recommendations. Results indicate that there are differences among the curricula in the approaches used as well as in the GAISE components emphasized. Because the new Common Core State Standards place little emphasis on statistics in the elementary grades, it is important to ensure that the minimal amount of statistics presented aligns well with the recommendations put forth by the statistics community. The results in this paper provide insight into the type of statistical preparation students receive when using the NSF-funded elementary curricula. As the Common Core places great emphasis on statistics in the middle grades, these results can also be used to gauge whether students will be prepared for the middle school Common Core goals.
  • Author(s):
    Nisbett, R., & Ross, L.
    Year:
    1980
  • Author(s):
    Fischhoff, B., & Beyth-Marom, R.
    Year:
    1983
    Abstract:
    The use of statistics and probabilities as legal evidence has recently come under increased scrutiny. Judges' and jurors' ability to understand and use this type of evidence has been of special concern. Finkelstein and Fairley (1970) proposed introducing Bayes' theorem into the courtroom to aid the fact-finder in evaluating this type of evidence. The present study addressed individuals' ability to use statistical information as well as their ability to understand and use an expert's Bayesian explanation of that evidence. One hundred and eighty continuing education students were presented with a transcript purportedly taken from an actual trial and were asked to make several subjective probability judgments regarding blood-grouping evidence. The results extend to the trial process previous psychological research suggesting that individuals generally underutilize statistical information compared to a Bayesian model. In addition, subjects in this study generally ignored the expert's Bayesian explanation of the statistical evidence.
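For readers unfamiliar with the kind of Bayesian explanation the expert gave, here is a worked sketch of Bayes' theorem applied to match evidence. Every number below - the prior and the blood-group frequency - is hypothetical, chosen only to show the arithmetic, not a value from the study:

```python
# Bayesian update for match evidence (all numbers are illustrative)
prior_guilt = 0.25          # hypothetical prior probability of guilt
p_match_if_guilty = 1.0     # a guilty defendant shares the blood group
p_match_if_innocent = 0.10  # hypothetical population frequency of the group

# Bayes' theorem: P(G | match) = P(match | G) * P(G) / P(match)
p_match = (p_match_if_guilty * prior_guilt
           + p_match_if_innocent * (1 - prior_guilt))
posterior_guilt = p_match_if_guilty * prior_guilt / p_match

# 0.25 / 0.325 = 0.769: a large revision upward from the prior of 0.25
print(f"posterior probability of guilt given a match: {posterior_guilt:.3f}")
```

The gap between the prior (0.25) and the posterior (about 0.77) is exactly the revision that, per the study, subjects tend to underuse.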
  • Author(s):
    Granaas, M.
    Editors:
    Phillips, B.
    Year:
    2002
    Abstract:
    For many years null hypothesis testing (NHT) has been the dominant form of statistical analysis in psychology. It has also been subject to periodic criticisms from within the field. In the past decade these occasional criticisms have turned into a more or less steady stream, which has led some to call for an outright ban on NHT in psychology, while others have called for greater use of alternative procedures. The solution lies neither in banning NHT nor in relying solely on alternative procedures, but in "reforming" NHT: replacing atheoretical null hypotheses with theoretically meaningful ones. Such reform requires that the training of researchers emphasize parameter estimation and the testing of theoretical models, an approach that exists in some areas of psychology and appears to be common in other sciences. Such an emphasis will help ensure that the statistical hypothesis being tested matches the substantive hypothesis of interest. I will discuss the changes that are occurring in psychology and propose further changes that are still needed.
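To make the proposed shift concrete, here is a hedged sketch contrasting a bare null hypothesis test with the parameter estimation the author advocates; the simulated data, effect size, and use of scipy are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated outcome with a small but real effect (illustrative)
group = rng.normal(loc=0.3, scale=1.0, size=50)

# Null hypothesis test: asks only "is the mean exactly zero?"
t_stat, p_value = stats.ttest_1samp(group, popmean=0.0)
print(f"NHT: t = {t_stat:.2f}, p = {p_value:.3f}")

# Estimation: asks "how large is the effect, and how precisely is it known?"
mean = group.mean()
ci_low, ci_high = stats.t.interval(
    0.95, df=len(group) - 1, loc=mean, scale=stats.sem(group))
print(f"estimate: {mean:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The second output carries the information a theoretically meaningful hypothesis needs - a magnitude and its uncertainty - rather than a lone verdict on a point null.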
  • Author(s):
    Robert Gardner, Robert Davidson
    Year:
    2010
    Abstract:
    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
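A sketch of the kind of test students might run, assuming they have collected running times for films from two of the three populations; the times below are invented placeholders, not data from the article:

```python
import numpy as np
from scipy import stats

# Hypothetical short-film running times in minutes, recorded by students
# for films drawn from two of the three Stooges populations.
pop_a = np.array([16.5, 17.0, 18.2, 16.8, 17.5, 18.0, 16.2])
pop_b = np.array([15.8, 16.1, 17.0, 15.5, 16.4, 16.9, 15.9])

# Welch two-sample t-test: do the populations differ in mean running time?
t_stat, p_value = stats.ttest_ind(pop_a, pop_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```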
  • Author(s):
    O'Brien, T. E.
    Editors:
    Phillips, B.
    Year:
    2002
    Abstract:
    Applied researchers are often interested in obtaining confidence intervals for key nonlinear model parameters so as to answer important research questions, and the usual "plus and minus 2 SEs" confidence interval leads easily into the usual Wald hypothesis test covered in most introductory statistics courses. However, since information about a specific parameter is often asymmetric, a skewed confidence interval is often more appropriate and reasonable in practice. This leads to the use of likelihood-based tests, typically introduced in intermediate undergraduate and basic graduate courses. This paper argues that the superiority (in terms of, for example, increased power) of likelihood-based and score hypothesis tests over the Wald test is most easily conveyed and appreciated by first providing a reasonable motivation (as well as examples) using confidence intervals, and then exploiting the equivalence between confidence intervals and hypothesis tests.
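The asymmetry argument lends itself to a short sketch. The one-parameter exponential-decay model, the simulated data, and the chi-square cutoff below are illustrative assumptions, not examples from the paper; the point is only that the Wald interval is symmetric by construction while the likelihood-based interval follows the curvature of the fit:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(1)

def model(x, rate):
    # One-parameter nonlinear model: exponential decay
    return np.exp(-rate * x)

# Simulated data around a true rate of 0.8
x = np.linspace(0.1, 4.0, 20)
y = model(x, 0.8) + rng.normal(scale=0.05, size=x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0])
rate_hat, se = popt[0], np.sqrt(pcov[0, 0])

# Symmetric Wald interval: estimate plus and minus about 2 SEs
print(f"Wald 95% CI: [{rate_hat - 1.96 * se:.3f}, {rate_hat + 1.96 * se:.3f}]")

# Likelihood-based interval: keep the rate values whose residual sum of
# squares stays within a chi-square-calibrated margin of the minimum.
def rss(rate):
    return np.sum((y - model(x, rate)) ** 2)

rss_min = rss(rate_hat)
sigma2 = rss_min / x.size                     # plug-in error variance
cutoff = rss_min + sigma2 * chi2.ppf(0.95, df=1)
grid = np.linspace(rate_hat - 5 * se, rate_hat + 5 * se, 2001)
inside = grid[np.array([rss(r) for r in grid]) <= cutoff]
print(f"likelihood 95% CI: [{inside.min():.3f}, {inside.max():.3f}]")
```

The likelihood interval typically comes out asymmetric about the estimate, which is the motivation the paper recommends presenting before introducing the test equivalence.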
  • Author(s):
    Levine, D., & Rockhill, B.
    Editors:
    Goodall, G.
    Year:
    2006
    Abstract:
    We focus on the problem of ignoring statistical independence. A binomial experiment is used to determine whether judges could match, based on looks alone, dogs to their owners. The experimental design introduces dependencies such that the probability of a given judge correctly matching a dog and an owner changes from trial to trial. We show how this dependence alters the probability of a successful match of dog to owner, and thus alters the expected number of successful matches and the variance of that number. Finally, we show that a false assumption of independence, which results in incorrect probability calculations, changes the probability of incorrectly rejecting the null hypothesis (i.e., the Type I error rate).
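A simulation sketch of the dependence described above, under the simplifying assumption that the judge pairs each dog with a distinct owner, i.e. produces a random permutation; the number of dogs and the permutation model are illustrative stand-ins for the experiment's actual design:

```python
import numpy as np

rng = np.random.default_rng(7)
n_dogs = 5
n_sims = 100_000

# False independence model: n trials, each succeeding with probability 1/n
indep_matches = rng.binomial(n_dogs, 1.0 / n_dogs, size=n_sims)

# Dependent design: each dog goes to a distinct owner, so a correct match
# on one trial changes the success probabilities on the remaining trials.
dep_matches = np.empty(n_sims, dtype=int)
for i in range(n_sims):
    assignment = rng.permutation(n_dogs)
    dep_matches[i] = np.sum(assignment == np.arange(n_dogs))

print(f"independent: mean {indep_matches.mean():.3f}, var {indep_matches.var():.3f}")
print(f"dependent:   mean {dep_matches.mean():.3f}, var {dep_matches.var():.3f}")
```

Both models give one expected match, but the binomial variance is 1 - 1/n = 0.8 while the permutation variance is 1, so tail probabilities - and with them the Type I error rate of a test built on the independence assumption - come out wrong.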
  • Author(s):
    Yu, C. H., & Behrens, J. T.
    Year:
    1995
    Abstract:
    The objective of this study was to catalog undergraduate and graduate students' misconceptions in the area of power analysis and to examine the efficacy of a computer simulation in remedying these misconceptions.
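In the spirit of the simulation the study evaluated, a minimal sketch that estimates the power of a two-sample t-test by brute force; the effect size, alpha level, and sample size are illustrative choices, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, effect, alpha, n_sims = 20, 0.5, 0.05, 10_000

rejections = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(effect, 1.0, size=n)  # true difference of 0.5 SD
    _, p = stats.ttest_ind(a, b)
    rejections += p < alpha

# Power: the long-run proportion of correct rejections of the false null
print(f"simulated power: {rejections / n_sims:.3f}")
```

Seeing power emerge as a long-run frequency, rather than as a formula, is precisely the intuition such a simulation is meant to build.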
