Literature Index

Displaying 1611 - 1620 of 3326
  • Author(s):
    Watnik, M., & Levine, R. A.
    Year:
    2001
    Abstract:
    The dataset associated with this paper is from the 2000 regular season of the National Football League (NFL). We use principal components techniques to evaluate team "strength." In some of our analyses, the first two principal components can be interpreted as measures of "offensive" and "defensive" strengths, respectively. In other circumstances, the first principal component compares a team against its opponents.
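    A minimal sketch of the kind of analysis this abstract describes (not the authors' code): principal components computed from a standardized team-by-statistic matrix, with the first two components inspected as candidate "offensive" and "defensive" strength measures. The data, the four columns, and their names are assumptions for illustration only.

    ```python
    # Hedged illustration, not Watnik & Levine's analysis: PCA on a made-up
    # matrix of team statistics (rows = teams, columns = per-game measures).
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical data: 32 teams x 4 standardized per-game measures
    # (points scored, yards gained, points allowed, yards allowed).
    X = rng.standard_normal((32, 4))
    X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize columns

    # Principal components via the eigendecomposition of the correlation matrix.
    corr = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]              # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    scores = X @ eigvecs                           # team scores on each component
    print("proportion of variance:", eigvals / eigvals.sum())
    print("loadings of the first two components:\n", eigvecs[:, :2])
    # If the first component loads mainly on the offensive columns and the
    # second mainly on the defensive ones, they can be read as "offensive" and
    # "defensive" strength, as in the abstract's interpretation.
    ```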
  • Author(s):
    Callaert, H.
    Year:
    1999
    Abstract:
    Students in an applied statistics course offering some nonparametric methods are often (subconsciously) restricted in modeling their research problems by what they have learned from the t-test. When moving from parametric to nonparametric models, they do not have a good idea of the variety and richness of general location models. In this paper, the simple context of the Wilcoxon-Mann-Whitney (WMW) test is used to illustrate alternatives where "one distribution is to the right of the other." For those situations, it is also argued (and demonstrated by examples) that a plausible research question about a real-world experiment needs a precise formulation, and that hypotheses about a single parameter may need additional assumptions. A full and explicit description of underlying models is not always available in standard textbooks.
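    As a concrete companion to the abstract's point (a hedged sketch, not taken from the paper): the Wilcoxon-Mann-Whitney test applied to two made-up samples, one shifted to the right of the other. The samples, their sizes, and the shift are assumptions for illustration.

    ```python
    # Hedged sketch: a minimal Wilcoxon-Mann-Whitney comparison of two samples
    # where one distribution lies to the right of the other.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(1)
    x = rng.normal(loc=0.0, scale=1.0, size=20)    # control sample
    y = rng.normal(loc=0.8, scale=1.0, size=20)    # shifted to the right

    # One-sided WMW test against the alternative that Y tends to exceed X.
    stat, p = mannwhitneyu(x, y, alternative="less")
    print(f"U = {stat:.1f}, p = {p:.4f}")
    # As the paper stresses, what a rejection says about the two distributions
    # depends on the model assumed; turning it into a claim about a single
    # location parameter needs extra assumptions (e.g., a pure shift model).
    ```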
  • Author(s):
    Rouanet, H., Bernard, J. M., & Lecoutre, B.
    Year:
    1986
    Abstract:
    The familiar sampling procedures of statistical inference can be recast within a purely set-theoretic (ST) framework, without resorting to probabilistic prerequisites. This article is an introduction to the ST approach of statistical inference, with emphasis on its attractiveness for teaching. The main points treated are unsophisticated ST significance testing and ST inference for a relative frequency (proportion).
  • Author(s):
    Starr, N.
    Year:
    1997
    Abstract:
    The 1970 draft lottery for birthdates is reviewed as an example of a government effort at randomization whose inadequacy can be exhibited by a wide variety of statistical approaches. Several methods of analyzing these data -- which were of life-and-death importance to those concerned -- are given explicitly and numerous others are cited. In addition, the corresponding data for 1971 and for 1972 are included, as are the alphabetic lottery data, which were used to select draftees by the first letters of their names. Questions for class discussion are provided. The article ends with a survey of primary and secondary sources in print.
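    One of the simplest checks used in analyses of this kind is a rank correlation between a birthdate's position in the calendar and the draft number it received; the sketch below (not from the article) shows that computation. The file name and column layout are assumptions for illustration only.

    ```python
    # Hedged sketch: Spearman rank correlation between day of year and the
    # 1970 draft number assigned to that birthdate. The CSV path and column
    # names are hypothetical.
    import csv
    from scipy.stats import spearmanr

    days, numbers = [], []
    with open("draft_lottery_1970.csv") as f:      # hypothetical file with
        for row in csv.DictReader(f):              # columns day_of_year, draft_number
            days.append(int(row["day_of_year"]))
            numbers.append(int(row["draft_number"]))

    rho, p = spearmanr(days, numbers)
    print(f"Spearman rho = {rho:.3f}, p = {p:.4g}")
    # Under an adequate randomization rho should be near zero; a clearly
    # nonzero correlation between calendar position and draft number is the
    # kind of inadequacy the article's analyses exhibit.
    ```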
  • Author(s):
    Yost, P. A., Siegel, A. E., & Andrews, J. M.
    Year:
    1962
    Abstract:
    Although few adults would be able to define probability with any precision, and in fact definitions of probability are a matter for dispute among logicians and mathematicians, most adults are able to behave effectively in probabilistic situations involving quantitative proportions of independent elements. Piaget has studied the behavior of children in a probabilistic situation and from their behavior has concluded that young children (say up to age 7) are unable to utilize a concept of probability. The present study is a demonstration that young children are able to behave in terms of the probability concept under appropriate conditions. It is an experiment in which Piaget's technique for assessing the probability concept in young children is compared with a decision-making technique.
  • Author(s):
    Bellera, C. A., Julien, M., & Hanley, J. A.
    Year:
    2010
    Abstract:
    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-t statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were presented in tables, often laid out in a way that saves space but makes them confusing to look up. Recently, a number of textbooks have bypassed the tables altogether, and suggested using normal approximations to these distributions, but these texts are inconsistent as to the sample size n at which the standard normal distribution becomes more accurate as an approximation. In the context of non-normal data, students can find the use of this approximation confusing. This is unfortunate given that the reasoning behind - and even the derivation of - the exact distributions can be so easy to teach, and can also help students understand the logic behind rank tests. This note describes a heuristic approach to the Wilcoxon statistics. Going back to first principles, we represent graphically their exact distributions. To our knowledge (and surprise) these pictorial representations have not been shown earlier. These plots illustrate very well the approximate normality of the statistics with increasing sample sizes, and importantly, their remarkably fast convergence.
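    The exact null distribution the abstract refers to is easy to enumerate for small samples; the sketch below (not the authors' code) tabulates it for the rank-sum statistic and sets it beside the usual normal approximation. The group sizes are an assumption chosen to keep the enumeration small.

    ```python
    # Hedged sketch: exact null distribution of the Wilcoxon rank-sum statistic
    # W (sum of the ranks held by the first group), built by enumerating every
    # possible assignment of ranks, then compared with the normal approximation.
    from itertools import combinations
    from collections import Counter
    import math

    n, m = 5, 5                                   # hypothetical group sizes
    N = n + m
    counts = Counter(sum(c) for c in combinations(range(1, N + 1), n))
    total = math.comb(N, n)

    mean = n * (N + 1) / 2                        # E[W] under the null
    var = n * m * (N + 1) / 12                    # Var[W] under the null

    def normal_cdf(z):
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    cum = 0
    for w in sorted(counts):                      # lower-tail probabilities
        cum += counts[w]
        exact = cum / total
        approx = normal_cdf((w + 0.5 - mean) / math.sqrt(var))  # continuity corr.
        print(f"W <= {w:2d}: exact {exact:.4f}   normal approx {approx:.4f}")
    # Plotting counts.values() against sorted(counts) gives the kind of exact
    # distribution picture the abstract describes; it already looks close to a
    # normal curve even at these small sample sizes.
    ```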
  • Author(s):
    Zahn, D. A.
    Year:
    1992
    Abstract:
    In this article I will describe:
    - how my minute paper questions have evolved,
    - how my minute paper process has evolved, and
    - things I have learned from and about minute papers.
  • Author(s):
    Williams, A. M.
    Year:
    1999
    Abstract:
    Examination of the statistical literature shows that consensus on definition, terminology, and interpretation of some hypothesis testing concepts is elusive. This makes hypothesis testing a difficult topic to teach and learn. This paper reports on the results of a study of novice students' conceptual knowledge of four hypothesis testing concepts through talking aloud and interview methods. While some students seemed to have a reasonable understanding of some concepts, many students seemed to have more limited understanding. The study explores students' faulty conceptual knowledge.
  • Author(s):
    Konold, C., Lohmeier, J., Pollatsek, A., Well, A., Falk, R., & Lipson, A.
    Year:
    1991
    Abstract:
    Novices and experts rated 18 phenomena as random or non-random and gave justifications for their decisions. Experts rated more of the situations as random than novices. Roughly 90% of the novice justifications were based on reasoning via a) equal likelihood, b) possibility, c) uncertainty, and d) causality.
  • Author(s):
    Nickerson, R. S.
    Year:
    2000
    Abstract:
    Null hypothesis significance testing (NHST) is arguably the most widely used approach to hypothesis evaluation among behavioral and social scientists. It is also very controversial. A major concern expressed by critics is that such testing is misunderstood by many of those who use it. Several other objections to its use have also been raised. In this article the author reviews and comments on the claimed misunderstandings as well as on other criticisms of the approach, and he notes arguments that have been advanced in support of NHST. Alternatives and supplements to NHST are considered, as are several related recommendations regarding the interpretation of experimental data. The concluding opinion is that NHST is easily misunderstood and misused but that when applied with good judgment it can be an effective aid to the interpretation of experimental data.

The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association's Section on Statistics and Data Science Education.