Literature Index

  • Author(s):
    Tabachnick, B. G., & Fidell, L. S.
    Year:
    1991
    Abstract:
Survey data were collected from 139 undergraduate and graduate faculty regarding their commercial software preferences for advanced analysis of variance (ANOVA) courses. Despite the availability of many new statistical software packages, the top 3 packages in popularity for ANOVA applications were SPSS, SAS, and BMDP, all of which appeared originally as mainframe software. The 4th package in terms of popularity was SYSTAT, originally developed for IBM-compatible microcomputers. Although ease of use was the most important criterion for choosing a software package, the most frequently cited reason for not using 1 of the 4 most popular programs was difficulty for students.
  • Author(s):
    Dolbear, F. T., Jr.
    Year:
    1988
    Abstract:
Three aspects to be considered when teaching a one-semester beginning economics statistics course are coverage, mastery, and applications. There is a difference between coverage and mastery. Moreover, mastery is not an end in itself; instructors must consider how statistics courses will influence students' approaches to other subjects and applications. In principle, computer activities can be designed and implemented to improve any and all of these three goals. The HyperCard software for the Macintosh computer should result in an important advance in the interface between computer and user. This will be valuable for tutorial programs. Cognitive scientists are designing software which analyzes student solutions to standard problems by inferring a student's intentions from the details of her/his solution and then offering diagnostic assistance. Programs like "Stat Helper" (briefly discussed) for the Macintosh allow students to interact with the computer in solving a variety of problems. Students can learn about regression better through hands-on experience on personal or mainframe computers. Computer experiments can illustrate a variety of points about regression applications. Computers can expand coverage and make applications more accessible to the average student. Students must develop some sense of what questions regressions can and cannot be expected to answer. Four examples, including graphs and statistical data, are given: automobile weight and fuel mileage, polynomial (quadratic), omitted independent variable, and logarithmic relationship. Nine references and numerous tables and graphs are provided.
  • Author(s):
    Biehler, R.
    Year:
    1997
    Abstract:
The community of statisticians and statistics educators should take responsibility for the evaluation and improvement of software quality from the perspective of education. The paper develops a perspective, an ideal system of requirements, for critically evaluating existing software and for producing future software more adequate for both learning and doing statistics in introductory courses. Different kinds of tools and microworlds are needed. After discussing general requirements for such programs, a prototypical ideal software system is presented in detail. It is illustrated how such a system could be used to construct learning environments and to support elementary data analysis with an exploratory working style.
  • Author(s):
    Ben-Zvi, D.
    Editors:
    J. Garfield & G. Burrill
    Year:
    1997
    Abstract:
TinkerPlots is discussed in detail. Examples are provided to illustrate innovative uses of technology. In the future, these uses may also be supported by a wider range of new tools still to be developed. To summarize some of the findings, the role of digital technologies in statistical reasoning is metaphorically compared with travelling between data and conclusions, where these tools represent fast modes of transport. Finally, we suggest future directions for technology in research and practice of developing students' statistical reasoning in technology-enhanced learning environments.
  • Author(s):
    Biehler, R.
    Editors:
    Keitel, C., & Ruthven, K.
    Year:
    1993
    Abstract:
The overall goal of this chapter is to discuss perspectives for using technology in statistics education. For this purpose, we will analyse and illustrate new revolutionary developments in statistics itself. We intend to make sense of statistical software tools by relating them to subject matter developments and to analytical perspectives generated by the requirements of software from a didactical point of view. New developments in statistics education and curriculum-dependent educational software tools will be critically related to the technological and subject matter changes outside school.
  • Author(s):
    Werner, M.
    Editors:
    Rossman, A., & Chance, B.
    Year:
    2006
    Abstract:
We investigate the experiences of first-year, non-science students with three data analysis tools during a pre-calculus, introductory statistics course at the American University in Cairo, Egypt. Students could choose between DataDesk, Excel, or StatCrunch, and were required to use one of these packages to analyze data collected for a semester project. It was especially important to evaluate these software packages from the point of view of a first-time user with no previous experience in either statistics or advanced computer usage; several students were at first somewhat apprehensive about using a computer to analyze data. Among other outcomes, this investigation led to the development of a student-based comparison of the three software packages, from the perspective of a large group of potential users.
  • Author(s):
    Rudolph, W. B., & Tvrdik, D.
    Year:
    1991
    Abstract:
Described is a strategy that allows students to experiment with probability without applying formulas to solve problems. Students are able to develop concepts of probability intuitively before formal definitions and properties. Sample problems are included along with BASIC programs for some of the problems.
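    The article's BASIC listings are not reproduced in this index. As a minimal modern sketch of the same simulation-before-formulas idea, the following Python program estimates a probability by repeated experiment; the specific problem (rolling at least one six in four dice rolls) is an illustrative choice here, not necessarily one of the article's own problems:

    ```python
    import random

    def estimate_prob(event, trials=100_000, seed=1):
        """Estimate P(event) by counting successes over many simulated trials."""
        rng = random.Random(seed)
        hits = sum(event(rng) for _ in range(trials))
        return hits / trials

    def at_least_one_six(rng):
        """One experiment: roll a fair die 4 times; did a six appear?"""
        return any(rng.randint(1, 6) == 6 for _ in range(4))

    est = estimate_prob(at_least_one_six)
    # The exact answer, derivable later with formulas, is 1 - (5/6)**4 ≈ 0.5177;
    # students can compare their simulated estimate against it.
    ```

    The pedagogical point matches the abstract: students first build intuition from simulated frequencies, and only afterwards meet the closed-form calculation.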
  • Author(s):
    Nitko, A. J., & Lane, S.
    Editors:
    Vere-Jones, D., Carlyle, S., & Dawkins, B. P.
    Year:
    1991
    Abstract:
    This paper provides a conceptual framework for generating assessment tasks which provide an instructor with a richer description of students' thinking and reasoning than is possible by just giving students problems to solve. Although the framework is general, its application is illustrated with material from college and beginning graduate level pre-calculus introductory statistics courses which focuses on the following topics: sampling, interval estimation, point estimation, and hypothesis testing.
  • Author(s):
    Nitko, A. J., & Lane, S.
    Year:
    1990
    Abstract:
    In many statistics courses homework exercises and examinations focus primarily on solving problems. Marks are assigned to students' responses according to the degree to which a problem solution is correct and/or to which a student's procedure employed in the solution is correct. When a statistics course has an applications or data analysis orientation, instructors often find that even though students can solve textbook and examination problems, they are frequently unable to apply probability and statistics to solve "real world" research problems in which judgments have to be made about the technique(s) to be used and in which substantive interpretations of the results of statistical analyses need to be made. The paper reviews several formats of examination questions and assessment procedures which have been used over the years in noncalculus courses in applied statistical methods which focus on data analyses, parameter estimation, and hypothesis testing. Among the types of assessment techniques reviewed are short-answer questions, essay questions, yes-no questions with student-provided justifications, concept-oriented multiple-choice items, masterlist items, analogical reasoning items, graphic inference items, free association tasks, and concept mapping tasks. The paper also reports on a computer-assisted test of knowledge structure called MicroCAM. This test allows students to create on a computer screen a spatial representation of the way in which they perceive key statistical concepts to be linked one to another. The test also permits students to specify the type of relationship which links two or more concepts together. In this way a student's unique knowledge structure is revealed. Implications of these different assessment methods for diagnosing students' learning difficulties and for teaching statistics to mathematically naive students are discussed.
  • Author(s):
    May, R. B., & Hunter, M. A.
    Year:
    1993
    Abstract:
    There are two especially useful models of statistical inference, although only one, the normal curve model, is universally taught. The less well known permutation model is contrasted with the normal model and the case made that the permutation model is often more appropriate for the analysis of psychological data. Inappropriate interpretations generated by teaching only the normal model are illustrated. It is recommended that both models be taught so that students and applied workers have a better chance both of understanding the nature of the hypothesis that is being tested, and of correctly discriminating the statistical conditions that support causal inference and generality inference.
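    The permutation model the authors contrast with the normal-curve model can be sketched briefly. The following Python example is a generic two-sample permutation test on the difference of means (the data and function names are invented for illustration, not drawn from the article): under the null hypothesis the group labels are exchangeable, so the observed difference is compared against differences obtained by reshuffling labels.

    ```python
    import random

    def permutation_test(a, b, n_perm=10_000, seed=0):
        """Two-sided permutation test on the difference of group means.

        Returns the proportion of label reshufflings whose |mean(a) - mean(b)|
        is at least as extreme as the observed difference.
        """
        rng = random.Random(seed)
        observed = sum(a) / len(a) - sum(b) / len(b)
        pooled = list(a) + list(b)
        extreme = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)                      # relabel under the null
            perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
            diff = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
            if abs(diff) >= abs(observed):
                extreme += 1
        return extreme / n_perm

    p = permutation_test([12, 15, 14, 16], [9, 10, 11, 8])
    ```

    Note that the reference distribution is built from the data themselves, with no appeal to normality, which is the contrast with the normal-curve model that the abstract draws.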


The CAUSE Research Group is supported in part by a member initiative grant from the American Statistical Association’s Section on Statistics and Data Science Education