Proceedings

  • In this paper we explore the possibility of using computer simulations to correct misconceptions in probability judgments. Before considering this issue, however, we look at some of the possible difficulties that may accompany the use of computer simulations.

  • Data analysis can be introduced at any stage during a student's school career and its introduction does not have to be restricted to contexts labeled "statistics lessons". In this paper the view is taken that wherever and whenever data analysis appears in the curriculum, the aim is that students will learn something about data analysis as a skill in its own right, as well as about its use as a tool for investigation. Some aspects of data analysis, in common with many other practical skills, may have to be learned rather than taught. The teacher's function is then to facilitate the learning process, to provide an environment in which data analysis can be carried out, to make encouraging suggestions and to stop students from going too far down a fruitless path. Often the teacher will be learning along with the students. In order to start on a program for learning data analysis, data are needed, and the first part of this paper discusses what kinds of data might be suitable and how to obtain them. Suggestions are then made as to how to start studying a data set and what might be done in an initial analysis.

  • Performance on problems included in the most recent administration of NAEP suggests that the majority of secondary students believe in the independence of random events. In the study reported here a high percentage of high-school and college students answered similar problems correctly. However, about half of the students who appeared to be reasoning normatively on a question concerning the most likely outcome of five flips of a fair coin gave a logically inconsistent answer on a follow-up question about the least likely outcome. It is hypothesized that these students were reasoning according to an "outcome approach" to probability, in which they interpreted their task as predicting what actually would occur if they flipped a fair coin five times. This finding suggests that the percentages correct on corresponding NAEP items are inflated estimates of normative reasoning about independence.
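  The logical inconsistency described above rests on a simple fact: every specific sequence of five fair-coin flips is equally likely, so a sequence cannot be the "most likely" outcome on one question and a different one the "least likely" on the follow-up. A minimal Python check (an illustration, not part of the reported study) makes this explicit by enumeration:

  ```python
  from fractions import Fraction
  from itertools import product

  # Enumerate all possible sequences of five fair-coin flips.
  sequences = list(product("HT", repeat=5))

  # Each specific sequence has probability (1/2)^5 = 1/32,
  # regardless of how "patterned" or "random" it looks.
  prob = Fraction(1, 2) ** 5

  assert len(sequences) == 32
  # HHHHH is exactly as likely as a mixed-looking sequence such as HTHTH.
  assert prob == Fraction(1, 32)
  ```

  Under the normative view, then, answering "equally likely" to the most-likely question commits one to the same answer for the least-likely question; the outcome-approach students described above break that symmetry.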

  • We have been teaching a General Linear Models approach to problem formulation and analysis to university students and applied researchers for over 25 years. The students and researchers have come from a wide variety of application areas such as education and all of the social sciences, biology, management, and operations research. The objectives of the course range from the very clearly defined to the very unclear. The range of objectives and the heterogeneity of backgrounds create difficulties in the assessment of performance. Among these difficulties are defining the objectives of the instruction, developing instruments to evaluate student performance, and deciding how to award grades.

  • The basic principles of experimental design, as the subject is usually taught today both to students specialising in statistics and to those in other disciplines, were established nearly fifty years ago. For many statisticians and users of statistics, the major textbook on design is still Cochran and Cox (1957), written before the advent of the computer. During the last twenty-five years our computational habits have changed radically, and it is important to ask whether the fundamental ideas of experimental design remain unchanged, or whether these ideas were influenced by the computational environment in which they were developed. I shall not be talking about design by computer, but about the teaching of design liberated by the computer from restrictions imposed by the need to analyse the data. I shall attempt to consider the teaching of design not only now, but into the future.

  • Courses in experimental design usually provide students with a set of possible designs and corresponding methods of analysis but fail to offer them opportunities for using these new "tools" to solve actual problems. This situation is unfortunate, since some of these students may eventually be required to give advice to researchers planning experiments without ever having experienced the joys, frustrations, or compromises involved in conducting an experiment. In fact, with the advent of cooperative, internship, and work-study programs, the student is being asked to give advice even sooner than in the past. How can these students, who have never been confronted by time or cost constraints, the choice of appropriate factors, or the design of follow-up experiments, fully appreciate the problems their clients face? This paper describes in detail the computer package which we have developed for this purpose and outlines its use as a teaching aid. Following more than ten years of combined experience with this package, we believe that it is a unique, extremely versatile, and powerful tool not only for use in experimental design courses, but in regression, sampling, multivariate, and introductory courses as well.

  • It is not the purpose of this paper to delve into the reasons why the particular software being used may or may not be effective, although this is in itself an extremely important issue. Our purpose is to suggest a rationale as to why computer-based simulations are not as helpful as we might suppose, and to propose an alternative path leading to statistical inference which potentially avoids this problem.

  • What is it that makes the difference for students? What is their perspective on what benefited or hindered their learning in a first statistics class? Additionally, can students' reasons be categorized based on different student characteristics or learning styles? To answer these questions, the present investigation is a qualitative follow-up to a quantitative analysis that was aimed at determining relationships among learning styles, academic programs, background variables and attitude toward statistics.

  • The Bayesian approach shall serve us in this respect: what are the advantages and drawbacks of such a model, particularly with regard to a better understanding of stochastics education? This question involves two different aspects: the exemplary use of such an approach in instruction, and the benefits and more comprehensive knowledge resulting in principle from such a didactical way of phrasing the question. Accordingly, from an educational perspective, this is not simply a matter of deciding whether the Bayesian approach is right or wrong - i.e. of contributing to the controversy about the subjective and the objective concepts of probability. Rather, this debate is intended to help us obtain new insights into the relationship between probability theory and statistics which are not confined to asserting either a strict separation, or an identity, between the two.

  • Most decisions in uncertain situations involve comparison of the probabilities of success under the alternative choices, and selection of that alternative where the chances of success are higher. Hence the ability to discriminate between probabilities of varying discrepancies is crucial in such tasks. The development of that capacity was examined in two previous experiments. The purpose of the present study is to identify the principles of choice in individual response patterns. One of the educational implications of this study is that the teaching of probability to children should be carefully planned. The lesson to be learned from the results of the present research, in conjunction with earlier studies, is that the concept of proportion (and probability) is very elusive.
