Proceedings

  • The author's approach to teaching an integrative unit to a small group of master's-level Applied Statistics students in 2000-2001 is described. Details are given of the various activities, such as data analysis, reading and discussion of papers, and training in consultancy skills, as well as of the assessment. The students' and lecturer's views of the unit are discussed.

  • For the past twenty years, we have been using an original technique to teach statistics and survey sampling methods to postgraduates studying economics and statistics. The students must put their knowledge into practice by carrying out a sample survey for a client whom they find themselves. This may be a marketing study for a shop, a brand or a public service, or a measurement of the audience ratings of a radio station or local television station. More than 100 different surveys have been carried out by students on this program over the last 20 years. Furthermore, every six years, during the regional parliamentary elections, the entire group (25 students) produces an estimate of the results for the public local television station, on the basis of the first ballot papers counted in a sample of 300 polling stations; our results are broadcast live on television 30 minutes after the close of polling.

  • New results in research on judgment under uncertainty point to ways of improving the teaching of statistical reasoning. The implications of this research are that (i) successful learning requires doing, and (ii) the format in which information is represented plays a decisive role. Statistical problems are, for instance, solved much better if the relevant pieces of information are presented as frequencies rather than probabilities. It also helps considerably if random processes can be observed rather than only read about. A computer program is presented that incorporates these implications of psychological research. The software accompanies an elementary textbook on probability theory to be used in high school.
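
A hypothetical illustration of the frequency-format effect described above, using the classic medical-screening problem (the base rate, sensitivity and false-positive rate below are invented for the example, not taken from the research cited). Both formats give the same answer, but the frequency version can be followed step by step:

```python
# Probability format: apply Bayes' rule directly.
# Hypothetical numbers: base rate 1%, sensitivity 80%, false-positive rate 9.6%.
prior, sens, fpr = 0.01, 0.80, 0.096
p_pos = prior * sens + (1 - prior) * fpr
p_disease_given_pos = prior * sens / p_pos

# Frequency format: imagine 1000 people.
n = 1000
sick = n * prior                 # 10 people have the disease
sick_pos = sick * sens           # 8 of them test positive
healthy_pos = (n - sick) * fpr   # about 95 healthy people also test positive
freq_answer = sick_pos / (sick_pos + healthy_pos)

print(round(p_disease_given_pos, 3))  # ≈ 0.078
```

The surprise that only about 8 positives in every 103 are true positives is much easier to see in the frequency version, which is the point the research makes.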

  • At present, frequentist ideas dominate most undergraduate statistics programs, and exposure to Bayesian ideas at the undergraduate level is very limited. There are historical reasons for this frequentist dominance: Efron (1986) concluded that frequentists had captured the high ground of objectivity (p. 4). Yet Bayesian methods perform well, often outperforming frequentist procedures even when evaluated by frequentist criteria. In the past, Bayesian methods were of limited practical use, since analytic solutions for Bayesian posterior distributions were possible in only a few cases, and numerical calculation of the posterior was often infeasible for lack of computing power. Recent growth in computing power, together with the development of Markov chain methods for sampling from the posterior, has made Bayesian methods practical even in very complicated models. It is clearly unsatisfactory for our profession that most of our students are not being introduced to the best methods available. In this paper I propose how our profession should deal with this challenge, by giving my answers to the journalistic "who, what, where, when, why, and how" questions about the place of Bayesian Statistics in undergraduate statistical education.
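
A minimal sketch of the Markov chain sampling idea mentioned above, under a toy beta-binomial model (the data and tuning constants are assumptions made for the illustration). A random-walk Metropolis chain targets the posterior of a binomial success probability; because the model is conjugate, the exact posterior Beta(8, 14) is available to check the chain against:

```python
import random, math

# Uniform Beta(1,1) prior; hypothetical data: 7 successes in 20 trials.
# The exact posterior is Beta(8, 14), with mean 8/22 ≈ 0.364.
def log_post(theta, k=7, n=20):
    if not 0 < theta < 1:
        return -math.inf
    return k * math.log(theta) + (n - k) * math.log(1 - theta)

random.seed(1)
theta, draws = 0.5, []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.1)      # random-walk proposal
    # Metropolis acceptance step on the log scale
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)

post_mean = sum(draws[2000:]) / len(draws[2000:])  # discard burn-in
```

The same few lines, with a different `log_post`, sample from models far beyond the reach of analytic solutions, which is why these methods changed the practicality of Bayesian inference.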

  • This paper reviews past and current interest in using Bayesian thinking to introduce statistical inference. The rationale for a Bayesian approach is set out, and particular methods that make Bayes' rule easier to understand are described. Several older and modern introductory statistics books that adopt a Bayesian perspective are reviewed. It is argued that a Bayesian perspective is very helpful in teaching a course in statistical literacy.

  • Some ideas about how basic aspects of nonparametric curve estimation can be introduced to students at the post-secondary level are discussed. The estimation of population curves, such as the density or the regression function, is approached from a nonparametric viewpoint. Starting from well-known estimators such as the histogram and the regressogram, the discussion moves to some of the smoothing methods developed over the last four decades, focusing mainly on kernel density and regression estimators. Some ideas on the important problem of smoothing-parameter selection are also presented.
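
The kernel density estimator discussed above can be sketched in a few lines; the sample below and the use of Silverman's rule-of-thumb bandwidth are illustrative assumptions, not material from the paper:

```python
import math

def kde(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / (
        n * h * math.sqrt(2 * math.pi))

# Hypothetical sample; bandwidth from Silverman's rule of thumb,
# h = 1.06 * s * n^(-1/5), a common default choice.
data = [1.2, 1.9, 2.3, 2.8, 3.1, 3.6, 4.0, 4.4, 5.2, 6.0]
mean = sum(data) / len(data)
sd = (sum((x - mean) ** 2 for x in data) / (len(data) - 1)) ** 0.5
h = 1.06 * sd * len(data) ** -0.2

# Evaluate the estimate on a grid from 0 to 7.
density = [kde(x / 2, data, h) for x in range(0, 15)]
```

Varying `h` by hand on such a small sample is an effective way to show students the smoothing-parameter trade-off: small `h` reproduces the spikiness of the data, large `h` washes out the shape.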

  • We first note that tests for interaction are missing in virtually all textbooks on nonparametric statistics. We will discuss some reasons why this is so. We then make a case for featuring tests for interaction in the course. By learning how to use median polish and graphical displays students can begin to conceptualize what an interaction means. This will strengthen their understanding of additive models as well. After a conceptual basis for understanding interaction is in place, we can then proceed to design tests for interaction. They will not be strictly nonparametric. This provides a good opportunity for discussion of what it means to have a nonparametric test and why it is impossible to construct an ordinary permutation test for interaction.
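
Median polish itself is simple enough to sketch: repeatedly sweep row and column medians out of the table, accumulating them as effects, until what remains is a table of residuals. The implementation below is a minimal version in the spirit of Tukey's algorithm, applied to a small hypothetical two-way table; large residuals after the polish are what would point a student toward interaction:

```python
def median(v):
    s = sorted(v)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

def median_polish(table, iters=10):
    """Sweep out row and column medians; return
    (overall effect, row effects, column effects, residual table)."""
    r = [row[:] for row in table]
    nrow, ncol = len(r), len(r[0])
    row_eff, col_eff, overall = [0.0] * nrow, [0.0] * ncol, 0.0
    for _ in range(iters):
        for i in range(nrow):                      # sweep row medians
            m = median(r[i])
            row_eff[i] += m
            r[i] = [x - m for x in r[i]]
        m = median(row_eff); overall += m
        row_eff = [e - m for e in row_eff]
        for j in range(ncol):                      # sweep column medians
            m = median([r[i][j] for i in range(nrow)])
            col_eff[j] += m
            for i in range(nrow):
                r[i][j] -= m
        m = median(col_eff); overall += m
        col_eff = [e - m for e in col_eff]
    return overall, row_eff, col_eff, r

# Hypothetical 3x3 table of responses by row and column factor.
table = [[10, 12, 20], [11, 13, 21], [14, 16, 30]]
overall, rows, cols, resid = median_polish(table)
```

By construction every cell decomposes as overall + row effect + column effect + residual, which is exactly the additive model the abstract wants students to understand; whatever the residuals cannot absorb is the candidate interaction.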

  • The bootstrap is a general resampling procedure which can be applied to estimate the sampling distribution of a statistic. From the statistical practitioner's point of view it has attractive properties because it requires few assumptions, little modeling or analysis, and can be applied in an automatic way in a wide variety of situations regardless of their theoretical complexity. The bootstrap can provide answers to questions that are too complicated for traditional statistical analyses, which are usually based on asymptotic normal approximations. A brief discussion of the non-parametric bootstrap is presented, followed by examples and illustrations. Possible suggestions regarding the teaching of these concepts at various levels are made. The key requirements for computer implementation of the bootstrap are a flexible programming language with a collection of reliable quasi-random number generators, a wide range of built-in statistical bootstrap procedures, and a reasonably fast processor. The use of the statistical language S and of Fortran, in the current commercial versions S-Plus 4.5 and Digital Fortran 6.0, is illustrated.
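
The resampling recipe is short enough to sketch directly; the version below is in Python rather than the S and Fortran used in the paper, and the sample is hypothetical. It estimates the sampling distribution of a median, a statistic with no simple normal-theory standard error:

```python
import random, statistics

random.seed(0)

# Hypothetical observed sample.
sample = [2.1, 3.4, 1.8, 5.6, 4.2, 3.9, 2.7, 6.1, 3.3, 4.8]

# Draw B resamples of the same size, with replacement,
# and recompute the statistic on each.
B = 5000
boot_medians = []
for _ in range(B):
    resample = random.choices(sample, k=len(sample))
    boot_medians.append(statistics.median(resample))

se = statistics.stdev(boot_medians)            # bootstrap standard error
ordered = sorted(boot_medians)
lo, hi = ordered[int(0.025 * B)], ordered[int(0.975 * B)]  # percentile 95% interval
```

Nothing in the loop depends on the choice of statistic: replacing `statistics.median` with any function of the resample gives its bootstrap distribution just as automatically, which is the "automatic in a wide variety of situations" property the abstract emphasises.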

  • The aim of this paper is to show how teachers can use the Visual Basic language in a spreadsheet so that students can gain the skills needed to use nonparametric techniques for density and regression function estimation. The author has built a useful didactic tool in Excel Visual Basic for teaching and practising nonparametric kernel methods, intended for students with some preliminary knowledge of the topic. This Visual Basic Application (VBA) is loaded into Excel as a macro (or into the modules of an Excel workbook). The user functions incorporated into it are easy tools through which students can gain an intuitive perception of nonparametric estimation of density and regression functions. The VBA also adds a menu to Excel, similar to a small statistical package specialised in nonparametric methods.

  • A major objective of the University of Transkei Research Resource Centre is to enable staff and students to acquire research knowledge and skills. This is intended to empower faculty to initiate quality research projects and to participate effectively in ongoing research. We recognised that the research skills of staff and students ranged from none to extensive. In addressing these different needs we found that an effective method was to relate all activities to the context of the research and, where possible, to specific research projects. Importantly, we endeavoured to anticipate needs and provided researchers with face-to-face sessions, workshops, short courses and research seminars. This paper discusses how the consultancy process is used to empower social science researchers.
