Journal Article

  • We often forget how science and engineering function. Ideas come from previous exploration more often than from lightning strokes. Important questions can demand the most careful planning for confirmatory analysis. Broad general inquiries are also important. Finding the question is often more important than finding the answer. Exploratory data analysis is an attitude, a flexibility, and a reliance on display, NOT a bundle of techniques, and should be so taught. Confirmatory data analysis, by contrast, is easier to teach and easier to computerize. We need to teach both; to think about science and engineering more broadly; to be prepared to randomize and avoid multiplicity.

  • Continuous Quality Improvement (CQI), better known in industry as Total Quality Management (TQM), is a management philosophy that has transformed many businesses and corporations internationally and is now beginning to make strong inroads into universities, predominantly on the administrative side. This paper raises the question of whether the conceptual framework provided by CQI/TQM is a fertile one for addressing the problems involved in university teaching. It translates basic tenets of CQI/TQM into the university teaching context and outlines how these ideas have been implemented in a large, multisection, introductory statistics course. Particular attention is given to the problems of fostering steady year-to-year improvements in a course that can survive changes of personnel, and of making improvements by stimulating group creativity and then capturing the results for the future.

  • The interconnected themes of quality and the marketing of the discipline of statistics are explored. An understanding of statistics as the study of the process of scientific enquiry is advocated as a consciously targeted market position. Because it reaches such a high proportion of the managers and decision makers of the future, the introductory university or college statistics course is highlighted as a potent marketing opportunity for enhancing the long-term health of statistics. Attention is given to teaching students to think "statistically", to become educated consumers of statistical expertise, and to communicate well with non-statisticians.

  • Two studies were run to determine whether the interpretations of statements or forecasts using vague probability and frequency expressions such as likely, improbable, frequently, or rarely were sensitive to the base rates of the events involved. In the first experiment, professional weather forecasters judged event probabilities in situations drawn from a medical context. In the second experiment, students judged matched forecast scenarios of common semantic content that differed only in prior probability (as determined by an independent group of subjects). Results were that (a) the interpretations of forecasts using neutral (e.g., possible) and high probability or frequency terms (e.g., usually) were strong, positive functions of base rate, while the interpretations of forecasts using low terms (e.g., rarely) were much less affected by base rates; and (b) in the second experiment, interpretations of forecasts appeared to represent some kind of average of the meaning of the expression and the base rate.
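
    The "some kind of average" finding in (b) suggests a simple weighted-average model of interpretation. A minimal sketch in Python, where the stand-alone term meanings and the weights are illustrative assumptions, not values estimated in the studies:

      # Hypothetical blend of a vague expression's stand-alone meaning
      # with the event's base rate; all numbers are for illustration only.
      TERM_MEANING = {"rarely": 0.10, "possible": 0.50, "usually": 0.80}

      def interpreted_probability(term, base_rate, w=0.5):
          """Weighted average of the term's meaning and the base rate."""
          return round(w * TERM_MEANING[term] + (1 - w) * base_rate, 2)

      # High terms shift strongly with base rate, as in result (a):
      print(interpreted_probability("usually", 0.2))        # 0.5
      print(interpreted_probability("usually", 0.8))        # 0.8
      # Low terms can be modeled as leaning harder on the term itself:
      print(interpreted_probability("rarely", 0.8, w=0.9))  # 0.17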

  • Can the vague meanings of probability terms such as doubtful, probable, or likely be expressed as membership functions over the [0, 1] probability interval? A function for a given term would assign a membership value of zero to probabilities not at all in the vague concept represented by the term, a membership value of one to probabilities definitely in the concept, and intermediate membership values to probabilities represented by the term to some degree. A modified pair-comparison procedure was used in two experiments to empirically establish and assess membership functions for several probability terms. Subjects performed two tasks in both experiments: They judged (a) to what degree one probability rather than another was better described by a given probability term, and (b) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners. Task a data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. The conjoint-measurement axioms were well satisfied and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable. Furthermore, the derived membership function values satisfactorily predicted the judgments independently obtained in task b. The results support the claim that the scaled values represented the vague meanings of the terms to the individual subjects in the present experimental context. Methodological implications are discussed, as are substantive issues raised by the data regarding the vague meanings of probability terms.
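
    A minimal sketch of what such a membership function could look like in Python, assuming a simple triangular shape; the functions in the experiments were empirically scaled per subject and need not take this form:

      def membership_likely(p):
          """Degree in [0, 1] to which probability p fits the vague term
          'likely'. Hypothetical shape: zero below 0.5, peak at 0.75,
          declining toward 1.0 (where a stronger term would take over)."""
          if not 0.0 <= p <= 1.0:
              raise ValueError("p must lie in the [0, 1] probability interval")
          if p <= 0.5:
              return 0.0
          if p <= 0.75:
              return (p - 0.5) / 0.25   # rising limb
          return (1.0 - p) / 0.25       # falling limb

      for p in (0.4, 0.6, 0.75, 0.9):
          print(p, round(membership_likely(p), 2))   # 0.0, 0.4, 1.0, 0.4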

  • Piaget worked out his logical theory of cognitive development, Koehler the Gestalt laws of perception, Pavlov the principles of classical conditioning, Skinner those of operant conditioning, and Bartlett his theory of remembering and schemata - all without rejecting null hypotheses. But, by the time I took my first course in psychology at the University of Munich in 1969, null hypothesis tests were presented as the indispensable tool, as the sine qua non of scientific research. Post-World War II German psychology mimicked a revolution of research practice that had occurred between 1940 and 1955 in American psychology. What I learned in my courses and textbooks about the logic of scientific inference was not without a touch of morality, a scientific version of the 10 commandments: Thou shalt not draw inferences from a nonsignificant result. Thou shalt always specify the level of significance before the experiment; those who specify it afterward (by rounding up obtained p values) are cheating. Thou shalt always design the experiments so that thou canst perform significance testing.

  • It is argued that a nonparametric framework for the introductory statistics course is both mathematically and conceptually simpler than the customary normal-theory framework. Two examples are considered: the Kendall rank correlation coefficient versus the Pearson product-moment correlation coefficient, and the confidence interval for the median versus the confidence interval based on the one-sample t statistic.
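
    The two comparisons lend themselves to a direct side-by-side computation. A minimal sketch using NumPy and SciPy on simulated data (the data and the 95% level are illustrative; the paper's argument concerns conceptual simplicity, not these particular numbers):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      x = rng.normal(size=30)
      y = 0.5 * x + rng.normal(scale=0.8, size=30)
      n = len(y)

      # Correlation: rank-based Kendall tau vs normal-theory Pearson r
      tau, _ = stats.kendalltau(x, y)
      r, _ = stats.pearsonr(x, y)

      # 95% CI for the median from order statistics: with B ~ Binomial(n, 1/2)
      # and k = smallest value with P(B <= k) >= 0.025, the interval
      # (y_(k), y_(n-k+1)) has coverage of at least about 95%, with no
      # normality assumption.
      ys = np.sort(y)
      k = int(stats.binom.ppf(0.025, n, 0.5))   # 1-indexed order statistic
      median_ci = (ys[k - 1], ys[n - k])

      # 95% CI based on the one-sample t statistic
      t_ci = stats.t.interval(0.95, df=n - 1, loc=y.mean(), scale=stats.sem(y))

      print(f"Kendall tau = {tau:.2f}, Pearson r = {r:.2f}")
      print(f"median CI  = ({median_ci[0]:.2f}, {median_ci[1]:.2f})")
      print(f"t-based CI = ({t_ci[0]:.2f}, {t_ci[1]:.2f})")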

  • Perhaps the simplest and the most basic qualitative law of probability is the conjunction rule: The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between-subjects and within-subjects comparisons. Alternative interpretations of the conjunction fallacy are discussed and attempts to combat it are explored.
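
    In standard notation, the rule follows directly from the set-inclusion argument stated in the abstract, by monotonicity of probability:

      \[
      A \cap B \subseteq A \;\Rightarrow\; P(A \cap B) \le P(A),
      \qquad
      A \cap B \subseteq B \;\Rightarrow\; P(A \cap B) \le P(B),
      \]

    so that $P(A \cap B) \le \min\{P(A),\, P(B)\}$.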

  • College business statistics students (N=84) participated in a spreadsheet CBT tutorial. Noncontent-related student-instructor interaction (present/absent) was studied in two learning settings: individual study and paired study. Student-instructor interaction led to higher achievement in the individual setting. However, when subjects studied in pairs, instructor interaction was less influential. The lowest scoring subjects were those who studied individually and received no instructor interaction. The results demonstrate that what the instructor does and how the students are arranged can affect achievement in CBT.

  • This study attempts to identify the relevant mental model for hypothesis testing. Analysis of textbooks identified the declarative and procedural knowledge that constitute the relevant mental models in hypothesis testing. A cognitive task analysis of intermediates' and experts' mental models was conducted in order to identify the relevant mental models for teaching novices. Of interest were the steps taken to arrive at the solution and the representations used in the problem-solving process. Results indicate that experts and intermediates differ in their conceptual understanding. In addition, diagrammatic problem representation was useful for all groups, particularly for the intermediates. On this basis, the intermediate models were deemed relevant for instructing novices. Two instructional strategies were investigated: presentation sequence (concepts and procedures taught separately or together) and presentation mode (diagrammatic vs. descriptive). Based on their findings, the researchers conclude that meaningful learning occurs when conceptual instruction is provided prior to the procedures, that is, when they are taught separately rather than concurrently, and when a diagrammatic strategy is used rather than a descriptive one. This facilitates development of the representational ability needed to understand hypothesis testing. In short, separate and diagrammatic representation strategies are effective for teaching novices in the area of hypothesis testing. The researchers conclude that by developing relevant mental models through this type of instruction, the learner's knowledge can be made more accessible (awareness of knowledge), functional (able to predict or explain), and improvable.
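
    The procedural knowledge at issue is the familiar sequence of hypothesis-testing steps. A minimal sketch of that sequence in Python for a one-sample z test; the test chosen and the numbers are illustrative, not taken from the study:

      from scipy import stats

      def one_sample_z_test(sample_mean, mu0, sigma, n, alpha=0.05):
          """Textbook procedure: state hypotheses, fix alpha in advance,
          compute the statistic, find the p-value, then decide."""
          # 1. Hypotheses: H0: mu = mu0 vs H1: mu != mu0 (two-sided)
          # 2. Significance level alpha is fixed before computing anything.
          # 3. Test statistic under H0:
          z = (sample_mean - mu0) / (sigma / n ** 0.5)
          # 4. p-value from the standard normal null distribution:
          p = 2 * stats.norm.sf(abs(z))
          # 5. Decision rule:
          return z, p, ("reject H0" if p < alpha else "fail to reject H0")

      print(one_sample_z_test(sample_mean=103.0, mu0=100.0, sigma=15.0, n=36))
      # (1.2, 0.230..., 'fail to reject H0')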
