Research

  • The importance of providing study and practice materials that are consistent with graded assessments and instructional objectives is well known, if not commonly applied, in educational practice. Reciprocal Peer Tutoring (RPT) is a collaborative approach that embeds assessment in a formalized learning process to facilitate student involvement with course content and improve achievement. When engaging in RPT, each student of a dyad is independently responsible for synthesizing course content and constructing practice multiple-choice test questions, complete with answers, based on the course curriculum. The members of each dyad then administer practice tests to each other prior to formal class examinations. Upon completion of the practice exams, partners score each other's work and alternate roles as tutor and tutee to assess each other's performance, give feedback on missed items, and discuss individual questions and course content. In this dual role as tutor and tutee, students benefit from the preparation and instruction in which tutors engage, as well as from the instruction that tutees receive. This study examines the impact of RPT on student achievement across six sections of an introductory statistics course. A comparison of the RPT treatment relative to a control, accounting for instructor, showed an effect of the RPT treatment at the time of the last examination of the semester. This finding is tempered by additional analyses of the effectiveness of the RPT treatment. Student achievement relative to increasing levels of cognitive complexity of exam items showed mixed results. Furthermore, a comprehensive analysis of student work within the RPT treatment revealed that students had difficulties implementing the intervention.

  • The influences on adult quantitative literacy were studied using data from the National Adult Literacy Survey on 1,800,000 individuals between 25 and 35 years of age who were not in school. The major influences on quantitative literacy were educational background (t = 123; df = 1; p < .0001), daily television usage (t = 1538; df = 1; p < .0001), and disability (t = 713; df = 1; p < .0001). Education affected television usage (t = 691; df = 1; p < .0001) and personal yearly income (t = 991; df = 1; p < .0001). Ethnicity affected income levels (t = 898; df = 1; p < .0001), which in turn influenced television viewing (t = 1514; df = 1; p < .0001). The results indicated that education seemed the key to increasing levels of quantitative literacy. Library usage, parents' education, and gender did not exhibit any relationship with quantitative literacy.

  • Statistics is often viewed as having no true connection with real-life activities. Even the typical components of an introductory statistics course (descriptive statistics, probability, and inferential statistics) are seen as being unrelated to each other. Often descriptive statistics, being less mathematically sophisticated, is rushed through, then laws of probability and combinatorics are introduced via formulas, and finally inferential statistics is presented. The link between inferential statistics and probability is often completely lost upon the student (Hawkins, Jolliffe, & Glickman, 1992). Most research in statistics education has focused upon what the instructor can do to improve the cognitive side of instruction (Gal & Ginsberg, 1992; Gordon, 1999). Relatively less research has focused upon the statistics student (some examples are Gal & Ginsberg, 1994; Garfield, 1995; Gordon, 1995a, 1995b, 1999). In particular, little work has been done in exploring the approach to learning used by statistics students; one such work, which classified students as using either a deep or surface approach to learning, is Gordon (1999).

  • Scaffolding refers to the instructional support that instructors or more skillful peers offer learners to bridge the gap between their current skill level and the desired level. An aspect of scaffolding that is often ignored is the fading of support as the learner masters the skill. It has been suggested that there is a risk of over-relying on the support of integrated media in computer-assisted instruction. A three-dimensional (3-D) model of scaffolding that incorporates level of subtask, level of support, and number of repetitions of practice has been proposed to vary the technology support systematically in response to the learner's performance. The 3-D contingent scaffolding model was implemented in a computer-based instructional program for statistics called "Hypothesis Testing--the Z-test" in order to establish baseline data for integrated media-based instruction or a hypermedia learning environment. The scaffolded instruction was evaluated in terms of knowledge maintenance and transfer by comparing it to full-support instruction and least-support instruction. Findings from 75 college students provide evidence that the scaffolded computer-based instruction promoted knowledge maintenance and improved independent knowledge application, while promoting learning consistently across individuals. Results also show that a dynamic measure of the learner's ability is a better predictor of the learning outcome for subjects using this scaffolded instruction than static measures. The model provides a systematic way to link the concept of scaffolding to integrated media design features using both support-building and support-fading techniques.
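
    For readers unfamiliar with the procedure at the heart of the tutorial, here is a minimal sketch of the one-sample z-test that "Hypothesis Testing--the Z-test" drills. The code is illustrative only (the abstract does not describe the program's internals), and the hypothesized mean, sigma, and sample values below are assumptions.

        import math

        def z_test(sample_mean, pop_mean, pop_sd, n):
            """One-sample z-test: returns the z statistic and two-sided p-value."""
            z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
            p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * (1 - Phi(|z|))
            return z, p

        # Illustrative numbers only: H0: mu = 100 with known sigma = 15.
        z, p = z_test(sample_mean=105.0, pop_mean=100.0, pop_sd=15.0, n=36)
        print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # z = 2.00, p = 0.0455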

  • To support NCTM's newest process standard, the potential of multiple representations for the teaching repertoire is explored through a real-world phenomenon for which full understanding is elusive using only the most common representation (a table of numbers). The phenomenon of "reversal of a comparison when data are grouped" is explored in surprisingly many ways, each with its own insights: table, circle graph, slope & correlation coefficients, platform scale, trapezoidal representation, unit square model, probability (balls in urns), matrix determinants, linear transformations, vector addition, and verbal form. For such a mathematically rich phenomenon, the number of distinct representations may be too large to expect a teacher to have time to use all of them. Therefore, it is necessary to learn which representations might be more effective than others, and then form a sequence from those selected. Pilot studies were done with pre-service secondary teachers (n1 = 7 at a public research university and n2 = 3 at a public comprehensive university) exploring a sequence of 7 different representations of Simpson's Paradox. Students tended to want to stay with the most concrete and visual representations (note: a concrete-visual-analytic progression may not be expected to apply in the usual manner in the particular case of Simpson's Paradox).
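
    As a concrete illustration of "reversal of a comparison when data are grouped," the following sketch uses made-up counts (not data from the pilot studies): treatment A wins within each subgroup, yet B wins in the aggregate.

        # Hypothetical success/attempt counts chosen to exhibit Simpson's Paradox.
        data = {
            "A": {"group 1": (7, 8), "group 2": (45, 100)},
            "B": {"group 1": (80, 100), "group 2": (4, 10)},
        }

        for treatment, groups in data.items():
            for group, (wins, tries) in groups.items():
                print(f"{treatment}, {group}: {wins}/{tries} = {wins / tries:.3f}")
            wins = sum(w for w, _ in groups.values())
            tries = sum(t for _, t in groups.values())
            # A leads in both groups (.875 > .800 and .450 > .400),
            # but B leads overall (.764 > .481).
            print(f"{treatment}, overall: {wins}/{tries} = {wins / tries:.3f}")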

  • The Birthday Problem asks: "How many people must be in a room before the probability that some share a birthday (ignoring the year and ignoring leap days) becomes at least 50%?" Multiple approaches to the problem are explored and compared, addressing probability concepts, problem solving, modelling assumptions, approximations (supported by Taylor series), recursion, (Excel) spreadsheets, simulation, and student preconceptions. The traditional product representation that yields the exact answer is not only tedious with a regular calculator, but also provides no insight into why the answer (23) is so much smaller than most students' predictions (typically, half of 365). A more intuitive (but slightly inexact) approach synthesized by the author focuses on the total number of "opportunities" for matched birthdays (e.g., the new "opportunities" for a match added by the kth person who enters are those that the kth person has with each of the k-1 people already there). The author followed the model of Shaughnessy (1977) in having students give predictions in advance of the exploration, and these written data (as well as interview data) collected from students indicated representative multiplier or representative quotient effects, consistent with the literature on misconceptions and heuristics. Data collected from students after the traditional and "opportunities" explorations indicate that a majority of students preferred the opportunities approach, favoring the large gain in intuition over the slight loss in precision.
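
    A minimal sketch of the two calculations being compared, assuming nothing beyond the standard 365-day model: the exact product formula, and an "opportunities"-style approximation (our reading, not necessarily the paper's exact formulation) in which each of the C(n, 2) pairs independently fails to match with probability 364/365. Both cross 50% at n = 23.

        import math

        def p_match_exact(n):
            """Exact: 1 - prod_{k=0}^{n-1} (365 - k) / 365."""
            p_none = 1.0
            for k in range(n):
                p_none *= (365 - k) / 365
            return 1 - p_none

        def p_match_pairs(n):
            """Pairwise approximation: C(n, 2) 'opportunities' for a match,
            each failing independently with probability 364/365."""
            return 1 - (364 / 365) ** math.comb(n, 2)

        for n in (22, 23):
            print(n, round(p_match_exact(n), 4), round(p_match_pairs(n), 4))
        # 22: 0.4757 exact vs 0.4689 approx; 23: 0.5073 vs 0.5000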

  • This article explains and synthesizes two theoretical perspectives on the use of counterintuitive examples in statistics courses, using Simpson's Paradox as an example. While more research is encouraged, there is some reason to believe that selective use of such examples supports the constructivist pedagogy being called for in educational reform. A survey of college students beginning an introductory (non-calculus-based) statistics course showed a highly significant positive correlation (r = .666, n = 97, p < .001) between interest in and surprise from a 5-point Likert scale survey of twenty true statistical statements in lay language, a result which suggests that such scenarios may motivate more than they demoralize, and which provides an empirical extension of the model from the author's developmental dissertation research. [This paper was subsequently selected by the editors for inclusion in Getting the Best from Teaching Statistics, a collection of the best articles from volumes 15-21.]

  • A university's introductory statistics course was redesigned to incorporate technology (including a website) and to implement a standards-based approach that would parallel the recent standards-based education mandate for the state's K-12 schools. The author collected attitude (pre and post) and performance (post only) data from the "treatment" section and two "comparison" (i.e., more traditional) sections. There was a pattern of positive attitudes towards the redesigned aspects of the course, including group work, the lab and project emphasis, criterion-referenced assessment, and real-life examples. Of the three problems given to the three sections at the end of the course, the only significant ANOVA (F(2, 101) = 4.2, p = .0168) involved the treatment section scoring higher than the other sections. This occurred on a problem involving critical thinking (with a graphic from USA Today), an emphasis supported by the particular standards of the redesigned course.

  • The representativeness heuristic has been invoked to explain two opposing expectations - that random sequences will exhibit positive recency (the hot hand fallacy) and that they will exhibit negative recency (the gambler's fallacy). We propose alternative accounts for these two expectations: (1) The hot hand fallacy arises from the experience of characteristic positive recency in serial fluctuations in human performance. (2) The gambler's fallacy results from the experience of characteristic negative recency in sequences of natural events, akin to sampling without replacement. Experiment 1 demonstrates negative recency in subjects' expectations for random binary outcomes from a roulette game, simultaneously with positive recency in expectations for another statistically identical sequence - the successes and failures of their predictions for the random outcomes. These findings fit our proposal but are problematic for the representativeness account. Experiment 2 demonstrates that sequence recency influences attributions that human performance or chance generated the sequence.
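
    The "akin to sampling without replacement" account lends itself to a quick simulation (ours, not the authors'): when draws deplete a finite pool, a red outcome really is less likely to be followed by another red, which is the negative-recency experience the gambler's fallacy is proposed to echo.

        import random

        # Draw twice, without replacement, from an urn of 5 red and 5 black.
        random.seed(1)
        after_red = red_then_red = 0
        for _ in range(100_000):
            urn = ["red"] * 5 + ["black"] * 5
            random.shuffle(urn)
            if urn[0] == "red":
                after_red += 1
                red_then_red += urn[1] == "red"
        # P(red | previous red) = 4/9 ~= 0.444, below the 0.5 base rate.
        print(red_then_red / after_red)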

  • We distinguish two conceptions of sample and sampling that emerged in the context of a teaching experiment conducted in a high school statistics class. In one conception, "sample as a quasi-proportional, small-scale version of the population" is the encompassing image. This conception entails images of repeating the sampling process and an image of variability among its outcomes that supports reasoning about distributions. In contrast, a sample may be viewed simply as "a subset of a population" - an encompassing image devoid of repeated sampling, and of ideas of variability that extend to distribution. We argue that the former conception is a powerful one to target for instruction.
