Thesis / Dissertation

  • This dissertation investigated children's probabilistic reasoning during a two-month teaching experiment. As part of the research process, the researcher developed a computer microworld environment, Probability Explorer, for children's explorations with probability experiments. The design of the microworld is based on a constructivist theory of learning, design of mathematical computer microworlds, and research on students' understanding of probability and rational number concepts. Two major features of the microworld are a dynamic link between numerical, graphical, and iconic representations of data that are updated simultaneously during a simulation, and the ability to design experiments and assign probabilities to the possible outcomes. The teaching experiment was conducted with three nine-year-old children. The children participated in 10 hours of teaching sessions using the microworld. Each child also participated in pre- and post-task-based interviews to assess their reasoning in probabilistic situations. Each teaching session was videotaped, and computer interactions were recorded through internal mechanisms to create a video, including children's audio, of all actions in the microworld. These tapes provided the basis for analysis and interpretation of the children's development of probabilistic reasoning while using the microworld tools. The individual case studies detail the children's probabilistic reasoning during the pre-interview, teaching experiment, and post-interview. After extensive coding, several themes were identified and discussed in each case study. Some of the major themes included: understanding and interpretation of theoretical probability in equiprobable and unequiprobable situations; theories-in-action about the law of large numbers; and development of part-whole reasoning as it relates to probability comparisons, a priori predictions, and analysis of relative frequencies. The children's development of probabilistic reasoning and their interactions with the computer tools varied during the study. The children employed different strategies and utilized different combinations of representations (e.g., numerical, graphical, iconic) to make sense of the random data and to enact their own theories-in-action. The results from this study imply that open-ended microworld tools have the potential to act as agents for children's development of intuitive-based probability conceptions. Dynamically linked multiple representations and flexibility in designing experiments can facilitate an exploratory approach to probability instruction and enhance children's meaning-making activity and probabilistic reasoning.
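For readers unfamiliar with this kind of microworld, the following minimal Python sketch illustrates the underlying idea of assigning probabilities to outcomes and watching counts and relative frequencies accumulate over trials. It is an illustration only, not the Probability Explorer software, and the outcomes and weights are hypothetical.

```python
# Minimal sketch (not the actual Probability Explorer software) of the kind of
# experiment such a microworld supports: assign weights to outcomes, run trials,
# and keep numerical summaries (counts, relative frequencies) in sync.
import random
from collections import Counter

def run_experiment(outcome_weights, n_trials, seed=None):
    """Simulate n_trials draws from outcomes with user-assigned weights."""
    rng = random.Random(seed)
    outcomes = list(outcome_weights)
    weights = [outcome_weights[o] for o in outcomes]
    counts = Counter()
    for _ in range(n_trials):
        counts[rng.choices(outcomes, weights=weights)[0]] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical unequiprobable "marble bag": 3 red, 1 blue.
    counts = run_experiment({"red": 3, "blue": 1}, n_trials=400, seed=1)
    total = sum(counts.values())
    for outcome, count in counts.items():
        print(f"{outcome}: count={count}, relative frequency={count/total:.2f}")
```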

  • The present knowledge society requires statistical literacy: the ability to interpret, critically evaluate, and communicate about statistical information and messages (Gal, 2002). However, research shows that students generally do not gain satisfactory statistical understanding. The research presented in this thesis is a sequel to design research carried out by Cobb, McClain, and Gravemeijer (2003) and aims at contributing to an empirically grounded instruction theory for early statistics education in which educational computer tools are used. Computer software allows users to dynamically interact with large data sets and to explore different representations in ways that are impossible by hand. The computer tools used were the Statistical Minitools (Cobb et al., 1997), which have been designed for middle school students. One important end goal of instruction was that students would gain understanding of sampling and learn to use 'shape' to reason about distributions. In line with the theory of Realistic Mathematics Education, a central tenet was that learning mathematics should be a meaningful activity. The research questions were: 1. How can students with little statistical background develop a notion of distribution? 2. How does the process of symbolizing evolve when students learn to reason about distribution? In the latter question, 'symbolizing' refers to the reflexive process of making symbols and mentally constructing the objects which they represent. The design research consisted of five cycles of three phases: a design phase, a teaching experiment, and a retrospective analysis. Prior to these cycles, historical and didactical phenomenological analyses (Freudenthal, 1991) were carried out, as well as exploratory interviews. The historical study gave rise to hypotheses that were partially tested in the teaching experiments carried out in grades 7 and 8 (12 to 14 years old). In the design phase, a so-called hypothetical learning trajectory (Simon, 1995) was formulated, which was tested and revised in the subsequent cycles of design research. The recurring activity of discussing growing samples proved useful to support reasoning about distribution and sampling. For the analyses of students' learning, a method was used similar to the constant comparative method (Strauss & Corbin, 1998). It turned out that students conceived distributions as 'bumps' consisting of a small group of low values, a large group of 'average' values, and a small group of high values. Peirce's semiotics proved useful for analyzing students' process of symbolizing, in particular his notions of hypostatic abstraction and diagrammatic reasoning. Hypostatic abstraction refers to the transition from a predicate to a new abstract object (from "the dots are spread out" to "the spread is large"). Diagrammatic reasoning consists of three steps: making a diagram, experimenting with it, and reflecting on the results. The research shows the importance of letting students make their own diagrams and discussing these. The computer tools seemed most useful during the experimentation phase. Remarkably, the best diagrammatic reasoning occurred only during class discussions without computers around. One recommendation is: only invest in using computer tools if all educational factors, such as teaching, end goals, instructional activities, tools, and assessment, are tuned to each other.
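As a rough illustration of the "growing samples" activity described above, the Python sketch below draws increasingly large samples from a hypothetical population and summarizes each one as a small group of low values, a large group of 'average' values, and a small group of high values. The population, cut-offs, and sample sizes are invented, and the sketch is not the Statistical Minitools.

```python
# Rough sketch of the growing-samples idea: draw larger and larger samples from a
# hypothetical population and see how a three-group summary (low / average / high
# values, the 'bump') stabilizes as the sample grows.
import random
import statistics

def three_group_summary(values, low_cut, high_cut):
    """Split values into low / middle / high groups."""
    low = sum(v < low_cut for v in values)
    high = sum(v > high_cut for v in values)
    middle = len(values) - low - high
    return low, middle, high

rng = random.Random(0)
population = [rng.gauss(170, 8) for _ in range(10_000)]  # hypothetical body heights

for n in (10, 50, 250, 1000):  # the growing-samples sequence
    sample = rng.sample(population, n)
    low, mid, high = three_group_summary(sample, low_cut=162, high_cut=178)
    print(f"n={n:5d}  mean={statistics.mean(sample):6.1f}  "
          f"low={low/n:.2f}  middle={mid/n:.2f}  high={high/n:.2f}")
```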

  • This study explores the reasoning that emerged among eight high school juniors and seniors as they participated in a classroom teaching experiment addressing stochastic conceptions of sampling and statistical inference. Toward this end, instructional activities engaged students in embedding sampling and inference within the foundational notion of sampling distributions: patterns of dispersion that one conceives as emerging in a collection of a sample statistic's values that accumulate from re-sampling. The study details students' engagement and emergent understandings in the context of instructional activities designed to support them. Analyses highlight these components: the design of instructional activities, classroom conversations and interactions that emerged from students' engagement in activities, students' ideas and understandings that emerged in the process, and the design team's interpretations of students' understandings. Moreover, analyses highlight the synergistic interplay between these components that drove the unfolding of the teaching experiment over the course of 17 lessons in cycles of design, engagement, and interpretation. These cycles gave rise to an emergent instructional trajectory that unfolded in four interrelated phases of instructional engagements:
Phase 1: Orientation to statistical prediction and distributional reasoning;
Phase 2: Move to conceptualize probabilistic situations and statistical unusualness;
Phase 3: Move to conceptualize variability and distribution;
Phase 4: Move to quantify variability and extend distribution.
Analyses reveal that students experienced significant difficulties in conceiving the distribution of sample statistics and point to possible reasons for them. Their difficulties centered on composing and coordinating imagined objects with actions into a hierarchical structure in re-sampling scenarios that involve: a population of items, selecting items from the population to accumulate a sample, recording the value of a sample statistic of interest, repeating this process to accumulate a collection of data values, and structuring such collections and conceiving patterns within and across them in ways that support making statistical inferences.
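The re-sampling process described above can be made concrete with a short Python sketch: draw a sample from a population, record a sample statistic, and repeat so that a collection of statistic values accumulates. The population, sample size, and choice of the mean as the statistic are illustrative assumptions, not details from the study.

```python
# Minimal sketch of building an empirical sampling distribution by re-sampling.
import random
import statistics

def sampling_distribution(population, sample_size, n_repetitions, seed=None):
    rng = random.Random(seed)
    stats = []
    for _ in range(n_repetitions):
        sample = [rng.choice(population) for _ in range(sample_size)]  # select items
        stats.append(statistics.mean(sample))                          # record the statistic
    return stats  # the collection of statistic values one reasons about

if __name__ == "__main__":
    population = list(range(1, 101))  # hypothetical population of values 1..100
    means = sampling_distribution(population, sample_size=25, n_repetitions=1000, seed=0)
    print(f"mean of sample means: {statistics.mean(means):.1f}")
    print(f"spread (stdev) of sample means: {statistics.stdev(means):.1f}")
```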

  • This research work forms part of the research agenda developed by the "Teachers' professional development group" at the University of Cadiz, Spain. The main aim was to carry out a first approximation to studying the planning of the teaching and learning process related to Dealing with Chance at secondary school level. The chosen methodological research design was qualitative and interpretative, and focused on two topics: textbooks and teachers' arguments. A content analysis was carried out of the chapters devoted to chance and probability in a sample of secondary school textbooks at four different teaching levels (12-16-year-old students) from the Spanish publishers Guadiel, Santillana, Bruño and McGraw Hill. The aims of the content analysis were to describe the units, to characterise the elements that configure the teacher's intervention, and to delimit possible underlying didactic models. The study of three teachers' arguments served to analyse their ideas and decisions about the use of information sources in the process of planning teaching and learning. We also investigated the teachers' different arguments in relation to their judgements and decisions to introduce this unit, and the possible influence of information sources on this decision. We drew on the Teachers' Professional Knowledge framework to elaborate the instruments for collecting information and the category systems, and to interpret the results. Three progression hypotheses of increasing complexity about the evolution of teachers were stated. We introduce hypotheses about teachers' progression in the use of information sources when planning the teaching and learning process, in which we differentiate information sources from knowledge sources. Differences between textbooks, referring to unit structure and content treatment, showed two tendencies in the teaching of chance and probability. The first is associated with traditional didactic models and is characterised by the predominance of classical Laplacian probability, where the study of outcomes is independent of the study of random generators. The second tendency is linked to technological didactic models and gives priority to frequentist probability; here outcomes become a tool to facilitate the quantification of probability, taking into account the obstacles and errors that can appear in building this notion. These differences reflect tendencies in the textbooks' underlying didactic models and provide information about two possible perspectives on teachers' planning of these units. The analysis of teachers' arguments suggests a profile of traditional intervention, in which the main information source is the textbook, which facilitates the teachers' selection and sequencing of content. Teachers do not consider students' conceptions as a source of information about their previous knowledge. Two of the teachers showed some evolution towards a spontaneous model, characterised by considering different sources of information, such as courses and journals, that allowed them to introduce some innovations in the teaching and learning process to promote students' experimentation and participation. These teachers introduced these innovations in a non-systematic manner, after reflecting on the needs detected in the development of their teaching interventions. The third teacher showed some evolution towards a technological didactic model, making more systematic plans for the needs detected in the evaluation of students' knowledge and trying to help students overcome their errors. Two teachers did not introduce the units related to chance and probability, citing the tradition in mathematics teaching, scientific determinism, and lack of time to complete the curriculum. The third teacher introduced these units from a classical Laplacian perspective, influenced by the use of the textbook. We conclude from our results:
- the potential of the Teachers' Professional Knowledge framework to connect different fields: probability, sources of information, and the planning of the teaching and learning process;
- the necessity of distinguishing between sources of information and sources of knowledge;
- the usefulness of analysing the text structure and the mathematical and probabilistic content to identify tendencies in the textbooks' underlying didactic models;
- the usefulness of content analysis of textbook activities to produce information about the capacities that students can develop, the interactions between students, and possible obstacles in the learning of probability.

  • No general theory has been formulated to show the interrelations among a collection of variables that are related to statistics anxiety. The present study attempted to develop a comprehensive model that would predict statistics anxiety from several dispositional, situational, and environmental antecedents derived from the current literature. Two hundred forty-six college students who were enrolled in introductory statistics courses completed a survey packet that included a set of questions and five standardized assessment instruments measuring statistics anxiety, mathematics anxiety, attitudes toward statistics, test anxiety, and general anxiety. Extensive preliminary data screening assured the appropriateness of the data for parametric statistics. Independent t-test results showed significant differences between low- and high-anxious students in terms of attitudes toward statistics, test anxiety, mathematics anxiety, general anxiety, previous mathematics experience, satisfaction, and pace. A direct discriminant function analysis was used to discriminate between low- and high-statistics-anxious students. A significant discriminant function, based on attitudes toward statistics and test anxiety, classified the groups correctly approximately 80% of the time. Five measurement models and one structural model were specified, identified, estimated, and tested. Results showed that the modified structural model did not fit the data well. However, the dispositional and situational antecedents models fit the data well. The original environmental antecedents model was modified to fit the data. In the final model, the dispositional and situational antecedents models contributed significantly to statistics anxiety. The environmental antecedents model was not a significant contributor; however, it was significantly related to the other variables in the model. The dispositional antecedents model alone accounted for 58%, and the situational antecedents model alone accounted for 23%, of the variance in statistics anxiety scores. The present study showed that statistics anxiety is a complicated construct that is difficult to measure and investigate. The findings also suggest that personality-related factors may be among the most important antecedents of statistics anxiety. More studies are needed to clarify the construct of statistics anxiety and its relationships with other variables.
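To make the classification step concrete, the following Python sketch runs a linear discriminant analysis with scikit-learn on fabricated attitude and test-anxiety scores. It only illustrates the kind of analysis reported; none of the numbers correspond to the study's data.

```python
# Hedged sketch of a discriminant function analysis: predict low- vs high-anxiety
# group membership from two simulated predictor scores, then check accuracy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, size=n)                     # 0 = low anxiety, 1 = high anxiety
attitudes = rng.normal(60 - 10 * group, 8, size=n)     # hypothetical attitude scores
test_anxiety = rng.normal(40 + 12 * group, 9, size=n)  # hypothetical test-anxiety scores
X = np.column_stack([attitudes, test_anxiety])

lda = LinearDiscriminantAnalysis()
lda.fit(X, group)
accuracy = lda.score(X, group)
print(f"classification accuracy on these simulated scores: {accuracy:.0%}")
```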

  • The purpose of the study was to investigate the effects of the spreadsheet on achievement in selected statistical topics and on the beliefs about statistics of undergraduate students in an elementary statistics course. This study was instituted as part of the investigator's effort to enhance the statistical experience of undergraduate students. The study sought answers to the following questions: Does the use of the spreadsheet affect students' achievement on every topic selected for the study? Is the level of previous computer experience of students related to their achievement on the topics selected for the study? Does the use of the spreadsheet affect students' beliefs about statistics? Does students' achievement on the topics taught with the spreadsheet approach differ from achievement on the topics taught without the spreadsheet? The investigator conducted the experiment with students in one class at the beginning of the Fall 1999 academic semester in a community college setting. The investigator selected and taught eight Elementary Statistics course topics. The eight selected topics were grouped into two categories of four: topics taught with no spreadsheets and topics presented to students with the aid of spreadsheet files. During class sessions, students used computer labs and the spreadsheet program, Excel 5 and/or Excel 98. The instructor developed the curricular units and the test instruments. Test gains showed that the spreadsheet approach to instruction was positively related to student achievement on every topic selected for the study. In addition, students' achievement on tests of topics taught with the spreadsheet was greater than their achievement on tests of topics taught with no spreadsheet. The use of the spreadsheet files also seemed to affect students' beliefs about statistics: the analysis of students' responses to the statements on the questionnaire indicated that students were more in agreement with its statements at its second administration, at the end of the study, than at its first administration.

  • Technology-based problem-solving models are being successfully implemented in the mathematics curriculum. This study focused on enhancing problem-solving ability by supplementing traditional instruction in statistics with metacognitively-cued, computer-coached activities. The purposes of this study were to investigate: (1) differences in the ability to solve basic statistical word problems when comparing a metacognitively-cued, computer-tool (MCCT) group to a metacognitively-cued, computer-coached (MCCC) group; (2) differences in metacognitive ability while solving basic statistical word problems when comparing a MCCT group to a MCCC group; and (3) the relationship between problem-solving ability and metacognitive ability while solving basic statistical word problems. A sample of 120 community college elementary statistics students was divided into four sections, with a MCCT and a MCCC group at one time period and a MCCT and a MCCC group at a different time period. Treatments lasted eight weeks of a summer semester. The dependent variables were the ability to solve basic statistical word problems, as measured by a teacher-made test, and the ability to think metacognitively while solving the word problems, as measured by the Assessment of Cognition Monitoring Effectiveness (ACME) procedure. The students were also measured on the quality of their responses to written metacognitive cues while solving a basic statistical word problem before each of the exams during the experiment. The dependent variables were measured at five different times throughout the semester. It was expected that the metacognitively-cued, computer-coached groups would show the most improvement and the metacognitively-cued, computer-tool groups the least improvement on all measures. The data analysis revealed that the apparent difference in problem-solving ability between the MCCT groups and the MCCC groups grew as the study progressed, reaching statistical significance at the last testing, with the MCCC groups scoring higher. The MCCC groups also demonstrated significantly higher metacognitive ability. In addition, significant correlations were found between problem-solving ability and metacognitive ability, ranging from .28 to .66. The presence of some significant teacher effects suggests that the effectiveness of coaching software may be affected by instructional strategy.

  • The purpose of this study was twofold. The first purpose was to design a web-based graduate-level statistics course for MBA students and to analyze the attitudes of the online students toward the course. The second purpose was to compare the students taking the course online with the students taking the course in a traditional classroom setting. Achievement, along with three mediating variables, was investigated. The three mediating variables were prior computer experience, prior mathematics knowledge and experience, and attitude toward the subject of statistics. The participants were forty-two graduate students in their first year of the MBA program; thirteen students took the class online and twenty-nine attended a traditional class. Students' attitudes toward learning in an online environment were favorable overall. Differences were found in attitude toward the subject of statistics and prior computer experience; however, no causal relationship between class format and achievement was detected. Students who learned in an online environment achieved comparably to students learning in a traditional classroom. The online course developed for this research can be used as an educational equivalent of the managerial statistics course taught in a traditional classroom setting.

  • In this research we are interested in the teaching and learning of normal distributions in an introductory data analysis course. The research is based on a theoretical framework in which two different dimensions (institutional and personal) of meaning and understanding are considered for mathematical objects. We are interested in the following questions: 1. What is the institutional reference meaning of the normal distribution in a traditional introductory course of data analysis? In Chapter 4 we describe an empirical analysis of 11 university textbooks, from which we determine the main elements of meaning (problems, practices to solve these problems, representations, concepts, properties, and types of arguments) that are presented in these textbooks in relation to the normal distribution. 2. How should the teaching of the normal distribution be organised to take into account the use of computers? A teaching sequence for the normal distribution, which takes into account the results of the previous analysis and incorporates the use of computers, is described in Chapter 5. The main differences introduced by the use of computers with respect to the reference meaning and the students' predicted activities in the different tasks are analysed. 3. What difficulties arise when developing this teaching; in fact, how is the teaching carried out? The observation of the teaching sequence in two successive academic years (1998-99 and 1999-2000) and the interactions between the lecturer and the students in the different sessions are analysed in Chapter 6. The main points of difficulty and change predicted in the teaching are described. 4. How does the teaching work for students? What are their difficulties? What do they learn (the evolution of personal meaning throughout instruction)? A total of 117 students were sampled. In each session the students, working in pairs, produced written documents with their solutions to open-ended tasks. These documents were analysed to identify correct and incorrect elements of meaning that students used in their solutions. The progression in learning was clear as regards the theoretical concepts and the use of software, and less so as regards the methods of solution or real data analysis activities. 5. What is the students' personal knowledge after teaching? We used two different instruments to assess students' learning: (a) a questionnaire, and (b) a data analysis activity on a new data file to be solved with the use of computers. The data analysis showed that the majority of students were able to understand isolated concepts associated with the normal distribution (on average each student gave 70% correct answers on the questionnaire). By contrast, there was great difficulty in integrating these elements to solve real data analysis problems (only 40% of students succeeded in the open task). We conclude that data analysis is a high-level activity that is difficult to teach in the time available for an introductory course, and that the main aim in these courses should be to train "users of statistics". Finally, to complement our work, we compare the main differences in learning between students with and without previous statistical knowledge.
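As an example of the kind of computer-based task such a course might pose, the Python sketch below fits a normal model to a hypothetical variable and compares observed proportions within one and two standard deviations with the proportions the model predicts. The data and cut-offs are invented, not taken from the course materials.

```python
# Small sketch of an informal normality check: compare observed proportions with
# the proportions a fitted normal model predicts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=100, scale=15, size=500)   # hypothetical test scores

mu, sigma = data.mean(), data.std(ddof=1)        # fit the normal model
for k in (1, 2):
    observed = np.mean(np.abs(data - mu) <= k * sigma)
    expected = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(f"within {k} sd: observed {observed:.2f} vs normal model {expected:.2f}")
```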

  • The main questions addressed by this study were: (a) what do the knowledge structures of introductory statistics students look like, (b) how do these knowledge structures change as the semester progresses, and (c) are there any similarities or differences among different students' structures? Nine graduate students enrolled in an introductory educational statistics course agreed to meet with me one-on-one once every three weeks during the term they were taking the course. In each session, we discussed course concepts and how the student believed they related to each other. Each session included the concepts we had previously discussed plus new concepts taught in class since our last session. The final session included a discussion of 45 statistical concepts and their relationships. The theoretical perspective I chose for this study was Anderson's ACT-R theory. In particular, I am interested in the idea that students learn more than just declarative knowledge (facts and definitions) and mechanical knowledge (procedures and processes). Anderson and others (e.g., Jonassen, Beissner & Yacci; Byrnes) argue that there is a third type of knowledge students actively build as they learn: structural or relational knowledge. This third type of knowledge serves to relate all of the declarative and mechanical knowledge students learn. My thesis is that this third type of knowledge is an indication of a student's understanding of the material they are learning. If these structures are not integrated or complex, then neither is the student's understanding. The main idea here follows the current trends in statistics education research: students need to know more than what the mean is or how to calculate it (declarative and mechanical knowledge, respectively); they also need to know what the mean tells us about a set of data and why it is an important indicator of a sample's central tendency. They also need to understand, for example, why we cannot calculate a mean for nominal and ordinal variables such as gender or class rank. The results of my dissertation demonstrated students' ability to organize course concepts in a way that is meaningful to them. With nine different organizations, I also present evidence that even though students take the same course, with the same instructor and the same textbook, they build different understandings (constructivism is also an important theoretical perspective captured by these data). Finally, with five different organizations collected over an entire semester, I present evidence that students' organizations do change. Future research needs to explore these organizations in more depth to determine how students develop them, what might lead students to change them, and what these organizations mean as an indicator of students' statistical knowledge.
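The point about means of nominal variables can be illustrated with a tiny Python example (the scores and gender labels below are hypothetical): a mean is a sensible summary for numeric data, while a nominal variable only supports counts or a mode.

```python
# Means make sense for numeric variables; nominal variables need counts/modes.
import statistics
from collections import Counter

scores = [72, 85, 90, 65, 78]           # numeric: the mean summarizes the center
genders = ["F", "M", "F", "F", "M"]     # nominal: categories have no numeric order

print("mean score:", statistics.mean(scores))
print("gender counts:", Counter(genders))   # an appropriate summary for nominal data
try:
    statistics.mean(genders)                # computing a 'mean gender' is not defined
except TypeError as err:
    print("mean of a nominal variable fails:", err)
```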
