Student evaluations of teaching have increased in importance to universities in Australia over recent years due to changes in government policy. There has been significant debate in the literature as to the validity and usefulness of such evaluations, and as to whether the students who respond to the evaluations are representative of the student population. A potential invalidating issue is self-selection in the evaluation process. In this paper, we consider student evaluations of a large first-year business statistics subject that had 1073 eligible students enrolled across four campuses at the time of the evaluation. The study is based on the 373 students (34.8%) who responded to the survey, and on their final results. The evaluations were open for a period of six weeks leading up to and just after the final exam. The study examines the student population in detail, identifying attributes such as gender, home campus, course of study, domestic or international status, and Commonwealth Supported Place or full-fee-paying status, and then maps these results to those of the students who responded to the survey.
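As an illustration only, the following is a minimal Python sketch of the kind of representativeness check the abstract describes (comparing the attribute distribution of survey respondents with that of the eligible population). The attribute counts below are hypothetical placeholders, not the paper's data, and the chi-square goodness-of-fit test via `scipy.stats.chisquare` is one possible approach rather than the authors' stated method.

```python
from scipy import stats

# Figures reported in the abstract: 373 respondents out of 1073 eligible students.
eligible = 1073
respondents = 373
print(f"Response rate: {respondents / eligible:.1%}")  # approximately 34.8%

# Hypothetical gender counts, used purely for illustration --
# the paper's actual attribute breakdowns are not reproduced here.
population_counts = {"female": 590, "male": 483}   # sums to 1073
respondent_counts = {"female": 230, "male": 143}   # sums to 373

# Chi-square goodness-of-fit test: do respondent proportions for this
# attribute match the proportions in the eligible population?
observed = [respondent_counts["female"], respondent_counts["male"]]
expected = [respondents * population_counts[g] / eligible
            for g in ("female", "male")]
chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}")
```

The same comparison could be repeated for each attribute (home campus, course of study, domestic/international status, fee status) to assess whether self-selection has skewed the respondent group.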