One way of assessing students' grasp of forecasting methods via assignments is to give each student a real or simulated data set and require them to forecast future values. However, checking the accuracy of the calculations across the host of possible methods can be onerous. One solution is to make part, or all, of the assessment depend on the accuracy of the forecasts obtained. This mirrors real life, where forecasts are judged not by the method used but by how accurate the predictions turn out to be. This paper investigates how this might work with an actual example. Using simulated data from a model that incorporates trend, seasonality, an Easter effect and randomness, we use a function of the mean squared error of the forecasts to determine the final mark for a variety of methods. The results indicate that students who put in more work, and/or fit better models, would obtain better marks.
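As a rough illustration of the kind of setup the abstract describes, the sketch below simulates a monthly series with trend, seasonality, an Easter effect and noise, and converts the mean squared error of a set of forecasts into a mark. It is not the authors' actual simulation or marking formula; all parameter values, the crude Easter proxy, and the exponential mark function are assumptions chosen only for demonstration.

```python
# Hedged sketch: simulated series with trend, seasonality, Easter effect and noise,
# plus an assumed MSE-to-mark mapping. Not the paper's actual model or formula.
import numpy as np

rng = np.random.default_rng(42)

def simulate_series(n_months=60, trend=0.5, season_amp=10.0,
                    easter_effect=15.0, noise_sd=5.0):
    """Simulate a monthly series with trend, seasonality, an Easter bump and noise."""
    t = np.arange(n_months)
    seasonal = season_amp * np.sin(2 * np.pi * t / 12)
    # Crude Easter proxy: a bump in March or April, alternating by year --
    # a stand-in for a proper Easter-date calculation.
    easter = np.array([easter_effect if (m % 12) == (2 + (m // 12) % 2) else 0.0
                       for m in t])
    noise = rng.normal(0, noise_sd, n_months)
    return 100 + trend * t + seasonal + easter + noise

def mark_from_forecasts(actual, forecast, max_mark=100, scale=50.0):
    """Map forecast MSE to a mark: lower MSE gives a higher mark (assumed formula)."""
    mse = np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2)
    return max_mark * np.exp(-mse / scale)

# Example: hold out the last 12 months and mark a naive seasonal forecast.
series = simulate_series(72)
train, test = series[:-12], series[-12:]
naive_seasonal = train[-12:]          # repeat the last observed year
print(round(mark_from_forecasts(test, naive_seasonal), 1))
```

Under this scheme, a student whose fitted model captures the trend, seasonal pattern and Easter effect would achieve a lower forecast MSE and hence a higher mark than one using, say, the naive seasonal forecast above.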