Teachers often come to our professional development programs thinking that statistics is about mean, median, and mode. They know how to calculate these statistics, though they often don't have robust images of what they mean or how they're used. When they begin working with computer data analysis tools to explore real data sets, they are able to see and manipulate the data in a way that hasn't really been possible for them before. They identify particular points on graphs and can interpret what their positions mean. They notice clumps and gaps in the data and generally find that there's a lot more to see in the data than they ever imagined. In addition, those exploring data in this way often ground their interpretations in the range of things they know from the contexts surrounding the data. They discover the richness and complexity of data.

Yet all this detail and complexity can be overwhelming. Without some method of focusing, organizing, or otherwise reducing the complexity, it can be impossible to say anything at all about a data set. How do people decide which aspects of a data set are important to attend to? What methods do they use to reduce the cognitive load of trying to attend to all the points? This paper will begin to describe some of the ways that people do this while using new, software-driven representational tools. In the process, we will begin to sketch out a framework for the techniques that people develop as they reason in the presence of variability.