(due before Monday, February 13)
Now that we know what a visualization is, and why we’re doing it, we can ask the question: how do we know if it’s any good?
There are, of course, lots of different answers, and which answer you get depends on the perspective you're coming from. These readings will try to get you to consider many different perspectives.
First, Tamara Munzner has thought about this deeply and has some interesting things to say about the many ways we, as academics, assess visualizations.
- Tamara Munzner. A Nested Model for Visualization Design and Validation. InfoVis 2009 (project page with pdf)
Of course, we can’t talk about “what is good” without consulting Tufte for his strong opinions (not that he would ever leave his opinions unstated).
- Edward Tufte. The Fundamental Principles of Analytical Design. In Beautiful Evidence. (protected pdf). In hindsight, this Tufte chapter is actually much better at the “how” of making a good visualization, and at trying to distill the general principles, than many of the others we’ve read. But it’s Tufte, so it’s still full of his opinions on “what is good.”
As a counterpoint to Tufte, I suggest a very provocative paper. A ton has been written about this paper, so we should go back to the original source.
- Bateman, S., Mandryk, R.L., Gutwin, C., Genest, A.M., McDine, D., Brooks, C. 2010. Useful Junk? The Effects of Visual Embellishment on Comprehension and Memorability of Charts. In ACM Conference on Human Factors in Computing Systems (CHI 2010), Atlanta, GA, USA. 2573-2582. Best paper award. DOI=10.1145/1753326.1753716. (get paper at the project page here)
I think this paper is important not just because its finding is contrary to what a lot of people wanted us to believe, but also because it has a lot of deeper ideas about what makes a visualization good, and about how we might be able to objectively measure some of those deeper kinds of “good.” Of course, many people find the conclusions of the paper blasphemous. Stephen Few has a very coherent discussion of it here. In some sense, I’d say it’s more useful than the original paper – but I would really suggest you look at the original first. While more level-headed than most, Few still has an agenda.
That leads to the last category: how to objectively measure those “deeper” kinds of good.
Chris North at Virginia Tech has been doing some very interesting work on trying to quantify how much “insight” visualizations generate. I actually recommend reading the shorter “Visualization Viewpoints” article, since it gives a better overview of the basic ideas. If you’re interested, you can go read the longer journal article that details a specific experiment.
- Chris North, “Visualization Viewpoints: Toward Measuring Visualization Insight”, IEEE Computer Graphics & Applications, 26(3): 6-9, May/June 2006. [pdf] (doi)
- Purvi Saraiya, Chris North, Karen Duca, “An Insight-based Methodology for Evaluating Bioinformatics Visualizations”, IEEE Transactions on Visualization and Computer Graphics, 11(4): 443-456, (July 2005). [pdf] (doi)
Here’s another variant of trying to really judge the “deeper” success of a tool:
- Ben Shneiderman and Catherine Plaisant. “Strategies for Evaluating Information Visualization Tools.” In Proceedings of the 2006 AVI Workshop on BEyond Time and Errors: Novel Evaluation Methods for Information Visualization (BELIV ’06). New York: ACM Press, 2006. (online) (official ACM page) (doi)
There is a lot here, but I think it’s really valuable.
The Munzner, Tufte, Bateman, and North readings are required for everyone. Read Munzner in detail; for the others, it’s mainly to get the gist. The Few discussion of Bateman is recommended to everyone. The other two readings are optional.
For now, we’ll stick with the same mechanism: discussing the readings on the Piazza page. You’ve been finding interesting things to say without much prompting, so I’ll just leave it at that.