Readings 13: Evaluation

Evaluation is such a big and hard question. These readings will get at the key concepts.

Since you are (hopefully) preoccupied with the project, I am cutting the requirements to a minimum. This is hard, since the topic is so essential and foundational.

  1. (required)  Analysis (Chapter 4 from Munzner’s Visualization Analysis & Design) (Munzner-04-Validation.pdf 0.5mb)

    This is a variant of the nested model paper below. It gets the main points. The nested model is really influential in my thinking.

  2. (required)  Chris North, “Visualization Viewpoints: Toward Measuring Visualization Insight”, IEEE Computer Graphics & Applications, 26(3): 6-9, May/June 2006. pdf (doi; 4 pages)

    This is a good introduction to the challenges of visualization evaluation. And it’s short.

The next two readings are a little less formal - but valuable nonetheless. They more closely get at “what makes a good visualization.”

  1. (required)  The five qualities of great visualizations (Chapter 2 of The Truthful Art) (theTruthfulArtCh2.pdf 10.0mb)

  2. (required)  Graphical Integrity (Chapter 2 of Tufte’s The Visual Display of Quantitative Information) (1-VDQI-2-GraphicalIntegrity.pdf 62.2mb)

Optional

The nested model comes from a paper. I’ll say this is one of the most important papers in the entire field. For class, it’s redundant with the chapter (which came later). If you plan to work in the field, you should see the actual paper.

  • (optional)  Munzner, T. (2009). A Nested Model for Visualization Design and Validation. IEEE Transactions on Visualization and Computer Graphics, 15(6), 921–928. (pdf) (doi)

    Chapter 4 of Munzner’s book is based on this earlier paper that was quite influential (at least to my thinking). It is somewhat redundant with what is in the chapter, but for completeness, you might want to see the original.

In the past, I asked students to read examples of empirical papers that show the importance of careful experiment design and analysis. I picked these two last year. They are examples of very thorough empirical methods. I chose them less because of what they are about and more because of their methodology. However, the first one gets at some core issues about how we use statistics in experiments.

  • (optional)  Jouni Helske, Satu Helske, Matthew Cooper, Anders Ynnerman, and Lonni Besançon. 2021. Can Visualization Alleviate Dichotomous Thinking? Effects of Visual Representations on the Cliff Effect. IEEE Transactions on Visualization and Computer Graphics 27, 8 (August 2021), 3397–3409. DOI: https://doi.org/10.1109/TVCG.2021.3073466

    This paper gets at a common problem with statistical interpretation and how charts might change it.

  • (optional)  Dragicevic, P., & Jansen, Y. (2018). “Blinded with Science or Informed by Charts? A Replication Study.” IEEE Transactions on Visualization and Computer Graphics, 24(1) (Proceedings InfoVis 2017). DOI PDF

    I picked this one because it takes quite a simple question and tries to be painstakingly thorough with it. Moreover, it mainly tries to replicate an experiment that got a lot of press. While the authors didn’t set out to contradict the prior paper, it seems they got a different answer to the same question.

The “Chartjunk” paper would be required reading - except that we’ve already learned about it from Cairo, The Functional Art Chapter 3 (theFunctionalArtCh3.pdf 11.4mb). It’s worth looking at if you’re really interested in the topic. And the Few blog posting may be more valuable than the article itself.

  • (optional)  Bateman, S., Mandryk, R.L., Gutwin, C., Genest, A.M., McDine, D., Brooks, C. 2010. Useful Junk? The Effects of Visual Embellishment on Comprehension and Memorability of Charts. In ACM Conference on Human Factors in Computing Systems (CHI 2010), Atlanta, GA, USA. 2573-2582. Best paper award. project page w/pdf (doi). (10 pages)

    This is a pretty provocative paper. You can pick apart the details (and many have), but I think the main ideas are important. There is a ton written about this paper (those of the Tufte religion view it as blasphemy). Stephen Few has a very coherent discussion of it here. In some sense, I’d say it’s as useful as the original paper – in some ways, it’s more interesting – but I would really suggest you look at the original first. While more level-headed than most, Few still has a Tufte-ist agenda. Reading the Few article is highly recommended.

In case you cannot get enough of Tufte, you can get his ideas on what is good (Ch5) and bad (Ch6).

  • (optional)  Fundamental Principles of Analytical Design (Chapter 5 of Tufte’s Beautiful Evidence) (4-BeautEvid-5-FundamentalPrinciples.pdf 14.4mb)
  • (optional)  Corruption in Evidence Presentations (Chapter 6 of Tufte’s Beautiful Evidence) (4-BeautEvid-6-Corruption.pdf)

If you’re wondering whether the deceptions Tufte mentions actually fool people, here’s an empirical study of it:

  • (optional)  Pandey, A. V., Rall, K., Satterthwaite, M. L., Nov, O., & Bertini, E. (2015). How Deceptive are Deceptive Visualizations?: An Empirical Analysis of Common Distortion Techniques. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI ’15 (pp. 1469–1478). New York, New York, USA: ACM Press. (doi)

Some other stuff on evaluation:

  • (optional)  Lam, H., Bertini, E., Isenberg, P., Plaisant, C., & Carpendale, S. (2012). Empirical Studies in Information Visualization: Seven Scenarios. IEEE Transactions on Visualization and Computer Graphics, 18(9), 1520–1536. http://doi.org/10.1109/TVCG.2011.279

  • (optional)  Correll, M., Alexander, E., Albers Szafir, D., Sarikaya, A., Gleicher, M. (2014). Navigating Reductionism and Holism in Evaluation. In Proceedings of the Fifth Workshop on Beyond Time and Errors Novel Evaluation Methods for Visualization – BELIV ’14 (pp. 23–26). New York, New York, USA: ACM Press. ( http://graphics.cs.wisc.edu/Papers/2014/CAASG14)

    What happens when I let my students rant.

  • (optional)  Gleicher, M. (2012). Why ask why? In Proceedings of the 2012 BELIV Workshop on Beyond Time and Errors – Novel Evaluation Methods for Visualization – BELIV ’12 (pp. 1–3). New York, New York, USA: ACM Press. (link)

    Me ranting about how evaluation shouldn’t be an end unto itself. The workshop talk was much better than what I wrote.

  • You should read at least one of the papers by Michelle Borkin and colleagues on the memorability of visualization. These papers are very provocative, and provoked some people to be downright mean in attacking them. You don’t need to worry about the details – just try to get the essence. The project website has lots of good information.

    • (optional)  Michelle Borkin et al. What Makes a Visualization Memorable? pdf InfoVis 2013 (10 pages). This is another radical thought of “maybe Tufte-ism isn’t all there is – and we can measure it.” Again, we can quibble with the details, but they really are getting at something real here.

    • (optional)  Michelle Borkin et al. Beyond Memorability: Visualization Recognition and Recall. InfoVis 2015. (pdf; 10 pages)