Survey Results

Thanks to everyone who took the DE12 post-mortem survey. It was quite interesting for me, and it gave me a lot of food for thought on how to improve things - even though it is too late to make changes for this year.

(although, it does reassure me that some of my last-minute changes were good)

If you’re curious, here are the survey results (and my takeaways)…

Critiques

About giving critiques…

  • 59% (37/63) of students said “I think writing critiques helps me understand how to do visualization better”
  • 38% (24/63) of students said “I don’t think I got much out of writing critiques, but feel like I gave useful feedback to classmates”

I still believe that writing critiques is a good way to learn. And it seems like they provide some utility to the recipients (especially since the course staff cannot give timely feedback).

On receiving critiques…

  • 51% “It might have been useful had I needed to revise this visualization”
  • 35% “It was helpful in aiding my understanding of what I did (and learning visualization), even though I didn’t get to redesign my visualization”
  • 15% “It wasn’t useful” or “I didn’t look at it”

Well, you are going to revise these visualizations, so I guess that means most people will find them useful.

Unfortunately, I have no way to correlate this with the quality of the critiques or the visualizations (since it’s all anonymous).

Some visualizations are more interesting to critique than others (both in terms of learning from examining them and in being able to provide useful feedback). This is part of the reason we did so many: I was hoping that critiquing 6 different designs from 6 different authors would raise the probability that you would get at least some that were interesting.

Similarly, the quality of the written critiques varied a lot (although almost all seemed at least OK). Again, the larger numbers hopefully meant that you got at least some decent critiques.

Takeaway: this is an imperfect exercise, but I think it is useful. It’s tricky from a practical standpoint (it requires everyone to do things on time, and even then, the feedback can be too slow to be useful).

The CLO

24 students (6 groups) participated in the CLO.

23/24 students said “Most of my group were engaged and took the assignments seriously (they provided content for feedback and provided feedback)” (that’s good).

22/24 students said “Yes - the CLO was helpful” - only 2 said “No - I think the CLO could have been helpful, but the interactions with my group didn’t work out”. No one said “No - I think the CLO was a failed experiment”.

There were some interesting comments about the CLO - in particular, students who weren’t part of it regretted the decision. I don’t know what to do with this: forcing people to participate breaks the “opt-in, everyone wants to be there, so they take it seriously” aspect (which, admittedly, is just an ideal).

Takeaway: I view the experiment as a success, and will try it again. I might try the “small groups on a tight schedule” format with more of the class early in the semester, so that students can be better informed as to whether it works for them. I don’t know if I want to force all students to turn around critiques in 2 days.

Tools

51 students (81%) used Tableau (although many also used other tools).

33 students (more than half) did some Python programming.

In the comments (and in conversations with people outside of class), Tableau comes up a lot.

Takeaway: Tableau is a tricky design element of the class. I fear that I either need to “lean in” more (spend more time “teaching” it) or provide more viable alternatives. It’s hard to say “use whatever tool you like” when most people don’t have the experience to pick tools.

Questions / Data

Students were split on having to come up with their own questions:

  • 34 students (54%) said “I would have preferred to be given the questions”
  • 34 students (54%) said “I liked being able to pick questions”

Yes, the numbers overlap: 34 + 34 = 68 answers from 63 students means at least 5 people both liked picking questions and would have preferred to be given them.

There were many interesting comments on this point as well.

As far as the data set:

  • 50 (79%) - “Yes - it was appropriately challenging and familiar enough”
  • 12 (19%) - “No - it was too hard to find interesting things”
  • 1 (2%) - “No - it was too hard to work with”

Takeaway: Finding “interesting” stories is an important, but separate, aspect of doing data science/visualization. Right now, we do “my stories” in the early phases (not final visualizations), and “your stories” at the end. Having to come up with 4 interesting stories might be a bit much.

Last year, we did the “story finding” aspects collaboratively - so it was kind of a mix. Students could find their own stories, but they didn’t have to. I think that is an effective strategy.

What to do with class

Students like talking through examples (75%); some would have preferred more “redesign” exercises (17%). Few (8%) would have preferred traditional lectures.

As for what to do more of:

  • 62% Discussion of examples
  • 48% More complicated applications
  • 48% In-class critiques of student work
  • 41% More about implementation
  • 29% Advanced topics / research
  • 27% In-class design exercises
  • 27% Discussion of basics

Advanced topics and more complicated applications generally go together.

Takeaway: People like examples. There should be more of them, possibly mixed with the “complicated applications” piece.

This gets at the existential question of the class: should it be a “traditional grad class” focused on research and reading papers, or a “practically oriented” class focused on providing skills? Doing both is hard, so the class has evolved towards teaching students to make visualizations. Some of this follows the student demographic: increasingly, the class is not made up of (CS) research-oriented students.

In the future, I am not sure how to balance between “I want students to be able to make visualizations” and “I want students to be aware of the important/current research”. I feel that the “foundations” are the common link between them.

What to do with all this?

In a larger class (and this isn’t even that large a class), I don’t get as much of a sense of what students are experiencing and wanting. These little surveys are really helpful. If nothing else, they make me realize how disconnected I am.

For this semester… this gives me evidence to support my choices (CLOs and the re-do assignment).

For next year… this is good for my “continual improvement”. I will make adjustments, and I plan to keep the emphasis on examples.

I am stumped as to what to do about Tableau.