Design Exercise 6-1 Grading

This is how we graded Design Exercise 6-1.

We appreciate that students really followed the spirit of the assignment. Almost everyone thought about how to address these “hard data types” and conveyed designs without actually creating a “real visualization with data.”

We graded each of the 8 different visualizations separately, scoring them on the usual A=90 scale. We took the average of those 8 scores. We then added a fudge factor (since, in some rare cases, the whole of the assignment was more than the sum of the parts); for most people this was zero. We then added a “correction” for the critiques. Your grade is the sum of these three components.

Either Cat or I graded each assignment. Some we graded together; others we each graded independently. If both of us graded yours, you get both sets of comments, but the higher of the two scores.

For the critiques, you received -5 if you did not turn them in at all. You received -1 for each critique that was bad and +1 for each exceptional critique, so if you turned them in, the correction falls between -3 and +3. In most cases the critiques were good or great, so the “correction” score is 0.
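Putting the pieces together, the overall formula can be sketched as follows. This is only an illustration of the arithmetic described above; the function and variable names are hypothetical, not the actual grading script.

```python
def final_grade(vis_scores, fudge=0.0, critique_correction=0.0):
    """Hypothetical sketch: average the 8 per-visualization scores,
    then add the fudge factor and the critique correction."""
    assert len(vis_scores) == 8, "Exercise 6-1 had 8 visualizations"
    return sum(vis_scores) / len(vis_scores) + fudge + critique_correction

# Example: all eight visualizations scored A (90), no fudge, critiques fine.
print(final_grade([90] * 8))  # 90.0
```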

For grading each visualization, we tried to appreciate the design without judging the artistic ability of the designer. We selected each grade from the following scale:

  • A+ (95) - truly exceptional design and rationale
  • A (92) - the design has clear effectiveness and rationale and shows creativity
  • A (90) - the design shows some thought and creativity to address the task, and thoroughness in the explanation and rationale
  • AB (88) - the design shows some thought and creativity to address the task, or thoroughness in the explanation and rationale
  • AB (85) - the design and rationale show thought and completeness
  • B (82) - the design and rationale are complete and acceptable
  • BC (77) - design and rationale missing some key aspect
  • F (60) - clearly unacceptable
  • F (0) - not turned in

For each visualization, we selected some checkboxes from a list. Here are some explanations of what the boxes mean:

  • Artistic: “Artistic Merit (well done visually)”
  • Design Succeeds: “Effective at communicating the design”
  • Task: “Defines a task”
  • Addresses: “Argues for why design addresses the task”
  • simple_task: “Simple Task”
  • good_task: “Good (Creative) Task”
  • beware_claim: “Beware of claims (claim of something good which is suspect)”
  • Notable: “Exceptional/Notable - good for future examples”
  • literature: “Good use of design from the literature”
  • self-critique: “Good Self-Critique”
  • Good_Comp: “Good Comparison (with Baseline or other)”
  • Reduces: “Does a data reduction to make the problem more tractable”
  • Substandard_Rationale: “Substandard Rationale”
  • No_Comparison: “Does not give comparison (requires comparison with baseline)”
  • Rationales: “Includes tasks for Each Glyph Design (given and student) (MD1)”
  • Scales: “Rationale explains how it scales (MD2)”
  • –NO_BASELINE: “Doesn’t respond to baseline”
  • –NO_COMP: “No Comparisons”
  • extra: “Has Extra Designs”
  • Valid: “Valid representation of data”
  • no_multi: “Unclear how this shows that multiple airlines share the same route”
  • non-standard-tree: “Non-Standard Tree (not node link)”
  • tree_for_multi: “Incorrectly uses a tree diagram for something that isn’t a tree”
  • Creative_Design: “Creative design beyond a standard”
  • unsound_argument: “Unsound Argument (unclear effectiveness claims)”

We did label what type of visualization we thought each was, mostly for our own record keeping. Any kind of visualization could be good or bad.

We also had an input field to assess the variety in each section.

The individual grading elements are all named things like “s1a_checks”, which means “set visualization 1A, check boxes.”