DC3 Endgame

by Mike Gleicher on May 2, 2017

This post details how we’ll handle the hand-ins and grading for Design Challenge 3. The assignment posting has a lot of details on what to turn in, so you might want to read it again.

The deadlines in the assignment posting:

  1. Monday 5/8 – Noon – if you turn things in before this, we will check it
  2. Tuesday 5/9 – Noon – official deadline for turning things in on Canvas. Things turned in after this will be considered late and (possibly) penalized.
  3. Tuesday 5/9 and Wednesday 5/10 (afternoon) – we’ll do demos. If you turned your project in on time (i.e., before the demos) and something comes up in the demo, you can update your project (the update will be considered late – so ask at the demo whether it’s worth making the fix). If you turn in your assignment during or after the demos, it will be counted as late.
  4. Wednesday May 10th – we cannot grade things turned in after this.


We will do “demos” on Tuesday, May 9th and Wednesday, May 10th.

If you made a tool, a demo is pretty much required. If you really don’t want to show off what you’ve made, then we can try to look at what you turn in by itself, but it’s unlikely we’ll be able to test it.

If you didn’t make a tool, you are still welcome to schedule a demo so you can show off what you did and we can ask questions about it. It’s not essential if what you turn in is well documented and self-explanatory, but it gives us a chance to clarify things.

We will schedule demos for 20 minute slots, with 3 people in each slot (since not everyone will take the same amount of time). We expect to spend about 5 minutes with each one, but there is flexibility (some will be faster and some slower), and everyone can “load data” in parallel at the beginning.

Sign up for a time slot at: https://calendly.com/gleicher/cs765-demo/05-09-2017

Please try to take one of the available slots. If you really can’t make any of them, either (1) try to get by without doing a demo, or (2) send email and we can try to schedule something for Monday (5/8) or Friday (5/5).

More on Grading

The turn-in process and the things we’re looking for are detailed in the assignment, but here is the more specific process that we will use. Look at the list of 7 criteria (in “How will this be graded”) and the hand-in requirements.

For Tools:

  1. [Demo] A quick overview of the design – what are the basic ideas and goals (this will supplement what you write in the handin)
  2. [Demo] Demonstrations based on data of your choice (data sets provided in the repository) – for these, we expect that you’ll not only be able to show that your tool does something reasonable, but point out how it addresses some of the tasks you identify.
  3. [Demo] Demonstrations based on our data sets (data sets we’ll give you at the demo) – for these, we’ll want to see that your tool works on new data sets. Also, we’ll want to see if your tool helps find some of the things that we have “planted” in the data. (for example, we might make a data set where a person got sick, or where there is a bad assignment)
  4. [After Demo] Review of the documentation and handin – from this, we’ll assess your design documents (e.g., task analysis, design rationale). While we will probably look at your code to satisfy our curiosity, we will not penalize you if your code is ugly. We will care that your handin is complete (e.g., that there are instructions on how to run things). We probably won’t try to run people’s tools (which is why the demos are important).

For Sketches and Visualizations:

At the demo, we’ll try to get a quick overview of what you were trying to do. We’ll let you point out some of the kinds of things you see in the examples you generate. Then we’ll ask you about how it might look on other data (including some specific things like “what would it look like if a good student got sick? how easy would it be to spot?”). The idea is that it will give you a chance to let us know how the design would work in other situations, even if you don’t give us a tool that lets us try it.

For these kinds of assignments, our review of the hand-in after the demo will be more important.

Things we’ll look for (all assignment types) – see the list on the assignment, but this version is more grading-specific:

  1. Did you turn in all required parts of the assignment?
  2. Does your task analysis show that you’ve thought about what you are trying to show?
  3. Does your design address some of these tasks? Can you provide convincing examples / rationale that the design will work?
  4. Is the design creative / interesting?
  5. Does the design make good use of the principles we discussed throughout class (e.g. encoding choices, …)?
  6. Is the design communicated well? Even if the implementation is crude, or you just have a sketch, is there enough other information that we could imagine what it will look like in practice?
  7. Does the implementation work? Does it provide outputs that fit the design? Did it work on data sets you haven’t seen before?

There are, of course, trade-offs. If you have a tool that makes even a simple visualization but does so robustly (works on the new data sets), that’s an accomplishment – we’ll be impressed that you pulled it off in a short amount of time. If you just give a sketch, you’ll have to impress us with the creativity of your design, the thoroughness of your rationale, or the like.
