Design Challenge 2: Product Review Exploration
Overview
In this assignment, you will build a tool to work with consumer product review data. In a sense, it’s a little backwards: we are telling you what the data is, and you need to tell us what the task is.
By consumer product reviews, we mean the kinds of product reviews you see on e-commerce sites such as Amazon. Users (people who have bought the products) give a rating (a number between 1 and 5) to different products. The very simplest form of the data is a collection of reviews, where each review is a triple: the product ID (nominal), the user ID (nominal), and a rating (quantitative, interval). You can have access to more information about each review if you want – the time the review was made, the name of the product, the category of the product, even the text of the review. However, you do not need to work with the richer data.
Even with the simple form of the data, there are rich questions to ask. For example, if you pick a product with 10 reviews and an average rating of 4, you might want to understand what caused this 4. Are there a few people who hated it? And if so, are these people who just hate everything (i.e., give low scores to all the products they rate, even if those products are often scored highly by others)?
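As a concrete illustration (not something you are required to do), here is a minimal sketch of that kind of check, assuming the simple triples have been loaded into a pandas DataFrame with user_id, product_id, and rating columns (the column and function names are just placeholders):

```python
import pandas as pd

def harsh_reviewer_check(reviews: pd.DataFrame, product_id: str) -> pd.DataFrame:
    """For one product, compare each reviewer's rating of it against that
    reviewer's average rating across everything they have reviewed."""
    # Average rating each user gives across the whole data set.
    user_avg = reviews.groupby("user_id")["rating"].mean().rename("user_avg_rating")

    # Just the reviews of the product in question.
    product_reviews = reviews[reviews["product_id"] == product_id]

    # A user whose rating here is far below their own average disliked this
    # product specifically; a user whose average is low dislikes everything.
    return product_reviews.join(user_avg, on="user_id")[
        ["user_id", "rating", "user_avg_rating"]
    ]
```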
Of course, the data becomes harder with scale – there are many products and many customers providing reviews. The questions to ask at scale aren’t ones that an individual consumer might ask. Are there weird patterns in the data? Are there reviewers who hate everything? Products that get lots of attention?
Your challenge is to pick some tasks and design visualizations that address them.
We are looking for designs that use visualization techniques to address “harder” tasks. For example, giving the average score (or even the score distribution) for a single product may address the “I want to know how well reviewed this product was” task, but it doesn’t use visualization to get at a richer question or answer. Successful solutions will do something interesting with the data at scale.
You may choose how many tasks and how many visualizations. You might choose a single, really hard task and a sufficiently ambitious visualization (in terms of design and implementation). Or you might choose a bunch of simpler tasks that are addressed with several simpler visualizations (preferably ones that are all linked together).
What this Design Challenge Is Not
There are many interesting analytics tasks one could imagine using this kind of review data for. For example, you might want to build programs that predict how someone might rate a product that they have not yet rated (this is called a recommender system). You might want to do natural language processing on the review text to classify reviews, for example to identify fake reviews or particularly useful reviews. However, for this assignment, we don’t want you to develop advanced “intelligent” analytics – we prefer that you focus on identifying tasks that involve a more direct connection to the data, where the main part of your work is to develop visualizations. (If you’re interested in doing more advanced analytics for these kinds of problems, talk to me.)
Your project will need to do some analysis of the data to get it into a form suitable for visual presentation. Given the scale of the data, you will probably want to do some aggregation and statistics. We just don’t want that analysis to be the focus of the project.
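For example, a first aggregation step might be no more than a product-level summary table that you then feed into whatever visualization you design. A sketch, under the same assumptions as above about the DataFrame and column names:

```python
# One row per product: how many reviews it has, its mean rating, and the spread.
product_summary = (
    reviews.groupby("product_id")["rating"]
    .agg(num_reviews="count", mean_rating="mean", rating_std="std")
    .reset_index()
)
```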
Project Phases – Due Dates
Phase 1: Task List (due Weds, Oct 16th) – You will turn in a list (as a PDF) describing tasks that you have thought of. Note: this is a hard deadline. We will compile the list of tasks and provide it to the class. This phase will be graded loosely (good, acceptable, not turned in) and will influence your final grade.
Phase 2: Sketches (due Weds, Oct 23rd) – You will turn in a PDF with some “sketches” of your design ideas, with a description of how they address some tasks. Note: we may not be able to provide feedback on the designs, but want to check to make sure that you are working on them. We also won’t hold you to your plans.
Phase 3: Signs of Life (due Weds, Oct 30th) – You will turn in “something” that will convince us that you have actually started to implement things. We also will ask you to describe your implementation plan. This will be graded check/no check/not turned in and will influence your final grade.
Phase 4: Final Handin (due Weds, Nov 6; deadline extended to Nov 13) – You will turn in your project. Explicit instructions will be provided. We will ask for your source code, but also sufficient pictures and description that we can appreciate your assignment without having to run it. If your system is interactive, you are encouraged to provide a video.
Phase 5: In-Class Demos (Wednesday, Nov 13 and 20th) – Some students will be invited to show off their designs in class. More details will be provided.
Cheating with Data
For many tasks you could imagine, you may not have the information you need in the data. For example, if you are interested in comparing a set of products, you may not know which products are comparable. You can make reasonable assumptions, provided you explain them. For example, you might expect that the user can provide lists of products they are interested in (e.g., if they are comparing similar items to decide which to buy, or have a list of their company’s products to see how they are doing overall).
There is also a chance that you might not be able to find anything interesting in the data. In some sense, that might be a good thing. For example, if your tool is meant to identify products that are being “trolled” (given negative reviews by a few people with questionable reviewing histories), there might not actually be any of this happening in the data sets you have. You may want to make a synthetic data set in order to test/demonstrate your designs. (e.g., add some fake users or fake products to the real data sets)
For this reason, we will provide some fake data sets (see Datasets on this page). We’re going to generate data sets that have interesting things to find.
If you make your own data, be sure to describe it in your writeup.
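For instance, one simple way to seed the “trolling” pattern described above is to append a few fake users who give 1-star ratings to a chosen product. A minimal sketch (the function name and fake IDs are purely illustrative, and we again assume the reviews live in a pandas DataFrame):

```python
import pandas as pd

def add_trolls(reviews: pd.DataFrame, product_id: str, n_trolls: int = 5) -> pd.DataFrame:
    """Append n_trolls fake 1-star reviews of product_id to the real data."""
    fake = pd.DataFrame({
        "user_id": [f"fake_troll_{i}" for i in range(n_trolls)],
        "product_id": product_id,
        "rating": 1,
    })
    return pd.concat([reviews, fake], ignore_index=True)
```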
Be clear what data sets you use for the pictures that you make. If you make an interesting testing data set, we may ask you to provide it. (if you make an interesting data set and are willing to share it with others, let us know)
Level of Expectations for Programming
When I asked at the beginning of the semester, no one asked for a “non-programming” version of the projects. In the past, making project descriptions work for both programmers and non-programmers led to project descriptions that were hopelessly complicated (see 2018’s DC2 for an example). That said, if you feel like your programming skills aren’t up to the task, see the end of this section.
You may use any programming tools that you want – we do not require any particular language or libraries. That said, if you pick something we are unfamiliar with, we probably cannot help you. Given the short time frame of the assignment, it may not be practical for you to learn a new programming language. However, we are willing to be sympathetic to people who are ambitious in learning new tools. We will ask you about this as part of the required self-assessment.
We ask that you turn in everything we might need to run your program (in terms of the source code). However, we understand that we may not have the right environment to run it. Therefore, we may ask you to give a demo on your own computer (if your program requires a demonstration).
Given the tight time constraints, it may not be possible for you to build fancy interactive systems. We would like you to focus on the visualization aspects of the project. For example: your program might help a user compare a set of products. In the ideal case, you’d have a nice interface where a user could search for products and select them, with the visualization being dynamically updated as new products are selected. Instead, your program might simply be a Python function that you call at the prompt with a list of product IDs as one of its parameters, and that draws a static picture.
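As an entirely hypothetical sketch of that minimal style (the function and column names are made up, and it assumes pandas and matplotlib):

```python
import matplotlib.pyplot as plt
import pandas as pd

def compare_products(reviews: pd.DataFrame, product_ids: list) -> None:
    """Draw a static picture comparing the rating distributions of a few products."""
    fig, ax = plt.subplots()
    for pid in product_ids:
        ratings = reviews.loc[reviews["product_id"] == pid, "rating"]
        # Fraction of this product's reviews at each star level, 1 through 5.
        dist = ratings.value_counts(normalize=True).sort_index()
        ax.plot(dist.index, dist.values, marker="o", label=str(pid))
    ax.set_xlabel("rating (stars)")
    ax.set_ylabel("fraction of reviews")
    ax.legend(title="product ID")
    plt.show()

# Called at the prompt with placeholder IDs, e.g.:
# compare_products(reviews, ["product_a", "product_b"])
```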
We see two major distinctions in the kinds of assignments you might build:
- Your program works on fixed data sets vs. your program can take new data sets. Of course, it is better if your program can take in new data sets (and we will provide new data sets for testing). If you choose to have a “brittle” program that only works on one data set that you’ve tried ahead of time, that is OK – but you might want to make up for that deficiency by having really compelling visualizations that we can easily imagine wanting to use on other data sets.
- Your program creates a static picture vs. your program allows a user to interact with the visualization. Interaction definitely gives you more options for creating interesting visualizations, but it is also hard. Also, we may not be able to try out the interactivity – so you will need to document it. If your program creates static visualizations, it is much easier for you to turn them in.
It is possible for you to not program your visualizations. You can make your designs by “hand”. This probably means you aren’t using real data. Hybrid assignments, where you do some analysis of the real data and then either draw things based on the numbers or manually augment initial visualizations, are allowable. For manual assignments, there will be a significantly increased level of expectation for the designs and the analysis of the designs. If you are creating things by hand, you should consider many tasks, multiple designs, and thorough descriptions of your rationale.
Datasets
The simplest form of the data is a list of reviews, where each review has 3 elements: a user ID, a product ID, and a rating (1-5). You can probably create a sufficient assignment using just this information. However, it may be more interesting if you consider more information about the reviews and products.
We will provide you with some data to work with. However, you are welcome to find your own and/or create synthetic data sets. The datasets are in a folder on Canvas (link to DC2 Data). These datasets are a mix of synthetic ones we generated and real ones derived from the Amazon data (below).
The “easy” form of each dataset is a CSV file with 4 columns: product ID, user ID, rating, and timestamp. The timestamp is when the review was written – you could use this to look for trends (e.g., a product got bad reviews initially, but seemed to get better). For some CSV files (notably the ones derived from Amazon data), we also provide metadata. The CSV metadata files have limited information (title, price, and a category). The JSON metadata files have much more information. The metadata is documented on the Amazon data web page. The files we provide have been “reduced” to contain only metadata for things that have been reviewed.
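As a sketch of how you might read the easy form and look for a trend over time (the file name and product ID are placeholders, and we assume the CSV has no header row, the columns appear in the order listed above, and the timestamps are Unix seconds – check the actual files before relying on any of this):

```python
import pandas as pd

# Load the 4-column "easy" form.
reviews = pd.read_csv(
    "reviews.csv",                                    # placeholder file name
    header=None,
    names=["product_id", "user_id", "rating", "timestamp"],
)
reviews["timestamp"] = pd.to_datetime(reviews["timestamp"], unit="s")

# 90-day rolling mean rating for one product: a quick way to see whether it
# started badly and got better (or the reverse) before designing around that.
one_product = (
    reviews[reviews["product_id"] == "some_product_id"]  # placeholder ID
    .sort_values("timestamp")
    .set_index("timestamp")
)
trend = one_product["rating"].rolling("90D").mean()
```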
Some of the datasets are things we created synthetically. We tried to “seed” them with interesting patterns to find.
The “real” data is derived from datasets provided by Prof. Julian McAuley at UCSD. The web page documents the data formats and data well. He provides some pretty large collections of reviews. The sets that we are providing you come from the “5-core” data (which means that every product has at least 5 reviews and every customer has given at least 5 reviews). The small set (Musical Instruments) has 900 products and around 10,000 reviews. The CDs and Albums set has 6443 products and over a million reviews. We may provide some others.
If you want more information and data – such as the reviews themselves (including the text of the reviews and whether they were rated as helpful), or some very large collections of reviews – you can use the source data files directly from the UCSD website.
Note 1: we will add additional data as the project progresses. At the beginning there may only be a few data sets.
What to Turn In, Hints, and Assessment Criteria
Phase 1: Task List (due Weds, Oct 16th) – You will turn in a list (as a PDF) describing tasks that you have thought of. Note: this is a hard deadline. We will compile the list of tasks and provide it to the class. This phase will be graded loosely (good, acceptable, not turned in) and will influence your final grade.
Good assignments will provide a range of possible tasks. They should consider a variety of different user types (e.g., consumers, brand managers).
Phase 2: Sketches (due Weds, Oct 23rd) – You will turn in a PDF with some “sketches” of your design ideas, with a description of how they address some tasks. Note: we may not be able to provide feedback on the designs, but want to check to make sure that you are working on them. We also won’t hold you to your plans.
Good assignments will show that you are thinking about how to apply visualizations to solve tasks.
Phase 3: Signs of Life (due Weds, Oct 30th) – You will turn in “something” that will convince us that you have actually started to implement things. We also will ask you to describe your implementation plan. This will be graded check/no check/not turned in and will influence your final grade.
Good assignments will make clear that you are making progress towards having something to turn in for the final deadline. A screenshot of your work in progress is one way to show this.
Phase 4: Final Handin (due Weds, Nov 6; deadline extended to Nov 13) – You will turn in your project. This is described below.
Phase 5: In-Class Demos (Wednesday, Nov 13 and 20th) – Some students will be invited to show off their designs in class. More details will be provided. Not all students will be invited. We will choose assignments that really benefit from being shown to class. We will ask you ahead of time if you are willing to do a demo. You can do your demo on your own laptop (please bring whatever adapter you need to plug into the HDMI projector).
The Final Hand-In
For your final hand-in, we ask you to provide (1) a PDF with documentation and images of the visualizations, (2) the source code for the software as a ZIP file, and (optionally) (3) a video showing off your system. Your PDF must be turned in via Canvas. If either of the other files is less than 10MB, you may turn it in via Canvas. Otherwise, put them on Box (all University students get infinite space accounts), share them with the TA, and put a note on Canvas. Alternatively, you may turn in your program by providing the course staff access to a GitHub repo (instructions to be provided).
Your documentation is the main thing we will look at. You should assume that we will not be able to run your program. Your documentation must include:
- Descriptions of the Designs and Their Intents: Describe your visualizations, their intended tasks, and the rationale for why the designs address the tasks. Provide pictures (screenshots). Remember, we probably will not be able to run your code – so you need to show it off in the document. Provide examples of how your designs make it easy to see the things they are supposed to help the viewer see.
- Use Case Evaluation: Show examples (e.g., screenshots with descriptions) that demonstrate that your designs really address the tasks they are meant to address. (this is part of the previous item, but is so important that I emphasize it separately)
- A Scalability Discussion: Describe how your designs scale to larger and harder data sets. Note that the scalability of your design may be different than the scalability of your implementation. Your prototype implementation may be too slow to work with millions of items, but the visualization design might scale (assuming a more sophisticated implementation).
- Information about the programs: This must include requirements (what languages and libraries) and instructions on how to run the program.
- Interaction: Be clear whether or not your visualizations are interactive. If they are interactive, describe the interactivity well enough that we can imagine how things work without actually running your program. Use lots of pictures. You may also add a video (not in the PDF).
- Data Sets: Be clear what data sets you use in your examples (any images and descriptions you turn in). Let us know if your program can read other data sets, and what its limitations are.
- Self-Assessment: Please give your honest assessment of your familiarity with the tools that you used, and how much of your energy for this project went into learning those tools. For example, you might say “I am an experienced Python programmer, and am used to having to learn new APIs, so picking up Bokeh wasn’t a big deal” or “I had never done any JavaScript programming, so I spent a ton of time working through a lot of tutorials to learn D3”.
We ask that you turn in “all” source code for your program, so that we would have a reasonable chance of being able to build/run it ourselves. There is a fine line between including obscure libraries that we might not have, and bundling up the whole universe. Use your judgment. Definitely include anything that you have written. And remember that in the writeup, we need instructions.
You are welcome to turn in a video (no longer than 5 minutes) demonstrating your system. Making a “good” video is hard, but a brief screencast video with voiceover can be a quick way to show off interactivity in the system. Please be careful about big video files: if the file is more than 10MB, do not upload it to Canvas (make it available to us via Box or something else).
If we cannot figure out what your program does (1) this is a bad sign as it means that your documentation is not good enough, so (2) we may ask you to give us a demo. This may be part of the in-class demos (if we believe that your program is interesting enough that everyone should see it), or we may schedule a time to meet with you.
Grading: Your final grade for DC2 will be based on your final handin. We will primarily be using the write-up to judge the submission.
If you did not do well on the initial phases of the assignment, we will assess a penalty explicitly. That is, we will give you a grade for the “quality” of your handed in assignment, and then assess a penalty.
We may give rewards to students who do in-class demos. These will generally be assignments that are getting As anyway.
Note that the assignment does explicitly put weight on how well you can document and provide examples for your design. It is not just an implementation exercise.
We will provide more information on grading standards closer to the deadline.