- We are attempting to make a sketch-based interface that relies on motion graphs. The user ought to be able to draw a spline in an environment, and a plausible motion that adheres closely to the path of the spline should be created. Different “brush” types will allow the user to specify additional movement modalities at certain points in the motion (e.g. “this section of the motion should include jumping” &c.). The motion database used to generate these motions will be drawn from an annotated subset of the CMU motion database. We plan to use unidirectional A* to search the motion graph, with edge costs drawn from our distance matrix.
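To make the search step concrete, here is an illustrative Python sketch (not our actual implementation) of unidirectional A* over a fragment graph, where transition costs come from a precomputed distance matrix. The graph/matrix representations and the heuristic `h` are assumptions for the example:

```python
import heapq

def a_star(graph, dist, start, goal, h):
    """Unidirectional A* over a fragment graph (illustrative sketch).

    graph: dict mapping fragment id -> list of successor fragment ids
    dist:  distance matrix; dist[a][b] is the cost of transitioning
           from fragment a to fragment b
    h:     admissible heuristic estimating remaining cost to the goal
    Returns the lowest-cost path as a list of fragment ids, or None.
    """
    frontier = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            g_next = g + dist[node][nxt]
            if g_next < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_next
                heapq.heappush(
                    frontier, (g_next + h(nxt), g_next, nxt, path + [nxt])
                )
    return None
```

The same skeleton extends to spline-following by folding a path-deviation term into the edge cost or the heuristic.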
- In two weeks we expect to have the full motion graph, spline-based constraints, and spline drawing/picking in place. We may or may not have multiple motion brushes, since the CMU database is not uniformly annotated.
- Roughly the same success criteria as before: a success is an application that allows the user to draw a spline and have a plausible spline-following motion result. Having these interactions occur in close to real time would be another mark of success, but it’s not vital to the execution of the project.
- We have the infrastructure in place to load arbitrary databases of motion files, automatically create fragments from those motions, and then make connections at local minima of the distance between frames. We’ve also got tools in place to visualize our fragment-based motion graphs (see below). We’ve got a unidirectional A* implementation, but it’s not yet hooked into the rest of the project.
- We’re using the CMU database, and we’ve collected over 1 GB of motion files. If we stick to just a few different motion brushes (e.g. running, walking, jumping) then we’ll have more than enough examples to generate spline-following motions in that space.
- We intend to have the “motion screensaver” aspect of motion graphs, as well as the A* infrastructure, in place by the end of today. In the immediate future we’ll also have the ability to gauge the extent to which a motion violates given constraints. By the end of next week we should be able to provide a spline and generate a motion, which leaves the last week for the spline-based UI. Adrian will take the lead on spline drawing and picking; Aaron is tackling constraints; Reid is tackling graph generation; Michael is handling A* (and a potential bidirectional variant). As these pieces come together our code will be less modular and we’ll have to collaborate more closely.
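For the spline-following case, one simple way to gauge how badly a motion violates its path constraint is the average distance from the motion’s root positions to the drawn spline. This is a hypothetical Python sketch; the `(x, z)` root format and dense spline sampling are assumptions, not our final design:

```python
import math

def spline_deviation(motion_roots, spline_points):
    """Average distance from each root position to its nearest spline
    sample (illustrative constraint-violation measure).

    motion_roots:  list of (x, z) ground-plane root positions per frame
    spline_points: densely sampled (x, z) points along the drawn spline
    """
    total = 0.0
    for x, z in motion_roots:
        # Nearest-sample distance approximates distance to the spline.
        total += min(math.hypot(x - sx, z - sz) for sx, sz in spline_points)
    return total / len(motion_roots)
```

A score of zero means the root stays on the spline; larger scores could be penalized in the A* cost.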
Phase 2 Status: Adrian, Michael, Reid, and Aaron
Some additional clarifications:
1) The random-walk graph we are generating is built by loading in a folder of motions, splitting each motion into equally sized fragments (the fragment length is currently a hard-coded parameter), and then creating a connection whenever the distance between one fragment’s end state and another fragment’s start state falls below a certain threshold (also a parameter). This algorithm is O(n^2) in the number of fragments. We’re looking at tweaking the similarity metric to take derivatives, smoothness, and motion type into account, as well as constraint adherence.
2) The graph visualizer updates on the fly as more motions are added and more connections are created.
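The fragment-splitting and thresholded-connection step described in (1) can be sketched as follows. This is an illustrative Python outline, with motions modeled as frame lists and `distance` standing in for whatever frame-similarity metric is used; all names here are hypothetical:

```python
def build_fragment_graph(motions, frag_len, threshold, distance):
    """Split motions into fixed-length fragments and connect every
    fragment pair whose end/start states are within `threshold`.
    As noted above, this is O(n^2) in the number of fragments.

    motions:   list of motions, each a list of frames
    distance:  callable giving a scalar distance between two frames
    """
    fragments = []
    for motion in motions:
        for i in range(0, len(motion) - frag_len + 1, frag_len):
            fragments.append(motion[i:i + frag_len])

    edges = []
    for a, frag_a in enumerate(fragments):
        for b, frag_b in enumerate(fragments):
            # Connect a -> b when a's last frame resembles b's first.
            if a != b and distance(frag_a[-1], frag_b[0]) < threshold:
                edges.append((a, b))
    return fragments, edges
```

Tightening `threshold` trades graph connectivity for transition quality, which is why it is left as a parameter.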