1. “This paper presents a new motion model we term deformable motion models for human motion analysis and synthesis”… “[o]ur key idea is to apply statistical analysis techniques to a large set of annotated motion examples[.]”
2. We want to generate realistic human motion, which is already hard (even stitching together captured motion can produce foot-sliding and other easily perceived "oddities"). What's more, we want to let novices generate motion on the fly.
3. The approach: annotate the reference motions, then decompose the data into two components, one capturing geometric variation (poses) and one capturing timing variation. Use statistical learning to fit a probability distribution over both geometry and timing, then generate likely animations by searching for the most plausible model parameters that satisfy the user's constraints.
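The "learn a distribution, then optimize parameters against constraints" idea can be sketched in miniature. This is my own toy illustration, not the paper's actual model: a 1-D "root trajectory" stands in for full-body motion, PCA over synthetic examples stands in for the learned statistical model, and a ridge-regularized least-squares solve stands in for the constrained MAP optimization. All names and parameters here are invented for the sketch.

```python
import numpy as np

# Toy sketch: learn a low-dimensional statistical model of "motions",
# then find latent parameters that fit user constraints while staying
# plausible under the model. (My illustration, not the paper's method.)
rng = np.random.default_rng(0)

# Fake training set: 200 example "motions", each a 30-frame 1-D trajectory
# of the form a*t + b*sin(2*pi*t) with randomly varied (a, b).
t = np.linspace(0, 1, 30)
examples = np.stack([a * t + b * np.sin(2 * np.pi * t)
                     for a, b in rng.normal(1.0, 0.3, size=(200, 2))])

# "Learning": PCA via SVD gives a mean motion plus a low-dim latent space.
mean = examples.mean(axis=0)
U, s, Vt = np.linalg.svd(examples - mean, full_matrices=False)
k = 2                                   # latent dimension
basis = Vt[:k]                          # principal directions, shape (k, 30)
sigma = s[:k] / np.sqrt(len(examples))  # per-direction std dev

def synthesize(z):
    """Map latent parameters z (shape (k,)) to a full trajectory."""
    return mean + (z * sigma) @ basis

# User constraints: the trajectory should pass near these (frame, value) pairs.
frames = np.array([0, 15, 29])
targets = np.array([0.0, 0.8, 1.2])

# Because synthesize() is linear in z, the constrained MAP estimate
#   minimize ||constraint residual||^2 + lam * ||z||^2
# reduces to a ridge-regression solve; the lam term is the "stay plausible"
# prior that pulls the result toward the training distribution.
A = (sigma[:, None] * basis)[:, frames].T   # (num_constraints, k)
r = targets - mean[frames]
lam = 1e-2
z_opt = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ r)
motion = synthesize(z_opt)

print(np.round(motion[frames], 3))  # trajectory values at the constrained frames
```

The key design point the sketch tries to mirror: constraints are satisfied only as well as the learned model allows, so unnatural constraints (or missing reference data) yield a compromise rather than an exact fit, which matches the limitations noted in item 4.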
4. The example in the paper seems like a real killer app: the user sets a series of trajectory constraints, and a natural walking animation is generated in real time that plausibly fits those constraints. There are limitations when no similar reference motions exist or when the constraints are unnatural, but que sera sera.
This sort of procedural animation is useful for video game designers and animators: instead of hand-animating or hand-deforming motion-captured data to fit a character into an environment, relatively novice users can simply set constraints and leave the animation business to the model.
5. http://faculty.cs.tamu.edu/jchai/projects/tog-body-09/tog_deformable_video.mov is the full paper video. 99 MB!