Component-based Trajectory Models for Human Character Animation

Research Area:

Biomedical and Biologically Motivated Technical Applications

Researchers:

Martin A. Giese; Albert Mukovskiy

Collaborators:

Lars Omlor; A. Park

Description:

The efficient parameterization of complex human movements is a core problem of modern computer animation. For the synthesis of animations with a high degree of realism, learning-based approaches have become increasingly popular. Most of these approaches approximate style-parameterized trajectories based on motion capture data. The size of the required databases depends critically on the efficiency of the chosen trajectory representation. One characteristic of an efficient representation is that it allows the synthesis of large classes of trajectories from a very limited amount of motion capture data.

To develop efficient representations of motion data, we take an approach that is inspired by concepts from motor control in biological systems. The control of complex multi-joint movements in biological systems is likely based on the superposition of lower-dimensional components (synergies). Previous work in neuroscience shows that such components can be extracted from biological data and that they can be parameterized in the framework of dynamical systems.

Inspired by these results from motor control, we solve the degrees-of-freedom problem in full-body animation with an algorithm that comprises the following steps:

We learn movement primitives from a set of motion capture data by applying a new algorithm for blind source separation that models the data as mixtures of delayed sources (anechoic mixtures). The extracted source signals (synergies) form the basis of a generative trajectory model.
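As an illustration, the following minimal sketch (in Python/NumPy, with hypothetical function and variable names that are not taken from the original implementation) shows how the anechoic mixture model acts as a generative trajectory representation: each joint-angle trajectory x_i(t) is approximated as a weighted sum of time-shifted sources, x_i(t) = sum_j a_ij * s_j(t - tau_ij). The sources are assumed to be periodic so that the time shift can wrap around.

    import numpy as np

    def synthesize_trajectories(sources, weights, delays, dt):
        # sources: (n_sources, T) learned source signals s_j sampled with step dt
        # weights: (n_dofs, n_sources) mixing weights a_ij
        # delays:  (n_dofs, n_sources) time delays tau_ij in seconds
        # returns: (n_dofs, T) reconstructed joint-angle trajectories x_i
        n_dofs, n_sources = weights.shape
        T = sources.shape[1]
        t = np.arange(T) * dt
        x = np.zeros((n_dofs, T))
        for i in range(n_dofs):
            for j in range(n_sources):
                # time-shift source j by tau_ij (periodicity assumed)
                shifted = np.interp((t - delays[i, j]) % (T * dt), t, sources[j])
                x[i] += weights[i, j] * shifted
        return x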

To obtain a real-time-capable animation system, we define nonlinear dynamical systems that generate the associated trajectories online. This is done by constructing a nonlinear mapping between the attractor solutions of the dynamical systems (e.g., Van der Pol oscillators) and the source signals using Support Vector Regression (SVR).
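The sketch below illustrates this step under simplifying assumptions: a Van der Pol oscillator is integrated onto its limit cycle, and Support Vector Regression (here scikit-learn's SVR, used only as a stand-in) learns the static map from the oscillator state to a toy source signal, so that the source can subsequently be generated online from the oscillator state. The toy source and all parameter values are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp
    from sklearn.svm import SVR

    def vdp(t, state, mu=1.0):
        # Van der Pol oscillator with a stable limit cycle
        y, ydot = state
        return [ydot, mu * (1.0 - y**2) * ydot - y]

    # integrate past the transient so that only attractor states remain
    sol = solve_ivp(vdp, (0, 100), [2.0, 0.0], t_eval=np.linspace(0, 100, 5000))
    states = sol.y[:, 2500:].T                 # samples of (y, ydot) on the limit cycle

    # toy stand-in for a learned source signal, defined on the oscillator phase
    phase = np.arctan2(states[:, 1], states[:, 0])
    source = np.sin(2 * phase) + 0.3 * np.cos(phase)

    # nonlinear mapping from oscillator state to source value, learned by SVR
    svr = SVR(kernel='rbf', C=10.0).fit(states, source)
    online_source = svr.predict(states)        # source generated from the attractor state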

The dynamical systems that correspond to different sources are coupled in order to stabilize coordinated behavior between the different synergies.
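A minimal illustration of such a coupling, assuming a simple diffusive velocity coupling between two Van der Pol oscillators (the coupling form and the gains are illustrative, not the couplings used in the actual system):

    import numpy as np
    from scipy.integrate import solve_ivp

    def coupled_vdp(t, state, mu=1.0, k=0.5):
        # state = [y1, y1dot, y2, y2dot]; diffusive velocity coupling with gain k
        y1, y1d, y2, y2d = state
        dy1d = mu * (1 - y1**2) * y1d - y1 + k * (y2d - y1d)
        dy2d = mu * (1 - y2**2) * y2d - y2 + k * (y1d - y2d)
        return [y1d, dy1d, y2d, dy2d]

    sol = solve_ivp(coupled_vdp, (0, 60), [2.0, 0.0, -1.0, 0.5],
                    t_eval=np.linspace(0, 60, 3000))
    # after a transient the two oscillators lock into a fixed phase relation,
    # which stabilizes the coordination of the synergies they drive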

The introduction of appropriate dynamic couplings between the dynamical systems controlling individual avatars allows the realization of interactive behaviors, such as following or the synchronization of rhythmic movements. In addition, different emotions and movement styles can be animated by morphing between learned examples using linear interpolation.
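A sketch of the style-morphing idea, under the assumption that a style is characterized by its mixing weights and delays and that these parameters are blended linearly (the parameter names and dictionary layout are hypothetical):

    import numpy as np

    def morph_styles(params_a, params_b, alpha):
        # alpha in [0, 1]: 0 reproduces style A, 1 reproduces style B,
        # intermediate values yield intermediate emotional or movement styles
        weights = (1 - alpha) * params_a['weights'] + alpha * params_b['weights']
        delays = (1 - alpha) * params_a['delays'] + alpha * params_b['delays']
        return weights, delays

The morphed parameters can then be fed into the generative trajectory model sketched above.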

Navigation dynamics are implemented by specifying an appropriate dynamical system for the heading direction. Curved walking is animated by blending online between examples of curved and straight walking.
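A minimal sketch of one possible heading-direction dynamics, assuming an attractor form in which the heading relaxes toward the goal direction; the resulting turning rate could then serve as the online blending weight between the straight- and curved-walking examples (function and parameter names are illustrative):

    import numpy as np

    def heading_dynamics(phi, position, goal, gain=2.0):
        # return dphi/dt that steers the heading phi toward the goal direction
        goal_dir = np.arctan2(goal[1] - position[1], goal[0] - position[0])
        # wrap the angular error to [-pi, pi) so the avatar turns the short way
        error = (goal_dir - phi + np.pi) % (2 * np.pi) - np.pi
        return gain * error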

Demos:

https://goo.gl/qJRw4R

https://goo.gl/6FcRRq

https://goo.gl/wkmWQ8

https://goo.gl/qc5gzY

Publications

Omlor, L. & Giese, M. A. (2011). Anechoic Blind Source Separation using Wigner Marginals. Journal of Machine Learning Research, 12, 1111-1148.
Giese, M. A., Mukovskiy, A., Park, A.-N., Omlor, L. & Slotine, J.-J. (2009). Real-Time Synthesis of Body Movements Based on Learned Primitives. In: Cremers, D., Rosenhahn, B. & Yuille, A. L. (eds): Statistical and Geometrical Approaches to Visual Motion Analysis, Lecture Notes in Computer Science, 5604, 107-127.
Park, A.-N., Mukovskiy, A., Omlor, L. & Giese, M. A. (2008). Synthesis of character behaviour by dynamic interaction of synergies learned from motion capture data. In: Skala, V. (ed): Proceedings of the 16th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 4-7 Feb, Plzen, Czech Republic, 9-16.
Mukovskiy, A., Park, A.-N., Omlor, L., Slotine, J.-J. & Giese, M. A. (2008). Self-organization of character behavior by mixing of learned movement primitives. Proceedings of the 13th Fall Workshop on Vision, Modeling, and Visualization (VMV), October 8-10, Konstanz, Germany, 121-130.
Park, A.-N., Mukovskiy, A., Omlor, L. & Giese, M. A. (2008). Self organized character animation based on learned synergies from full-body motion capture data. Proceedings of the 2008 International Conference on Cognitive Systems (CogSys), University of Karlsruhe, Karlsruhe, Germany, 2-4 April, Springer-Verlag, Berlin.
Omlor, L. & Giese, M. A. (2007). Learning of Translation-Invariant Independent Components: Multivariate Anechoic Mixtures. In: Davies, M. E., James, C. J., Abdallah, S. A. & Plumbley, M. D. (eds): Independent Component Analysis and Signal Separation (ICA 2007), 4666, 762-769.
Omlor, L. & Giese, M. A. (2007). Extraction of spatio-temporal primitives of emotional body expressions. Neurocomputing, 70(10-12), 1938-1942.
Park, A.-N., Omlor, L. & Giese, M. A. (2007). Synergy-based method for the self-organization of full-body movements with high degree of realism. In: Bülthoff, H. H., Chatziastros, A., Mallot, H. A. & Ulrich, R. (eds): Proceedings of the 10th Tübinger Perception Conference (TWK 2007), Knirsch, Kirchentellinsfurt, 152.
Omlor, L. & Giese, M. A. (2006). Blind source separation for over-determined delayed mixtures. NIPS'06: Proceedings of the 19th International Conference on Neural Information Processing Systems, 19, 1049-1056.