Neural model for the visual recognition of goal-directed actions

Research areas:
Uncategorized
Year:
2007
Type of Publication:
Article
Authors:
Fleischer, Falk
Casile, Antonino
Giese, Martin A.
Journal:
ESF-EMBO Symposium: Three Dimensional Sensory and Motor Space: Perceptual Consequences of Motor Action, 6-11 October, Sant Feliu de Guixols, Spain
Month:
01
BibTex:
Note:
not reviewed
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the precise nature of this visuo-motor interaction is, and which relevant computational functions can be accomplished by purely visual processing. We present a neurophysiologically inspired model for the recognition of hand movements, demonstrating that a substantial degree of action understanding can be accomplished by appropriate analysis of spatio-temporal visual features. The model is based on a hierarchical feed-forward architecture for invariant object and motion recognition [3,4,5], employing principles similar to those established for stationary object recognition. In particular, the model addresses how invariance against position variations of object and effector can be accomplished while preserving the relative spatial information that is required for accurate recognition of the hand-object interaction. We demonstrate that the model correctly classifies different grasp types, determining whether the action matches the object affordance. The model shows that well-established, simple, physiologically plausible neural mechanisms can account for important aspects of visual action recognition without the need for a detailed 3D representation of object and action. It complements existing models and provides a basis for further quantitative analysis of visual influences on action recognition.

[1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180.
[2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 2, 1019-1025.
[3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025.
[4] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192.
[5] Serre, T. et al. (2007): IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426.
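The position invariance mentioned in the abstract is achieved in hierarchical feed-forward architectures of this family [3,5] by alternating template matching ("simple"-cell-like layers) with maximum pooling over spatial positions ("complex"-cell-like layers). The sketch below is an illustrative toy version of that pooling principle only, not the authors' model; the function names, template set, and array sizes are hypothetical:

```python
import numpy as np

def s_layer(image, templates):
    """Simple-cell-like layer: correlate each template at every image position."""
    H, W = image.shape
    h, w = templates.shape[1:]
    out = np.zeros((len(templates), H - h + 1, W - w + 1))
    for k, t in enumerate(templates):
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                # response = inner product of template with local image patch
                out[k, i, j] = np.sum(image[i:i + h, j:j + w] * t)
    return out

def c_layer(s_maps):
    """Complex-cell-like layer: max-pool each template map over all positions,
    yielding a response that is invariant to where the feature appeared."""
    return s_maps.max(axis=(1, 2))

# Toy demonstration: the same 2x2 feature at two different positions
# produces identical pooled responses.
img_a = np.zeros((8, 8)); img_a[1:3, 1:3] = 1.0
img_b = np.zeros((8, 8)); img_b[5:7, 4:6] = 1.0
templates = np.ones((1, 2, 2))
resp_a = c_layer(s_layer(img_a, templates))
resp_b = c_layer(s_layer(img_b, templates))
```

Note that full pooling as above discards absolute position entirely; the model described in the abstract additionally has to preserve the relative spatial relationship between effector and object, which a toy sketch like this one does not address.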