Publications

Year: 2009

Fleischer, F., Casile, A. & Giese, M. A. (2009). A neural model of the visual tuning properties of action-selective neurons in STS and area F5. Journal of Vision, 9(8), 1106.
A neural model of the visual tuning properties of action-selective neurons in STS and area F5
Abstract:

The visual recognition of goal-directed movements is crucial for the understanding of intentions and goals of others as well as for imitation learning. So far, it is largely unknown how visual information about effectors and goal objects of actions is integrated in the brain. Specifically, it is unclear whether a robust recognition of goal-directed actions can be accomplished by purely visual processing or if it requires a reconstruction of the three-dimensional structure of object and effector geometry. We present a neurophysiologically inspired model for the recognition of goal-directed grasping movements. The model reproduces fundamental properties of action-selective neurons in STS and area F5. The model is based on a hierarchical architecture with neural detectors that reproduce the properties of cells in visual cortex. It contains a novel physiologically plausible mechanism that combines information on object shape and effector (hand) shape and movement, implementing the necessary coordinate transformations from a retinal to an object-centered frame of reference. The model was evaluated with real video sequences of human grasping movements, using separate training and test sets. The model reproduces a variety of tuning properties that have been observed in electrophysiological experiments for action-selective neurons in STS and area F5. The model shows that the integration of effector and object information can be accomplished by well-established physiologically plausible principles. Specifically, the proposed model does not compute explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned view-dependent representations for sequences of hand shapes. Our results complement those of existing models for the recognition of goal-directed actions and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions.
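
As a purely illustrative sketch of the coordinate-transformation mechanism described above, the following Python snippet shows how a gain-field-like multiplicative combination of two retinal population codes (one for the hand, one for the goal object) yields a population code for their relative position. The one-dimensional positions, Gaussian tuning curves and parameter values are invented for illustration and are not taken from the model.

```python
import numpy as np

def population_code(center, positions, sigma=2.0):
    """Gaussian population activity peaked at `center` (retinal coordinates)."""
    return np.exp(-(positions - center) ** 2 / (2.0 * sigma ** 2))

positions = np.arange(-20, 21)             # 1-D retinal positions (arbitrary units)
hand = population_code(5.0, positions)     # effector (hand) representation, peak at +5
obj = population_code(8.0, positions)      # goal-object representation, peak at +8

# Gain-field-like read-out: each "relative position" unit multiplies the two
# population codes at a fixed offset and pools over retinal position.
offsets = np.arange(-10, 11)
relative = np.array([np.sum(hand * np.roll(obj, -d)) for d in offsets])

print("decoded object-hand offset:", offsets[np.argmax(relative)])   # -> 3
```

The read-out peaks at the true offset (+3) independently of where on the retina the hand-object pair appears, which is the kind of retinal-to-object-centered transformation the model requires.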

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Month: 08
Pages: 1106
Full text: Online version
Giese, M. A., Caggiano, V., Casile, A. & Fleischer, F. (2009). Visual encoding of goal-directed movements: a physiologically plausible neural model. Meeting of the Society for Neuroscience 2009, Washington DC, USA.
Visual encoding of goal-directed movements: a physiologically plausible neural model
Abstract:

Visual responses of action-selective neurons, e.g. in premotor cortex and the superior temporal sulcus of the macaque monkey, are characterized by a remarkable combination of selectivity and invariance. On the one hand, the responses of such neurons show high selectivity for details about the grip and the spatial relationship between effector and object. At the same time, these responses show substantial invariance against the retinal stimulus position. While numerous models for the mirror neuron system have been proposed in robotics and neuroscience, almost none of them accounts for the visual tuning properties of action-selective neurons exploiting physiologically plausible neural mechanisms. In addition, many existing models assume that action encoding is based on a full reconstruction of the 3D geometry of effector and object. This contradicts recent electrophysiological results showing view-dependence of the majority of action-selective neurons, e.g. in premotor cortex. We present a neurophysiologically plausible model for the visual recognition of grasping movements from real videos. The model is based on simple well-established neural circuits. Recognition of effector and goal object is accomplished by a hierarchical neural architecture, where scale and position invariance are accomplished by nonlinear pooling along the hierarchy, consistent with many established models from object recognition. Effector recognition includes a simple predictive neural circuit that results in temporal sequence selectivity. Effector and goal position are encoded within the neural hierarchy in terms of population codes, which can be processed by a simple gain-field-like mechanism in order to compute the relative position of effector and object in a retinal frame of reference. Based on this signal, and object and effector shape, the highest hierarchy level accomplishes a distinction between functional (hand matches object shape and position) and dysfunctional (no match between hand and object shape or position) grips, at the same time being invariant against strong changes of the stimulus position. The model was tested with several stimuli from the neurophysiological literature and reproduces, partially even quantitatively, results about action-selective neurons in the STS and premotor cortex. Specifically, the model reproduces visual tuning properties and the view-dependence of mirror neurons in premotor cortex and makes additional predictions, which can be easily tested in electrophysiological experiments.
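
The "simple predictive neural circuit that results in temporal sequence selectivity" can be given a rough numerical intuition. The sketch below is not the paper's circuit: it assumes a 1-D chain of snapshot units, leaky-integrator dynamics, a threshold nonlinearity and hand-picked constants, and only illustrates how asymmetric lateral connections amplify frames that arrive in the learned order but not in reverse.

```python
import numpy as np

def run_sequence(frame_order, n_units=10, steps_per_frame=5,
                 tau=2.0, w_asym=1.0, theta=0.6):
    """Drive a chain of 'snapshot' units frame by frame; return the summed output."""
    u = np.zeros(n_units)                    # leaky activations of the snapshot units
    total = 0.0
    for frame in frame_order:
        for _ in range(steps_per_frame):
            f = np.maximum(u - theta, 0.0)   # thresholded unit outputs
            ff = np.zeros(n_units)
            ff[frame] = 1.0                  # feedforward drive for the current frame
            lateral = np.zeros(n_units)
            lateral[1:] = w_asym * f[:-1]    # asymmetric coupling: unit k-1 -> unit k
            u += (-u + ff + lateral) / tau   # leaky integration
            total += f.sum()
    return total

forward = run_sequence(range(10))            # frames in the learned order
reverse = run_sequence(range(9, -1, -1))     # same frames, reversed
print(f"summed response: forward {forward:.1f} vs reversed {reverse:.1f}")
```

With the forward order each unit receives its feedforward input while its predecessor is still active, so the lateral support pushes it well above threshold; with the reversed order the lateral input alone stays subthreshold and the summed response is much smaller.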

Authors: Giese, Martin A.; Caggiano, Vittorio; Casile, Antonino; Fleischer, Falk
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2009). Invariant recognition of goal-directed hand actions: a physiologically plausible neural model. Perception 38, ECVP Abstract Supplement, 51.
Invariant recognition of goal-directed hand actions: a physiologically plausible neural model
Abstract:

The recognition of transitive, goal-directed actions requires highly selective processing of shape details of effector and goal object, and high robustness with respect to image transformations at the same time. The neural mechanisms required for solving this challenging recognition task remain largely unknown. We propose a neurophysiologically-inspired model for the recognition of transitive grasping actions, which combines high selectivity for different grips with strong position invariance. The model is based on well-established physiologically plausible simple neural mechanisms. Invariance is accomplished by combining nonlinear pooling (by maximum operations) and a specific neural representation of the relative position of object and effector based on a gain-field-like mechanism. The proposed architecture accomplishes accurate recognition of different grip types on real video data and reproduces correctly several properties of action-selective neurons in occipital, parietal and premotor areas. In addition, the model shows that the accurate recognition of goal-directed actions can be accomplished without an explicit reconstruction of the 3-D structure of effectors and objects, as assumed in many technical systems for the recognition of hand actions.
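
To make "nonlinear pooling (by maximum operations)" concrete, here is a toy sketch (filters, images and sizes are invented, not the model's learned features): template responses are computed at every image position and then pooled with a MAX, which gives position invariance while preserving selectivity for the shape of the feature.

```python
import numpy as np

def max_pooled_response(image, template):
    """Template matching at every position ('simple' units), then MAX pooling."""
    h, w = template.shape
    H, W = image.shape
    responses = [
        np.sum(image[i:i + h, j:j + w] * template)   # local match score
        for i in range(H - h + 1) for j in range(W - w + 1)
    ]
    return max(responses)                            # nonlinear (MAX) pooling

template = np.array([[1., 0.], [0., 1.]])            # "diagonal" feature detector

img_a = np.zeros((8, 8)); img_a[1:3, 1:3] = np.eye(2)        # diagonal, top-left
img_b = np.zeros((8, 8)); img_b[5:7, 4:6] = np.eye(2)        # same diagonal, shifted
img_c = np.zeros((8, 8)); img_c[1:3, 1:3] = np.eye(2)[::-1]  # anti-diagonal

print(max_pooled_response(img_a, template),   # 2.0
      max_pooled_response(img_b, template),   # 2.0 -> invariant to position
      max_pooled_response(img_c, template))   # 1.0 -> still shape-selective
```

In the model this pooling is combined with the gain-field-like representation of the relative position of object and effector mentioned in the abstract.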

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Temporal Synchrony as Critical Factor for Facilitation and Interference of Action Recognition. In: GNS Congress, Goettingen, Germany.
Temporal Synchrony as Critical Factor for Facilitation and Interference of Action Recognition
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Jastorff, J., Kourtzi, Z. & Giese, M. A. (2009). Visual learning shapes the processing of complex movement stimuli in the human brain. Journal of Neuroscience, Vol. 29 No. 44, pp. 14026-38.
Visual learning shapes the processing of complex movement stimuli in the human brain
Authors: Jastorff, J.; Kourtzi, Zoe; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Journal: Journal of Neuroscience, Vol. 29 No. 44, pp. 14026-38
Year: 2009
Month: 11
Full text: Online version
Giese, M. A., Mukovskiy, A., Park, A.-N., Omlor, L. & Slotine, J.-J. (2009). Real-Time Synthesis of Body Movements Based on Learned Primitives. In Cremers D, Rosenhahn B, Yuille A L (eds): Statistical and Geometrical Approaches to Visual Motion Analysis, Lecture Notes in Computer Science, 5604, 107-127.
Real-Time Synthesis of Body Movements Based on Learned Primitives
Authors: Giese, Martin A.; Mukovskiy, Albert; Park, Aee-Ni; Omlor, Lars; Slotine, Jean-Jacques E.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Influence of spatial and temporal congruency between executed and observed movements on the recognition of biological motion. Journal of Vision, 9(8), 614.
Influence of spatial and temporal congruency between executed and observed movements on the recognition of biological motion
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Endres, D. & Giese, M. A. (2009). Temporal Segmentation with Bayesian Binning. NIPS 2009 workshop on temporal segmentation.
Temporal Segmentation with Bayesian Binning
Abstract:

Bayesian Binning (BB) is an exact inference technique which was originally developed for applications in Computational Neuroscience, e.g. modeling spike count distributions or estimating peri-stimulus time histograms (PSTH). BB encodes a (conditional) probability distribution (or density) which is piecewise constant in the domain of interest. This suggests that BB might be useful for retrospective temporal segmentation tasks, too. We illustrate the potential usefulness of BB for temporal segmentation on two examples. First, we segment neural spike train data, demonstrating that BB is able to locate change points in the PSTH correctly. Second, we employ BB for (human) action sequence segmentation. We show that BB accurately identifies the transition points in the action sequence (e.g. a change from ’walking’ to ’jumping’).
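
To make the piecewise-constant idea concrete, here is a much-simplified Python sketch: instead of BB's exact Bayesian marginalization over all binnings, it merely finds, by dynamic programming, the maximum-likelihood placement of a fixed number of change points under a piecewise-constant Poisson model. The simulated counts, rates and the fixed segment number are illustrative assumptions.

```python
import numpy as np

def segment_cost(counts):
    """Negative Poisson log-likelihood (dropping the x! terms) of one constant-rate segment."""
    lam = max(counts.mean(), 1e-12)
    return -(counts.sum() * np.log(lam) - lam * len(counts))

def best_change_points(counts, n_segments):
    """Dynamic programming over all placements of n_segments - 1 change points."""
    T = len(counts)
    cost = np.full((n_segments + 1, T + 1), np.inf)
    back = np.zeros((n_segments + 1, T + 1), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for t in range(k, T + 1):
            for s in range(k - 1, t):                     # last segment covers [s, t)
                c = cost[k - 1, s] + segment_cost(counts[s:t])
                if c < cost[k, t]:
                    cost[k, t], back[k, t] = c, s
    bounds, t = [], T                                     # trace back the segment edges
    for k in range(n_segments, 0, -1):
        bounds.append(t)
        t = back[k, t]
    return sorted(bounds)                                  # right edges; the last one is T

rng = np.random.default_rng(0)
spikes = np.concatenate([rng.poisson(2, 50), rng.poisson(10, 30), rng.poisson(4, 40)])
print(best_change_points(spikes, 3))   # roughly [50, 80, 120]
```

BB itself goes further: it sums over bin numbers and placements under a prior, yielding exact posterior estimates rather than a single best segmentation.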

Authors: Endres, Dominik; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Specific influences of self-motion on the detection of biological motion. Perception 38, ECVP Abstract Supplement, 85.
Specific influences of self-motion on the detection of biological motion
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Barliya, A., Omlor, L., Giese, M. A. & Flash, T. (2009). An analytical formulation of the law of intersegmental coordination during human locomotion. Experimental Brain Research, 193(3), 371-385.
An analytical formulation of the law of intersegmental coordination during human locomotion
Authors: Barliya, Avi; Omlor, Lars; Giese, Martin A.; Flash, Tamar
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Endres, D. & Földiák, P. (2009). Interpreting the Neural Code with Formal Concept Analysis. Advances in Neural Information Processing Systems, 21, 425-432.
Interpreting the Neural Code with Formal Concept Analysis
Abstract:

We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We explore the effects of neural code sparsity on the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons.
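
As a toy illustration of the FCA step (the stimuli, neurons and binary responses below are invented, not the STSa data), the following snippet builds a small formal context "neuron responds to stimulus" and enumerates its formal concepts, i.e. the nodes of the concept lattice used to display the semantic relationships.

```python
from itertools import combinations

# Toy binary context: context[stimulus] = set of neurons that respond to it.
context = {
    "face_A":   {"n1", "n2"},
    "face_B":   {"n1", "n2", "n3"},
    "object":   {"n3"},
    "scramble": set(),
}
stimuli = list(context)
neurons = set().union(*context.values())

def common_neurons(stims):
    """Derivation operator: neurons responding to every stimulus in `stims`."""
    result = set(neurons)
    for s in stims:
        result &= context[s]
    return result

def common_stimuli(cells):
    """Derivation operator: stimuli to which every neuron in `cells` responds."""
    return {s for s in stimuli if cells <= context[s]}

concepts = set()
for r in range(len(stimuli) + 1):
    for subset in combinations(stimuli, r):
        intent = frozenset(common_neurons(subset))    # shared neurons
        extent = frozenset(common_stimuli(intent))    # closure: all stimuli sharing them
        concepts.add((extent, intent))

# The four concepts of this toy lattice; note how the two faces are grouped
# by the shared neurons n1 and n2 (a hierarchical face representation in miniature).
for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "<->", sorted(intent))
```

In the paper the formal context is not hand-coded but derived from recorded responses with an exact Bayesian criterion for whether a neuron "responds" to a stimulus.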

Authors: Endres, Dominik; Földiák, Peter
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Giese, M. A., Ilg, W., Golla, H. & Thier, P. (2009). System und Verfahren zum Bestimmen einer Bewegungskategorie sowie deren Ausprägungsgrad. Patent No. 10 2004 060 602.1-35. Deutsches Patentamt, München.
System und Verfahren zum Bestimmen einer Bewegungskategorie sowie deren Ausprägungsgrad
Authors: Giese, Martin A.; Ilg, Winfried; Golla, Heidrun; Thier, Peter
Research Areas: Uncategorized
Type of Publication: Patent
Patent number: 10 2004 060 602.1-35
Roether, C. L., Omlor, L. & Giese, M. A. (2009). Features in the Recognition of Emotions from Dynamic Bodily Expression. In: Masson G., Ilg U.J. (eds): Dynamics of Visual Motion Processing: Neuronal, Behavioral and Computational Approaches, 3, 313-340.
Features in the Recognition of Emotions from Dynamic Bodily Expression
Authors: Roether, C. L.; Omlor, Lars; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Endres, D., Priss, U. & Földiák, P. (2009). Interpreting the Neural Code with Formal Concept Analysis. Perception 38, ECVP Abstract Supplement, 127.
Interpreting the Neural Code with Formal Concept Analysis
Authors: Endres, Dominik; Priss, Uta; Földiák, Peter
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Endres, D., Földiák, P. & Priss, U. (2009). An Application of Formal Concept Analysis to Neural Decoding. The 6th International Conference on Concept Lattices and their Applications (CLA 2008), Olomouc, Czech Republic, CEUR-WS, 433, 181-192.
An Application of Formal Concept Analysis to Neural Decoding
Abstract:

This paper proposes a novel application of Formal Concept Analysis (FCA) to neural decoding: the semantic relationships between the neural representations of large sets of stimuli are explored using concept lattices. In particular, the effects of neural code sparsity are modelled using the lattices. An exact Bayesian approach is employed to construct the formal context needed by FCA. This method is explained using an example of neurophysiological data from the high-level visual cortical area STSa. Prominent features of the resulting concept lattices are discussed, including indications for a product-of-experts code in real neurons.

Authors: Endres, Dominik; Földiák, Peter; Priss, Uta
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Fleischer, F., Casile, A. & Giese, M. A. (2009). Bio-inspired approach for the recognition of goal-directed hand actions. In X. Jiang and N. Petkov (Eds.): Int. Conf. on Computer Analysis of Images and Patterns (CAIP) 2009, LNCS, 5702, 714-722.
Bio-inspired approach for the recognition of goal-directed hand actions
Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Fleischer, F., Casile, A. & Giese, M. A. (2009). View-independent recognition of grasping actions with cortex-inspired model. 9th IEEE-RAS Int Conf on Humanoid Robots (Humanoids) 2009, Paris, France, 514-519.
View-independent recognition of grasping actions with cortex-inspired model
Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Park, A.-N., Mukovskiy, A., Slotine, J.-J. & Giese, M. A. (2009). Design of dynamical stability properties in character animation. In: The 6th Workshop on Virtual Reality Interaction and Physical Simulation (VRIPHYS 09), Nov 5-6, Karlsruhe, Germany, 85-94.
Design of dynamical stability properties in character animation
Authors: Park, Aee-Ni; Mukovskiy, Albert; Slotine, Jean-Jacques E.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Roether, C. L., Omlor, L., Christensen, A. & Giese, M. A. (2009). Critical features for the perception of emotion from gait. Journal of Vision, 9(6), 1-32.
Critical features for the perception of emotion from gait
Authors: Roether, C. L.; Omlor, Lars; Christensen, Andrea; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Timmann, D., Konczak, J., Ilg, W., Donchin, O., Hermsdörfer, J., Gizewski, E. R. et al. (2009). Current advances in lesion-symptom mapping of the human cerebellum. Neuroscience, 162(3), 836-851.
Current advances in lesion-symptom mapping of the human cerebellum
Authors: Timmann, Dagmar; Konczak, Jürgen; Ilg, Winfried; Donchin, Opher; Hermsdörfer, J.; Gizewski, Elke R.; Schoch, Beate
Research Areas: Uncategorized
Type of Publication: Article
Omlor, L. & Slotine, J.-J. (2009). Continuous Non-Negative Matrix Factorization For Time-Dependent Data. In Proceedings of the European Signal Processing Conference, Glasgow, UK, 2009.
Continuous Non-Negative Matrix Factorization For Time-Dependent Data
Authors: Omlor, Lars; Slotine, Jean-Jacques E.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version

Year: 2008

Benali, A., Weiler, E., Benali, Y., Dinse, H. R. & Eysel, U. T. (2008). Excitation and inhibition jointly regulate cortical reorganization in adult rats. J Neurosci, 28, 12284-12293.
Excitation and inhibition jointly regulate cortical reorganization in adult rats
Authors: Benali, Alia; Weiler, E.; Benali, Y.; Dinse, H. R.; Eysel, U. T.
Type of Publication: Article
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2008). Facilitation of action recognition by motor programs is critically dependent on timing. Perception 37, ECVP Abstract Supplement, 25 (Travel Award).
Facilitation of action recognition by motor programs is critically dependent on timing
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Endres, D., Oram, M., Schindelin, J. & Földiák, P. (2008). Bayesian Binning Beats Approximate Alternatives: Estimating Peri-stimulus Time Histograms. Advances in Neural Information Processing Systems, 20, 393-400.
Bayesian Binning Beats Approximate Alternatives: Estimating Peri-stimulus Time Histograms
Authors: Endres, Dominik; Oram, Mike; Schindelin, Johannes; Földiák, Peter
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
Timmann, D., Brandauer, B., Hermsdörfer, J., Ilg, W., Konczak, J., Gerwig, M. et al. (2008). Lesion-Symptom Mapping of the Human Cerebellum. Cerebellum, 7(4), 602-6.
Lesion-Symptom Mapping of the Human Cerebellum
Authors: Timmann, Dagmar; Brandauer, Barbara; Hermsdörfer, J.; Ilg, Winfried; Konczak, Jürgen; Gerwig, Marcus; Gizewski, Elke R.; Schoch, Beate
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
Roether, C. L., Omlor, L. & Giese, M. A. (2008). Lateral asymmetry of bodily emotion expression. Current Biology, 18, R329-330.
Lateral asymmetry of bodily emotion expression
Authors: Roether, C. L.; Omlor, Lars; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
Park, A.-N., Mukovskiy, A., Omlor, L. & Giese, M. A. (2008). Synthesis of character behaviour by dynamic interaction of synergies learned from motion capture data. Skala V (ed): Proceedings of the 16th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 4-7 Feb, Plzen, Czech Republic, 9-16.
Synthesis of character behaviour by dynamic interaction of synergies learned from motion capture data
Authors: Park, Aee-Ni; Mukovskiy, Albert; Omlor, Lars; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Mukovskiy, A., Park, A.-N., Omlor, L., Slotine, J.-J. & Giese, M. A. (2008). Self-organization of character behavior by mixing of learned movement primitives. Proceedings of the 13th Fall Workshop on Vision, Modeling, and Visualization (VMV), October 8-10, Konstanz, Germany, 121-130.
Self-organization of character behavior by mixing of learned movement primitives
Authors: Mukovskiy, Albert; Park, Aee-Ni; Omlor, Lars; Slotine, Jean-Jacques E.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Ilg, W., Giese, M. A., Gizewski, E. R., Schoch, B. & Timmann, D. (2008). The influence of focal lesions of the cerebellum on the control and adaptation of gait. Brain, 131(Pt. 11), 2913-27.
The influence of focal lesions of the cerebellum on the control and adaptation of gait
Authors: Ilg, Winfried; Giese, Martin A.; Gizewski, Elke R.; Schoch, Beate; Timmann, Dagmar
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
Giese, M. A., Thornton, I. & Edelman, S. (2008). Metrics of the perception of body movement. Journal of Vision, 8(9), 1-18.
Metrics of the perception of body movement
Authors: Giese, Martin A.; Thornton, Ian; Edelman, Shimon
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural Model for the Visual Recognition of Goal-directed Movements. In V. Kurkova, R. Neruda, and J. Koutnik (Eds.): Int Conf on Artificial Neural Networks (ICANN) 2008, Part II, LNCS, 5164, 939-948.
Neural Model for the Visual Recognition of Goal-directed Movements
Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Fleischer, F., Casile, A. & Giese, M. A. (2008). Physiologically-inspired model for the visual tuning properties of mirror neurons. 3rd Int Conf on Cognitive Systems (CogSys) 2008, Karlsruhe, Germany, Springer Verlag, 19-24.
Physiologically-inspired model for the visual tuning properties of mirror neurons
Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Curio, C., Giese, M. A., Breidt, M., Kleiner, M. & Bülthoff, H. H. (2008). Exploring human dynamic facial expression recognition with animation. Proceedings of the 2008 International Conference on Cognitive Systems, University of Karlsruhe, Karlsruhe, Germany, April 2-4, 2008, Springer Verlag.
Exploring human dynamic facial expression recognition with animation
Authors: Curio, Cristobal; Giese, Martin A.; Breidt, Martin; Kleiner, Mario; Bülthoff, H. H.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Curio, C., Giese, M. A., Breidt, M., Kleiner, M. & Bülthoff, H. H. (2008). Probing Dynamic Human Facial Action Recognition From The Other Side Of The Mean. APGV '08: Proceedings of the 5th symposium on Applied perception in graphics and visualization, 59-66.
Probing Dynamic Human Facial Action Recognition From The Other Side Of The Mean
Authors: Curio, Cristobal; Giese, Martin A.; Breidt, Martin; Kleiner, Mario; Bülthoff, H. H.
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
B\"ulthoff, H. H., Wallraven, C. & Giese, M. A. (2008). Perceptual Robotics: Example-based representations of shapes and movements. In Siciliano B, Khatib O: Springer Handbook of Robotics, 1481-1498.
Perceptual Robotics: Example-based representations of shapes and movements
Authors: B\"ulthoff, H. H. Wallraven, Christian Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Omlor, L., Giese, M. A. & Roether, C. L. (2008). Distinctive postural and dynamic features for bodily emotion expression. Journal of Vision, 8(6), 910a.
Distinctive postural and dynamic features for bodily emotion expression
Authors: Omlor, Lars; Giese, Martin A.; Roether, C. L.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the recognition of transitive actions. Perception, 37(suppl.), 155.
Neural model for the recognition of transitive actions
Abstract:

The visual recognition of goal-directed movements is crucial for imitation and possibly the understanding of actions. We present a neurophysiologically-inspired model for the recognition of goal-directed hand movements. The model exploits neural principles that have been used before to account for object and action recognition: (i) hierarchical neural architecture extracting form and motion features; (ii) optimization of mid-level features by learning; (iii) realization of temporal sequence selectivity by recurrent neural circuits. Beyond these classical principles, the model proposes novel physiologically plausible mechanisms for the integration of information about effector shape, motion, goal object, and affordance. We demonstrate that the model is powerful enough to recognize hand actions from real video sequences and reproduces characteristic properties of real cortical neurons involved in action recognition. We conclude that: (i) goal-directed actions can be recognized by view-based mechanisms without a simulation of the actions in 3-D, (ii) well-established neural principles of object and motion recognition are sufficient to account for the visual recognition of goal-directed transitive actions.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Full text: Online version
Fleischer, F., Casile, A. & Giese, M. A. (2008). Simulating mirror-neuron responses using a neural model for visual action recognition. Proceedings of the Seventeenth Annual Computational Neuroscience Meeting (CNS), July 19th - 24th 2008, Portland, Oregon, USA.
Simulating mirror-neuron responses using a neural model for visual action recognition
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what is the real extent of this putative visuo-motor interaction during visual perception of actions and which relevant computational functions are instead accomplished by possibly purely visual processing. We present a neurophysiologically inspired model for the visual recognition of hand movements. It demonstrates that several experimentally shown properties of mirror neurons can be explained by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) A hierarchical neural architecture that extracts 2D form features with subsequently increasing complexity and invariance to position along the hierarchy [3,4,5]. (2) Extraction of optimal features on different hierarchy levels by eliminating features which are not contributing to correct classification results. (3) Simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]. (4) A simple neural mechanism that combines the spatial information about goal object and its affordance and the information about the end effector and its movement. The model is validated with video sequences of both monkey and human grasping actions. We show that simple well-established physiologically plausible mechanisms can account for important aspects of visual action recognition and experimental data of the mirror neuron system. Specifically, these results are independent of explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned 2D pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions. References 1. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G: Understanding motor events: a neurophysiological study. Exp Brain Res 1992, 91:176-180. 2. Rizzolatti G, Craighero L: The mirror-neuron system. Annu Rev Neurosci 2004, 27:169-192. 3. Giese MA, Poggio T: Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci 2003, 4:179-192. 4. Riesenhuber M, Poggio T: Hierarchical models of object recognition in cortex. Nat Neurosci 1999, 2:1019-1025. 5. Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T: Robust object recognition with cortex-like mechanisms. IEEE Trans Pattern Anal Mach Intell 2007, 29:411-426. 6. Xie X, Giese MA: Nonlinear dynamics of direction-selective recurrent neural media. Phys Rev E Stat Nonlin Soft Matter Phys 2002, 65:051904. 7. Zhang K: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci 1996, 16:2112-2126. 8. Hopfield JJ, Brody CD: What is a moment? "Cortical" sensory integration over a brief interval. Proc Natl Acad Sci U S A 2000, 97:13919-13924. 9. Oztop E, Kawato M, Arbib M: Mirror neurons and imitation: a computationally guided review. Neural Netw 2006, 19:254-271.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the visual recognition of actions. Conference on Computational and Systems Neuroscience (COSYNE) 2008, Salt Lake City, USA.
Neural model for the visual recognition of actions
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what is the real extent of this putative visuo-motor interaction during visual perception of actions and which relevant computational functions are instead accomplished by possibly purely visual processing. Here, we present a neurophysiologically inspired model for the recognition of hand movements demonstrating that a substantial degree of performance can be accomplished by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) A hierarchical neural architecture that extracts form and motion features with position and scale invariance by subsequent increase of feature complexity and invariance along the hierarchy [3,4,5]. (2) Learning of optimized features on different hierarchy levels using a trace learning rule that eliminates features which are not contributing to correct classification results [5]. (3) Simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]. (4) As novel computational function the model implements a plausible mechanism that combines the spatial information about goal object and its affordance and the specific posture, position and orientation of the effector (hand). The model is evaluated on video sequences of both monkey and human grasping actions. The model demonstrates that simple well-established physiologically plausible mechanisms account for important aspects of visual action recognition. Specifically, the proposed model does not contain explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions. References [1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180. [2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192. [3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025. [4] Giese, A.M. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192. [5] Serre, T. et al. (2007): IEEE Pattern Anal. Mach. Int. 29, 411-426. [6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126. [7] Hopfield, J. and Brody, D. (2000): Proc Natl Acad Sci USA 97, 13919-13924. [8] Xie, X. and Giese, M. (2002): Phys Rev E Stat Nonlin Soft Matter Phys 65, 051904. [9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Ilg, W., Christensen, A., Karnath, H. O. & Giese, M. A. (2008). Facilitation of action recognition by self-generated movements depends critically on timing. Neuroscience Meeting, Washington DC.
Facilitation of action recognition by self-generated movements depends critically on timing
Authors: Ilg, Winfried; Christensen, Andrea; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the visual recognition of hand actions. Journal of Vision, 8(6), 53a.
Neural model for the visual recognition of hand actions
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what is the real extent of this putative visuo-motor interaction during visual perception of actions and which relevant computational functions are instead accomplished by possibly purely visual processing. Here, we present a neurophysiologically inspired model for the recognition of hand movements demonstrating that a substantial degree of performance can be accomplished by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) A hierarchical neural architecture that extracts form and motion features with position and scale invariance by subsequent increase of feature complexity and invariance along the hierarchy [3,4,5]. (2) Learning of optimized features on different hierarchy levels using a trace learning rule that eliminates features which are not contributing to correct classification results [5]. (3) Simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]. (4) As novel computational function the model implements a plausible mechanism that combines the spatial information about goal object and its affordance and the specific posture, position and orientation of the effector (hand). The model is evaluated on video sequences of both monkey and human grasping actions. The model demonstrates that simple well-established physiologically plausible mechanisms account for important aspects of visual action recognition. Specifically, the proposed model does not contain explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions. References [1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180. [2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192. [3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025. [4] Giese, A.M. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192. [5] Serre, T. et al. (2007): IEEE Pattern Anal. Mach. Int. 29, 411-426. [6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126. [7] Hopfield, J. and Brody, D. (2000): Proc Natl Acad Sci USA 97, 13919-13924. [8] Xie, X. and Giese, M. (2002): Phys Rev E Stat Nonlin Soft Matter Phys 65, 051904. [9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Park, A.-N., Mukovskiy, A., Omlor, L. & Giese, M. A. (2008). Self organized character animation based on learned synergies from full-body motion capture data. Proceedings of the 2008 International Conference on Cognitive Systems (CogSys), University of Karlsruhe, Karlsruhe, Germany, 2-4 April, Springer-Verlag, Berlin.
Self organized character animation based on learned synergies from full-body motion capture data
Authors: Park, Aee-Ni; Mukovskiy, Albert; Omlor, Lars; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
Endres, D. & Földiák, P. (2008). Exact Bayesian Bin Classification: A Fast Alternative to Bayesian Classification and its Application to Neural Response Analysis. Journal of Computational Neuroscience, 24(1), 24-35.
Exact Bayesian Bin Classification: A Fast Alternative to Bayesian Classification and its Application to Neural Response Analysis
Authors: Endres, Dominik; Földiák, Peter
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Földiák, P. & Endres, D. (2008). Sparse Coding. Scholarpedia, 3(1), 2984.
Sparse Coding
Authors: Földiák, Peter; Endres, Dominik
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version

Year: 2007

Giese, M. A. (2007). Learning-based representations of complex body movements: Studies in brains and machines. Phd Thesis.
Learning-based representations of complex body movements: Studies in brains and machines
Authors: Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Phd Thesis
Month: 11
Full text: Online version
Ilg, W., Röhrig, R., Thier, P. & Giese, M. A. (2007). Learning-based methods for the analysis of intra-limb coordination and adaptation of locomotor patterns in cerebellar patients. IEEE 10th International Conference on Rehabilitation Robotics, 13-15 June, Noordwijk, The Netherlands, 1090-1095.
Learning-based methods for the analysis of intra-limb coordination and adaptation of locomotor patterns in cerebellar patients
Authors: Ilg, Winfried; Röhrig, R.; Thier, Peter; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Broetz, D., Burkard, S., Schöls, L., Synofzik, M. & Ilg, W. (2007). Koordination im Mittelpunkt - Physiotherapiekonzept bei zerebellärer Ataxie. Physiopraxis, 5(11/12), 23-26.
Koordination im Mittelpunkt - Physiotherapiekonzept bei zerebellärer Ataxie
Authors: Broetz, D.; Burkard, Susanne; Schöls, L.; Synofzik, Matthis; Ilg, Winfried
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Omlor, L. & Giese, M. A. (2007). Learning of translation-invariant independent components: multivariate anechoic mixtures. In: Davies M.E., James C.J., Abdallah S.A., Plumbley M.D. (eds) Independent Component Analysis and Signal Separation, ICA 2007, 4666, 762-769.
Learning of translation-invariant independent components: multivariate anechoic mixtures
Authors: Omlor, Lars; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: Online version
Omlor, L. & Giese, M. A. (2007). Extraction of spatio-temporal primitives of emotional body expressions. Neurocomputing, 70(10-12), 1938-1942.
Extraction of spatio-temporal primitives of emotional body expressions
Authors: Omlor, Lars; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
Graf, M., Reitzner, B., Corves, C., Casile, A., Giese, M. A. & Prinz, W. (2007). Predicting point-light actions in real-time. Neuroimage, 36(suppl. 2), T22-23.
Predicting point-light actions in real-time
Authors: Graf, Markus; Reitzner, Bianca; Corves, Caroline; Casile, Antonino; Giese, Martin A.; Prinz, Wolfgang
Research Areas: Uncategorized
Type of Publication: Article
Full text: PDF | Online version
