Non-reviewed Conference Papers and Abstracts

Year: 2009

Fleischer, F., Casile, A. & Giese, M. A. (2009). A neural model of the visual tuning properties of action-selective neurons in STS and area F5. Journal of Vision, 9(8), 1106.
A neural model of the visual tuning properties of action-selective neurons in STS and area F5
Abstract:

The visual recognition of goal-directed movements is crucial for the understanding of intentions and goals of others as well as for imitation learning. So far, it is largely unknown how visual information about effectors and goal objects of actions is integrated in the brain. Specifically, it is unclear whether a robust recognition of goal-directed actions can be accomplished by purely visual processing or whether it requires a reconstruction of the three-dimensional structure of object and effector geometry. We present a neurophysiologically inspired model for the recognition of goal-directed grasping movements. The model reproduces fundamental properties of action-selective neurons in STS and area F5. It is based on a hierarchical architecture with neural detectors that reproduce the properties of cells in visual cortex. It contains a novel, physiologically plausible mechanism that combines information about object shape and effector (hand) shape and movement, implementing the necessary coordinate transformations from a retinal to an object-centered frame of reference. The model was evaluated with real video sequences of human grasping movements, using separate training and test sets. It reproduces a variety of tuning properties that have been observed in electrophysiological experiments for action-selective neurons in STS and area F5. The model shows that the integration of effector and object information can be accomplished by well-established, physiologically plausible principles. Specifically, the proposed model does not compute explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned view-dependent representations of sequences of hand shapes.
Our results complement those of existing models for the recognition of goal-directed actions and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Month: 08
Pages: 1106
Giese, M. A., Caggiano, V., Casile, A. & Fleischer, F. (2009). Visual encoding of goal-directed movements: a physiologically plausible neural model. Meeting of the Society for Neuroscience 2009, Washington DC, USA.
Visual encoding of goal-directed movements: a physiologically plausible neural model
Abstract:

Visual responses of action-selective neurons, e.g. in premotor cortex and the superior temporal sulcus of the macaque monkey, are characterized by a remarkable combination of selectivity and invariance. On the one hand, the responses of such neurons show high selectivity for details of the grip and the spatial relationship between effector and object. At the same time, these responses show substantial invariance against the retinal stimulus position. While numerous models of the mirror neuron system have been proposed in robotics and neuroscience, almost none of them accounts for the visual tuning properties of action-selective neurons by exploiting physiologically plausible neural mechanisms. In addition, many existing models assume that action encoding is based on a full reconstruction of the 3D geometry of effector and object. This contradicts recent electrophysiological results showing view-dependence of the majority of action-selective neurons, e.g. in premotor cortex. We present a neurophysiologically plausible model for the visual recognition of grasping movements from real videos. The model is based on simple, well-established neural circuits. Recognition of effector and goal object is accomplished by a hierarchical neural architecture, in which scale and position invariance are accomplished by nonlinear pooling along the hierarchy, consistent with many established models of object recognition. Effector recognition includes a simple predictive neural circuit that results in temporal sequence selectivity. Effector and goal position are encoded within the neural hierarchy in terms of population codes, which can be processed by a simple gain-field-like mechanism in order to compute the relative position of effector and object in a retinal frame of reference.
Based on this signal and on object and effector shape, the highest level of the hierarchy distinguishes between functional grips (hand matches object shape and position) and dysfunctional grips (no match between hand and object shape or position), while remaining invariant against strong changes of the stimulus position. The model was tested with several stimuli from the neurophysiological literature and reproduces, partially even quantitatively, results on action-selective neurons in the STS and premotor cortex. Specifically, the model reproduces visual tuning properties and the view-dependence of mirror neurons in premotor cortex, and makes additional predictions that can easily be tested in electrophysiological experiments.
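The gain-field-like computation of relative position described in the abstract can be illustrated with a toy computation. This is not the authors' implementation; the one-dimensional layout, Gaussian tuning, and all parameters are invented for illustration. Two population codes for hand and object position are combined multiplicatively, and pooling along the diagonals of the resulting gain field yields a population code for their relative position.

```python
import numpy as np

def population_code(pos, n=61, sigma=2.0):
    """Gaussian population code over positions 0..n-1."""
    centers = np.arange(n)
    return np.exp(-(centers - pos) ** 2 / (2 * sigma ** 2))

def relative_position_code(hand_pos, obj_pos, n=61, sigma=2.0):
    hand = population_code(hand_pos, n, sigma)
    obj = population_code(obj_pos, n, sigma)
    gain_field = np.outer(hand, obj)   # units tuned to (hand, object) position pairs
    # Unit (i, j) votes for relative position d = i - j; pool along diagonals.
    rel = np.array([np.trace(gain_field, offset=-d) for d in range(-(n - 1), n)])
    return rel                         # index d + (n - 1) encodes relative position d

rel = relative_position_code(hand_pos=40, obj_pos=30)
print(np.argmax(rel) - 60)  # peak at relative position 10
```

The readout is invariant against a common retinal shift of hand and object, since only their difference survives the diagonal pooling.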

Authors: Giese, Martin A.; Caggiano, Vittorio; Casile, Antonino; Fleischer, Falk
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2009). Invariant recognition of goal-directed hand actions: a physiologically plausible neural model. Perception 38 ECVP Abstract Supplement, 51.
Invariant recognition of goal-directed hand actions: a physiologically plausible neural model
Abstract:

The recognition of transitive, goal-directed actions requires highly selective processing of shape details of effector and goal object and, at the same time, high robustness with respect to image transformations. The neural mechanisms required for solving this challenging recognition task remain largely unknown. We propose a neurophysiologically inspired model for the recognition of transitive grasping actions that combines high selectivity for different grips with strong position invariance. The model is based on well-established, physiologically plausible, simple neural mechanisms. Invariance is accomplished by combining nonlinear pooling (by maximum operations) with a specific neural representation of the relative position of object and effector based on a gain-field-like mechanism. The proposed architecture accomplishes accurate recognition of different grip types on real video data and correctly reproduces several properties of action-selective neurons in occipital, parietal and premotor areas. In addition, the model shows that accurate recognition of goal-directed actions can be accomplished without an explicit reconstruction of the 3D structure of effectors and objects, as assumed in many technical systems for the recognition of hand actions.
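The role of maximum pooling for position invariance can be sketched as follows (illustrative only; the detector bank and its parameters are invented, not taken from the model). A bank of identical shape detectors at different positions feeds a pooling unit that takes the maximum response, so the pooled output stays the same when the stimulus is shifted, while a position readout would change.

```python
import numpy as np

def detector_responses(stimulus_pos, centers, sigma=1.5):
    """Responses of a bank of position-specific detectors to a point stimulus."""
    return np.exp(-(centers - stimulus_pos) ** 2 / (2 * sigma ** 2))

centers = np.arange(0, 50, 1.0)                       # detector positions
r1 = detector_responses(stimulus_pos=12.0, centers=centers)
r2 = detector_responses(stimulus_pos=37.0, centers=centers)  # shifted stimulus

# MAX pooling: the pooled response is identical despite the shift ...
print(r1.max(), r2.max())                             # 1.0 1.0
# ... whereas a position readout of the same detector bank differs:
print(centers[r1.argmax()], centers[r2.argmax()])     # 12.0 37.0
```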

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Temporal Synchrony as Critical Factor for Facilitation and Interference of Action Recognition. In: GNS Congress, Goettingen, Germany.
Temporal Synchrony as Critical Factor for Facilitation and Interference of Action Recognition
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Influence of spatial and temporal congruency between executed and observed movements on the recognition of biological motion. Journal of Vision, 9(8), 614.
Influence of spatial and temporal congruency between executed and observed movements on the recognition of biological motion
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Specific influences of self-motion on the detection of biological motion. Perception 38 ECVP Abstract Supplement, 85.
Specific influences of self-motion on the detection of biological motion
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Endres, D., Priss, U. & Földiák, P. (2009). Interpreting the Neural Code with Formal Concept Analysis. Perception 38 ECVP Abstract Supplement, 127.
Interpreting the Neural Code with Formal Concept Analysis
Authors: Endres, Dominik; Priss, Uta; Földiák, Peter
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2008

Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2008). Facilitation of action recognition by motor programs is critically dependent on timing. Perception 37 ECVP Abstract Supplement, 25 (Travel Award).
Facilitation of action recognition by motor programs is critically dependent on timing
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Omlor, L., Giese, M. A. & Roether, C. L. (2008). Distinctive postural and dynamic features for bodily emotion expression. Journal of Vision, 8(6), 910a.
Distinctive postural and dynamic features for bodily emotion expression
Authors: Omlor, Lars; Giese, Martin A.; Roether, C. L.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the recognition of transitive actions. Perception, 37(suppl.), 155.
Neural model for the recognition of transitive actions
Abstract:

The visual recognition of goal-directed movements is crucial for imitation and possibly for the understanding of actions. We present a neurophysiologically inspired model for the recognition of goal-directed hand movements. The model exploits neural principles that have been used before to account for object and action recognition: (i) a hierarchical neural architecture extracting form and motion features; (ii) optimization of mid-level features by learning; (iii) realization of temporal sequence selectivity by recurrent neural circuits. Beyond these classical principles, the model proposes novel, physiologically plausible mechanisms for the integration of information about effector shape, motion, goal object, and affordance. We demonstrate that the model is powerful enough to recognize hand actions from real video sequences and reproduces characteristic properties of real cortical neurons involved in action recognition. We conclude that: (i) goal-directed actions can be recognized by view-based mechanisms without a simulation of the actions in 3D; (ii) well-established neural principles of object and motion recognition are sufficient to account for the visual recognition of goal-directed transitive actions.
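The recurrent realization of temporal sequence selectivity, principle (iii) above, can be sketched with a toy network (hypothetical weights and dynamics, not the published model). Snapshot neurons are coupled by asymmetric forward connections, so input frames presented in the trained order coincide with the recurrent pre-activation and are amplified, while the time-reversed order is not.

```python
import numpy as np

n, lam, w = 5, 0.5, 1.0          # snapshot neurons, leak factor, forward weight

def run(order):
    """Present one input frame per snapshot neuron; return summed activity."""
    u = np.zeros(n)
    total = 0.0
    for frame in order:
        x = np.zeros(n)
        x[frame] = 1.0                              # current "snapshot" of the action
        shifted = np.concatenate(([0.0], u[:-1]))   # neuron i-1 excites neuron i
        u = lam * (u + w * shifted) + x             # leaky update with asymmetric recurrence
        total += u.sum()
    return total

forward = run(range(n))               # trained order: 0, 1, 2, 3, 4
backward = run(range(n - 1, -1, -1))  # time-reversed order
print(forward, backward)              # 15.0 11.1875: forward is amplified
```

The asymmetry of the connectivity matrix (only i-1 → i) is what makes the network direction selective; with symmetric coupling the two orders would yield identical responses.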

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Simulating mirror-neuron responses using a neural model for visual action recognition. Proceedings of the Seventeenth Annual Computational Neuroscience Meeting (CNS 2008), July 19-24, 2008, Portland, Oregon, USA.
Simulating mirror-neuron responses using a neural model for visual action recognition
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the real extent of this putative visuo-motor interaction during visual perception of actions is, and which relevant computational functions are instead accomplished by possibly purely visual processing. We present a neurophysiologically inspired model for the visual recognition of hand movements. It demonstrates that several experimentally established properties of mirror neurons can be explained by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) a hierarchical neural architecture that extracts 2D form features with increasing complexity and position invariance along the hierarchy [3,4,5]; (2) extraction of optimal features on different hierarchy levels by eliminating features that do not contribute to correct classification results; (3) simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]; (4) a simple neural mechanism that combines the spatial information about the goal object and its affordance with information about the end effector and its movement. The model is validated with video sequences of both monkey and human grasping actions.
We show that simple, well-established, physiologically plausible mechanisms can account for important aspects of visual action recognition and for experimental data on the mirror neuron system. Specifically, these results are independent of explicit 3D representations of objects and the action. Instead, the model realizes predictions over time based on learned 2D pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
References:
1. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G: Understanding motor events: a neurophysiological study. Exp Brain Res 1992, 91:176-180.
2. Rizzolatti G, Craighero L: The mirror-neuron system. Annu Rev Neurosci 2004, 27:169-192.
3. Giese MA, Poggio T: Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci 2003, 4:179-192.
4. Riesenhuber M, Poggio T: Hierarchical models of object recognition in cortex. Nat Neurosci 1999, 2:1019-1025.
5. Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T: Robust object recognition with cortex-like mechanisms. IEEE Trans Pattern Anal Mach Intell 2007, 29:411-426.
6. Xie X, Giese MA: Nonlinear dynamics of direction-selective recurrent neural media. Phys Rev E Stat Nonlin Soft Matter Phys 2002, 65:051904.
7. Zhang K: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci 1996, 16:2112-2126.
8. Hopfield JJ, Brody CD: What is a moment? "Cortical" sensory integration over a brief interval. Proc Natl Acad Sci U S A 2000, 97:13919-13924.
9. Oztop E, Kawato M, Arbib M: Mirror neurons and imitation: a computationally guided review. Neural Netw 2006, 19:254-271.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the visual recognition of actions. Computational and Systems Neuroscience (COSYNE) 2008, Salt Lake City, USA.
Neural model for the visual recognition of actions
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the real extent of this putative visuo-motor interaction during visual perception of actions is, and which relevant computational functions are instead accomplished by possibly purely visual processing. Here, we present a neurophysiologically inspired model for the recognition of hand movements demonstrating that a substantial degree of performance can be accomplished by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) a hierarchical neural architecture that extracts form and motion features with position and scale invariance through a subsequent increase of feature complexity and invariance along the hierarchy [3,4,5]; (2) learning of optimized features on different hierarchy levels using a trace learning rule that eliminates features that do not contribute to correct classification results [5]; (3) simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]; (4) as a novel computational function, a plausible mechanism that combines the spatial information about the goal object and its affordance with the specific posture, position and orientation of the effector (hand). The model is evaluated on video sequences of both monkey and human grasping actions.
The model demonstrates that simple, well-established, physiologically plausible mechanisms account for important aspects of visual action recognition. Specifically, the proposed model does not contain explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
References:
[1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180.
[2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192.
[3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025.
[4] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192.
[5] Serre, T. et al. (2007): IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426.
[6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126.
[7] Hopfield, J. and Brody, D. (2000): Proc. Natl. Acad. Sci. USA 97, 13919-13924.
[8] Xie, X. and Giese, M. (2002): Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 65, 051904.
[9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.
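Mechanism (2), feature selection by a trace learning rule, can be illustrated with a minimal sketch (hypothetical parameters and data, not the published implementation). The weight update uses a temporally smoothed ("trace") postsynaptic activity, so features that respond consistently across successive frames of an action movie are strengthened, while features responding only sporadically are not.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_features = 50, 2
# Feature 0 fires on every frame of the sequence; feature 1 fires rarely.
x = np.zeros((T, n_features))
x[:, 0] = 1.0
x[rng.choice(T, size=5, replace=False), 1] = 1.0

eta, lam = 0.05, 0.8          # learning rate, trace decay
w = np.ones(n_features)
trace = 0.0
for t in range(T):
    y = w @ x[t]                          # postsynaptic response
    trace = lam * trace + (1 - lam) * y   # low-pass filtered activity
    w += eta * trace * x[t]               # Hebbian update gated by the trace

print(w)  # weight of the temporally consistent feature grows far more
```

A subsequent pruning step could then discard the low-weight, temporally inconsistent features, which is the selection effect the abstract refers to.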

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Ilg, W., Christensen, A., Karnath, H. O. & Giese, M. A. (2008). Facilitation of action recognition by self-generated movements depends critically on timing. Neuroscience Meeting, Washington DC.
Facilitation of action recognition by self-generated movements depends critically on timing
Authors: Ilg, Winfried; Christensen, Andrea; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the visual recognition of hand actions. Journal of Vision, 8(6), 53a.
Neural model for the visual recognition of hand actions
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the real extent of this putative visuo-motor interaction during visual perception of actions is, and which relevant computational functions are instead accomplished by possibly purely visual processing. Here, we present a neurophysiologically inspired model for the recognition of hand movements demonstrating that a substantial degree of performance can be accomplished by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) a hierarchical neural architecture that extracts form and motion features with position and scale invariance through a subsequent increase of feature complexity and invariance along the hierarchy [3,4,5]; (2) learning of optimized features on different hierarchy levels using a trace learning rule that eliminates features that do not contribute to correct classification results [5]; (3) simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]; (4) as a novel computational function, a plausible mechanism that combines the spatial information about the goal object and its affordance with the specific posture, position and orientation of the effector (hand). The model is evaluated on video sequences of both monkey and human grasping actions.
The model demonstrates that simple, well-established, physiologically plausible mechanisms account for important aspects of visual action recognition. Specifically, the proposed model does not contain explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
References:
[1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180.
[2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192.
[3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025.
[4] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192.
[5] Serre, T. et al. (2007): IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426.
[6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126.
[7] Hopfield, J. and Brody, D. (2000): Proc. Natl. Acad. Sci. USA 97, 13919-13924.
[8] Xie, X. and Giese, M. (2002): Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 65, 051904.
[9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2007

Curio, C., Breidt, M., Kleiner, M., Bülthoff, H. H. & Giese, M. A. (2007). High-level after-effects in the recognition of dynamic facial expressions. Perception, 36(suppl.), 994.
High-level after-effects in the recognition of dynamic facial expressions
Authors: Curio, Cristobal; Breidt, Martin; Kleiner, Mario; Bülthoff, H. H.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P., Casile, A. & Giese, M. A. (2007). Neurons in monkey pre-motor cortex (area F5) responding to filmed actions. Perception, 36(suppl.), 73.
Neurons in monkey pre-motor cortex (area F5) responding to filmed actions
Authors: Caggiano, Vittorio; Fogassi, Leonardo; Rizzolatti, Giacomo; Thier, Peter; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P., Casile, A. & Giese, M. A. (2007). Mirror neurons responding to filmed actions. Proc. of the 37th Annual Meeting of the Society for Neuroscience, November 3-7, 2007, San Diego, USA.
Mirror neurons responding to filmed actions
Authors: Caggiano, Vittorio; Fogassi, Leonardo; Rizzolatti, Giacomo; Thier, Peter; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2003

Casile, A. & Giese, M. A. (2003). Roles of motion and form in biological motion recognition. In Kaynak, Okyay; Alpaydin, Ethem; Oja, Erkki et al. (editors), Artificial Neural Networks and Neural Information Processing - ICANN/ICONIP 2003. Lecture Notes in Computer Science, 2714, 854-862.
Roles of motion and form in biological motion recognition
Abstract:

Animals and humans recognize biological movements and actions with high robustness and accuracy. It still remains to be clarified how different neural mechanisms processing form and motion information contribute to this recognition process. We investigate this question using simple learning-based neurophysiologically inspired mechanisms for biological motion recognition. In quantitative simulations we show the following results: (1) Point light stimuli with strongly degraded local motion information can be recognized with a neural model for the (dorsal) motion pathway. (2) The recognition of degraded biological motion stimuli is dependent on previous experience with point light stimuli. (3) Opponent motion features seem to be critical for the recognition of these stimuli.
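Result (3), the importance of opponent motion features, can be illustrated with a toy detector (invented for illustration; not the model's actual feature set). Two direction-selective responses at neighboring positions are combined multiplicatively, so the unit fires only for locally opposing motion, such as arises between the two legs of a point-light walker, and not for common translation.

```python
def opponent_motion(v_left, v_right):
    """v_left, v_right: signed horizontal velocities at two nearby points."""
    rightward = max(v_left, 0.0)    # rightward motion at the left position
    leftward = max(-v_right, 0.0)   # leftward motion at the right position
    return rightward * leftward     # nonzero only for converging (opponent) motion

print(opponent_motion(1.0, -1.0))  # opponent motion -> 1.0
print(opponent_motion(1.0, 1.0))   # common translation -> 0.0
```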

Authors: Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2002

Giese, M. A. (2002). Learning recurrent neural models with minimal complexity from sparse neural data. NATO Workshop on Learning Theory and Practice, Leuven.
Learning recurrent neural models with minimal complexity from sparse neural data
Authors: Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

No year

Giese, M. A., Bognár, A. & Vogels, R. Physiologically-inspired neurodynamical model for anorthoscopic perception.
Physiologically-inspired neurodynamical model for anorthoscopic perception
Type of Publication: In Collection
