Non-reviewed Conference Papers and Abstracts

Year: 2011

Chiovetto, E., Omlor, L., D'Avella, A. & Giese, M. A. (2011). Comparison between unsupervised learning algorithms for the extraction of muscle synergies. Meeting of the German Neuroscience Society (GNS), Goettingen, Germany.
Comparison between unsupervised learning algorithms for the extraction of muscle synergies
Authors: Chiovetto, Enrico; Omlor, Lars; d'Avella, Andrea; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Month: 03
Christensen, A., Ilg, W. & Giese, M. A. (2011). Biological motion detection does not involve an automatic perspective taking. Journal of Vision, 11(11), 743.
Biological motion detection does not involve an automatic perspective taking
Authors: Christensen, Andrea; Ilg, Winfried; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Borchers, S., Christensen, A., Ziegler, L. & Himmelbach, M. (2011). Der Einfluss erlernter Objektgroessen auf die visuelle Kontrolle von Greifbewegungen bei monokularer und binokularer Praesentation [The influence of learned object sizes on the visual control of grasping movements under monocular and binocular presentation]. In: Tagung experimentell arbeitender Psychologen, Halle, Germany.
Der Einfluss erlernter Objektgroessen auf die visuelle Kontrolle von Greifbewegungen bei monokularer und binokularer Praesentation [The influence of learned object sizes on the visual control of grasping movements under monocular and binocular presentation]
Authors: Borchers, Svenja; Christensen, Andrea; Ziegler, Lisa; Himmelbach, Marc
Research Areas: Uncategorized
Type of Publication: In Collection
Taubert, N., Endres, D., Christensen, A. & Giese, M. A. (2011). Shaking Hands in Latent Space: Modeling Emotional Interactions with Gaussian Process Latent Variable Models. In Edelkamp, S. & Bach, J. (editors), KI 2011: Advances in Artificial Intelligence, LNAI 7006, 330-334. Springer.
Shaking Hands in Latent Space: Modeling Emotional Interactions with Gaussian Process Latent Variable Models
Authors: Taubert, Nick; Endres, Dominik; Christensen, Andrea; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Oberhoff, D., Endres, D., Giese, M. A. & Kolesnik, M. (2011). Gates for Handling Occlusion in Bayesian Models of Images: An Initial Study. In Edelkamp, S. & Bach, J. (editors), KI 2011: Advances in Artificial Intelligence, LNAI 7006, 228-232. Springer.
Gates for Handling Occlusion in Bayesian Models of Images: An Initial Study
Authors: Oberhoff, Daniel; Endres, Dominik; Giese, Martin A.; Kolesnik, Marina
Research Areas: Uncategorized
Type of Publication: In Collection
Beck, T., Wilke, C., Wirxel, B., Endres, D., Lindner, A. & Giese, M. A. (2011). A Bayesian Graphical Model for the Influence of Agency Attribution on Perception and Control of Self-action. Ninth Göttingen meeting of the German Neuroscience Society.
A Bayesian Graphical Model for the Influence of Agency Attribution on Perception and Control of Self-action
Authors: Beck, Tobias; Wilke, Carlo; Wirxel, Barbara; Endres, Dominik; Lindner, A.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Endres, D. & Oram, M. (2011). Modeling Non-stationarity and Inter-spike Dependency in High-level Visual Cortical Area STSa. Ninth Göttingen meeting of the German Neuroscience Society.
Modeling Non-stationarity and Inter-spike Dependency in High-level Visual Cortical Area STSa
Authors: Endres, Dominik; Oram, Mike
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2010

Fleischer, F., Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P. & Giese, M. A. (2010). Temporal and Semantic Selectivity in Mirror Neurons in monkey premotor area F5. Meeting of the Society for Neuroscience 2010, San Diego, USA.
Temporal and Semantic Selectivity in Mirror Neurons in monkey premotor area F5
Authors: Fleischer, Falk; Caggiano, Vittorio; Fogassi, Leonardo; Rizzolatti, Giacomo; Thier, Peter; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Giese, M. A., Caggiano, V. & Thier, P. (2010). View-based neural encoding of goal-directed actions: a physiologically-inspired neural theory. Journal of Vision, 10(7), 1095.
View-based neural encoding of goal-directed actions: a physiologically-inspired neural theory
Abstract:

The visual recognition of goal-directed movements is crucial for action understanding. Neurons with visual selectivity for goal-directed hand actions have been found in multiple cortical regions. Such neurons are characterized by a remarkable combination of selectivity and invariance: Their responses vary with subtle differences between hand shapes (e.g. defining different grip types) and the exact spatial relationship between effector and goal object (as required for a successful grip). At the same time, many of these neurons are largely invariant with respect to the spatial position of the stimulus and the visual perspective. This raises the question of how the visual system accomplishes this combination of spatial accuracy and invariance. Numerous theories for visual action recognition in neuroscience and robotics have postulated that the visual system reconstructs the three-dimensional structures of effector and object and then verifies their correct spatial relationship, potentially by internal simulation of the observed action in a motor frame of reference. However, novel electrophysiological data showing view-dependent responses of mirror neurons point towards an alternative explanation. We propose an alternative theory that is based on physiologically plausible mechanisms, and which makes predictions that are compatible with electrophysiological results. It is based on the following key components: (1) A neural shape recognition hierarchy with incomplete position invariance; (2) a dynamic neural mechanism that associates shape information over time; (3) a gain-field-like mechanism that computes affordance- and spatial matching between effector and goal object; (4) pooling of the output signals of a small number of view-specific action-selective modules.
We show that this model is computationally powerful enough to accomplish robust position- and view-invariant recognition on real videos. At the same time, it reproduces and correctly predicts data from single-cell recordings, e.g. on the view- and temporal–order selectivity of mirror neurons in area F5.

Authors: Giese, Martin A.; Caggiano, Vittorio; Thier, Peter
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Caggiano, V. & Giese, M. A. (2010). Neural model for the visual tuning properties of action-selective neurons in monkey cortex. Meeting of the German Neuroscience Society (GNS), Goettingen, Germany.
Neural model for the visual tuning properties of action-selective neurons in monkey cortex
Authors: Fleischer, Falk; Caggiano, Vittorio; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Endres, D., Höffken, M., Vintila, F., Bruce, N. D., Bouecke, J. D., Kornprobst, P. et al. (2010). Hooligan Detection: the Effects of Saliency and Expert Knowledge. ECVP 2010 and Perception 39 supplement, page 193.
Hooligan Detection: the Effects of Saliency and Expert Knowledge
Authors: Endres, Dominik; Höffken, M.; Vintila, F.; Bruce, Neil D. B.; Bouecke, Jan D.; Kornprobst, Pierre; Neumann, H.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Endres, D., Beck, T., Bouecke, J. D., Omlor, L., Neumann, H. & Giese, M. A. (2010). Segmentation of Action Streams: Comparison between Human and Statistically Optimal Performance. Vision Sciences Society Congress, VSS 2010 and Journal of Vision, vol. 10, no. 7, article 807, 2010.
Segmentation of Action Streams: Comparison between Human and Statistically Optimal Performance
Abstract:

Natural body movements arise in the form of temporal sequences of individual actions. In order to realize a visual analysis of these actions, the visual system must accomplish a temporal segmentation of such action sequences. Previous work has studied in detail the segmentation of sequences of piecewise linear movements in the two-dimensional plane. In our study, we compared statistical approaches for the segmentation of human full-body movement with human responses. Video sequences were generated by synthesizing sequences of natural actions based on motion capture, using appropriate methods for motion blending. Human segmentation was assessed by an interactive adjustment paradigm, where participants had to indicate segmentation points by selection of the relevant frames. We compared this psychophysical data against different segmentation algorithms, which were based on the available 3D joint trajectories that were used for the synthesis of the motion stimuli. Simple segmentation methods, e.g. based on discontinuities in path direction or speed, were compared with an optimal Bayesian action segmentation approach from machine learning. This method is based on a generative probabilistic model. Transitions between classes (types of actions) were modelled by resetting the feature priors at the change points. Change point configurations were modelled by Bayesian binning. Applying optimization within a Bayesian framework, the number and length of individual action segments were determined automatically. Performance of this algorithmic approach was compared with human performance.

Authors: Endres, Dominik; Beck, Tobias; Bouecke, Jan D.; Omlor, Lars; Neumann, H.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2010). Influence of (a)synchronous egomotion on action perception. In: Neural Encoding of Perception and Action, Tuebingen, Germany.
Influence of (a)synchronous egomotion on action perception
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2010). Einfluss (a)synchroner Eigenbewegung auf die Handlungswahrnehmung [Influence of (a)synchronous self-motion on action perception]. In: Tagung experimentell arbeitender Psychologen, Saarbruecken, Germany.
Einfluss (a)synchroner Eigenbewegung auf die Handlungswahrnehmung [Influence of (a)synchronous self-motion on action perception]
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2010). Facilitation of biological-motion detection by motor execution does not depend on attributed body side. Perception 39 ECVP Abstract Supplement, page 18.
Facilitation of biological-motion detection by motor execution does not depend on attributed body side
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2009

Fleischer, F., Caggiano, V., Casile, A. & Giese, M. A. (2009). Neural model for the visual tuning properties of action-selective neurons in premotor cortex. Meeting of the German Neuroscience Society (GNS), Goettingen, Germany.
Neural model for the visual tuning properties of action-selective neurons in premotor cortex
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the precise nature of this putative visuo-motor interaction is, and which relevant computational functions can be accomplished by purely visual processing. Here, we present a neurophysiologically inspired model for the visual recognition of grasping movements from videos. The model shows that the recognition of functional actions can be accounted for to a substantial degree by the analysis of spatio-temporal visual features using well-established simple neural circuits. The model integrates a hierarchical neural architecture that extracts form information in a view-dependent way, accomplishing partial position and scale invariance [3,4,5]. It includes physiologically plausible recurrent neural circuits that result in temporal sequence selectivity [6,7,8]. As a novel computational step, the model proposes a simple neural mechanism that accounts for the selective matching between the spatial properties of goal objects and the specific posture, position and orientation of the effector (hand). In contrast to other models that assume a complete reconstruction of the 3D effector and object shape, our model is consistent with the fact that almost 90% of mirror neurons in premotor cortex show view-tuning. We demonstrate that the model is sufficiently powerful for recognizing goal-directed actions from real video sequences. In addition, it correctly predicts several key properties of the visual tuning of neurons in premotor cortex.
We conclude that the recognition of functional actions can be accomplished by simple physiologically plausible mechanisms, without the explicit reconstruction of the 3D structures of objects and effector. Instead, prediction over time can be accomplished by the learning of spatio-temporal visual pattern sequences. This ‘bottom-up’ view of action recognition complements existing models for the mirror neuron system [9] and motivates a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions. References [1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180. [2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192. [3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025. [4] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192. [5] Serre, T. et al. (2007): IEEE Pattern Anal. Mach. Int. 29, 411-426. [6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126. [7] Hopfield, J. and Brody, D. (2000): Proc Natl Acad Sci USA 97, 13919-13924. [8] Xie, X. and Giese, M. (2002): Phys Rev E Stat Nonlin Soft Matter Phys 65, 051904. [9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Fleischer, Falk; Caggiano, Vittorio; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Giese, M. A., Casile, A. & Fleischer, F. (2009). Neural model of action-selective neurons in STS and area F5. Int Conf on Cognitive Systems Neuroscience (COSYNE) 2009, Salt Lake City, USA.
Neural model of action-selective neurons in STS and area F5
Abstract:

The visual recognition of goal-directed movements is crucial for the understanding of intentions and goals of others as well as for imitation learning. So far, it is largely unknown how visual information about effectors and goal objects of actions is integrated in the brain. Specifically, it is unclear whether a robust recognition of goal-directed actions can be accomplished by purely visual processing or if it requires a reconstruction of the three-dimensional structure of object and effector geometry. We present a neurophysiologically inspired model for the recognition of goal-directed grasping movements from real video sequences. The model integrates several physiologically plausible mechanisms in order to realize the integration of information about goal objects and the effector and its movement: (1) A hierarchical neural architecture for the recognition of hand and object shapes, which realizes position and scale-invariant recognition by subsequent increase of feature complexity and invariance along the hierarchy based on learned example views [1,2,3]. However, in contrast to standard models for visual object recognition this invariance is incomplete, so that the retinal positions of goal object and effector can be extracted by a population code. (2) Simple recurrent neural circuits for the realization of temporal sequence selectivity [4,5,6]. (3) A novel mechanism combines information about object shape and affordance and about effector (hand) posture and position in an object-centered frame of reference. This mechanism exploits gain fields in order to implement the relevant coordinate transformation [7,8]. The model shows that a robust integration of effector and object information can be accomplished by well-established physiologically plausible principles. Specifically, the proposed model does not contain explicit 3D representations of objects and the effector movement.
Instead, it realizes predictions over time based on learned view-dependent representation of the visual input. Our results complement those of existing models of action recognition [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions. References [1] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025. [2] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192. [3] Serre, T. et al. (2007): IEEE Pattern Anal. Mach. Int. 29, 411-426. [4] Zhang, K. (1996): J. Neurosci. 16, 2112-2126. [5] Hopfield, J. and Brody, D. (2000): Proc Natl Acad Sci USA 97, 13919-13924. [6] Xie, X. and Giese, M. (2002): Phys Rev E Stat Nonlin Soft Matter Phys 65, 051904. [7] Salinas, E. and Abbott, L. (1995): J. Neurosci. 75, 6461-6474. [8] Pouget, A. and Sejnowski, T. (1997): J. Cogn. Neurosci. 9, 222-237. [9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Giese, Martin A.; Casile, Antonino; Fleischer, Falk
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2009). A neural model of the visual tuning properties of action-selective neurons in STS and area F5. Journal of Vision, 9(8), 1106.
A neural model of the visual tuning properties of action-selective neurons in STS and area F5
Abstract:

The visual recognition of goal-directed movements is crucial for the understanding of intentions and goals of others as well as for imitation learning. So far, it is largely unknown how visual information about effectors and goal objects of actions is integrated in the brain. Specifically, it is unclear whether a robust recognition of goal-directed actions can be accomplished by purely visual processing or if it requires a reconstruction of the three-dimensional structure of object and effector geometry. We present a neurophysiologically inspired model for the recognition of goal-directed grasping movements. The model reproduces fundamental properties of action-selective neurons in STS and area F5. The model is based on a hierarchical architecture with neural detectors that reproduce the properties of cells in visual cortex. It contains a novel physiologically plausible mechanism that combines information on object shape and effector (hand) shape and movement, implementing the necessary coordinate transformation from a retinal to an object-centered frame of reference. The model was evaluated with real video sequences of human grasping movements, using separate training and test sets. The model reproduces a variety of tuning properties that have been observed in electrophysiological experiments for action-selective neurons in STS and area F5. The model shows that the integration of effector and object information can be accomplished by well-established physiologically plausible principles. Specifically, the proposed model does not compute explicit 3D representations of objects and the action. Instead, it realizes predictions over time based on learned view-dependent representations for sequences of hand shapes.
Our results complement those of existing models for the recognition of goal-directed actions and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Giese, M. A., Caggiano, V., Casile, A. & Fleischer, F. (2009). Visual encoding of goal-directed movements: a physiologically plausible neural model. Meeting of the Society for Neuroscience 2009, Washington DC, USA.
Visual encoding of goal-directed movements: a physiologically plausible neural model
Abstract:

Visual responses of action-selective neurons, e.g. in premotor cortex and the superior temporal sulcus of the macaque monkey, are characterized by a remarkable combination of selectivity and invariance. On the one hand, the responses of such neurons show high selectivity for details about the grip and the spatial relationship between effector and object. At the same time, these responses show substantial invariance against the retinal stimulus position. While numerous models for the mirror neuron system have been proposed in robotics and neuroscience, almost none of them accounts for the visual tuning properties of action-selective neurons exploiting physiologically plausible neural mechanisms. In addition, many existing models assume that action encoding is based on a full reconstruction of the 3D geometry of effector and object. This contradicts recent electrophysiological results showing view-dependence of the majority of action-selective neurons, e.g. in premotor cortex. We present a neurophysiologically plausible model for the visual recognition of grasping movements from real videos. The model is based on simple well-established neural circuits. Recognition of effector and goal object is accomplished by a hierarchical neural architecture, where scale and position invariance are accomplished by nonlinear pooling along the hierarchy, consistent with many established models from object recognition. Effector recognition includes a simple predictive neural circuit that results in temporal sequence selectivity. Effector and goal position are encoded within the neural hierarchy in terms of population codes, which can be processed by a simple gain field-like mechanism in order to compute the relative position of effector and object in a retinal frame of reference.
Based on this signal, and object and effector shape, the highest hierarchy level accomplishes a distinction between functional (hand matches object shape and position) and dysfunctional (no match between hand and object shape or position) grips, at the same time being invariant against strong changes of the stimulus position. The model was tested with several stimuli from the neurophysiological literature and reproduces, partially even quantitatively, results about action-selective neurons in the STS and premotor cortex. Specifically, the model reproduces visual tuning properties and the view-dependence of mirror neurons in premotor cortex and makes additional predictions, which can be easily tested in electrophysiological experiments.

Authors: Giese, Martin A.; Caggiano, Vittorio; Casile, Antonino; Fleischer, Falk
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2009). Invariant recognition of goal-directed hand actions: a physiologically plausible neural model. Perception 38 ECVP Abstract Supplement, 51.
Invariant recognition of goal-directed hand actions: a physiologically plausible neural model
Abstract:

The recognition of transitive, goal-directed actions requires highly selective processing of shape details of effector and goal object, and at the same time high robustness with respect to image transformations. The neural mechanisms required for solving this challenging recognition task remain largely unknown. We propose a neurophysiologically-inspired model for the recognition of transitive grasping actions, which combines high selectivity for different grips with strong position invariance. The model is based on well-established, physiologically plausible simple neural mechanisms. Invariance is accomplished by combining nonlinear pooling (by maximum operations) and a specific neural representation of the relative position of object and effector based on a gain-field-like mechanism. The proposed architecture accomplishes accurate recognition of different grip types on real video data and correctly reproduces several properties of action-selective neurons in occipital, parietal and premotor areas. In addition, the model shows that the accurate recognition of goal-directed actions can be accomplished without an explicit reconstruction of the 3-D structure of effectors and objects, as assumed in many technical systems for the recognition of hand actions.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Temporal Synchrony as Critical Factor for Facilitation and Interference of Action Recognition. In: GNS Congress, Goettingen, Germany.
Temporal Synchrony as Critical Factor for Facilitation and Interference of Action Recognition
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Influence of spatial and temporal congruency between executed and observed movements on the recognition of biological motion. Journal of Vision, 9(8), 614.
Influence of spatial and temporal congruency between executed and observed movements on the recognition of biological motion
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2009). Specific influences of self-motion on the detection of biological motion. Perception 38 ECVP Abstract Supplement, page 85.
Specific influences of self-motion on the detection of biological motion
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Endres, D., Priss, U. & Földiák, P. (2009). Interpreting the Neural Code with Formal Concept Analysis. Perception 38 ECVP Abstract Supplement, page 127.
Interpreting the Neural Code with Formal Concept Analysis
Authors: Endres, Dominik; Priss, Uta; Földiák, Peter
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2008

Christensen, A., Ilg, W., Karnath, H. O. & Giese, M. A. (2008). Facilitation of action recognition by motor programs is critically dependent on timing. Perception 37 ECVP Abstract Supplement, page 25 (TRAVEL AWARD).
Facilitation of action recognition by motor programs is critically dependent on timing
Authors: Christensen, Andrea; Ilg, Winfried; Karnath, H. O.; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Omlor, L., Giese, M. A. & Roether, C. L. (2008). Distinctive postural and dynamic features for bodily emotion expression. Journal of Vision, 8(6), 910a.
Distinctive postural and dynamic features for bodily emotion expression
Authors: Omlor, Lars; Giese, Martin A.; Roether, C. L.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Neural model for the recognition of transitive actions. Perception, 37(suppl.), 155.
Neural model for the recognition of transitive actions
Abstract:

The visual recognition of goal-directed movements is crucial for imitation and possibly the understanding of actions. We present a neurophysiologically-inspired model for the recognition of goal-directed hand movements. The model exploits neural principles that have been used before to account for object and action recognition: (i) a hierarchical neural architecture extracting form and motion features; (ii) optimization of mid-level features by learning; (iii) realization of temporal sequence selectivity by recurrent neural circuits. Beyond these classical principles, the model proposes novel physiologically plausible mechanisms for the integration of information about effector shape, motion, goal object, and affordance. We demonstrate that the model is powerful enough to recognize hand actions from real video sequences and reproduces characteristic properties of real cortical neurons involved in action recognition. We conclude that: (i) goal-directed actions can be recognized by view-based mechanisms without a simulation of the actions in 3-D, (ii) well-established neural principles of object and motion recognition are sufficient to account for the visual recognition of goal-directed transitive actions.

Authors: Fleischer, Falk; Casile, Antonino; Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A. (2008). Simulating mirror-neuron responses using a neural model for visual action recognition. Proceedings of the Seventeenth Annual Computational Neuroscience Meeting CNS, July 19th - 24th 2008, Portland, Oregon, USA.
Simulating mirror-neuron responses using a neural model for visual action recognition
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the real extent of this putative visuo-motor interaction during visual perception of actions is, and which relevant computational functions are instead accomplished by purely visual processing. We present a neurophysiologically inspired model for the visual recognition of hand movements. It demonstrates that several experimentally shown properties of mirror neurons can be explained by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) A hierarchical neural architecture that extracts 2D form features with subsequently increasing complexity and invariance to position along the hierarchy [3,4,5]. (2) Extraction of optimal features on different hierarchy levels by eliminating features which do not contribute to correct classification results. (3) Simple recurrent neural circuits for the realization of temporal sequence selectivity [6,7,8]. (4) A simple neural mechanism that combines the spatial information about the goal object and its affordance with the information about the end effector and its movement. The model is validated with video sequences of both monkey and human grasping actions.
We show that simple, well-established, physiologically plausible mechanisms can account for important aspects of visual action recognition and for experimental data on the mirror neuron system. Specifically, these results are independent of explicit 3D representations of objects and the action. Instead, the model realizes predictions over time based on learned 2D pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations on the visual recognition of imitable actions. References 1. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G: Understanding motor events: a neurophysiological study. Exp Brain Res 1992, 91:176-180. 2. Rizzolatti G, Craighero L: The mirror-neuron system. Annu Rev Neurosci 2004, 27:169-192. 3. Giese MA, Poggio T: Neural mechanisms for the recognition of biological movements. Nat Rev Neurosci 2003, 4:179-192. 4. Riesenhuber M, Poggio T: Hierarchical models of object recognition in cortex. Nat Neurosci 1999, 2:1019-1025. 5. Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T: Robust object recognition with cortex-like mechanisms. IEEE Trans Pattern Anal Mach Intell 2007, 29:411-426. 6. Xie X, Giese MA: Nonlinear dynamics of direction-selective recurrent neural media. Phys Rev E Stat Nonlin Soft Matter Phys 2002, 65:051904. 7. Zhang K: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci 1996, 16:2112-2126. 8. Hopfield JJ, Brody CD: What is a moment? "Cortical" sensory integration over a brief interval. Proc Natl Acad Sci U S A 2000, 97:13919-13924. 9. Oztop E, Kawato M, Arbib M: Mirror neurons and imitation: a computationally guided review. Neural Netw 2006, 19:254-271.

Authors: Fleischer, Falk Casile, Antonino Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A (2008). Neural model for the visual recognition of actions Int Conf on Cognitive Systems Neuroscience (COSYNE) 2008, Salt Lake City, USA.
Neural model for the visual recognition of actions
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the real extent of this putative visuo-motor interaction during the visual perception of actions is, and which relevant computational functions are instead accomplished by purely visual processing. Here, we present a neurophysiologically inspired model for the recognition of hand movements, demonstrating that a substantial degree of performance can be accomplished by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) a hierarchical neural architecture that extracts form and motion features with position and scale invariance, increasing feature complexity and invariance along the hierarchy [3,4,5]; (2) learning of optimized features on different hierarchy levels using a trace learning rule that eliminates features that do not contribute to correct classification [5]; (3) simple recurrent neural circuits that realize temporal sequence selectivity [6,7,8]; (4) as a novel computational function, a plausible mechanism that combines the spatial information about the goal object and its affordance with the specific posture, position and orientation of the effector (hand). The model is evaluated on video sequences of both monkey and human grasping actions.
The model demonstrates that simple, well-established, physiologically plausible mechanisms can account for important aspects of visual action recognition. Notably, the proposed model does not contain explicit 3D representations of objects and actions. Instead, it realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
References
[1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180. [2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192. [3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025. [4] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192. [5] Serre, T. et al. (2007): IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426. [6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126. [7] Hopfield, J. and Brody, D. (2000): Proc. Natl. Acad. Sci. USA 97, 13919-13924. [8] Xie, X. and Giese, M. (2002): Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 65, 051904. [9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Fleischer, Falk Casile, Antonino Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Ilg, W., Christensen, A., Karnath, H. O. & Giese, M. A (2008). Facilitation of action recognition by self-generated movements depends critically on timing Neuroscience Meeting, Washington DC.
Facilitation of action recognition by self-generated movements depends critically on timing
Authors: Ilg, Winfried; Christensen, Andrea Karnath, H. O. Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Fleischer, F., Casile, A. & Giese, M. A (2008). Neural model for the visual recognition of hand actions Journal of Vision, 8(6), 53a.
Neural model for the visual recognition of hand actions
Abstract:

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, it remains largely unknown what the real extent of this putative visuo-motor interaction during the visual perception of actions is, and which relevant computational functions are instead accomplished by purely visual processing. Here, we present a neurophysiologically inspired model for the recognition of hand movements, demonstrating that a substantial degree of performance can be accomplished by the analysis of spatio-temporal visual features within a hierarchical neural system that reproduces fundamental properties of the visual pathway and premotor cortex. The model integrates several physiologically plausible computational mechanisms within a common architecture that is suitable for the recognition of grasping actions from real videos: (1) a hierarchical neural architecture that extracts form and motion features with position and scale invariance, increasing feature complexity and invariance along the hierarchy [3,4,5]; (2) learning of optimized features on different hierarchy levels using a trace learning rule that eliminates features that do not contribute to correct classification [5]; (3) simple recurrent neural circuits that realize temporal sequence selectivity [6,7,8]; (4) as a novel computational function, a plausible mechanism that combines the spatial information about the goal object and its affordance with the specific posture, position and orientation of the effector (hand). The model is evaluated on video sequences of both monkey and human grasping actions.
The model demonstrates that simple, well-established, physiologically plausible mechanisms can account for important aspects of visual action recognition. Notably, the proposed model does not contain explicit 3D representations of objects and actions. Instead, it realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models [9] and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
References
[1] Di Pellegrino, G. et al. (1992): Exp. Brain Res. 91, 176-180. [2] Rizzolatti, G. and Craighero, L. (2004): Annu. Rev. Neurosci. 27, 169-192. [3] Riesenhuber, M. and Poggio, T. (1999): Nat. Neurosci. 2, 1019-1025. [4] Giese, M.A. and Poggio, T. (2003): Nat. Rev. Neurosci. 4, 179-192. [5] Serre, T. et al. (2007): IEEE Trans. Pattern Anal. Mach. Intell. 29, 411-426. [6] Zhang, K. (1996): J. Neurosci. 16, 2112-2126. [7] Hopfield, J. and Brody, D. (2000): Proc. Natl. Acad. Sci. USA 97, 13919-13924. [8] Xie, X. and Giese, M. (2002): Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 65, 051904. [9] Oztop, E. et al. (2006): Neural Netw. 19, 254-271.

Authors: Fleischer, Falk Casile, Antonino Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2007

Curio, C., Breidt, M., Kleiner, M., Bülthoff, H. H. & Giese, M. A (2007). High-level after-effects in the recognition of dynamic facial expressions. Perception, 36(suppl.), 994.
High-level after-effects in the recognition of dynamic facial expressions
Authors: Curio, Cristobal Breidt, Martin Kleiner, Mario Bülthoff, H. H. Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P., Casile, A. & Giese, M. A (2007). Neurons in monkey pre-motor cortex (area F5) responding to filmed actions Perception, 36(suppl.), 73.
Neurons in monkey pre-motor cortex (area F5) responding to filmed actions
Authors: Caggiano, Vittorio Fogassi, Leonardo Rizzolatti, Giacomo Thier, Peter Casile, Antonino Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection
Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P., Casile, A. & Giese, M. A (2007). Mirror neurons responding to filmed actions Proc. of the 37th Annual Meeting of the Society for Neuroscience, 3rd-7th November 2007, San Diego, USA.
Mirror neurons responding to filmed actions
Authors: Caggiano, Vittorio Fogassi, Leonardo Rizzolatti, Giacomo Thier, Peter Casile, Antonino Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2003

Casile, A. & Giese, M. A (2003). Roles of motion and form in biological motion recognition. In Kaynak, O., Alpaydin, E., Oja, E. et al. (editors), Artificial Neural Networks and Neural Information Processing - ICANN/ICONIP 2003. Lecture Notes in Computer Science, 2714, 854-862.
Roles of motion and form in biological motion recognition
Abstract:

Animals and humans recognize biological movements and actions with high robustness and accuracy. It still remains to be clarified how different neural mechanisms processing form and motion information contribute to this recognition process. We investigate this question using simple learning-based neurophysiologically inspired mechanisms for biological motion recognition. In quantitative simulations we show the following results: (1) Point light stimuli with strongly degraded local motion information can be recognized with a neural model for the (dorsal) motion pathway. (2) The recognition of degraded biological motion stimuli is dependent on previous experience with point light stimuli. (3) Opponent motion features seem to be critical for the recognition of these stimuli.

Authors: Casile, Antonino Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

Year: 2002

Giese, M. A (2002). Learning recurrent neural models with minimal complexity from sparse neural data. NATO Workshop on Learning Theory and Practice, Leuven.
Learning recurrent neural models with minimal complexity from sparse neural data
Authors: Giese, Martin A.
Research Areas: Uncategorized
Type of Publication: In Collection

No year

Giese, M. A., Bognár, A. & Vogels, R. Physiologically-inspired neurodynamical model for anorthoscopic perception.
Physiologically-inspired neurodynamical model for anorthoscopic perception
Type of Publication: In Collection
