RELEVANCE: How body relevance drives brain organization

Research Area

Neural and Computational Principles of Action and Social Processing

Researchers

Martin A. Giese; Michael Stettler; Nick Taubert; Albert Mukovskiy; Winfried Ilg; Lucas M. Martini; Prerana Kumar; Alexander Lappe

Collaborators

Rufin Vogels (KU Leuven); Beatrice de Gelder (Maastricht University)
Proposed start date
2020-07-01
Proposed end date
2025-06-30

Description

Social species, and specifically human and nonhuman primates, rely heavily on conspecifics for survival. Considerable time is spent watching each other's behavior, because this is often the most relevant source of information for preparing adaptive social responses. The project RELEVANCE aims to understand how the brain evolved specialized structures to process highly relevant social stimuli such as bodies, and to reveal how social vision sustains adaptive behaviour. This requires a new way of thinking about biological information processing, which remains among the brain's most distinctive and least understood characteristics and accounts for one of the biggest differences between brains and computers.

The project will develop a mechanistic and computational understanding of the visual processing of bodies and interactions, and show how this processing sustains higher abilities such as understanding intention, action and emotion. RELEVANCE will accomplish this by integrating advanced methods from multiple disciplines. Crosstalk between human and monkey methods will establish homologies between the species, revealing cornerstones of the theory. Physiologically inspired neural and deep neural network models will help to understand the dynamic mechanisms of the neural processing of complex social stimuli and its task-dependent modulation.

RELEVANCE aims at uncovering novel principles of the processing of social stimuli. We aim to model the visual recognition of bodies, actions and interactions, using different computational approaches to ultimately predict and explain experimental data. These principles might inspire novel diagnostic and treatment approaches in neuropsychiatry, as well as novel architectures for the processing of socially relevant information in computer and robotic systems.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n° 856495).

 

Subprojects

Highly realistic monkey body avatars with veridical motion

Uncovering the neural mechanisms of the encoding of visually perceived body, action and social stimuli requires highly controlled and parameterized stimuli. Neurophysiological experiments require such stimuli for monkeys, which we generate by combining cutting-edge methods for markerless motion tracking with methods from computer animation.
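As a rough illustration of the geometric core of such markerless tracking (a minimal sketch only; the cameras and keypoints below are hypothetical placeholders, not our actual pipeline), 2D keypoint detections from several calibrated, synchronized cameras can be triangulated into a 3D joint position by direct linear transformation:

```python
import numpy as np

def dlt_triangulate(projection_matrices, keypoints_2d):
    """Linear (DLT) triangulation of one 3D point from two or more calibrated views.

    projection_matrices: list of 3x4 camera projection matrices P_i.
    keypoints_2d: list of (x, y) pixel detections of the same joint, one per view.
    Returns the 3D point in world coordinates.
    """
    rows = []
    for P, (x, y) in zip(projection_matrices, keypoints_2d):
        # Each view contributes two linear constraints on the homogeneous point X:
        #   x * (P[2] @ X) = P[0] @ X   and   y * (P[2] @ X) = P[1] @ X
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy example: two hypothetical cameras observing the point (0.1, 0.2, 2.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera shifted along x
point = np.array([0.1, 0.2, 2.0, 1.0])
det1 = (P1 @ point)[:2] / (P1 @ point)[2]
det2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(dlt_triangulate([P1, P2], [det1, det2]))  # approximately [0.1, 0.2, 2.0]
```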

Analyzing the role of mirror neurons in action encoding and selection

Mirror neurons in premotor and parietal cortex link visual processing and the neural encoding of motor behavior. We try to identify their exact computational role in terms of visual and motor encoding, and the representation of motor programs.

Neurodynamical models of the single-cell activity of body-selective visual neurons

Bodies represent high-dimensional and dynamically changing stimuli. Based on highly controlled stimuli that we developed, and which have been used by our collaboration partners in Leuven and Maastricht, we develop neural models that explain physiological and fMRI data.
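To make the idea concrete (a schematic sketch under strong simplifying assumptions, not the published model architecture), one building block of such neurodynamical models can be written as a one-dimensional recurrent neural field of "snapshot" neurons, each tuned to one body posture of an action, with an asymmetric lateral kernel so that a travelling activity pulse builds up when the postures arrive in the correct temporal order:

```python
import numpy as np

# Schematic Amari-type neural field over "snapshot" neurons, each tuned to one
# body posture of an action sequence (all parameters are illustrative only).
N, T, dt, tau, h = 60, 300, 1.0, 20.0, 0.5   # neurons, steps, step size, time constant, resting level
pos = np.arange(N)

# Asymmetric lateral kernel: local excitation shifted forward along the sequence,
# plus weak global inhibition, so activity propagates in the trained temporal order.
d = pos[:, None] - pos[None, :]
kernel = 1.2 * np.exp(-(d - 2.0) ** 2 / (2 * 3.0 ** 2)) - 0.3

def stimulus_drive(t):
    # Hypothetical feedforward input: a Gaussian bump moving across posture space,
    # standing in for frame-by-frame responses of shape-selective model neurons.
    center = (t / T) * N
    return 2.0 * np.exp(-(pos - center) ** 2 / (2 * 2.0 ** 2))

u = np.zeros(N)                               # membrane potentials of the field
activity = []
for t in range(T):
    r = np.maximum(u, 0.0)                    # threshold-linear firing rates
    u = u + dt / tau * (-u + kernel @ r / N + stimulus_drive(t) - h)
    activity.append(r)

activity = np.array(activity)                 # (time, neuron) response of the field
print(activity.shape, float(activity.max()))
```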

Publications

Lappe, A., Bognár, A., Nejad, G. G., Mukovskiy, A., Martini, L. M., Giese, M. A. et al. (2024). Parallel Backpropagation for Shared-Feature Visualization. Advances in Neural Information Processing Systems, 37, 22993-23012.
Parallel Backpropagation for Shared-Feature Visualization
Authors: Alexander Lappe; Anna Bognár; Ghazaleh Ghamkhari Nejad; Albert Mukovskiy; Lucas M. Martini; Martin A. Giese; Rufin Vogels
Type of Publication: Article
Journal: Advances in Neural Information Processing Systems
Number: 37
Pages: 22993-23012
Year: 2024
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture. bioRxiv.
MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture
Abstract:

Social interaction is crucial for survival in primates. For the study of social vision in monkeys, highly controllable macaque face avatars have recently been developed, while body avatars with realistic motion do not yet exist. Addressing this gap, we developed a pipeline for three-dimensional motion tracking based on synchronized multi-view video recordings, achieving sufficient accuracy for life-like full-body animation. By exploiting data-driven pose estimation models, we track the complete time course of individual actions using a minimal set of hand-labeled keyframes. Our approach tracks single actions more accurately than existing pose estimation pipelines for behavioral tracking of non-human primates, requiring less data and fewer cameras. This efficiency is also confirmed for a state-of-the-art human benchmark dataset. A behavioral experiment with real macaque monkeys demonstrates that animals perceive the generated animations as similar to genuine videos, and establishes an uncanny valley effect for bodies in monkeys.

Type of Publication: Article
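For illustration, the animation side of the pipeline described in the abstract above can be sketched as linear blend skinning, where each vertex of an avatar mesh follows a weighted combination of the tracked bone transforms (a generic textbook formulation, not the MacAction implementation; all shapes and values below are toy placeholders):

```python
import numpy as np

def skin_vertices(rest_vertices, bone_transforms, weights):
    """Linear blend skinning.

    rest_vertices: (V, 3) vertex positions of the avatar mesh in rest pose.
    bone_transforms: (B, 4, 4) homogeneous transforms mapping each bone from
        the rest pose to the current tracked pose.
    weights: (V, B) skinning weights, each row summing to 1.
    Returns (V, 3) deformed vertex positions.
    """
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4) homogeneous coords
    # Transform every vertex by every bone, then blend with the skinning weights.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]

# Toy example: two vertices, two bones; the second bone translates upward by 0.5.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
T0 = np.eye(4)
T1 = np.eye(4); T1[2, 3] = 0.5
w = np.array([[1.0, 0.0], [0.5, 0.5]])
print(skin_vertices(rest, np.stack([T0, T1]), w))  # second vertex lifted by 0.25
```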
Lappe, A., Bognár, A., Nejad, G. G., Raman, R., Mukovskiy, A., Martini, L. M. et al. (2024). Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity. Journal of Vision, September 2024. Vision Science Society.
Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity
Abstract:

Previous work has shown that neurons from body patches in macaque superior temporal sulcus (STS) respond selectively to images of bodies. However, the visual features leading to this body selectivity remain unclear. METHODS: We conducted experiments using 720 stimuli presenting a monkey avatar in various poses and viewpoints. Spiking activity was recorded from mid-STS (MSB) and anterior-STS (ASB) body patches, previously identified using fMRI. To identify visual features driving the neural responses, we used a model with a deep network as frontend and a linear readout model that was fitted to predict the neuron activities. Computing the gradients of the outputs backwards along the neural network, we identified the image regions that were most influential for the model neuron output. Since previous work suggests that neurons from this area also respond to some extent to images of objects, we used a similar approach to visualize object parts eliciting responses from the model neurons. Based on an object dataset, we identified the shapes that activate each model unit maximally. Computing and combining the pixel-wise gradients of model activations from object and body processing, we were able to identify common visual features driving neural activity in the model. RESULTS: Linear models fit the data well, with mean noise-corrected correlations with neural data of 0.8 in ASB and 0.94 in MSB. Gradient analysis on the body stimuli did not reveal clear preferences of certain body parts and were difficult to interpret visually. However, the joint gradients between objects and bodies traced visually similar features in both images. CONCLUSION: Deep neural networks model STS data well, even though for all tested models, explained variance was substantially lower in the more anterior region. Further work will test if the features that the deep network relies on are also used by body patch neurons.

Authors: Alexander Lappe; Anna Bognár; Ghazaleh Ghamkhari Nejad; Rajani Raman; Albert Mukovskiy; Lucas M. Martini; Rufin Vogels; Martin A. Giese
Type of Publication: In Collection
Book Title: Journal of Vision September 2024
Publisher: Vision Science Society
Month: September
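A schematic sketch of the modelling approach described in the abstract above, with hypothetical layer choices, data shapes and regularization (the actual study may differ in all of these): deep-network features are read out linearly to predict spike rates, and pixel-wise gradients of a fitted model unit then highlight the image regions most influential for that unit.

```python
import torch
import torchvision.models as models

# Frozen deep-network frontend (an off-the-shelf backbone chosen arbitrarily here;
# pretrained weights would normally be loaded).
backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()            # expose pooled features instead of class logits
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Placeholder data standing in for the 720 avatar stimuli and the trial-averaged
# firing rates of hypothetical recorded body-patch neurons.
images = torch.rand(720, 3, 224, 224)
rates = torch.rand(720, 50)                  # 50 hypothetical neurons

# Linear readout fitted by ridge regression from deep features to firing rates.
with torch.no_grad():
    feats = torch.cat([backbone(b) for b in images.split(64)])   # (720, 2048)
lam = 10.0
gram = feats.T @ feats + lam * torch.eye(feats.shape[1])
readout = torch.linalg.solve(gram, feats.T @ rates)              # (2048, 50)

# Pixel-wise gradient of one fitted model neuron for one stimulus: large-magnitude
# gradients mark the image regions most influential for that unit's response.
img = images[:1].clone().requires_grad_(True)
unit = 0
prediction = (backbone(img) @ readout[:, unit]).sum()
prediction.backward()
saliency = img.grad.abs().sum(dim=1)         # (1, 224, 224) influence map
print(saliency.shape)
```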
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). Macaques show an uncanny valley in body perception. Journal of Vision, September 2024. Vision Science Society.
Macaques show an uncanny valley in body perception

Type of Publication: In Collection
Rens, G., Bognár, A., Raman, R., Taubert, N., Li, B., Giese, M. A. et al. (2023). Similarity in monkey fMRI activation patterns for human and monkey faces but not bodies. 13th Annual Meeting on Primate Neurobiology, Apr. 26-28, 2023, Göttingen Primate Center.
Similarity in monkey fMRI activation patterns for human and monkey faces but not bodies
Authors: G. Rens; A. Bognár; R. Raman; Nick Taubert; B. Li; Martin A. Giese; B. De Gelder
Type of Publication: In Collection

Information

All images and videos displayed on this webpage are protected by copyright law. These copyrights are owned by Computational Sensomotorics.

If you wish to use any of the content featured on this webpage for purposes other than personal viewing, please contact us for permission.
