RELEVANCE: How body relevance drives brain organization

Research Area

Neural and Computational Principles of Action and Social Processing

Researchers

Martin A. Giese; Michael Stettler; Nick Taubert; Albert Mukovskiy; Winfried Ilg; Lucas M. Martini; Prerana Kumar; Alexander Lappe

Collaborators

Rufin Vogels (KU Leuven); Beatrice de Gelder (Maastricht University)
Proposed start date
2020-07-01
Proposed end date
2025-06-30

Description

Social species, and specifically human and nonhuman primates, rely heavily on conspecifics for survival. Considerable time is spent watching each other's behavior because this is often the most relevant source of information for preparing adaptive social responses. The project RELEVANCE aims to understand how the brain evolved special structures to process highly relevant social stimuli like bodies, and to reveal how social vision sustains adaptive behavior. This requires a novel way of thinking about biological information processing, currently among the brain's most distinctive and least understood characteristics, and one that accounts for the biggest difference between brains and computers.

The project will develop a mechanistic and computational understanding of the visual processing of bodies and interactions and show how this processing sustains higher abilities such as understanding intention, action and emotion. RELEVANCE will accomplish this by integrating advanced methods from multiple disciplines. Crosstalk between human and monkey methods will establish homologies between the species, revealing cornerstones of the theory. Physiologically-inspired neural and deep neural network models will help to understand the dynamic mechanisms of the neural processing of complex social stimuli and its task-dependent modulation.

RELEVANCE aims at uncovering novel principles of the processing of social stimuli that might inspire new diagnostic and treatment approaches in neuropsychiatry, as well as novel architectures for the processing of socially relevant information in computer and robotic systems. We aim to model the visual recognition of bodies, actions and interactions, using different computational approaches to ultimately predict and explain experimental data.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n° 856495).

 

Subprojects

Highly realistic monkey body avatars with veridical motion

Uncovering the neural mechanisms of the encoding of visually perceived body, action and social stimuli requires highly controlled and parameterized stimuli. Neurophysiological experiments require such stimuli for monkeys, which we generate by combining cutting-edge methods for markerless tracking with methods from computer animation.

Analyzing the role of mirror neurons in action encoding and selection

Mirror neurons in premotor and parietal cortex link visual processing and the neural encoding of motor behavior. We try to identify their exact computational role in terms of visual and motor encoding, and the representation of motor programs.

Neurodynamical models of the single-cell activity of body-selective visual neurons

Bodies represent high-dimensional and dynamically changing stimuli. Based on highly controlled stimuli that we developed, and which have been used by our collaboration partners in Leuven and Maastricht, we develop neural models that explain physiological and fMRI data.

Publications

Lappe, A., Bognár, A., Nejad, G. G., Mukovskiy, A., Martini, L. M., Giese, M. A. et al. (2024). Parallel Backpropagation for Shared-Feature Visualization. Advances in Neural Information Processing Systems, 37, 22993-23012.
Parallel Backpropagation for Shared-Feature Visualization
Authors: Alexander Lappe; Anna Bognár; Ghazaleh Ghamkhari Nejad; Albert Mukovskiy; Lucas M. Martini; Martin A. Giese; Rufin Vogels
Type of Publication: Article
Journal: Advances in Neural Information Processing Systems
Number: 37
Pages: 22993-23012
Year: 2024
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture. bioRxiv.
MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture
Abstract:

Social interaction is crucial for survival in primates. For the study of social vision in monkeys, highly controllable macaque face avatars have recently been developed, while body avatars with realistic motion do not yet exist. Addressing this gap, we developed a pipeline for three-dimensional motion tracking based on synchronized multi-view video recordings, achieving sufficient accuracy for life-like full-body animation. By exploiting data-driven pose estimation models, we track the complete time course of individual actions using a minimal set of hand-labeled keyframes. Our approach tracks single actions more accurately than existing pose estimation pipelines for behavioral tracking of non-human primates, requiring less data and fewer cameras. This efficiency is also confirmed for a state-of-the-art human benchmark dataset. A behavioral experiment with real macaque monkeys demonstrates that animals perceive the generated animations as similar to genuine videos, and establishes an uncanny valley effect for bodies in monkeys.
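The geometric core of such a multi-camera pipeline, lifting 2D keypoint detections from synchronized and calibrated views to 3D joint positions, can be illustrated with a standard direct linear transform (DLT) triangulation. The sketch below is a minimal illustration under assumed inputs (hypothetical projection matrices and pixel coordinates), not the MacAction code.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one 3D joint position from its 2D detections in several
    synchronized, calibrated camera views (direct linear transform).

    proj_mats : list of (3, 4) camera projection matrices
    points_2d : list of (x, y) pixel coordinates, one per camera
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (X, Y, Z)

# Hypothetical usage with four calibrated cameras observing one body keypoint:
# joint_3d = triangulate_point([P0, P1, P2, P3],
#                              [(412.3, 200.1), (98.7, 310.4), (255.0, 180.2), (340.6, 220.9)])
```

Repeating this triangulation for every keypoint and frame yields the 3D joint trajectories that can then drive the animated body model.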

Type of Publication: Article
Lappe, A., Bognár, A., Nejad, G. G., Raman, R., Mukovskiy, A., Martini, L. M. et al. (2024). Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity. Journal of Vision, September 2024. Vision Science Society.
Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity
Abstract:

Previous work has shown that neurons from body patches in macaque superior temporal sulcus (STS) respond selectively to images of bodies. However, the visual features leading to this body selectivity remain unclear. METHODS: We conducted experiments using 720 stimuli presenting a monkey avatar in various poses and viewpoints. Spiking activity was recorded from mid-STS (MSB) and anterior-STS (ASB) body patches, previously identified using fMRI. To identify visual features driving the neural responses, we used a model with a deep network as frontend and a linear readout model that was fitted to predict the neuron activities. Computing the gradients of the outputs backwards along the neural network, we identified the image regions that were most influential for the model neuron output. Since previous work suggests that neurons from this area also respond to some extent to images of objects, we used a similar approach to visualize object parts eliciting responses from the model neurons. Based on an object dataset, we identified the shapes that activate each model unit maximally. Computing and combining the pixel-wise gradients of model activations from object and body processing, we were able to identify common visual features driving neural activity in the model. RESULTS: Linear models fit the data well, with mean noise-corrected correlations with neural data of 0.8 in ASB and 0.94 in MSB. Gradient analysis on the body stimuli did not reveal clear preferences of certain body parts and were difficult to interpret visually. However, the joint gradients between objects and bodies traced visually similar features in both images. CONCLUSION: Deep neural networks model STS data well, even though for all tested models, explained variance was substantially lower in the more anterior region. Further work will test if the features that the deep network relies on are also used by body patch neurons.
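The modeling approach described in this abstract, a fixed deep network used as an image encoder, a linear readout fitted to the recorded responses, and gradients propagated back to the image to localize influential regions, can be sketched roughly as follows. The backbone choice, the regularization, and all variable names (such as images and responses) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV

# Pretrained convolutional frontend, frozen and used as a fixed feature extractor
# (the specific backbone is an assumption made for illustration).
backbone = models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

def embed(images):
    """images: (N, 3, 224, 224) tensor -> flattened convolutional features."""
    with torch.no_grad():
        return backbone(images).flatten(1)

def fit_readout(images, responses):
    """Fit a regularized linear readout from deep features to trial-averaged
    spike counts of one neuron (responses: array of shape (N,))."""
    feats = embed(images).numpy()
    return RidgeCV(alphas=[1e1, 1e2, 1e3]).fit(feats, responses)

def saliency_map(image, readout):
    """Gradient of the predicted response w.r.t. the input pixels, highlighting
    image regions that most influence the model neuron's output."""
    image = image.clone().requires_grad_(True)
    feats = backbone(image.unsqueeze(0)).flatten(1)      # (1, F)
    w = torch.tensor(readout.coef_, dtype=feats.dtype)   # (F,)
    pred = feats @ w + float(readout.intercept_)         # predicted response
    pred.backward()
    return image.grad.abs().sum(dim=0)                   # (H, W) importance map

# Hypothetical usage with 720 avatar stimuli and one recorded neuron:
# readout = fit_readout(images, responses)
# heatmap = saliency_map(images[0], readout)
```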

Authors: Alexander Lappe; Anna Bognár; Ghazaleh Ghamkhari Nejad; Rajani Raman; Albert Mukovskiy; Lucas M. Martini; Rufin Vogels; Martin A. Giese
Type of Publication: In Collection
Book Title: Journal of Vision, September 2024
Publisher: Vision Science Society
Month: September
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). Macaques show an uncanny valley in body perception. Journal of Vision, September 2024. Vision Science Society.
Macaques show an uncanny valley in body perception
Type of Publication: In Collection
Rens, G., Bognár, A., Raman, R., Taubert, N., Li, B., Giese, M. A. et al. (2023). Similarity in monkey fMRI activation patterns for human and monkey faces but not bodies. 13th Annual Meeting on Primate Neurobiology, Apr. 26-28, 2023, Göttingen Primate Center.
Similarity in monkey fMRI activation patterns for human and monkey faces but not bodies
Authors: G. Rens; A. Bognár; R. Raman; Nick Taubert; B. Li; Martin A. Giese; B. De Gelder
Type of Publication: In Collection
Bognár, A., Mukovskiy, A., Nejad, G. G., Taubert, N., Stettler, M., Martini, L. M. et al. (2023). Simultaneous recordings from posterior and anterior body responsive regions in the macaque Superior Temporal Sulcus. VSS 2023, May 19-24, 2023, St. Pete Beach, Florida.
Simultaneous recordings from posterior and anterior body responsive regions in the macaque Superior Temporal Sulcus
Type of Publication: In Collection
Bognár, A., Mukovskiy, A., Nejad, G. G., Taubert, N., Stettler, M., Martini, L. M. et al. (2023). Feature selectivity of body-patch neurons assessed with a large set of monkey avatars. 13th Annual Meeting on Primate Neurobiology, Apr. 26-28, 2023, Göttingen Primate Center.
Feature selectivity of body-patch neurons assessed with a large set of monkey avatars
Type of Publication: In Collection
Bognár, A., Raman, R., Taubert, N., Li, B., Zafirova, Y., Giese, M. A. et al. (2023). The contribution of dynamics to macaque body and face patch responses. NeuroImage, 269.
The contribution of dynamics to macaque body and face patch responses
Abstract:

Previous functional imaging studies demonstrated body-selective patches in the primate visual temporal cortex, comparing activations to static bodies and static images of other categories. However, the use of static instead of dynamic displays of moving bodies may have underestimated the extent of the body patch network. Indeed, body dynamics provide information about action and emotion and may be processed in patches not activated by static images. Thus, to map with fMRI the full extent of the macaque body patch system in the visual temporal cortex, we employed dynamic displays of natural-acting monkey bodies, dynamic monkey faces, objects, and scrambled versions of these videos, all presented during fixation. We found nine body patches in the visual temporal cortex, starting posteriorly in the superior temporal sulcus (STS) and ending anteriorly in the temporal pole. Unlike for static images, body patches were present consistently in both the lower and upper banks of the STS. Overall, body patches showed a higher activation by dynamic displays than by matched static images, which, for identical stimulus displays, was less the case for the neighboring face patches. These data provide the groundwork for future single-unit recording studies to reveal the spatiotemporal features the neurons of these body patches encode. These fMRI findings suggest that dynamics have a stronger contribution to population responses in body than face patches.

Authors: A. Bognár; R. Raman; Nick Taubert; B. Li; Y. Zafirova; Martin A. Giese; B. De Gelder; R. Vogels
Type of Publication: Article
Full text: PDF
Mukovskiy, A., Hovaidi-Ardestani, M., Salatiello, A., Stettler, M., Vogels, R. & Giese, M. A. (2022). Physiologically-inspired neural model for social interaction recognition from abstract and naturalistic videos. VSS Annual Meeting 2022.
Physiologically-inspired neural model for social interaction recognition from abstract and naturalistic videos
Type of Publication: In Collection
Giese, M. A., Bognár, A. & Vogels, R. (2022). Physiologically-inspired neural model for anorthoscopic perception.
Physiologically-inspired neural model for anorthoscopic perception
Type of Publication: In Collection
Kumar, P., Taubert, N., Raman, R., Vogels, R., de Gelder, B. & Giese, M. A. (2022). Neural model for the representation of static and dynamic bodies in cortical body patches. VSS 2022.
Neural model for the representation of static and dynamic bodies in cortical body patches
Authors: Prerana Kumar; Nick Taubert; Rajani Raman; Rufin Vogels; Beatrice de Gelder; Martin A. Giese
Type of Publication: In Collection
Mukovskiy, A., Hovaidi-Ardestani, M., Salatiello, A., Stettler, M., Vogels, R. & Giese, M. A. (2022). Neurophysiologically-inspired computational model of the visual recognition of social behavior and intent. FENS Forum, Paris.
Neurophysiologically-inspired computational model of the visual recognition of social behavior and intent
Abstract:

AIMS: Humans recognize social interactions and intentions from videos of moving abstract stimuli, including simple geometric figures (Heider & Simmel, 1944). The neural machinery supporting such social interaction perception is completely unclear. Here, we present a physiologically plausible neural model of social interaction recognition that identifies social interactions in videos of simple geometric figures and fully articulating animal avatars, moving in naturalistic environments. METHODS: We generated the trajectories for both geometric and animal avatars using an algorithm based on a dynamical model of human navigation (Hovaidi-Ardestani et al., 2018; Warren, 2006). Our neural recognition model combines a Deep Neural Network, realizing a shape-recognition pathway (VGG16), with a top-level neural network that integrates RBFs, motion energy detectors, and dynamic neural fields. The model implements robust tracking of interacting agents based on interaction-specific visual features (relative position, speed, acceleration, and orientation). RESULTS: A simple neural classifier, trained to predict social interaction categories from the features extracted by our neural recognition model, makes predictions that resemble those observed in previous psychophysical experiments on social interaction recognition from abstract (Salatiello et al., 2021) and naturalistic videos. CONCLUSION: The model demonstrates that recognition of social interactions can be achieved by simple physiologically plausible neural mechanisms and makes testable predictions about single-cell and population activity patterns in relevant brain areas. Acknowledgments: ERC 2019-SyG-RELEVANCE-856495, HFSP RGP0036/2016, BMBF FKZ 01GQ1704, SSTeP-KiZ BMG: ZMWI1-2520DAT700, and NVIDIA Corporation.
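One building block named in this abstract, the dynamic neural field, can be illustrated with a minimal one-dimensional Amari-type field in which a localized activation peak forms over, and can track, a salient input. The kernel shape and all parameter values below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def neural_field_step(u, s, dt=0.01, tau=0.05, h=-1.0):
    """One Euler step of a 1D Amari-type dynamic neural field:
        tau * du/dt = -u + h + s + (1/N) * W f(u)
    u : (N,) field activation, s : (N,) feedforward input.
    Kernel and parameters are illustrative, not fitted values."""
    n = u.size
    x = np.arange(n)
    # Circular distance between field positions.
    d = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
    # Difference-of-Gaussians lateral interaction: local excitation, broader inhibition.
    W = 2.0 * np.exp(-d**2 / (2 * 3.0**2)) - 1.0 * np.exp(-d**2 / (2 * 9.0**2))
    f = 1.0 / (1.0 + np.exp(-5.0 * u))          # sigmoidal firing-rate function
    du = (-u + h + s + W @ f / n) * (dt / tau)
    return u + du

# Illustrative run: a localized input bump stabilizes an activation peak that
# could track the position of an interacting agent.
u = np.full(100, -1.0)
s = 3.0 * np.exp(-(np.arange(100) - 50.0)**2 / (2 * 4.0**2))
for _ in range(500):
    u = neural_field_step(u, s)
```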

Type of Publication: In Collection
Kumar, P., Taubert, N., Raman, R., Vogels, R., de Gelder, B. & Giese, M. A. (2021). Physiologically-inspired neural model for the visual recognition of dynamic bodies. Neuroscience 2021.
Physiologically-inspired neural model for the visual recognition of dynamic bodies
Authors: Prerana Kumar; Nick Taubert; Rajani Raman; Rufin Vogels; Beatrice de Gelder; Martin A. Giese
Type of Publication: In Collection
Kumar, P., Taubert, N., Stettler, M., Vogels, R., de Gelder, B. & Giese, M. A. (2021). Neurodynamical model for the visual recognition of dynamic bodies. ECVP 2021.
Neurodynamical model for the visual recognition of dynamic bodies
Type of Publication: In Collection
Kumar, P., Taubert, N., Stettler, M., Vogels, R., de Gelder, B. & Giese, M. A. (2021). Neurodynamical model for the visual recognition of dynamic bodies. CNS 2021.
Neurodynamical model for the visual recognition of dynamic bodies
Type of Publication: In Collection
Giese, M. A., Mukovskiy, A., Hovaidi-Ardestani, M., Salatiello, A. & Stettler, M. (2021). Neurophysiologically-inspired model for social interactions recognition from abstract and naturalistic stimuli. VSS 2021, May 21-26.
Neurophysiologically-inspired model for social interactions recognition from abstract and naturalistic stimuli
Type of Publication: In Collection
Giese, M. A., Bognár, A. & Vogels, R. Physiologically-inspired neurodynamical model for anorthoscopic perception.
Physiologically-inspired neurodynamical model for anorthoscopic perception
Type of Publication: In Collection

Information

All images and videos displayed on this webpage are protected by copyright law. These copyrights are owned by Computational Sensomotorics.

If you wish to use any of the content featured on this webpage for purposes other than personal viewing, please contact us for permission.
