Neural mechanisms underlying the visual analysis of intent

Research Area

Neural and Computational Principles of Action and Social Processing

Researchers

Martin A. Giese; Nick Taubert; Michael Stettler

Collaborators

Mohammad Hovaidi Ardestani; Andrea Christensen; Aleix Martinez (Ohio State University); Doris Tsao; Peter Thier; Peter Dicke; Silvia Spadacenta; Ramona Siebert; Marius Görner; Albert Mukovskiy

Proposed start date

2016-10-01

Proposed end date

2019-09-30

Description

Primates are very efficient at recognizing intentions from various types of stimuli, including faces and bodies, but also abstract moving stimuli, such as the moving geometrical figures used in the seminal experiments by Heider and Simmel (1944). How such stimuli are processed, and what the underlying neural and computational mechanisms are, remains largely unknown. In the context of a project funded by the Human Frontier Science Program (HFSP), we try, in collaboration with Aleix Martinez (Ohio State University) and Doris Tsao (Caltech), to unravel the neural basis of the perception of intention. In a further collaboration with the laboratory of Peter Thier (HIH / CIN), we work on the computational and neural basis of the encoding of dynamic faces, especially the encoding of social attention.

Intentions cannot be decoded only from detailed stimuli, such as moving faces and bodies. Classical psychophysical experiments show that intentions and social interactions can also be decoded from strongly impoverished stimuli, such as moving geometrical figures. The underlying neural mechanisms are completely unclear. We develop neural models that take into account the physiological architecture of the visual pathway, and show that some of these functions might be accomplished by rather simple learning-based neural mechanisms. In collaboration with A. Martinez (Ohio State University), we also try to find the neural correlates of such intention perception in human cortex. Future work (with D. Tsao, Caltech) will try to identify neural correlates of these functions in monkeys, laying the basis for more detailed neural models.
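The publication abstracts below describe tracking of interacting agents based on interaction-specific visual features (relative position, speed, acceleration, and orientation). The function below is a hypothetical, strongly simplified sketch of such feature extraction from two agent trajectories, not the actual model implementation; all names and parameter choices are illustrative:

```python
import numpy as np

def interaction_features(traj_a, traj_b, dt=0.1):
    """Compute simple interaction-specific features from two 2-D
    trajectories (arrays of shape [T, 2]): inter-agent distance,
    speeds, and the alignment of agent A's heading with the
    direction towards agent B. Illustrative sketch only."""
    rel_pos = traj_b - traj_a                       # relative position
    dist = np.linalg.norm(rel_pos, axis=1)          # inter-agent distance
    vel_a = np.gradient(traj_a, dt, axis=0)         # finite-difference velocities
    vel_b = np.gradient(traj_b, dt, axis=0)
    speed_a = np.linalg.norm(vel_a, axis=1)
    speed_b = np.linalg.norm(vel_b, axis=1)
    heading_a = np.arctan2(vel_a[:, 1], vel_a[:, 0])
    bearing = np.arctan2(rel_pos[:, 1], rel_pos[:, 0])
    # angle between A's heading and the direction towards B:
    # near zero when A is oriented towards (e.g. chasing) B
    alignment = np.abs(np.angle(np.exp(1j * (bearing - heading_a))))
    return dict(distance=dist, speed_a=speed_a, speed_b=speed_b,
                alignment=alignment)
```

A simple classifier trained on such features could, in principle, separate interaction categories like 'following' (small, stable distance and high alignment) from 'fighting' (rapidly fluctuating distance and orientation); the actual model embeds such features in a physiologically plausible neural architecture.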


Neural model for the recognition of social interactions from abstract stimuli


Abstract stimulus showing ‘following’

Abstract stimulus showing ‘fighting’

Abstract stimulus showing ‘chasing’


We have also developed a dynamical systems model that can generate at least 12 distinct classes of social interactions. We have shown that human subjects can classify the generated movies with high accuracy.
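The dynamical systems model builds on dynamical models of human navigation (Warren, 2006), in which an agent's heading obeys attractor dynamics. The sketch below is a minimal illustration of this idea for a 'following' interaction; the function name and all parameter values are illustrative assumptions, not those of the published model:

```python
import numpy as np

def simulate_following(steps=400, dt=0.02, b=3.0, k=8.0, speed=1.0):
    """Minimal sketch of heading dynamics in the spirit of Warren (2006):
    the follower's heading phi is attracted to the bearing of the leader
    via damped second-order dynamics. Parameter values are illustrative."""
    leader = np.zeros((steps, 2))
    follower = np.zeros((steps, 2))
    follower[0] = [-2.0, 1.0]          # start behind and to the side
    phi, phi_dot = 0.0, 0.0            # follower heading and angular velocity
    for t in range(steps - 1):
        # leader moves along a straight path
        leader[t + 1] = leader[t] + dt * np.array([speed, 0.0])
        d = leader[t] - follower[t]
        bearing = np.arctan2(d[1], d[0])   # direction towards the leader
        # damped attractor dynamics pulling the heading to the bearing
        phi_ddot = -b * phi_dot - k * np.sin(phi - bearing)
        phi_dot += dt * phi_ddot
        phi += dt * phi_dot
        follower[t + 1] = follower[t] + dt * speed * np.array(
            [np.cos(phi), np.sin(phi)])
    return leader, follower
```

Varying the coupling terms (e.g. repulsion from the other agent instead of attraction, or modulating speed with distance) would yield qualitatively different interaction classes, which is the basic idea behind generating a whole repertoire of interactions from one parameterized dynamical model.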


Publications

Mukovskiy, A., Hovaidi-Ardestani, M., Salatiello, A., Stettler, M., Vogels, R. & Giese, M. A. (2022). Physiologically-inspired neural model for social interaction recognition from abstract and naturalistic videos. VSS Annual Meeting 2022.
Mukovskiy, A., Hovaidi-Ardestani, M., Salatiello, A., Stettler, M., Vogels, R. & Giese, M. A. (2022). Neurophysiologically-inspired computational model of the visual recognition of social behavior and intent. FENS Forum, Paris.
Abstract:

AIMS: Humans recognize social interactions and intentions from videos of moving abstract stimuli, including simple geometric figures (Heider & Simmel, 1944). The neural machinery supporting such social interaction perception is completely unclear. Here, we present a physiologically plausible neural model of social interaction recognition that identifies social interactions in videos of simple geometric figures and fully articulating animal avatars, moving in naturalistic environments.

METHODS: We generated the trajectories for both geometric and animal avatars using an algorithm based on a dynamical model of human navigation (Hovaidi-Ardestani et al., 2018; Warren, 2006). Our neural recognition model combines a Deep Neural Network, realizing a shape-recognition pathway (VGG16), with a top-level neural network that integrates RBFs, motion energy detectors, and dynamic neural fields. The model implements robust tracking of interacting agents based on interaction-specific visual features (relative position, speed, acceleration, and orientation).

RESULTS: A simple neural classifier, trained to predict social interaction categories from the features extracted by our neural recognition model, makes predictions that resemble those observed in previous psychophysical experiments on social interaction recognition from abstract (Salatiello et al., 2021) and naturalistic videos.

CONCLUSION: The model demonstrates that recognition of social interactions can be achieved by simple physiologically plausible neural mechanisms and makes testable predictions about single-cell and population activity patterns in relevant brain areas.

Acknowledgments: ERC 2019-SyG-RELEVANCE-856495, HFSP RGP0036/2016, BMBF FKZ 01GQ1704, SSTeP-KiZ BMG: ZMWI1-2520DAT700, and NVIDIA Corporation.

Mukovskiy, A., Ardestani, M. H., Salatiello, A., Stettler, M. & Giese, M. A. (2021). Physiologically-inspired neural model for social interactions recognition from abstract and naturalistic stimuli. Göttingen Meeting of the German Neuroscience Society.
Giese, M. A., Mukovskiy, A., Hovaidi-Ardestani, M., Salatiello, A. & Stettler, M. (2021). Neurophysiologically-inspired model for social interactions recognition from abstract and naturalistic stimuli. VSS 2021.
Salatiello, A., Hovaidi-Ardestani, M. & Giese, M. A. (2021). A Dynamical Generative Model of Social Interactions. Frontiers in Neurorobotics, 15, 62.
Abstract:

The ability to make accurate social inferences enables humans to navigate and act in their social environment effortlessly. Converging evidence shows that motion is one of the most informative cues in shaping the perception of social interactions. However, the scarcity of parameterized generative models for the generation of highly-controlled stimuli has slowed down both the identification of the most critical motion features and the understanding of the computational mechanisms underlying their extraction and processing from rich visual inputs. In this work, we introduce a novel generative model for the automatic generation of an arbitrarily large number of videos of socially interacting agents for comprehensive studies of social perception. The proposed framework, validated with three psychophysical experiments, allows generating as many as 15 distinct interaction classes. The model builds on classical dynamical system models of biological navigation and is able to generate visual stimuli that are parametrically controlled and representative of a heterogeneous set of social interaction classes. The proposed method thus represents an important tool for experiments aimed at unveiling the computational mechanisms mediating the perception of social interactions. The ability to generate highly-controlled stimuli makes the model valuable not only for conducting behavioral and neuroimaging studies, but also for developing and validating neural models of social inference, and machine vision systems for the automatic recognition of social interactions. In fact, contrasting human and model responses to a heterogeneous set of highly-controlled stimuli can help to identify critical computational steps in the processing of social interaction stimuli.


