Analyzing the role of mirror neurons in action encoding and selection

Research Area

Neural and Computational Principles of Action and Social Processing

Researchers

Alexander Lappe; Martin A. Giese

Collaborators

Lilei Peng; Hans-Peter Thier

Description

The fundamental objective of visual processing is to guide motor output appropriate for the visual stimulus. The mirror neuron system in premotor cortex plays a dual role, supporting both high-level visual processing of bodies and the control of motor behavior. We are interested in the computational properties of these neurons when dynamic visual body stimuli must serve as a cue that determines which motor action the agent has to perform. In other words, we study how mirror neuron populations translate social visual input into correct motor output.

 

A monkey observes a video of another monkey performing an action. After a brief pause, the monkey is tasked with performing either the observed action or a different one.

 

To this end, we analyze data from a highly sophisticated experimental setup in which a monkey is tasked with either repeating an observed action or performing a different learned action. We develop dimensionality-reduction methods to gain insight into the intricate relationship between action observation and execution that drives the neural responses. Our analysis reveals that the population predominantly encodes the necessary information in terms of the monkey's own upcoming action rather than the visually observed one. Even during video observation, the population is driven more strongly by the self-action cued by the video than by the video itself.

 

We developed a variant of PCA in which each principal component is attributed to one of the experimental factors.
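The idea of attributing components to factors can be illustrated with a toy sketch (this is a minimal, hypothetical illustration in the spirit of demixed approaches, not our actual method): the data are marginalized over each experimental factor in turn, and PCA is run on each marginalization, so every resulting component is attributed to exactly one factor.

```python
import numpy as np

def factor_attributed_pca(X, n_components=2):
    """Toy factor-attributed PCA.

    X: array of shape (A, B, n_neurons), indexed by two experimental
    factors, e.g. A = observed action, B = executed action.
    Returns a dict mapping factor name -> principal components
    attributed to that factor's marginal variance.
    """
    X = X - X.mean(axis=(0, 1), keepdims=True)   # remove grand mean
    # Marginalize: average over the other factor to isolate each effect
    marg_A = X.mean(axis=1)                      # shape (A, n_neurons)
    marg_B = X.mean(axis=0)                      # shape (B, n_neurons)
    comps = {}
    for name, M in [("factor_A", marg_A), ("factor_B", marg_B)]:
        # PCA of the marginalized data via SVD; rows of Vt are components
        U, S, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
        comps[name] = Vt[:n_components]
    return comps
```

Projecting the population activity onto the components of each factor then shows how much variance each factor explains, which is how one can conclude that the self-action, rather than the observed action, dominates the population code.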

 

Given the traditional belief that mirror neurons primarily fire during visual observation and execution of the same action, we also investigate whether the population shows more activity when replicating an observed action than when performing a non-observed one. Indeed, we identify neurons that are tuned to the type of instruction. However, the mirror neuron population as a whole is not biased towards 'mirroring'.

Further, we are studying how well purely task-optimized machine learning models can explain neural responses to body stimuli. The advantage of task-optimized models is that they can be trained on large data sets, allowing them to represent very general spatio-temporal visual features. We are specifically interested in how goodness-of-fit depends on the mechanism of temporal integration, such as recurrence, temporal convolutions, or explicit attention, and on the training objective, such as action recognition or predictive coding. Moreover, we will attempt to characterize the exact shortcomings of artificial-neural-network models of the neural processing of bodies. This approach will extend the current literature on artificial-neural-network models of object representations to the time domain.
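A standard way to compare such models against neural data is to correlate model predictions with trial-averaged responses and correct for trial-to-trial noise. The sketch below (a generic illustration with hypothetical function names, not our actual analysis pipeline) uses split-half reliability with a Spearman-Brown correction, a common choice for this kind of noise ceiling:

```python
import numpy as np

def noise_corrected_correlation(pred, trials):
    """Correlation between a model prediction and the trial-averaged
    neural response, corrected for trial-to-trial noise.

    pred:   (n_stimuli,) model prediction per stimulus
    trials: (n_trials, n_stimuli) single-trial responses
    """
    mean_resp = trials.mean(axis=0)
    r_model = np.corrcoef(pred, mean_resp)[0, 1]
    # Split-half reliability of the neural data (even vs. odd trials)
    half1 = trials[0::2].mean(axis=0)
    half2 = trials[1::2].mean(axis=0)
    r_half = np.corrcoef(half1, half2)[0, 1]
    # Spearman-Brown: reliability of the full trial average
    reliability = 2 * r_half / (1 + r_half)
    return r_model / np.sqrt(reliability)
```

A model that captures all explainable variance then scores near 1 even for a noisy neuron, which makes fits comparable across neurons and brain regions with different noise levels.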

 

Publications

Lappe, A., Bognár, A., Ghamkhari Nejad, G., Mukovskiy, A., Martini, L. M., Giese, M. A., & Vogels, R. (2024). Parallel Backpropagation for Shared-Feature Visualization.
Parallel Backpropagation for Shared-Feature Visualization
Authors: Alexander Lappe; Anna Bognár; Ghazaleh Ghamkhari Nejad; Albert Mukovskiy; Lucas M. Martini; Martin A. Giese; Rufin Vogels
Type of Publication: Misc
Lappe, A., Bognár, A., Ghamkhari Nejad, G., Raman, R., Mukovskiy, A., Martini, L. M., Vogels, R., & Giese, M. A. (2024). Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity. Journal of Vision, September 2024. Vision Science Society.
Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity
Abstract:

Previous work has shown that neurons from body patches in macaque superior temporal sulcus (STS) respond selectively to images of bodies. However, the visual features leading to this body selectivity remain unclear.

METHODS: We conducted experiments using 720 stimuli presenting a monkey avatar in various poses and viewpoints. Spiking activity was recorded from mid-STS (MSB) and anterior-STS (ASB) body patches, previously identified using fMRI. To identify the visual features driving the neural responses, we used a model with a deep network as frontend and a linear readout that was fitted to predict the neuron activities. Computing the gradients of the outputs backwards along the neural network, we identified the image regions that were most influential for the model-neuron output. Since previous work suggests that neurons from this area also respond to some extent to images of objects, we used a similar approach to visualize object parts eliciting responses from the model neurons. Based on an object dataset, we identified the shapes that activate each model unit maximally. Computing and combining the pixel-wise gradients of model activations from object and body processing, we were able to identify common visual features driving neural activity in the model.

RESULTS: Linear models fit the data well, with mean noise-corrected correlations with neural data of 0.8 in ASB and 0.94 in MSB. Gradient analysis on the body stimuli did not reveal clear preferences for particular body parts, and the resulting maps were difficult to interpret visually. However, the joint gradients between objects and bodies traced visually similar features in both images.

CONCLUSION: Deep neural networks model STS data well, even though, for all tested models, explained variance was substantially lower in the more anterior region. Further work will test whether the features that the deep network relies on are also used by body patch neurons.

Authors: Alexander Lappe; Anna Bognár; Ghazaleh Ghamkhari Nejad; Rajani Raman; Albert Mukovskiy; Lucas M. Martini; Rufin Vogels; Martin A. Giese
Type of Publication: In Collection
Published in: Journal of Vision, September 2024
Publisher: Vision Science Society
Month: September
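The gradient-based visualization described in the abstract can be illustrated with a toy network (a hypothetical one-hidden-layer model with made-up function names, not the architecture used in the paper): backpropagating a model-neuron output to the input pixels yields a per-pixel influence map, and combining the maps obtained from a body image and an object image highlights features that drive the model neuron in both.

```python
import numpy as np

def model_neuron_gradient(x, W1, w2):
    """Gradient of a model-neuron output w.r.t. the input 'image' x,
    for a toy one-hidden-layer ReLU network: y = w2 . relu(W1 @ x)."""
    h = W1 @ x
    relu_mask = (h > 0).astype(float)
    # Backpropagate through the readout and the ReLU:
    # dy/dx = W1^T (w2 * relu'(h))
    return W1.T @ (w2 * relu_mask)

def shared_feature_map(x_body, x_object, W1, w2):
    """Combine pixel-wise gradient magnitudes from two stimuli to
    highlight features influential for the model neuron in both."""
    g_body = np.abs(model_neuron_gradient(x_body, W1, w2))
    g_obj = np.abs(model_neuron_gradient(x_object, W1, w2))
    return g_body * g_obj
```

In practice the gradients would be computed through a deep network with automatic differentiation; the toy version only makes the backpropagation step explicit.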

