Personal Page

Lucas M. Martini, M.Sc.

Room 5.522
Section for Computational Sensomotorics
Department of Cognitive Neurology
Hertie Institute for Clinical Brain Research
Centre for Integrative Neuroscience
University Hospital Tübingen
Otfried-Müller-Str. 25
72076 Tübingen, Germany
+49 7071 2989130

Projects

Publications

Lappe, A., Bognár, A., Nejad, G. G., Raman, R., Mukovskiy, A., Martini, L. M. et al. (2024). Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity. Journal of Vision, September 2024. Vision Science Society.
Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity
Abstract:

Previous work has shown that neurons from body patches in macaque superior temporal sulcus (STS) respond selectively to images of bodies. However, the visual features leading to this body selectivity remain unclear. METHODS: We conducted experiments using 720 stimuli presenting a monkey avatar in various poses and viewpoints. Spiking activity was recorded from mid-STS (MSB) and anterior-STS (ASB) body patches, previously identified using fMRI. To identify visual features driving the neural responses, we used a model with a deep network as a frontend and a linear readout model that was fitted to predict the neuron activities. Computing the gradients of the outputs backwards along the neural network, we identified the image regions that were most influential for the model neuron output. Since previous work suggests that neurons from this area also respond to some extent to images of objects, we used a similar approach to visualize object parts eliciting responses from the model neurons. Based on an object dataset, we identified the shapes that activate each model unit maximally. Computing and combining the pixel-wise gradients of model activations from object and body processing, we were able to identify common visual features driving neural activity in the model. RESULTS: Linear models fit the data well, with mean noise-corrected correlations with neural data of 0.8 in ASB and 0.94 in MSB. Gradient analysis on the body stimuli did not reveal clear preferences for particular body parts and was difficult to interpret visually. However, the joint gradients between objects and bodies traced visually similar features in both images. CONCLUSION: Deep neural networks model STS data well, even though for all tested models, explained variance was substantially lower in the more anterior region. Further work will test whether the features that the deep network relies on are also used by body patch neurons.
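As a rough illustration of the modeling approach described in this abstract, the sketch below computes pixel-wise gradients of a model neuron's predicted response through a deep-network frontend with a linear readout, and combines gradient magnitudes from a body image and an object image. This is a minimal sketch, not the authors' code: the ResNet-50 backbone, the placeholder readout weights, and the multiplicative combination of gradient magnitudes are assumptions for illustration only.

import torch
import torchvision.models as models

# Frontend: a pretrained CNN truncated before its classification layer.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 2048-d feature vector
backbone.eval()

# Linear readout: one weight vector per recorded neuron. In the study these
# weights would be fitted to the recorded spiking responses (e.g. by ridge
# regression); random weights serve as a placeholder here.
n_neurons = 10
readout = torch.nn.Linear(2048, n_neurons)

def pixel_gradients(image, neuron):
    """Gradient of one model neuron's predicted response w.r.t. the input pixels."""
    image = image.clone().requires_grad_(True)     # shape (1, 3, H, W), normalized
    response = readout(backbone(image))[0, neuron]
    response.backward()
    return image.grad.squeeze(0)                   # (3, H, W) saliency map

# Multiplying gradient magnitudes from a body image and an object image is one
# simple way to highlight features shared by both (illustrative random inputs).
body_img = torch.randn(1, 3, 224, 224)
object_img = torch.randn(1, 3, 224, 224)
shared = pixel_gradients(body_img, 0).abs() * pixel_gradients(object_img, 0).abs()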

Authors: Lappe, Alexander; Bognár, Anna; Nejad, Ghazaleh Ghamkhari; Raman, Rajani; Mukovskiy, Albert; Martini, Lucas M.; Vogels, Rufin; Giese, Martin A.
Type of Publication: In Collection
Book Title: Journal of Vision, September 2024
Publisher: Vision Science Society
Month: September
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). Macaques show an uncanny valley in body perception. Journal of Vision, September 2024. Vision Science Society.
Macaques show an uncanny valley in body perception
Type of Publication: In Collection
Lappe, A., Bognár, A., Nejad, G. G., Mukovskiy, A., Martini, L. M., Giese, M. A. et al. (2024). Parallel Backpropagation for Shared-Feature Visualization. Advances in Neural Information Processing Systems, 37, 22993-23012.
Parallel Backpropagation for Shared-Feature Visualization
Authors: Lappe, Alexander; Bognár, Anna; Nejad, Ghazaleh Ghamkhari; Mukovskiy, Albert; Martini, Lucas M.; Giese, Martin A.; Vogels, Rufin
Type of Publication: Article
Journal: Advances in Neural Information Processing Systems
Number: 37
Pages: 22993-23012
Year: 2024
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture. bioRxiv.
MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture
Abstract:

Social interaction is crucial for survival in primates. For the study of social vision in monkeys, highly controllable macaque face avatars have recently been developed, while body avatars with realistic motion do not yet exist. Addressing this gap, we developed a pipeline for three-dimensional motion tracking based on synchronized multi-view video recordings, achieving sufficient accuracy for life-like full-body animation. By exploiting data-driven pose estimation models, we track the complete time course of individual actions using a minimal set of hand-labeled keyframes. Our approach tracks single actions more accurately than existing pose estimation pipelines for behavioral tracking of non-human primates, requiring less data and fewer cameras. This efficiency is also confirmed for a state-of-the-art human benchmark dataset. A behavioral experiment with real macaque monkeys demonstrates that animals perceive the generated animations as similar to genuine videos, and establishes an uncanny valley effect for bodies in monkeys.
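The pipeline described above reconstructs 3D body motion from synchronized multi-camera video. As a minimal sketch of the kind of multi-view triangulation step such a pipeline relies on (not the actual MacAction code), the example below recovers one 3D keypoint from calibrated 2D detections via the direct linear transform; the camera matrices and coordinates are purely illustrative assumptions.

import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Least-squares 3D position of one keypoint seen by several calibrated cameras.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (x, y) pixel coordinates, one per camera.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point X:
        # x * (P[2] @ X) = P[0] @ X  and  y * (P[2] @ X) = P[1] @ X
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Solve A @ X = 0 via SVD; the last right singular vector spans the null space.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenize to (x, y, z)

# Example with two toy cameras (values are illustrative only).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 2.0, 1.0])
obs = [(P @ point)[:2] / (P @ point)[2] for P in (P1, P2)]
print(triangulate_point([P1, P2], obs))   # approximately [0.2, 0.1, 2.0]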

Type of Publication: Article
Bognár, A., Mukovskiy, A., Nejad, G. G., Taubert, N., Stettler, M., Martini, L. M. et al. (2023). Simultaneous recordings from posterior and anterior body responsive regions in the macaque Superior Temporal Sulcus. VSS 2023, May 19-24, 2023, St. Pete Beach, Florida.
Simultaneous recordings from posterior and anterior body responsive regions in the macaque Superior Temporal Sulcus
Type of Publication: In Collection

Information

All images and videos displayed on this webpage are protected by copyright. The copyright is held by the Section for Computational Sensomotorics.

If you wish to use any of the content featured on this webpage for purposes other than personal viewing, please contact us for permission.
