Personal Page

M.Sc. Lucas M. Martini

5.522
Section for Computational Sensomotorics
Department of Cognitive Neurology
Hertie Institute for Clinical Brain Research
Centre for Integrative Neuroscience
University Clinic Tübingen
Otfried-Müller-Str. 25
72076 Tübingen, Germany
+49 7071 2989130

Projects

Publications

Martini, L. M., Lappe, A. & Giese, M. A. (2025). Pose and shape reconstruction of nonhuman primates from images for studying social perception. Journal of Vision, September 2025. Vision Science Society.
Pose and shape reconstruction of nonhuman primates from images for studying social perception
Abstract:

The neural and computational mechanisms of the visual encoding of body pose and motion remain poorly understood. One important obstacle in their investigation is the generation of highly controlled stimuli with exactly specified form and motion parameters. Avatars are ideal for this purpose, but for nonhuman species the generation of appropriate motion and shape data is extremely costly, and video-based methods are often not accurate enough to generate convincing 3D animations with highly specified parameters. METHODS: Based on a photorealistic 3D model for macaque monkeys, which we have developed recently, we propose a method that automatically adjusts this model to other nonhuman primate shapes, requiring only a small number of photographs and hand-labeled keypoints for that species. The resulting 3D model makes it possible to generate highly realistic animations of different primate species, combining the same motion with different body shapes. Our method is based on an algorithm that automatically deforms a polygon mesh of a macaque model with 10,632 vertices and an underlying rig of 115 joints, matching the silhouettes of the animals and a small number of specified keypoints in the example pictures. Optimization is based on a composite error function that integrates terms for the matching quality of the silhouettes, keypoints, and bone lengths, and for minimizing local surface deformation. RESULTS: We demonstrate the efficiency of the method for several monkey and ape species. In addition, we are presently investigating in a psychophysical experiment how the body shape of different primate species interacts with the categorization of body movements of humans and nonhuman primates in human perception. CONCLUSION: Using modern computer graphics methods, highly realistic and well-controlled body motion stimuli can be generated from small numbers of photographs, allowing us to study how species-specific motion and body shape interact in visual body motion perception.
Acknowledgements: ERC 2019-SyG-RELEVANCE-856495; SSTeP-KiZ BMG: ZMWI1-2520DAT700.
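The composite error function named in the abstract combines silhouette, keypoint, bone-length, and surface-deformation terms. A minimal Python sketch of such a weighted combination follows; the weights and the example keypoint term are illustrative assumptions, not the published formulation:

```python
import numpy as np

def composite_error(silhouette_err, keypoint_err, bone_length_err, deform_err,
                    weights=(1.0, 1.0, 0.5, 0.1)):
    """Weighted sum of the four error terms named in the abstract.

    Each argument is a scalar residual already computed from the current
    mesh fit; the weights here are placeholder values, not published ones.
    """
    w_sil, w_kp, w_bone, w_def = weights
    return (w_sil * silhouette_err + w_kp * keypoint_err
            + w_bone * bone_length_err + w_def * deform_err)

def keypoint_error(projected_kp, labeled_kp):
    """Example keypoint term: mean squared 2D distance between the
    projected model keypoints and the hand-labeled image keypoints."""
    return float(np.mean(np.sum((projected_kp - labeled_kp) ** 2, axis=1)))
```

In an optimization loop, a scalar objective of this form would be minimized over the mesh deformation parameters, trading silhouette and keypoint fidelity against surface-smoothness regularization.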

Type of Publication: In Collection
Book Title: Journal of Vision September 2025
Publisher: Vision Science Society
Month: September
Lappe, A., Bognár, A., Nejad, G. G., Raman, R., Mukovskiy, A., Martini, L. M. et al. (2024). Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity. Journal of Vision, September 2024. Vision Science Society.
Predictive Features in Deep Neural Network Models of Macaque Body Patch Selectivity
Abstract:

Previous work has shown that neurons from body patches in macaque superior temporal sulcus (STS) respond selectively to images of bodies. However, the visual features leading to this body selectivity remain unclear. METHODS: We conducted experiments using 720 stimuli presenting a monkey avatar in various poses and viewpoints. Spiking activity was recorded from mid-STS (MSB) and anterior-STS (ASB) body patches, previously identified using fMRI. To identify visual features driving the neural responses, we used a model with a deep network as frontend and a linear readout model that was fitted to predict the neuron activities. Computing the gradients of the outputs backwards along the neural network, we identified the image regions that were most influential for the model neuron output. Since previous work suggests that neurons from this area also respond to some extent to images of objects, we used a similar approach to visualize object parts eliciting responses from the model neurons. Based on an object dataset, we identified the shapes that activate each model unit maximally. Computing and combining the pixel-wise gradients of model activations from object and body processing, we were able to identify common visual features driving neural activity in the model. RESULTS: Linear models fit the data well, with mean noise-corrected correlations with neural data of 0.8 in ASB and 0.94 in MSB. Gradient analysis on the body stimuli did not reveal clear preferences of certain body parts and were difficult to interpret visually. However, the joint gradients between objects and bodies traced visually similar features in both images. CONCLUSION: Deep neural networks model STS data well, even though for all tested models, explained variance was substantially lower in the more anterior region. Further work will test if the features that the deep network relies on are also used by body patch neurons.
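The gradient-based visualization step described above computes pixel-wise gradients of a model neuron's output with respect to the input image. A minimal finite-difference sketch of that idea follows; real pipelines use automatic differentiation through the deep network, and the toy "neuron" here is purely illustrative:

```python
import numpy as np

def saliency_map(model, image, eps=1e-4):
    """Finite-difference approximation of d(model output)/d(pixel).

    `model` maps an image array to a scalar, standing in for one model
    neuron's activation. Returns the absolute gradient per pixel, i.e.
    how strongly each pixel influences the output.
    """
    image = image.astype(float)
    base = model(image)
    grad = np.zeros_like(image)
    it = np.nditer(image, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        bumped = image.copy()
        bumped[idx] += eps
        grad[idx] = (model(bumped) - base) / eps
    return np.abs(grad)

def toy_neuron(img):
    # Toy stand-in: responds to the mean of the top-left 2x2 patch.
    return float(img[:2, :2].mean())

img = np.zeros((4, 4))
sal = saliency_map(toy_neuron, img)  # high values only in the top-left patch
```

Backpropagating through an actual network assigns analogous influence scores to image regions, which is what lets the shared features between body and object stimuli be compared.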

Authors: Lappe, Alexander; Bognár, Anna; Nejad, Ghazaleh Ghamkhari; Raman, Rajani; Mukovskiy, Albert; Martini, Lucas M.; Vogels, Rufin; Giese, Martin A.
Type of Publication: In Collection
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). Macaques show an uncanny valley in body perception. Journal of Vision, September 2024. Vision Science Society.
Macaques show an uncanny valley in body perception

Type of Publication: In Collection
Lappe, A., Bognár, A., Nejad, G. G., Mukovskiy, A., Martini, L. M., Giese, M. A. et al. (2024). Parallel Backpropagation for Shared-Feature Visualization. Advances in Neural Information Processing Systems, 37, 22993-23012.
Parallel Backpropagation for Shared-Feature Visualization
Authors: Lappe, Alexander; Bognár, Anna; Nejad, Ghazaleh Ghamkhari; Mukovskiy, Albert; Martini, Lucas M.; Giese, Martin A.; Vogels, Rufin
Type of Publication: Article
Journal: Advances in Neural Information Processing Systems
Number: 37
Pages: 22993-23012
Year: 2024
Martini, L. M., Bognár, A., Vogels, R. & Giese, M. A. (2024). MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture. bioRxiv.
MacAction: Realistic 3D macaque body animation based on multi-camera markerless motion capture
Abstract:

Social interaction is crucial for survival in primates. For the study of social vision in monkeys, highly controllable macaque face avatars have recently been developed, while body avatars with realistic motion do not yet exist. Addressing this gap, we developed a pipeline for three-dimensional motion tracking based on synchronized multi-view video recordings, achieving sufficient accuracy for life-like full-body animation. By exploiting data-driven pose estimation models, we track the complete time course of individual actions using a minimal set of hand-labeled keyframes. Our approach tracks single actions more accurately than existing pose estimation pipelines for behavioral tracking of non-human primates, requiring less data and fewer cameras. This efficiency is also confirmed for a state-of-the-art human benchmark dataset. A behavioral experiment with real macaque monkeys demonstrates that animals perceive the generated animations as similar to genuine videos, and establishes an uncanny valley effect for bodies in monkeys.

Competing Interest Statement: The authors have declared no competing interest.
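Recovering 3D keypoints from synchronized multi-view recordings, as in such pipelines, rests on multi-view triangulation. A sketch of the standard linear (DLT) method follows; this is a generic building block under textbook assumptions, not the published MacAction implementation:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from multiple views.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of corresponding (x, y) image observations.
    Each view contributes two linear constraints on the homogeneous
    3D point X; the least-squares solution is the right singular
    vector of the stacked system with the smallest singular value.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With detections from several cameras per frame, triangulating each tracked keypoint in this way yields the 3D joint trajectories that drive an animation rig.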

Type of Publication: Article

Information

All images and videos displayed on this webpage are protected by copyright law. These copyrights are owned by Computational Sensomotorics.

If you wish to use any of the content featured on this webpage for purposes other than personal viewing, please contact us for permission.
