ViCCo Group
Vision and Computational Cognition

Our research

We are able to recognize objects, people, and places without effort and effectively interact with our visual world. This ability is remarkable, as it requires us to map ever-changing visual input to our knowledge of the regularities of the world and the repertoire of actions we can apply to it. But how is the light that hits our retina transformed into representations that enable us to recognize and interact with our environment? And how can we capture visual representations in a way that takes into account the complexities of the visual world and at the same time gives us an understanding of the regularities and laws that govern our visual system? In the Vision and Computational Cognition Group at the Max Planck Institute for Human Cognitive and Brain Sciences, we aim to answer these fundamental questions, using a combination of research tools from psychology, neuroscience, and computer science.


Our approach

Our research aims at understanding vision from three perspectives. From a data science perspective, we collect and analyze large-scale representative datasets in both human behavior and neuroimaging (functional MRI, magnetoencephalography), allowing us to capture much of the complexity of our visual worlds and identify key characteristics that underlie our mental and neural representations. From a cognitive neuroscience perspective, we link these large-scale datasets to hypothesized representational properties and computational models, and we conduct targeted experiments testing the properties of visual recognition we have identified. From a computational perspective, we apply computational models of vision and semantics (deep neural networks, semantic embeddings), multivariate pattern analysis, and advanced machine learning methods to characterize representations in the human brain and behavior and identify interpretable representations in humans and artificial intelligence. To address longstanding questions about vision and computation in the brain, we develop novel analysis tools for a deeper, more powerful, and more fine-grained analysis of behavioral and neuroimaging data.
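One of the multivariate methods mentioned above, comparing representations across brains and models, can be illustrated with a minimal, self-contained sketch. The example below (all names and data are invented toy values, not from any of our datasets) builds representational dissimilarity matrices (RDMs) for two systems and correlates their unique entries — the core idea behind representational similarity analysis:

```python
# Minimal sketch of representational similarity analysis (RSA).
# All activation patterns here are toy values for illustration only.

def euclidean(a, b):
    """Euclidean distance between two activation patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rdm(patterns):
    """Representational dissimilarity matrix: pairwise distances between conditions."""
    n = len(patterns)
    return [[euclidean(patterns[i], patterns[j]) for j in range(n)] for i in range(n)]

def upper_triangle(m):
    """Flatten the values above the diagonal (the unique dissimilarities)."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    """Pearson correlation between two equally long vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Toy activation patterns for three stimuli in a "brain" and a "model" system.
brain = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
model = [[2.0, 0.1], [1.8, 0.3], [0.2, 2.0]]

similarity = pearson(upper_triangle(rdm(brain)), upper_triangle(rdm(model)))
print(round(similarity, 3))
```

Because only the *geometry* of each RDM is compared, the two systems can have different dimensionalities and measurement units, which is what makes this family of methods useful for linking fMRI, MEG, behavior, and deep networks.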

This Is Us

As an interdisciplinary team, we span a broad range of fields, from psychology through cognitive neuroscience to computer science.

Image showing all team members

Florian Mahner

PhD candidate, interprets and compares representational properties of the visual stream to modern deep nets. Enjoys coding and is similarly fascinated by climbing, biking and chess.

Hannes Hansen

Research assistant, student of Computer Science (M.Sc.) at the University of Leipzig. Likes exploring deep AI models, improving online experiments, and spending 6 hours trying to automate something instead of doing it by hand in 5 minutes.

Jonas Perkuhn

Master student / Research assistant, student of Psychology (M.Sc.) at the University of Leipzig. Research focus: object categorization, systems neuroscience. Interested in cacti, lemons, and the differences between them.

Katja Seeliger

Postdoctoral researcher, working with large-scale naturalistic datasets and analyzing them with modern neural networks and machine learning to gain a better understanding of sensory representations. Sika deer whisperer.

Laura Stoinski

Master student / Research assistant who, among other things, expands the THINGS database by programming and conducting online studies. When speaking with her family, she can talk so fast that people think it's another language.

Lukas Muttenthaler

PhD candidate, working on the THINGSvision library, deciphering latent representations, and using linear algebra for good. Things he loves: neural networks, making code run faster than the speed of light, and RückenFit.

Marco Badwal

MD candidate, co-supervised with Christian Doeller and Johanna Bergmann, trying to deepen our understanding of how the brain constructs relationships between objects/concepts and whether the underlying principles comply with spatial neural codes. Avid meditator, runner, Crossfitter, and soon-to-be psychiatrist.

Marie St-Laurent

Data scientist and cognitive neuroscientist with a background in AI, based in Montréal, Canada. Likes to study how representation patterns are transformed by experience. Compulsive collector of graduate degrees.

Martin Hebart

Principal investigator, loves addressing fundamental questions with a fun team of highly-talented researchers. Also loves spending time making his kids laugh, cycling in nature, deep conversations, and a glass of IPA.

Oliver Contier

PhD candidate, working on linking fMRI data to computational models of vision and behavior. Interested in brain representations underlying object recognition. Fueled by Carbonara, Colakracher, and feline attention.

Philipp Kaniuth

PhD candidate, interested in improving representational similarity analysis and how the link between stages of cortical visual representations and behavior depends on the task. Likes all kinds of lemonade way too much, even while refereeing a basketball game.

Ülkü Tonbuloglu

Research assistant, studies Social, Cognitive, and Affective Neuroscience (M.Sc.) at Freie Universität Berlin. Interested in human visual perception and cross-modal interaction. Was a Latin dancer before corona :')

Weronika Kłos

Research assistant, studies Data Science (M.Sc.) at Freie Universität Berlin. Interested in deep learning and its applications to neuroscientific research. Likes brain decoding, juggling and lasagna.


THINGSvision paper published in Frontiers in Neuroinformatics!

The paper accompanying the Python library THINGSvision by Lukas Muttenthaler and Martin Hebart has now been published in Frontiers in Neuroinformatics! The library allows extracting activations from a wide range of neural networks, covering models based on both PyTorch and TensorFlow. Importantly, activations can also be extracted from randomly initialized networks as a meaningful baseline, as well as from networks trained on Ecoset. Congratulations, Lukas!

Julia Norman leaves the lab

Julia is leaving our lab to conduct her Master's thesis at Yale University. All the best for your time there!

Marie St-Laurent joins the lab!

Marie recently joined our lab as a data scientist, funded by Cneuromod and a member of both teams! She'll be working on the new THINGS MRI dataset and building computational models of vision and memory. We're very happy to have you, Marie!

Weronika Kłos joins the ViCCo Group!

We're happy to have Weronika Kłos join our lab! She will be working as a research assistant on deep learning and computational neuroscience. Welcome to the lab, Wero!

Three new lab members!

We have three new lab members: Ülkü Tonbuloglu and Carina Daun will be starting as research assistants, and Julia Norman will be starting as an intern and research assistant! Welcome to the ViCCo Group!

Revealing the multidimensional mental representations of natural objects underlying human similarity judgements

Hebart, M.N., Zheng, C.Y., Pereira, F., & Baker, C.I.

2020, Nature Human Behaviour

Objects can be characterized according to a vast number of possible criteria (such as animacy, shape, colour and function), but some dimensions are more useful than others for making sense of the objects around us. To identify these core dimensions of object representations, we developed a data-driven computational model of similarity judgements for real-world images of 1,854 objects. The model …
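The model's central idea, predicting which of three objects people judge to be the odd one out from a low-dimensional, non-negative embedding, can be sketched in a few lines. The embedding values and object names below are invented for illustration; the published model learns its dimensions from millions of real judgements:

```python
# Toy sketch of predicting odd-one-out judgements from an object embedding.
import math

# Hypothetical non-negative embedding: each row an object, each column a
# dimension (here, roughly "animate" and "man-made").
embedding = {
    "dog":    [1.8, 0.1],
    "cat":    [1.6, 0.2],
    "hammer": [0.0, 1.9],
}

def similarity(a, b):
    """Dot-product similarity between two objects in the embedding."""
    return sum(x * y for x, y in zip(embedding[a], embedding[b]))

def odd_one_out(i, j, k):
    """Return the most likely odd-one-out and its choice probability.

    The probability of choosing a pair as most similar is a softmax over
    the three pairwise similarities; the remaining object is the odd one out.
    """
    pairs = [((i, j), k), ((i, k), j), ((j, k), i)]
    weights = [math.exp(similarity(a, b)) for (a, b), _ in pairs]
    total = sum(weights)
    best = max(range(3), key=lambda n: weights[n])
    return pairs[best][1], weights[best] / total

odd, p = odd_one_out("dog", "cat", "hammer")  # most likely odd-one-out: "hammer"
print(odd, round(p, 2))
```

Because "dog" and "cat" load on the same toy dimension, their pair is by far the most similar, so "hammer" comes out as the predicted odd one out with high probability.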


THINGSvision: A Python toolbox for streamlining the extraction of activations from deep neural networks

Muttenthaler, L. & Hebart, M.N.

2021, Frontiers in Neuroinformatics

Over the past decade, deep neural network (DNN) models have received a lot of attention due to their near-human object classification performance and their excellent prediction of signals recorded from biological visual systems. To better understand the function of these networks and relate them to hypotheses about brain activity and behavior, researchers need to extract the activations to images across …
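The mechanism such extraction rests on, hooks that record a layer's output during a forward pass, can be illustrated with a self-contained toy example. Note that this is a conceptual sketch only: the tiny "network", its layer names, and the hook API below are invented and are not the THINGSvision interface, which wraps the corresponding PyTorch/TensorFlow machinery:

```python
# Conceptual sketch of activation extraction via forward hooks.
# The "network" is a chain of named layers (plain functions), not a real DNN.

class TinyNet:
    def __init__(self):
        # Named layers applied in order; the lambdas stand in for real layers.
        self.layers = [
            ("conv1", lambda x: [v * 2.0 for v in x]),
            ("relu1", lambda x: [max(0.0, v) for v in x]),
            ("fc",    lambda x: [sum(x)]),
        ]
        self._hooks = {}

    def register_hook(self, layer_name, fn):
        """Call fn(activation) whenever the named layer produces an output."""
        self._hooks.setdefault(layer_name, []).append(fn)

    def forward(self, x):
        for name, layer in self.layers:
            x = layer(x)
            for fn in self._hooks.get(name, []):
                fn(x)
        return x

# Record the intermediate activation of a chosen layer during one forward pass.
activations = {}
net = TinyNet()
net.register_hook("relu1", lambda out: activations.setdefault("relu1", out))

output = net.forward([1.0, -2.0, 3.0])
print(output)                # final output: [8.0]
print(activations["relu1"])  # extracted activation: [2.0, 0.0, 6.0]
```

The appeal of this pattern is that the network itself stays untouched: any layer's representation can be read out by name, which is what lets a toolbox expose many architectures through one uniform extraction interface.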


From photos to sketches - how humans and deep neural networks process objects across different levels of visual abstraction

Singer, J., Seeliger, K., Kietzmann, T.C., & Hebart, M.N.

2021, PsyArXiv

Line drawings convey meaning with just a few strokes. Despite strong simplifications, humans can recognize objects depicted in such abstracted images without effort. To what degree do deep convolutional neural networks (CNNs) mirror this human ability to generalize to abstracted object images? While CNNs trained on natural images have been shown to exhibit poor classification performance on drawings, other work …


Software and tools from our research

Contact Us

We are open for inquiries! Send us a message below.