Hebart Lab
Computational Cognitive Neuroscience and Quantitative Psychiatry / Vision and Computational Cognition

Our research

We are able to recognize objects, people, and places without effort and effectively interact with our visual world. This ability is remarkable, as it requires us to map ever-changing visual input to our knowledge of the regularities of the world and the repertoire of actions we can apply to it. But how is the light that hits our retina transformed into representations that enable us to recognize and interact with our environment? How can we capture visual representations in a way that takes into account the complexities of the visual world and at the same time gives us an understanding of the regularities and laws that govern our visual system? And how can we translate this knowledge into a better understanding and treatment of psychiatric patients who suffer from visual phenomena, including hallucinations or flashbacks? We aim to answer these fundamental questions using a combination of research tools from psychology, neuroscience, and computer science.

Our team is part of the Medical Department at Justus Liebig University Giessen and the Max Planck Institute for Human Cognitive and Brain Sciences.

 

Our approach

Our research aims to understand vision from three perspectives. From a data science perspective, we collect and analyze large-scale, representative datasets of human behavior and neuroimaging (functional MRI, magnetoencephalography), allowing us to capture much of the complexity of our visual world and identify key characteristics that underlie our mental and neural representations. From a cognitive neuroscience perspective, we link these large-scale datasets to hypothesized representational properties and computational models, and we conduct targeted experiments testing the properties of visual recognition we have identified. From a computational perspective, we apply computational models of vision and semantics (deep neural networks, semantic embeddings), multivariate pattern analysis, and advanced machine learning methods to characterize representations in the human brain and behavior and to identify interpretable representations in humans and artificial intelligence. To address longstanding questions about vision and computation in the brain, we develop novel analysis tools for a deeper, more powerful, and more fine-grained analysis of behavioral and neuroimaging data.
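As one illustration of how model representations can be related to neural data, the sketch below shows a minimal representational similarity analysis in Python. The data, array shapes, and variable names are placeholder assumptions for illustration only, not our actual analysis pipeline.

```python
# Minimal, illustrative sketch of representational similarity analysis (RSA).
# All data here are random placeholders; shapes and names are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_voxels, n_model_units = 50, 200, 512

# One response pattern per object, for brain data and for a model.
brain_patterns = rng.standard_normal((n_objects, n_voxels))
model_features = rng.standard_normal((n_objects, n_model_units))

# Representational dissimilarity matrices (condensed upper triangles).
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Rank-correlate the two RDMs to quantify how similar the representations are.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"model-brain RDM correlation: rho = {rho:.3f}")
```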

This Is Us

As an interdisciplinary team, we span a broad range of fields, from Psychology through Cognitive Neuroscience to Computer Science.

Image showing all team members

Martin Hebart | MPI JLU


Principal investigator, loves addressing fundamental questions with a fun team of highly talented researchers. Also loves spending time making his kids laugh, cycling in nature, deep conversations, and a glass of IPA.


Administrative staff

Anna Theiß | JLU

Administrative assistant


Research staff

Susan Ajith | JLU


PhD candidate, co-supervised with Daniel Kaiser. Fascinated by the nature of representations and mechanisms that enable our visual experience. When not delving into matters of the brain, trains in MMA and consumes literature and TV across different genres and languages.

Sander van Bree | JLU


Postdoctoral researcher, studying the neural representations of visual perception in human and non-human primates. Enjoys reading across disciplines during the day and absorbing all the arts at night.

Oliver Contier | MPI


PhD candidate, working on linking fMRI data to computational models of vision and behavior. Interested in brain representations underlying object recognition. Fueled by Carbonara, Colakracher, and feline attention.

Lenny van Dyck | JLU


PhD candidate, co-supervised with Katharina Dobs. Interested in how our brain processes and organizes visual information. Has a passion for all kinds of dimensional approaches, ranging from neural data analysis to distinguishing nuances in coffee flavor. Loves to spend his free time hiking in the mountains or campervanning around the world.

Luca Kämmer | MPI BER


PhD candidate, co-supervised with Martin Rolfs. Interested in image features and how they inform and direct eye movements. Dubs movies and has repeatedly dubbed his own gruesome death.

Florian Mahner | MPI


PhD candidate, interprets and compares representational properties of the visual stream with those of modern deep nets. Enjoys coding and is similarly fascinated by climbing, biking, and chess.

Maggie Mae Mell | MPI


Postdoctoral researcher, working on acquiring massive functional and diffusion MRI data to learn more about the structure and function of visual cortex. 

Lukas Muttenthaler | BER


PhD candidate, co-supervised with Klaus-Robert Müller, working on methods for deciphering latent representations and using linear algebra for good. Things he loves: neural networks, making code run faster than the speed of light, and RückenFit.

Johannes Roth | MPI


PhD candidate, working on improving fMRI methodology to enable faster data acquisition. Also interested in deep learning models of the visual system. Wants to own a giant orchard with a chicken coop and an outdoor bouldering wall one day.

Katja Seeliger | MPI


Postdoctoral researcher, working with large-scale naturalistic datasets and analyzing them with modern neural networks and machine learning to gain a better understanding of sensory representations. Sika deer whisperer.

Malin Styrnal | JLU


PhD candidate, interested in how we make sense of the world around us and form representations of it. Currently comparing similarity measures and identifying visual and semantic object dimensions. Likes to visit visual illusion museums.

Marie St-Laurent


Data scientist and cognitive neuroscientist with a background in AI based in Montréal, Canada. Likes to study how representation patterns are transformed by experience. Compulsive collector of graduate degrees.

Laura Stoinski | MPI


PhD candidate, interested in the distinct role of semantic versus visual object features and how they contribute to forming internal object representations. Worked on expanding the THINGS database by collecting large-scale object and image property norms. When speaking with her family, she can talk so fast that people think it's another language.

Johannes Singer | BER


PhD candidate, co-supervised with Radoslaw Cichy at Freie Universität Berlin. Did his Master's project in the lab on the representation of abstract object depictions (e.g. drawings) in the human brain and deep neural networks. Interested in multivariate analyses of electrophysiological data and the temporal dynamics of visual processing. Loves to make and get lost in music.

Josefine Zerbe | MPI


PhD candidate, interested in visual perception and how to decode its underlying mechanisms with neural nets. Loves climbing and pouring color on canvases, then calling it art.

Tonghe Zhuang | JLU


Postdoctoral researcher, working on semantic knowledge, specifically object and action recognition. Interested in unraveling how our brains allow us to understand the world and operate with remarkable effectiveness. Her current research centers on the relationship between visual and semantic features in occipitotemporal cortex. Enjoys learning, dancing and swimming.

Revealing the multidimensional mental representations of natural objects underlying human similarity judgements

Hebart, M.N., Zheng, C.Y., Pereira, F., & Baker, C.I.

2020, Nature Human Behaviour

Objects can be characterized according to a vast number of possible criteria (such as animacy, shape, colour and function), but some dimensions are more useful than others for making sense of the objects around us. To identify these core dimensions of object representations, we developed a data-driven computational model of similarity judgements for real-world images of 1,854 objects. The model …

URL PDF
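The core idea behind such a similarity model can be sketched in a few lines: each object is assigned a low-dimensional, non-negative embedding, and the probability of picking each item as the odd one out in a triplet task follows from the pairwise dot products of those embeddings. The sketch below is purely illustrative, using random toy embeddings and arbitrary dimensions rather than the published model or its parameters.

```python
# Illustrative sketch only (not the published model): predicting triplet
# odd-one-out choices from non-negative object embeddings via dot products.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_dims = 100, 10
embeddings = np.abs(rng.standard_normal((n_objects, n_dims)))  # toy embedding

def odd_one_out_probs(i, j, k, X):
    """Probability that each item of the triplet (i, j, k) is the odd one out.

    The pair with the largest dot-product similarity is predicted to be kept
    together, so the remaining item is the odd one out.
    """
    sim_ij = X[i] @ X[j]
    sim_ik = X[i] @ X[k]
    sim_jk = X[j] @ X[k]
    # Softmax over the three pairwise similarities; each pair "votes" for the
    # item it excludes (odd one out: i, j, k respectively).
    logits = np.array([sim_jk, sim_ik, sim_ij])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

print(odd_one_out_probs(0, 1, 2, embeddings))
```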

THINGSvision: A Python toolbox for streamlining the extraction of activations from deep neural networks

Muttenthaler, L. & Hebart, M.N.

2021, Frontiers in Neuroinformatics

Over the past decade, deep neural network (DNN) models have received a lot of attention due to their near-human object classification performance and their excellent prediction of signals recorded from biological visual systems. To better understand the function of these networks and relate them to hypotheses about brain activity and behavior, researchers need to extract the activations to images across …

URL PDF
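The kind of workflow the toolbox streamlines can be illustrated with a minimal, hand-rolled sketch in plain PyTorch. This is not the THINGSvision API itself; the model, layer, and image path below are assumptions chosen for illustration: register a forward hook on a layer of interest and collect its activations while passing images through a pretrained network.

```python
# Minimal sketch (not the THINGSvision API): extracting layer activations
# from a pretrained network with a plain PyTorch forward hook.
# Model, layer, and image path are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

activations = {}

def hook(module, inputs, output):
    # Store a flattened copy of the layer's output for later analysis.
    activations["features"] = output.detach().flatten(start_dim=1)

# Register the hook on one layer of interest (here: a late classifier layer).
model.classifier[5].register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical image file; any RGB image would do.
img = preprocess(Image.open("example_object.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    model(img)

print(activations["features"].shape)  # e.g. torch.Size([1, 4096])
```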

Semantic features of object concepts generated with GPT-3

Hansen, H., & Hebart, M.N.

2022, Proceedings of the Annual Meeting of the Cognitive Science Society

Semantic features have been playing a central role in investigating the nature of our conceptual representations. Yet the time and effort required to sample features from human raters has restricted their use to a limited set of manually curated concepts. Given recent success of transformer-based language models, we asked whether it was possible to use such models to automatically generate …

URL

The spatiotemporal neural dynamics of object recognition for natural images and line drawings

Singer, J.J.D., Cichy, R.M., & Hebart, M.N.

2023, Journal of Neuroscience

Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other …

URL

The features underlying the memorability of objects

Kramer, M.A., Hebart, M.N., Baker, C.I., & Bainbridge, W.A. 

2023, Science Advances

What makes certain images more memorable than others? While much of memory research has focused on participant effects, recent studies employing a stimulus-centric perspective have sparked debate on the determinants of memory, including the roles of semantic and visual features and whether the most prototypical or atypical items are best remembered. Prior studies have typically relied on constrained stimulus sets, …

URL PREPRINT

THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in brain and behavior

Hebart, M.N.*, Contier, O.*, Teichmann, L.*, Rockter, A., Zheng, C.Y., Kidder, A., Corriveau, A., Vaziri-Pashkam, M., & Baker, C.I.

2023, eLife

Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images …

URL

Software and tools from our research

Contact Us

We welcome inquiries! Send us a message below.
