The Young Researcher's View: Arne F. Meyer (Q2-2021)
The mammalian brain’s navigation system is informed in large part by visual signals. For example, we rely on vision to determine where the walls and doors are in the room and we recognize places based on visual landmarks, such as the shop around the corner. Together with path integration, an internal computation that transforms information about self-motion into a sense of location, vision is one of the main pillars of our ability to navigate. Yet, how visual signals are integrated into the brain’s navigation network is poorly understood. The aim of my research is to fill this gap, by investigating how visual images observed by the eyes are transformed in visual and spatial brain areas to support natural functions such as navigation.
I am using a “sensory coding” approach to understand which aspects of an animal’s visual input are encoded by “spatial” cells, cells that represent features of the environment, such as walls or landmarks, or an animal’s location in space. During my PhD at the Institute of Physics at the University of Oldenburg, I developed and employed computational models to investigate how complex, naturalistic sensory input is transformed into neural responses in subcortical and cortical brain areas [1–4]. These models represent a promising approach to studying vision in navigating animals, as they can be applied to a wide range of data, including naturalistic visual images and spatial coding variables.
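To make the sensory-coding approach concrete, the sketch below simulates a toy linear-nonlinear (LN) neuron, a standard encoding model in this literature (not the specific models of [1–4]), and recovers its linear filter from a white-noise stimulus using the spike-triggered average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-nonlinear-Poisson (LN) encoding model: each stimulus frame is
# projected onto a linear filter, passed through an exponential nonlinearity,
# and spike counts are drawn from a Poisson process. All parameters here are
# illustrative.
n_frames, dim = 20000, 16
stimulus = rng.standard_normal((n_frames, dim))   # white-noise stimulus
true_filter = np.sin(np.linspace(0, np.pi, dim))  # toy receptive field
true_filter /= np.linalg.norm(true_filter)

drive = stimulus @ true_filter
rate = np.exp(drive - 1.0)                        # exponential nonlinearity
spikes = rng.poisson(rate)                        # spike counts per frame

# For white-noise input, the spike-triggered average (STA) recovers the
# linear filter up to a scale factor.
sta = (spikes @ stimulus) / spikes.sum()
sta /= np.linalg.norm(sta)

similarity = float(sta @ true_filter)
print(f"filter recovery (cosine similarity): {similarity:.2f}")
```

With enough data the cosine similarity approaches 1; the same fitting logic extends to richer inputs, which is what makes such models applicable to naturalistic images and spatial variables alike.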
A major challenge that has so far hindered progress is measuring what an animal sees while it navigates its environment. A few years ago, I set out to overcome this challenge in the mouse – a species in which vision and navigation have each been studied extensively, but largely in isolation. By combining experimental and computational work, I hope to provide a step towards understanding how the brain's high-level, abstract spatial code arises in part from transient, fluctuating images on the retinas.
During my postdoc with Jennifer Linden and Maneesh Sahani (both University College London), I developed a miniature, lightweight head-mounted video camera system (weighing 1.3 grams), combined with movement sensors and chronic neural recordings, to obtain simultaneous measurements of multiple behavioral variables, including body, head, and eye movements, and neural activity in freely moving mice [5]. This system allows measurement of the two main factors that determine what an animal sees – head and eye movements – going beyond the measurement of head position typically used in navigation research. It also avoids the complications of studying navigation in head-restrained animals: while head restraint facilitates neural recordings and stimulus control, it can lead to qualitative changes in visual (e.g., [5,6]) and spatial processing (e.g., [7,8]). We have made the system openly available, and it is now part of the Open Ephys project (http://www.open-ephys.org/mousecam), one of the most widely used platforms for behavioral electrophysiology.
Together with John O’Keefe (University College London) and Jasper Poort (University of Cambridge), I used the head and eye tracking system to reveal the dynamic structure of head and eye movements (gaze) in freely moving mice [9]. We found that the complex head and eye movement patterns of freely moving mice can be decomposed into two distinct, independent components, each linking eye and head movements in a different way (Figure 2). The first type of eye-head coupling supports stabilization of the visual field relative to the ground. The second type allows the mouse to “saccade and fixate” similar to humans and non-human primates, but mostly parallel to the ground. Mice thus see their environment as a sequence of stable images when moving their heads. These findings relate eye movements in the mouse to those of other species and provide a foundation for studying active vision during natural behaviors, such as visually guided navigation, in the mouse.
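A minimal synthetic illustration of the idea (a sketch, not the published analysis of [9]): an eye-position trace built from slow counter-rotation against head pitch plus fast saccadic jumps can be split into the two components with a simple velocity threshold, after which eye and head velocities are strongly anti-correlated between saccades:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic eye trace combining (1) slow counter-rotation that stabilizes
# gaze against head pitch and (2) fast, step-like saccades. All values are
# illustrative.
dt = 0.01                                      # 100 Hz sampling
t = np.arange(0, 20, dt)
head_pitch = 10 * np.sin(2 * np.pi * 0.3 * t)  # slow head oscillation (deg)

eye = -0.8 * head_pitch                        # compensatory counter-rotation
saccade_idx = rng.choice(len(t), size=15, replace=False)
saccade_train = np.zeros_like(t)
saccade_train[saccade_idx] = rng.normal(0, 5, size=15)
eye = eye + np.cumsum(saccade_train)           # add step-like saccades

# A velocity threshold separates fast saccadic samples from slow,
# stabilizing samples.
eye_velocity = np.gradient(eye, dt)
is_saccade = np.abs(eye_velocity) > 100        # deg/s threshold

# Between saccades, eye and head velocities should be anti-correlated
# (gaze stabilization).
head_velocity = np.gradient(head_pitch, dt)
r = np.corrcoef(eye_velocity[~is_saccade], head_velocity[~is_saccade])[0, 1]
print(f"eye-head velocity correlation between saccades: {r:.2f}")
```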
In a recent collaboration with Pieter Roelfsema’s group at the Netherlands Institute for Neuroscience, we investigated how gaze dynamics relate to cortical organization [10]. The representation of space in mouse visual cortex was thought to be relatively uniform, with no strong bias towards any particular region of space. This contrasts with the primate visual cortex and its overrepresentation of the fovea, placing potential limits on the translation of research in mice to humans. We identified a previously unrecognized organization of mouse visual cortex that resembles the fovea-centric organization of human visual cortex. Using population receptive field (pRF) mapping techniques, which provide an estimate of aggregate receptive field sizes, we found that mouse visual cortex contains a region in which pRFs are considerably smaller than elsewhere (Figure 3A). Importantly, eye movements keep this region at strategic locations in front of the animal, centered on the horizon, where natural scenes tend to contain features such as target locations and landmarks (Figure 3B,C). This suggests that already at an early cortical level, the mouse visual system is adapted to process information relevant for navigation.
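To make the pRF concept concrete, here is a toy sketch (all parameters illustrative, not taken from [10]): a stimulus sweeps across visual positions, the aggregate response forms a roughly Gaussian profile, and the width of that profile serves as the pRF size estimate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D population receptive field (pRF): responses to a stimulus swept
# across positions form a noisy Gaussian profile whose width is the pRF size.
positions = np.linspace(-30, 30, 121)          # stimulus position (deg)
true_center, true_size = 5.0, 8.0
profile = np.exp(-0.5 * ((positions - true_center) / true_size) ** 2)
responses = profile + rng.normal(0, 0.02, size=positions.size)

# Moment-based estimate: treat the (clipped) response profile as a density
# and read off its mean (pRF center) and standard deviation (pRF size).
w = np.clip(responses, 0, None)
w /= w.sum()
center_hat = float(np.sum(w * positions))
size_hat = float(np.sqrt(np.sum(w * (positions - center_hat) ** 2)))
print(f"estimated pRF center: {center_hat:.1f} deg, size: {size_hat:.1f} deg")
```

Real pRF mapping fits a 2D model to measured responses, but the principle is the same: smaller recovered widths mean finer spatial resolution, which is what distinguishes the focea-like region described above.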
In 2019, I joined the Donders Institute for Brain, Cognition and Behaviour (Radboud University) as a Radboud Excellence Fellow, working with Francesco Battaglia. I also hold a secondary appointment at the Sainsbury Wellcome Centre in London. At the Donders Institute, I am studying cortical brain regions at the intersection of the visual and navigation systems, in particular the posterior parietal cortex and the retrosplenial cortex. These brain regions have long been hypothesized to play a crucial role in converting visual images that constantly move with the eyes into a stable spatial code anchored to the animal (egocentric) or to the world (allocentric). How this transformation is accomplished in the rodent cognitive mapping system is still largely in the realm of theory [11,12]. By exploiting the methods I developed in previous work, I will investigate the processing steps involved in this transformation with single-cell resolution in freely moving animals. I will also investigate how knowledge of an animal’s position in the environment modifies processing in visual brain areas.
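The egocentric-to-allocentric transformation referred to above can be written down compactly for the simplest 2D case: a landmark's head-centered coordinates are rotated by the animal's head direction and translated by its position. The sketch below (function and variable names are illustrative, not from [11,12]) implements that formula:

```python
import math

def ego_to_allo(landmark_ego, animal_pos, head_dir_rad):
    """Map head-centered (egocentric) coordinates to world (allocentric)
    coordinates: rotate by head direction, then translate by position."""
    x, y = landmark_ego
    c, s = math.cos(head_dir_rad), math.sin(head_dir_rad)
    return (animal_pos[0] + c * x - s * y,
            animal_pos[1] + s * x + c * y)

# A landmark 2 m straight ahead of an animal at (1, 1) facing 90 degrees
# lies at approximately (1, 3) in world coordinates, regardless of how the
# animal subsequently turns its head.
print(ego_to_allo((2.0, 0.0), (1.0, 1.0), math.pi / 2))
```

The neural question the text raises is how circuits such as posterior parietal and retrosplenial cortex implement something like this rotation-plus-translation, given that head direction and position are themselves encoded by noisy neural populations.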
Donders Institute for Brain, Cognition and Behaviour
Radboud University,
Nijmegen 6525 AJ,
The Netherlands
Sainsbury Wellcome Centre for Neural Circuits and Behaviour
University College London,
London W1T 4JG,
UK
E-mail: a1.meyer[at]donders.ru.nl