Understanding the brain in health and disease is one of today’s biggest scientific challenges, with important societal implications. Computational neuroscience, in particular, seeks to understand how the brain transmits, processes, and stores the information that ultimately guides our behavior. Recent progress in machine learning algorithms and brain recording techniques has brought the fields of artificial and biological neural networks to a state in which each can inform the other. Our computational neuroscience lab focuses on exploiting these advances to study how information is represented in the human brain and to improve artificial intelligence systems.
To this end we pursue three interconnected lines of research: advanced analysis of brain recordings, cognitive artificial intelligence, and complex systems.
The questions we tackle and the methods we use are shaped by an interdisciplinary perspective. Our team consists of neuroscientists, computer scientists, mathematicians, physicists, and bioinformaticians, and we borrow whichever methods and perspectives from these fields we find interesting and useful.
- Finding the dynamic neural correlates of deep visual processing
Fueled by developments in deep learning, computer vision has recently achieved spectacular improvements. At the same time, human vision still holds important lessons for computer systems seeking generalization and robustness. Despite these successes, it remains unknown how human and artificial neural networks for vision relate to each other. The goal of this project is to narrow this gap by transferring knowledge and computational strategies between biological and artificial vision systems. To that end, we capitalize on the unique properties of intracranial brain recordings to compare biological and deep learning systems for vision. In particular, we aim to determine the dynamic neural correlates of hierarchical visual processing. This project is carried out in collaboration with Juan Vidal from the University of Grenoble (France), who provides a high-quality dataset obtained at the University Hospital of Lyon. The dataset consists of intracranial recordings with a signal-to-noise ratio 100 times better than scalp EEG and precise temporal and spatial resolution, covering 109 patients and a total of >12000 electrodes. During the recordings, each patient was presented with 319 natural images, whose evoked brain responses we are analyzing in relation to deep learning models. This study will help us better characterize the similarities and differences between biological vision and deep neural networks.
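One standard way to compare evoked brain responses with deep network layers is representational similarity analysis (RSA): for each system, compute the matrix of pairwise dissimilarities between the responses to all images, then correlate the two matrices. The sketch below illustrates the idea with random stand-in arrays; the shapes, feature counts, and the rank-correlation choice are illustrative assumptions, not our actual pipeline.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of images."""
    return 1.0 - np.corrcoef(responses)

def spearman(x, y):
    """Spearman rank correlation (ties broken by sort order, which is
    fine for continuous-valued toy data)."""
    rx = x.argsort().argsort().astype(float)
    ry = y.argsort().argsort().astype(float)
    rx = (rx - rx.mean()) / rx.std()
    ry = (ry - ry.mean()) / ry.std()
    return float(np.mean(rx * ry))

def rsa_score(brain_responses, layer_activations):
    """Correlate the upper triangles of the brain and model RDMs."""
    iu = np.triu_indices(brain_responses.shape[0], k=1)
    return spearman(rdm(brain_responses)[iu], rdm(layer_activations)[iu])

# Toy data: 319 images x features (electrodes / layer units); the
# feature dimensions are placeholders, not those of the real dataset.
rng = np.random.default_rng(0)
brain = rng.normal(size=(319, 50))   # e.g. evoked power per electrode
layer = rng.normal(size=(319, 256))  # e.g. activations of one DNN layer
print(rsa_score(brain, layer))
```

Repeating `rsa_score` for each network layer and each recording time window yields the layer-by-time correspondence maps that such comparisons are typically after.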
- Decoding image categories from brain responses
- Decoding rat position from hippocampal recordings using artificial neural networks
- Oscillatory analysis of brain signals
Spectral analysis, that is, decomposing a signal into components at different frequency bands, is one of the most widely used characterizations of brain recordings. Even the very first recordings of human EEG revealed rhythmic fluctuations, or neuronal oscillations, which have since been described in many brain areas and shown to change consistently across stimuli, brain states, and pathologies. However, extreme care must be taken when passing from observations in different frequency bands to interpretations in terms of oscillatory dynamics or neural mechanisms. In particular, our work has collected a series of caveats and methodological recommendations for a sound analysis of neuronal oscillations and their putative interactions (also known as cross-frequency coupling).
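To make the object of study concrete, here is a minimal sketch of one common cross-frequency coupling estimator, the mean vector length: band-pass the signal, extract the low-frequency phase and high-frequency amplitude from the analytic signal, and average amplitude-weighted phase vectors. The crude FFT-mask filter, the band limits, and the synthetic signals below are illustrative assumptions; indeed, such choices are exactly the kind of methodological detail our caveats concern.

```python
import numpy as np

def bandpass(x, fs, lo, hi):
    """Crude FFT-based band-pass filter (zeroes bins outside [lo, hi] Hz)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X = np.fft.rfft(x)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def analytic(x):
    """Analytic signal via the FFT (same construction as a Hilbert
    transform: keep positive frequencies doubled, drop negative ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def modulation_index(signal, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean-vector-length coupling estimate: |mean(A_hi * exp(i*phi_lo))|."""
    phase = np.angle(analytic(bandpass(signal, fs, *phase_band)))
    amp = np.abs(analytic(bandpass(signal, fs, *amp_band)))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))))

# Synthetic example: gamma bursts locked to the theta peak (coupled)
# versus theta and gamma superposed independently (uncoupled).
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 50 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 50 * t)
print(modulation_index(coupled, fs), modulation_index(uncoupled, fs))
```

The coupled signal yields a clearly larger index than the uncoupled one, but a raw value like this is only interpretable against surrogate data (e.g. phase-shuffled controls), which is one of the recommendations in question.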
- Multiagent cooperation and competition using deep reinforcement learning
Cooperation and competition can emerge whenever multiple adaptive agents share a biological, social, or technological niche. In this work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multi-agent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong, we show how competitive and collaborative behaviors emerge, and we describe the progression from competitive to collaborative behavior as the incentive to cooperate is increased. Finally, we show that learning by playing against another adaptive agent, rather than against a hard-wired algorithm, results in more robust strategies. This work shows that Deep Q-Networks can become a useful tool for studying decentralized learning in multi-agent systems coping with high-dimensional environments.
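The core manipulation can be sketched as a single coefficient that interpolates the scoring agent's reward between fully competitive and fully cooperative play, with each agent running its own independent Q-learning update. This parameterization, and the tabular update shown instead of the full deep network, are illustrative assumptions rather than our exact implementation.

```python
def pong_rewards(scorer, rho):
    """Per-point rewards (left, right) when `scorer` puts the ball past
    the opponent. The conceding agent always receives -1; the scoring
    agent receives rho, swept from +1 (fully competitive) down to -1
    (fully cooperative: both agents lose when the ball leaves play).
    The single-coefficient scheme is an illustrative assumption."""
    return (rho, -1.0) if scorer == "left" else (-1.0, rho)

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One independent Q-learning step: each agent updates its own
    value table and treats the other agent as part of the environment."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# The left agent scores a point under the two extreme schemes.
print(pong_rewards("left", 1.0))   # competitive: scorer +1, opponent -1
print(pong_rewards("left", -1.0))  # cooperative: both agents penalized

# One update for the left agent after receiving its reward
# (state and action labels here are hypothetical placeholders).
Q = {}
q_update(Q, s="rally", a="up", r=1.0, s_next="terminal",
         actions=["up", "down", "stay"])
```

In the deep-learning setting, the tabular `Q` is replaced by a convolutional network over raw pixels, but the reward manipulation and the decentralized update structure are the same.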
- Learning to take others’ perspectives: training agents via reinforcement learning to develop a basic theory of mind
- Partial information decomposition: estimating who knows what
- Using Gromov-Wasserstein distances to explore sets of networks
- Synchronization phenomena
Brain Imaging Center, Goethe University, Frankfurt, Germany
Max-Planck Institute for Brain Research, Frankfurt, Germany
University of the Balearic Islands, Palma, Spain
Instituto de Fisica Interdisciplinar y Sistemas Complejos (IFISC), Palma, Spain
Dirk Oliver Theis
Institute of Computer Science, Tartu, Estonia
Vancouver General Hospital, Vancouver, Canada
Karolinska Institutet, Stockholm, Sweden
ETH Zurich, Switzerland