Research

Mission statement

Understanding the brain in health and disease is one of today’s biggest scientific challenges, with important societal implications. In particular, computational neuroscience seeks to understand how the brain transmits, processes, and stores the information that ultimately guides our behavior. Recent progress in machine learning algorithms and brain recording techniques has matured the fields of artificial and biological neural networks to a state in which each can inform the other. Our computational neuroscience lab focuses on exploiting these advances to study how information is represented in the human brain and to improve artificial intelligence systems.

To this end we pursue three interconnected lines of research: advanced analysis of brain recordings, cognitive artificial intelligence, and complex systems.


The questions we tackle and the methods we use are shaped by an interdisciplinary perspective. Our team consists of neuroscientists, computer scientists, mathematicians, physicists, and bioinformaticians, and we borrow any method or perspective from these fields that we find interesting and useful.


Projects


Data Analysis

  • Finding the dynamic neural correlates of deep visual processing

Fueled by developments in deep learning, computer vision has recently achieved spectacular improvements. At the same time, human vision still holds important lessons for computer systems seeking generalization and robustness. Despite these successes, it remains unclear how human and artificial neural networks for vision relate to each other. The goal of this project is to narrow this gap by transferring knowledge and computational strategies between biological and artificial vision systems. To that end, we capitalize on the unique properties of intracranial brain recordings to compare biological and deep learning systems for vision. In particular, we aim to determine the dynamic neural correlates of hierarchical visual processing. This project is performed in collaboration with Juan Vidal from the University of Grenoble (France), who provides a high-quality dataset obtained at the University Hospital of Lyon. The dataset consists of intracranial recordings with a signal-to-noise ratio 100 times better than scalp EEG and precise temporal and spatial resolution, covering 109 patients and a total of more than 12,000 electrodes. During the recordings, each patient was presented with 319 natural images, whose evoked brain responses we are analyzing in relation to deep learning models. This study will help us better characterize the similarities and differences between biological vision and deep neural networks.
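
One common way to relate the two kinds of system is representational similarity analysis (RSA): compare the geometry of stimulus-evoked responses in the brain with the geometry of a deep network's activations for the same images. The sketch below is a minimal, self-contained version on synthetic stand-in data (the array shapes, the "gamma power" features, and the model features are purely illustrative, not the project's actual pipeline):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda v: np.argsort(np.argsort(v)).astype(float)
    ra, rb = ranks(rdm_a[iu]), ranks(rdm_b[iu])
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Synthetic stand-in data: 319 stimuli x 50 features per system
rng = np.random.default_rng(0)
brain = rng.normal(size=(319, 50))           # e.g. per-electrode band power
model = brain + rng.normal(size=(319, 50))   # a partially similar model layer
print(rsa_score(rdm(brain), rdm(model)))     # similarity of the two geometries
```

Because the comparison happens at the level of stimulus-by-stimulus dissimilarity matrices, the two systems need not share the same number of electrodes or units.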

 

  • Decoding image categories from brain responses
Decoding different stimuli from the brain’s activity can reveal where, when, and what information is being represented. To this end, we apply classifiers to intracranial recordings from patients presented with images belonging to five categories (animals, tools, faces, scenes, and houses). Finding which electrodes, time windows, and frequency bands carry the most predictive power about the image categories reveals important information about how the brain represents images during vision.
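
As a simplified illustration of this decoding setup, the sketch below trains a nearest-centroid classifier on synthetic stand-in features (in the actual analyses the features would be, e.g., per-electrode band power in chosen time windows, and the classifier choice and cross-validation scheme would differ):

```python
import numpy as np

def nearest_centroid_decode(X_train, y_train, X_test):
    """Assign each test trial to the category with the closest mean pattern."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d2 = ((X_test[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return classes[d2.argmin(axis=1)]

rng = np.random.default_rng(1)
n_trials, n_feat = 300, 40
y = rng.integers(0, 5, size=n_trials)    # five categories: animals, tools, ...
X = rng.normal(size=(n_trials, n_feat))  # stand-in for band-power features
X[:, :5] += 2.0 * np.eye(5)[y]           # inject category signal in 5 features

half = n_trials // 2
pred = nearest_centroid_decode(X[:half], y[:half], X[half:])
acc = (pred == y[half:]).mean()
print(acc)                               # chance level would be 0.2
```

Repeating such a decoding analysis per electrode, time window, or frequency band is what localizes the predictive information in space, time, and frequency.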



  • Decoding rat position from hippocampal recordings using artificial neural networks
Navigation in complex environments is a basic survival skill in most animals. The discovery of place cells (neurons that activate at specific locations in the animal’s environment) in a brain region known as the hippocampus provided experimental support for the idea that this region acts as a kind of cognitive map used by the animal during navigation. However, it is not clear how the spatial information distributed across different neurons is actually read out. In this project we aim to predict the location of a rat moving in a 2D environment from the spike trains of tens of hippocampal neurons by training a recurrent artificial neural network (LSTM). This type of readout is neurally plausible and allows the use of a flexible temporal context of neuronal firing. By analyzing the artificial neural network itself, we can also discern which types of neurons and time windows best predict the animal’s location. This project is pursued in collaboration with the group of Caswell Barry at University College London.
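
The basic decoding problem can be sketched without any deep learning machinery. Below, simulated place cells fire around preferred locations along a random-walk trajectory, and a plain linear least-squares readout maps binned spike counts back to position. This is a deliberately simplified stand-in: the project itself trains an LSTM, which can additionally exploit temporal context across bins, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_bins = 30, 2000

# Simulated 2D trajectory: a slow random walk confined to the unit square
steps = rng.normal(scale=0.02, size=(n_bins, 2))
pos = np.clip(np.cumsum(steps, axis=0) + 0.5, 0.0, 1.0)

# Each simulated place cell fires around a preferred location (Gaussian field)
centres = rng.uniform(size=(n_cells, 2))
dist2 = ((pos[:, None, :] - centres[None]) ** 2).sum(axis=2)
spikes = rng.poisson(10.0 * np.exp(-dist2 / 0.02))   # counts per (bin, cell)

# Linear least-squares readout from spike counts to position
A = np.hstack([spikes, np.ones((n_bins, 1))])        # counts + intercept
train, test = slice(0, 1500), slice(1500, None)
w, *_ = np.linalg.lstsq(A[train], pos[train], rcond=None)
err = np.sqrt(((A[test] @ w - pos[test]) ** 2).sum(axis=1)).mean()
print(err)   # mean decoding error on held-out bins, in arena units
```

Replacing the linear map with a recurrent network keeps the same inputs and targets but lets the decoder integrate firing over a flexible temporal window, which is the property the project exploits.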



  • Oscillatory analysis of brain signals

Spectral analysis, that is, decomposing a signal into components at different frequency bands, is one of the most widely used characterizations of brain recordings. Even the first recordings of human EEG revealed rhythmic fluctuations, or neuronal oscillations, which have since been described in many brain areas and shown to change consistently under different stimuli, brain states, and pathologies. However, extreme care must be taken when passing from observations in different frequency bands to interpretations of oscillatory dynamics or neural mechanisms. In particular, our work collects a series of caveats and methodological suggestions for a sound analysis of neuronal oscillations and their putative interactions (also known as cross-frequency coupling).
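
A minimal example of the kind of spectral estimate involved: an averaged, windowed periodogram recovering a 10 Hz rhythm buried in drifting 1/f-type noise. The sampling rate and signal here are toy choices for illustration only.

```python
import numpy as np

fs = 500.0                                  # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of signal
rng = np.random.default_rng(3)

# Toy "EEG": a 10 Hz alpha-like rhythm buried in drifting 1/f-type noise
noise = np.cumsum(rng.normal(size=t.size))  # Brownian noise, ~1/f^2 spectrum
x = 5.0 * np.sin(2 * np.pi * 10.0 * t) + noise - noise.mean()

# Welch-style averaged periodogram over 1 s Hann-windowed segments
win = int(fs)
segs = x[: x.size // win * win].reshape(-1, win) * np.hanning(win)
psd = (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(win, 1 / fs)        # 1 Hz resolution

mask = freqs >= 5                           # skip the drifting low frequencies
peak = freqs[mask][np.argmax(psd[mask])]
print(peak)                                 # the 10 Hz rhythm stands out
```

Note the methodological point lurking even in this toy: a spectral peak only stands out after windowing and after discounting the 1/f background, which is exactly the kind of caveat the project documents.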


Cognitive Artificial Intelligence

  • Multiagent cooperation and competition using deep reinforcement learning

Cooperation and competition can evolve when multiple adaptive agents share a biological, social, or technological niche. In this work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multi-agent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong, we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior as the incentive to cooperate is increased. Finally, we show that learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. This work shows that Deep Q-Networks can become a useful tool for studying decentralized learning in multi-agent systems coping with high-dimensional environments.
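
The effect of the reward manipulation can be illustrated in a stripped-down tabular toy (the actual study uses Deep Q-Networks on raw Pong pixels; everything below, including the game and the parameter names, is an illustrative stand-in). A single parameter rho, the reward given to the agent that scores, switches the learned behavior from ending points to keeping the ball in play:

```python
import numpy as np

def train_agents(rho, episodes=5000, eps=0.1, lr=0.1, seed=0):
    """Two independent tabular Q-learners in a stateless 'rally' toy game.
    Actions: 0 = keep the ball in play, 1 = smash (ends the point).
    When a point ends, the scorer receives rho and the other agent -1.
    rho = +1 mimics classical competitive scoring; rho = -1 makes losing
    the ball costly for everyone, rewarding cooperative rallies."""
    rng = np.random.default_rng(seed)
    q = np.zeros((2, 2))                      # q[agent, action]
    for _ in range(episodes):
        greedy = q.argmax(axis=1)
        acts = [int(a) if rng.random() > eps else int(rng.integers(2))
                for a in greedy]              # epsilon-greedy action choice
        if acts[0] == 0 and acts[1] == 0:
            r = [0.0, 0.0]                    # the rally continues
        else:
            smashers = [i for i in (0, 1) if acts[i] == 1]
            scorer = smashers[rng.integers(len(smashers))]
            r = [-1.0, -1.0]
            r[scorer] = rho
        for i in (0, 1):                      # independent Q-updates
            q[i, acts[i]] += lr * (r[i] - q[i, acts[i]])
    return q

q_comp = train_agents(rho=1.0)    # competitive scheme: smashing pays off
q_coop = train_agents(rho=-1.0)   # cooperative scheme: rallying pays off
print(q_comp.argmax(axis=1), q_coop.argmax(axis=1))
```

Sweeping rho between these extremes produces the kind of progression from competitive to collaborative behavior that the full study characterizes in Pong.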




  • Learning to take others’ perspectives: training agents via reinforcement learning to develop a basic theory of mind
Navigating the complex social interactions within a group is one of the most difficult tasks that social animals face. In many primate species, success in reproduction and survival critically depends on the ability to predict the intentions of other members of the group and to form coalitions. Indeed, theories in anthropology and psychology suggest that an important factor in the explosion of human intelligence (as compared to other primates) was the development of a sophisticated theory of mind. Theory of mind is the ability to attribute distinct mental states (beliefs, intents, knowledge, …) to other individuals. In this project, we aim to teach agents via reinforcement learning to solve a perspective-taking task that requires the agent to consider the perceptual state of another.




Complex systems

  • Partial information decomposition: estimating who knows what
Mutual information quantifies the amount of information shared by two random variables. This measure has been extensively applied to quantify information flowing through natural and man-made communication channels. However, it has been argued that an information-theoretic description of computation (as opposed to mere communication) requires quantifying how the information that a set of input variables has about an output variable is distributed. In particular, such information could be provided individually (unique), redundantly (shared), or exclusively jointly (synergistic). The partial information decomposition (PID) is an axiomatic framework to define such information distributions. In this project, together with the group of Dirk Oliver Theis, we are developing a numerical estimator of PID and applying it to understand how information is distributed across the parts of different complex systems.
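
A classic motivating example for PID is XOR, a purely synergistic mechanism: each input alone carries zero information about the output, yet jointly the inputs determine it completely. This can be checked directly with empirical mutual information:

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in bits, from the empirical joint distribution of two
    discrete-valued arrays of equal length."""
    xs, ys = np.unique(x), np.unique(y)
    p = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
x1 = rng.integers(0, 2, 100000)
x2 = rng.integers(0, 2, 100000)
y = x1 ^ x2                                # XOR of two fair random bits
print(mutual_information(x1, y))           # ~0 bits: each input alone is blind
print(mutual_information(x2, y))           # ~0 bits
print(mutual_information(x1 * 2 + x2, y))  # ~1 bit: jointly they determine y
```

Mutual information alone cannot separate unique, redundant, and synergistic contributions in the general case; it is precisely this gap that the PID framework, and the estimator developed in this project, addresses.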


  • Using Gromov-Wasserstein distances to explore sets of networks
This project focuses on the application and implementation of the Gromov-Wasserstein distance, which allows one to compare objects represented as metric measure spaces. The current work builds on a series of papers by Facundo Memoli, who introduced a number of Gromov-Wasserstein-type distances. In particular, we focus on obtaining an efficient implementation of such distances and applying them to explore sets of complex networks. This is an ongoing project in collaboration with Dirk Oliver Theis and Victor Eguiluz.


  • Synchronization phenomena
Synchronization refers to the adjustment of the rhythms of oscillators due to their coupling, and it has been described in a variety of systems ranging from pendulums to neurons. The phenomenon has been extended to explain the coordination of chaotic oscillators, giving rise to different types of synchronization. Several ongoing projects in this area aim to describe the geometric and algebraic structure of the current notion of generalized synchronization.
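
The textbook illustration of synchronization through coupling is the Kuramoto model, in which oscillators with heterogeneous natural frequencies lock their phases once the coupling strength exceeds a critical value. A minimal numerical sketch (parameter values chosen only for illustration):

```python
import numpy as np

def kuramoto_order(n=50, coupling=2.0, steps=4000, dt=0.01, seed=5):
    """Euler integration of the Kuramoto model
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i),
    returning the order parameter r = |<exp(i*theta)>| in [0, 1]."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)            # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)     # random initial phases
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + (coupling / n) * np.sin(diff).sum(axis=1))
    return float(abs(np.exp(1j * theta).mean()))

print(kuramoto_order(coupling=0.1))   # weak coupling: phases stay incoherent
print(kuramoto_order(coupling=2.0))   # strong coupling: phases lock together
```

Generalized synchronization goes beyond this phase-locking picture to functional relationships between the states of coupled (possibly chaotic) systems, whose geometric and algebraic structure is what these projects study.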



Collaborators

Michael Wibral 
Brain Imaging Center, Goethe University, Frankfurt, Germany

Wolf Singer
Max-Planck Institute for Brain Research, Frankfurt, Germany

Claudio Mirasso
University of the Balearic Islands, Palma, Spain

Ingo Fischer
Instituto de Fisica Interdisciplinar y Sistemas Complejos (IFISC), Palma, Spain

Dirk Oliver Theis
Institute of Computer Science, Tartu, Estonia

Ajmal Zemmar
Vancouver General Hospital, Vancouver, Canada

Davit Bzhalava
Karolinska Institutet, Stockholm, Sweden

Juhan Aru
ETH, Zurich, Switzerland