Project acronym COGNIMUND
Project Cognitive Image Understanding: Image representations and Multimodal learning
Researcher (PI) Tinne Tuytelaars
Host Institution (HI) KATHOLIEKE UNIVERSITEIT LEUVEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary One of the primary and most appealing goals of computer vision is to automatically understand the content of images on a cognitive level. Ultimately we want computers to interpret images as we humans do, recognizing all the objects, scenes, and people, as well as their relations, as they appear in natural images or video. With this project, I want to advance the state of the art in this field in two directions, which I believe to be crucial for building the next generation of image understanding tools. First, novel, more robust yet descriptive image representations will be designed that incorporate the intrinsic structure of images. These should already go a long way towards removing irrelevant sources of variability while capturing the essence of the image content. I believe the importance of further research into image representations is currently underestimated within the research community, yet I claim this is a crucial step with many opportunities: good learning cannot easily make up for bad features. Second, weakly supervised methods to learn from multimodal input (especially the combination of images and text) will be investigated, making it possible to leverage the large amount of weak annotations available via the internet. This is essential if we want to scale the methods to a larger number of object categories (several hundreds instead of a few tens). As more data can be used for training, such weakly supervised methods might in the end even come on par with, or outperform, supervised schemes. Here we will call upon the latest results in semi-supervised learning, data mining, and computational linguistics.
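A classic example of the kind of structured image representation the summary argues for is the bag-of-visual-words model: local descriptors are quantized against a codebook of "visual words", and the image is summarized by a normalized histogram of word occurrences. The sketch below fixes the codebook by hand for illustration; in practice it would be learned from data (e.g., by k-means), and the function name and toy 2-D descriptors are ours, not the project's.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Orderless bag-of-visual-words representation of one image.

    Assigns each local descriptor to its nearest codeword (L2 distance)
    and returns the normalized occurrence histogram.
    """
    # Pairwise squared distances: (n_descriptors, n_words)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two images with the same objects in different layouts map to similar histograms, which is exactly the "removing irrelevant variability" property the summary is after.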
Max ERC Funding
1 538 380 €
Duration
Start date: 2010-02-01, End date: 2015-01-31
Project acronym COMPLEX REASON
Project The Parameterized Complexity of Reasoning Problems
Researcher (PI) Stefan Szeider
Host Institution (HI) TECHNISCHE UNIVERSITAET WIEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Reasoning, deriving conclusions from facts, is a fundamental task in Artificial Intelligence, arising in a wide range of applications from Robotics to Expert Systems. The aim of this project is to devise new efficient algorithms for real-world reasoning problems and to gain new insights into what makes a reasoning problem hard, and what makes it easy. As the key to novel and groundbreaking results we propose to study reasoning problems within the framework of Parameterized Complexity, a new and rapidly emerging field of Algorithms and Complexity. Parameterized Complexity takes into account those structural aspects of problem instances which are most significant for empirically observed problem hardness. Most of the considered reasoning problems are intractable in general, but the real-world context of their origin provides structural information that can be made accessible to algorithms in the form of parameters. This makes Parameterized Complexity an ideal setting for the analysis and efficient solution of these problems. A systematic study of the Parameterized Complexity of reasoning problems that covers theoretical and empirical aspects has so far been missing. This proposal sets out to do exactly this and therefore has great potential for groundbreaking new results. The proposed research aims at a significant impact on the research culture by setting the grounds for a closer cooperation between theorists and practitioners.
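The core idea of Parameterized Complexity, that exponential cost can be confined to a small parameter rather than the input size, is well illustrated by the textbook bounded-search-tree algorithm for Vertex Cover (a standard example from the field, not code from this project):

```python
def vertex_cover(edges, k):
    """Decide whether the graph has a vertex cover of size <= k.

    For any uncovered edge (u, v), at least one endpoint must be in the
    cover, so we branch on the two choices. The search tree has depth at
    most k, giving O(2^k * |E|) time: exponential only in the parameter k,
    i.e., fixed-parameter tractable.
    """
    if not edges:
        return True          # everything covered
    if k == 0:
        return False         # edges remain but budget exhausted
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]  # take u into the cover
    rest_v = [e for e in edges if v not in e]  # take v into the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

For instances whose parameter (here, the cover size) is small, which is what "structural information from the real-world context" provides, this is efficient even though the general problem is NP-complete.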
Max ERC Funding
1 421 130 €
Duration
Start date: 2010-01-01, End date: 2014-12-31
Project acronym CONVEXVISION
Project Convex Optimization Methods for Computer Vision and Image Analysis
Researcher (PI) Daniel Cremers
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Optimization methods have become an established paradigm to address most Computer Vision challenges including the
reconstruction of three-dimensional objects from multiple images, or the tracking of a deformable shape over time. Yet, it has
been largely overlooked that optimization approaches are practically useless if they do not come with efficient algorithms to
compute minimizers of respective energies. Most existing formulations give rise to non-convex energies. As a consequence,
solutions highly depend on the choice of minimization scheme and implementational details (initialization, time step sizes, etc.), with
little or no guarantees regarding the quality of computed solutions and their robustness to perturbations of the input data.
In the proposed research project, we plan to develop optimization methods for Computer Vision which make it possible to
efficiently compute globally optimal solutions. Preliminary results indicate that this will drastically increase the power of optimization
methods and their applicability in a substantially broader context. Specifically we will focus on three lines of research: 1) We
will develop convex formulations for a variety of challenges. While convex formulations are currently being developed for
low-level problems such as image segmentation, our main effort will focus on carrying convex optimization to higher level
problems of image understanding and scene interpretation. 2) We will investigate alternative strategies of global optimization
by means of discrete graph theoretic methods. We will characterize advantages and drawbacks of continuous and discrete
methods and thereby develop novel algorithms combining the advantages of both approaches. 3) We will go beyond convex
formulations, developing relaxation schemes that compute near-optimal solutions for problems that cannot be expressed by
convex functionals.
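The convex-relaxation idea can be sketched on a toy 1-D two-region segmentation. The binary label per pixel is relaxed to u in [0, 1]; with a data term linear in u and a quadratic smoothness term (a simplification of the total-variation regularizers typically used in this line of work), the relaxed problem is convex, so projected gradient descent reaches a global optimum independent of initialization, and thresholding recovers binary labels. All constants and names below are illustrative, not the project's method.

```python
import numpy as np

def segment_relaxed(f, c1, c2, lam=0.1, lr=0.05, steps=300):
    """Convex relaxation of binary segmentation of a 1-D signal f.

    Label 1 models intensity c1, label 0 models c2. The relaxed energy
    E(u) = sum_i a_i u_i + lam * sum_i (u_{i+1} - u_i)^2, u in [0,1],
    is convex, so the computed minimizer is global.
    """
    a = (f - c1) ** 2 - (f - c2) ** 2   # per-pixel cost difference
    u = np.full_like(f, 0.5, dtype=float)
    for _ in range(steps):
        d = np.diff(u)
        grad_smooth = np.zeros_like(u)   # gradient of the quadratic term
        grad_smooth[:-1] -= 2 * d
        grad_smooth[1:] += 2 * d
        u = np.clip(u - lr * (a + lam * grad_smooth), 0.0, 1.0)
    return u > 0.5                       # threshold the relaxed solution
```

Because the relaxed energy is convex, rerunning with any initialization of u yields the same labeling, which is precisely the robustness the summary says non-convex formulations lack.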
Max ERC Funding
1 985 400 €
Duration
Start date: 2010-09-01, End date: 2015-08-31
Project acronym DIADEM
Project Domain-centric Intelligent Automated Data Extraction Methodology
Researcher (PI) Georg Gottlob
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary This proposal is in the area of automated web data extraction and web data management. The aim of our project is to provide the logical, methodological, and algorithmic foundations for the knowledge-based extraction of structured data from websites belonging to specific domains, such as estate agents, restaurants, travel agencies, car dealers, and so on. One core part of this will be a comprehensive multi-dimensional logical data model that will be used to simultaneously represent the content of a large website, its structure, inferred user-interaction patterns, and all meta-information and knowledge (factual and rule-based) that is necessary to automatically perform the desired extraction tasks. I envision that, based on these new foundations, we will be able to build extremely powerful systems that autonomously explore websites of a given domain, understand their structure, and extract and output richly structured data in formats such as XML or RDF. We aim at systems that take as input the URL of a website in a given domain, automatically explore this site, and deliver as output a structured data set containing all the relevant information present on that site. As an example, imagine a system specialized in the real-estate domain that receives as input the URL of any real-estate agent, explores the site automatically, and outputs richly structured records of all properties currently advertised for sale or for rent on the many web pages of this site. We plan to develop and implement at least two such systems for two different domains, including the one mentioned. The breakthrough in automatic data extraction that we are striving for would enable a quantum leap for two interrelated technologies which are the hottest next topics in web search: vertical search, that is, web search in specialized domains, and object search, that is, the search for web data objects rather than web pages.
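A deliberately tiny sketch of domain-centric extraction: encode domain knowledge (here, hypothetical hand-written patterns for the real-estate domain) and map free-text listings to structured records. The actual DIADEM system reasons over page structure, interaction patterns, and ontologies rather than flat regexes; field names and patterns below are our illustration only.

```python
import re

# Hypothetical domain knowledge for the real-estate domain:
# attribute names paired with extraction patterns.
PATTERNS = {
    "price": re.compile(r"£([\d,]+)"),
    "bedrooms": re.compile(r"(\d+)\s+bed"),
    "type": re.compile(r"\b(flat|house|bungalow)\b", re.IGNORECASE),
}

def extract_listing(text):
    """Turn one free-text property listing into a structured record.

    Fields the domain knowledge cannot locate are set to None.
    """
    record = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        record[field] = match.group(1) if match else None
    return record
```

Emitting such records for every listing page of a site is, in miniature, the "structured data set containing all the relevant information" that the summary describes; the hard part the project targets is discovering the patterns automatically instead of writing them by hand.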
Max ERC Funding
2 402 846 €
Duration
Start date: 2010-04-01, End date: 2015-03-31
Project acronym E-SWARM
Project Engineering Swarm Intelligence Systems
Researcher (PI) Marco Dorigo
Host Institution (HI) UNIVERSITE LIBRE DE BRUXELLES
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary Swarm intelligence is the discipline that deals with natural and artificial systems composed of many individuals that coordinate using decentralized control and self-organization. In this project, we focus on the design and implementation of artificial swarm intelligence systems for the solution of complex problems. Our current understanding of how to use swarms of artificial agents largely relies on rules of thumb and intuition based on the experience of individual researchers. This is not sufficient for us to design swarm intelligence systems at the level of complexity required by many real-world applications, or to accurately predict the behavior of the systems we design. The goal of E-SWARM is to develop a rigorous engineering methodology for the design and implementation of artificial swarm intelligence systems. We believe that in the future, swarm intelligence will be an important tool for researchers and engineers interested in solving certain classes of complex problems. To build the foundations of this discipline and to develop an appropriate methodology, we will proceed in parallel both at an abstract level and by tackling a number of challenging problems in selected research domains. The research domains we have chosen are optimization, robotics, networks, and data mining.
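Particle swarm optimization is a standard example of the swarm-intelligence recipe in the optimization domain the summary lists: many simple agents, no central controller, coordination only through each particle's memory and the swarm's shared best. This is a generic textbook sketch (the inertia and attraction coefficients are common default values, not the project's):

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=200, seed=0):
    """Minimize `objective` with a minimal particle swarm.

    Each particle's velocity blends inertia, attraction to its own best
    position, and attraction to the swarm's best: decentralized
    self-organization, with no agent knowing the global landscape.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

The project's point is precisely that choices like the three coefficients above are today made by rule of thumb; an engineering methodology would replace that with principled design and predictable behavior.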
Max ERC Funding
2 016 000 €
Duration
Start date: 2010-06-01, End date: 2015-05-31
Project acronym END2ENDSECURITY
Project Practical design and analysis of certifiably secure protocols - theory and tools for end-to-end security
Researcher (PI) Michael Backes
Host Institution (HI) UNIVERSITAT DES SAARLANDES
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary State-of-the-art technologies struggle to keep pace with possible security vulnerabilities. The lack of a consistent methodology and tools for analyzing security protocols throughout the various stages of their design hinders the detection and prevention of vulnerabilities and comprehensive protocol analysis. Moreover, state-of-the-art verification tools typically only address particular narrow aspects of a protocol's security and require expert knowledge; hence they do not help protocol designers. The challenge is to guarantee end-to-end security - from high-level specifications of the desired security requirements, to a specification of a security protocol that relies on innovative cryptographic primitives, to a secure, executable program. This proposal addresses key steps of this challenge: our goal is to develop a general methodology for automatically devising security protocols and programs based on high-level specifications of selected security requirements and protocol tasks. This includes developing a user-friendly interface for specifying the protocol's intended behavior and high-level security requirements, devising suitable abstract protocols, selecting suitable cryptographic instantiations, and generating a secure, streamlined implementation. This methodology will also include novel verification techniques that complement all design phases along with a theory which propagates verification results from phase to phase with the ultimate goal of certified end-to-end security. This includes developing type systems for analyzing abstract protocols, a general framework for conducting cryptographic proofs, and techniques for reasoning about executable code. The tools we develop should be automated and usable by non-experts.
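One building block behind automated protocol analysis of the kind the proposal describes is symbolic (Dolev-Yao-style) deduction: given what an attacker has observed, compute what messages it can derive. The term encoding and two-phase algorithm below are a common textbook simplification, not the project's tools:

```python
def derivable(term, knowledge):
    """Dolev-Yao-style check: can an attacker derive `term`?

    Terms are atoms (strings) or tuples ('pair', a, b) / ('enc', msg, key).
    Phase 1 closes the knowledge under decomposition (splitting pairs,
    decrypting when the key is known); phase 2 checks composition
    (a compound is derivable if all its parts are).
    """
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple):
                if t[0] == 'pair':
                    new = {t[1], t[2]}
                elif t[0] == 'enc' and t[2] in known:
                    new = {t[1]}          # key known, so decrypt
                else:
                    new = set()
                if not new <= known:
                    known |= new
                    changed = True

    def derive(t):
        if t in known:
            return True
        if isinstance(t, tuple) and t[0] in ('pair', 'enc'):
            return derive(t[1]) and derive(t[2])
        return False

    return derive(term)
```

A real verifier layers equational theories, fresh nonces, and unbounded sessions on top of this core, which is where the expert knowledge the summary wants to eliminate currently comes in.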
Max ERC Funding
1 074 807 €
Duration
Start date: 2010-02-01, End date: 2013-10-31
Project acronym EXPLORERS
Project EXPLORERS Exploring epigenetic robotics: raising intelligence in machines
Researcher (PI) Pierre-Yves Oudeyer
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary In spite of considerable work in artificial intelligence, machine learning, and pattern recognition over the past 50 years, we have no machine capable of adapting to the physical and social environment with the flexibility, robustness and versatility of a 6-month-old human child. Instead of trying to simulate directly the adult's intelligence, EXPLORERS proposes to focus on the developmental principles that give rise to intelligence in infants by re-implementing them in machines. Framed in the developmental/epigenetic robotics research agenda, and grounded in research in developmental psychology, its main target is to build robotic machines capable of autonomously learning and re-using a variety of skills and know-how that were not specified at design time, with initially limited knowledge of the body and of the environment in which they will operate. This implies several fundamental issues: How can a robot discover its body and its relationships with the physical and social environment? How can it learn new skills without the intervention of an engineer? What internal motivations shall guide its exploration of vast spaces of skills? Can it learn through natural social interactions with humans? How should the learnt skills be represented, and how can they be re-used? EXPLORERS attacks those questions directly by proposing a series of fundamental scientific and technological advances, including computational intrinsic motivation systems for learning basic sensorimotor skills that are reused for the grounded acquisition of the meaning of new words. This project not only addresses fundamental scientific questions, but also relates to important societal issues: personal home robots are bound to become part of everyday life in the 21st century, in particular as helpful social companions in an aging society. EXPLORERS' objectives converge to the challenges implied by this vision: robots will have to be able to adapt and learn new skills in the unknown homes of users who are not engineers.
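The "computational intrinsic motivation" mentioned in the summary is often operationalized as learning progress: the agent practices whichever activity has recently reduced its prediction error the most, so it neither dwells on the already mastered nor on the unlearnable. The toy below uses three hypothetical error curves of our own choosing to make that dynamic visible; it is a caricature of the principle, not the project's architecture:

```python
def intrinsically_motivated_practice(steps=50):
    """Greedy learning-progress-driven choice among three toy activities.

    Errors are deterministic functions of practice count n (illustrative):
    one activity is already mastered, one improves with practice, one is
    pure noise. Progress = recent drop in error; the agent focuses on the
    learnable activity because only there is progress positive.
    """
    error = {
        'mastered': lambda n: 0.0,            # nothing left to learn
        'learnable': lambda n: 1.0 / (1 + n),  # improves with practice
        'hopeless': lambda n: 1.0,            # no progress possible
    }
    counts = {a: 0 for a in error}
    progress = {}
    for a, err in error.items():              # bootstrap: try each twice
        progress[a] = err(0) - err(1)
        counts[a] = 2
    for _ in range(steps):
        a = max(progress, key=progress.get)   # highest recent progress
        n = counts[a]
        progress[a] = error[a](n - 1) - error[a](n)
        counts[a] += 1
    return counts
```

This is the sense in which internal motivations can "guide exploration of vast spaces of skills" without an engineer specifying a curriculum.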
Max ERC Funding
1 572 215 €
Duration
Start date: 2009-12-01, End date: 2015-05-31
Project acronym LAST
Project Large Scale Privacy-Preserving Technology in the Digital World - Infrastructure and Applications
Researcher (PI) Yehuda Lindell
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2009-StG
Summary Data mining provides large benefits to the commercial, government and homeland security sectors, but the aggregation and storage of huge amounts of data about citizens inevitably leads to erosion of privacy. To achieve the benefits that data mining has to offer, while at the same time enhancing privacy, we need technological solutions that simultaneously enable data mining and preserve privacy. The current state of the art has focused on providing privacy-preserving solutions for very specific problems, and has thus taken a local perspective. Although this is an important first step in the development of privacy-preserving solutions, it is time for a global perspective on the problem that aims to provide fully integrated solutions. Our goal in this research is to study privacy and develop comprehensive solutions for enhancing it in the digital era. Our proposed research project includes foundational research on privacy, an infrastructure level for achieving anonymity over the Internet, key cryptographic tools for constructing privacy-preserving protocols, and the development of large-scale applications that are built on top of all of the above. The novelty of our research is in our focus on fundamental issues towards comprehensive solutions that are aimed at large-scale data sources. The project's outcome will allow migration from local solutions for specific problems suited to small- to medium-scale data sources to comprehensive privacy-preserving database and data mining solutions for large-scale data warehouses. Achieving this great challenge carries immense scientific, technological and societal rewards.
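The simplest cryptographic tool in the privacy-preserving-protocol toolbox the summary mentions is additive secret sharing: each party splits its private value into random shares so that any proper subset of shares reveals nothing, yet the sum of all shares reveals exactly the aggregate. A minimal sketch (function names and the single-process simulation are ours; real deployments distribute the shares across parties):

```python
import random

MODULUS = 2 ** 31 - 1  # illustrative prime modulus

def share(value, n_parties, modulus=MODULUS):
    """Split `value` into n additive shares mod p.

    The first n-1 shares are uniformly random, so any n-1 shares are
    statistically independent of the value.
    """
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def secure_sum(values, modulus=MODULUS):
    """Compute the sum of private values from shares only.

    Each party shares its value with all others; summing the shares
    reveals the total and nothing else, a minimal privacy-preserving
    aggregate of the kind data mining can be built on.
    """
    n = len(values)
    all_shares = [share(v, n, modulus) for v in values]
    column_sums = [sum(col) % modulus for col in zip(*all_shares)]
    return sum(column_sums) % modulus
```

Scaling such primitives from one aggregate query to full database and data mining workloads over large data warehouses is exactly the local-to-global gap the project targets.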
Max ERC Funding
1 921 316 €
Duration
Start date: 2009-10-01, End date: 2014-09-30
Project acronym MATHFOR
Project Formalization of Constructive Mathematics
Researcher (PI) Thierry Coquand
Host Institution (HI) GOETEBORGS UNIVERSITET
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary The general theme is to explore the connections between reasoning and computation in mathematics. There are two main research directions. The first is a reformulation of Hilbert's program, using ideas from formal, or point-free, topology. We have shown, with multiple examples, that this allows a partial realization of this program in commutative algebra, and a new way to formulate constructive mathematics. The second explores the computational content of proofs using type theory and the Curry-Howard correspondence between proofs and programs. Type theory allows us to represent constructive mathematics in a formal way, and provides key insights for the design of proof systems that help in the analysis of the logical structure of mathematical proofs. The interest of this program is well illustrated by the recent work of G. Gonthier on the formalization of the four color theorem.
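The Curry-Howard correspondence the summary invokes can be seen in a one-line example: a proposition is a type, and a proof is a program inhabiting that type. A sketch in Lean 4 syntax (the theorem name is ours):

```lean
-- Proofs as programs: the function that swaps the components of a pair
-- is, read through Curry-Howard, a proof that conjunction is commutative.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```

Because the proof term is literally a program, type-checking it is proof-checking, which is the mechanism proof assistants such as the one used for the four color theorem rely on.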
Max ERC Funding
1 912 288 €
Duration
Start date: 2010-04-01, End date: 2015-03-31
Project acronym MICRONANO
Project Modeling Brain Circuitry using Scales Ranging from Micrometer to Nanometer
Researcher (PI) Pascal Fua
Host Institution (HI) ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
Call Details Advanced Grant (AdG), PE6, ERC-2009-AdG
Summary If we are ever to unravel the mysteries of brain function at its most fundamental level, we will need a precise understanding of how its component neurons connect to each other. Furthermore, given the many recent advances in genetic engineering, viral targeting, and immunohistochemical labeling of specific cellular structures, there is a growing need for automated quantitative assessment of neuron morphology and connectivity. Electron microscopes can now provide the nanometer resolution that is needed to image synapses, and therefore connections, while light microscopes operate at the micrometer resolution required to model the 3D structure of the dendritic network. Since both the arborescence and the connections are integral parts of the brain's wiring diagram, combining these two modalities is critically important. In fact, these microscopes now routinely produce high-resolution imagery in such large quantities that the bottleneck becomes automated processing and interpretation, which is needed for such data to be exploited to its full potential. We will therefore use our Computer Vision expertise to provide not only the tools needed to process images acquired with a specific modality, but also those required to create an integrated representation from all available modalities. This is a radical departure from earlier approaches to applying Computer Vision techniques in this field, which have tended to focus on narrow problems. State-of-the-art methods have not reached the level of reliability and integration that would allow automated processing and interpretation of the massive amounts of data required for a true leap in our understanding of how the brain works. In other words, we cannot yet exploit the full potential of our imaging technology, and that is what we intend to change.
Max ERC Funding
2 495 982 €
Duration
Start date: 2010-04-01, End date: 2016-03-31