Project acronym AMPLIFY
Project Amplifying Human Perception Through Interactive Digital Technologies
Researcher (PI) Albrecht Schmidt
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Current technical sensor systems offer capabilities that are superior to human perception. Cameras can capture a spectrum that is wider than visible light, high-speed cameras can show movements that are invisible to the human eye, and directional microphones can pick up sounds at long distances. The vision of this project is to lay a foundation for the creation of digital technologies that provide novel sensory experiences and new perceptual capabilities for humans that are natural and intuitive to use. In a first step, the project will assess the feasibility of creating artificial human senses that provide new perceptual channels to the human mind, without increasing the experienced cognitive load. A particular focus is on creating intuitive and natural control mechanisms for amplified senses using eye gaze, muscle activity, and brain signals. Through the creation of a prototype that provides mildly unpleasant stimulations in response to perceived information, the feasibility of implementing an artificial reflex will be experimentally explored. The project will quantify the effectiveness of new senses and artificial perceptual aids compared to the baseline of unaugmented perception. The overall objective is to systematically research, explore, and model new means for increasing the human intake of information in order to lay the foundation for new and improved human senses enabled through digital technologies and to enable artificial reflexes. The ground-breaking contributions of this project are (1) to demonstrate the feasibility of reliably implementing amplified senses and new perceptual capabilities, (2) to prove the possibility of creating an artificial reflex, (3) to provide an example implementation of amplified cognition that is empirically validated, and (4) to develop models, concepts, components, and platforms that will enable and ease the creation of interactive systems that measurably increase human perceptual capabilities.
Max ERC Funding
1 925 250 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym APEG
Project Algorithmic Performance Guarantees: Foundations and Applications
Researcher (PI) Susanne ALBERS
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary Optimization problems are ubiquitous in computer science. Almost every problem involves the optimization of some objective function. However, many of these problems cannot be solved to optimality. Therefore, algorithms that achieve provably good performance guarantees are of immense importance. Considerable progress has already been made, but great challenges remain: Some fundamental problems are not well understood. Moreover, for central problems arising in new applications, no solutions are known at all.
The goal of APEG is to significantly advance the state of the art on algorithmic performance guarantees. Specifically, the project has two missions: First, it will develop new algorithmic techniques, breaking new ground in the areas of online algorithms, approximation algorithms and algorithmic game theory. Second, it will apply these techniques to solve fundamental problems that are central in these algorithmic disciplines. APEG will attack long-standing open problems, some of which have been unresolved for several decades. Furthermore, it will formulate and investigate new algorithmic problems that arise in modern applications. The research agenda encompasses a broad spectrum of classical and timely topics including (a) resource allocation in computer systems, (b) data structuring, (c) graph problems, with relations to Internet advertising, (d) complex networks and (e) massively parallel systems. In addition to basic optimization objectives, the project will also study the new performance metric of energy minimization in computer systems.
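To make the notion of a provable performance guarantee concrete (this example is ours, not part of the proposal), the sketch below evaluates the classic break-even strategy for the ski-rental problem, a standard textbook online algorithm whose cost is provably at most twice the optimal offline cost; all function names and numbers are illustrative.

```python
# Textbook illustration (not from the proposal): competitive analysis of the
# classic ski-rental problem. The break-even strategy -- rent until the total
# rent paid would reach the purchase price, then buy -- costs at most twice
# the optimal offline cost on every input, i.e. it is 2-competitive.

def break_even_cost(days_skied: int, buy_price: int, rent_price: int = 1) -> int:
    """Cost of renting until the rent paid would reach buy_price, then buying."""
    threshold = buy_price // rent_price            # day on which we buy
    if days_skied < threshold:
        return days_skied * rent_price             # season ended before buying
    return (threshold - 1) * rent_price + buy_price

def offline_optimum(days_skied: int, buy_price: int, rent_price: int = 1) -> int:
    """Optimal cost when the number of ski days is known in advance."""
    return min(days_skied * rent_price, buy_price)

if __name__ == "__main__":
    B = 10
    worst = max(break_even_cost(d, B) / offline_optimum(d, B) for d in range(1, 100))
    print(f"worst observed competitive ratio: {worst:.2f}")   # always <= 2.0
```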
Overall, APEG pursues cutting-edge algorithms research, focusing on both foundational problems and applications. Any progress promises to be a breakthrough or significant contribution.
Max ERC Funding
2 404 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym ARCA
Project Analysis and Representation of Complex Activities in Videos
Researcher (PI) Juergen Gall
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of the project is to automatically analyse human activities observed in videos. Any solution to this problem will allow the development of novel applications. It could be used to create short videos that summarize daily activities to support patients suffering from Alzheimer's disease. It could also be used for education, e.g., by providing a video analysis for a trainee in the hospital that shows if the tasks have been correctly executed.
The analysis of complex activities in videos, however, is very challenging since activities vary in temporal duration between minutes and hours, involve interactions with several objects that change their appearance and shape, e.g., food during cooking, and are composed of many sub-activities, which can happen at the same time or in various orders.
While the majority of recent works in action recognition focuses on developing better feature encoding techniques for classifying sub-activities in short video clips of a few seconds, this project moves forward and aims to develop a higher-level representation of complex activities to overcome the limitations of current approaches. This includes the handling of large time variations and the ability to recognize and locate complex activities in videos. To this end, we aim to develop a unified model that provides detailed information about the activities and sub-activities in terms of time and spatial location, as well as the involved poses and motions, objects and their transformations.
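Purely as an illustrative sketch of what such a unified output might look like, and not the project's actual model, the hierarchy below represents an activity as a labelled time span with a spatial location, involved objects and nested sub-activities; all class names, fields and values are hypothetical.

```python
# Hypothetical sketch of a hierarchical activity representation: an activity is
# a labelled time span with a spatial region, involved objects, and sub-activities.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ActivitySegment:
    label: str                                  # e.g. "prepare coffee"
    t_start: float                              # start time in seconds
    t_end: float                                # end time in seconds
    bbox: Tuple[int, int, int, int]             # spatial location (x, y, w, h)
    objects: List[str] = field(default_factory=list)
    sub_activities: List["ActivitySegment"] = field(default_factory=list)

cooking = ActivitySegment(
    "prepare coffee", 0.0, 95.0, (120, 80, 200, 260), ["kettle", "cup"],
    sub_activities=[
        ActivitySegment("boil water", 0.0, 60.0, (120, 80, 120, 160), ["kettle"]),
        ActivitySegment("pour water", 60.0, 95.0, (180, 100, 140, 180), ["kettle", "cup"]),
    ],
)
```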
Another aspect of the project is to learn a representation from videos that is not tied to a specific source of videos or limited to a specific application. Instead, we aim to learn a representation that is invariant to a perspective change, e.g., from a third-person perspective to an egocentric perspective, and can be applied to various modalities like videos or depth data without the need to collect massive training data for all modalities. In other words, we aim to learn the essence of activities.
Max ERC Funding
1 499 875 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym BIGCODE
Project Learning from Big Code: Probabilistic Models, Analysis and Synthesis
Researcher (PI) Martin Vechev
Host Institution (HI) EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of this proposal is to fundamentally change the way we build and reason about software. We aim to develop new kinds of statistical programming systems that provide probabilistically likely solutions to tasks that are difficult or impossible to solve with traditional approaches.
These statistical programming systems will be based on probabilistic models of massive codebases (also known as "Big Code") built via a combination of advanced programming languages and powerful machine learning and natural language processing techniques. To solve a particular challenge, a statistical programming system will query a probabilistic model, compute the most likely predictions, and present those to the developer.
Based on probabilistic models of "Big Code", we propose to investigate new statistical techniques in the context of three fundamental research directions: i) statistical program synthesis, where we develop techniques that automatically synthesize and predict new programs; ii) statistical prediction of program properties, where we develop new techniques that can predict important facts (e.g., types) about programs; and iii) statistical translation of programs, where we investigate new techniques for translating programs (e.g., from one programming language to another, or to a natural language).
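As a toy illustration of the general workflow of querying a probabilistic model learned from a code corpus and returning its most likely prediction (the proposal envisages far richer models combined with program analysis; this example is ours), consider a simple bigram count model over code tokens:

```python
# Toy illustration of "query a probabilistic model of code, return the most
# likely prediction": a bigram count model over code tokens. The real systems
# described in the proposal combine program analysis with much richer models.
from collections import Counter, defaultdict

corpus = [
    "for i in range ( n ) :",
    "for j in range ( m ) :",
    "for line in open ( path ) :",
]

bigrams = defaultdict(Counter)
for snippet in corpus:
    tokens = snippet.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Most likely token to follow `token` under the toy model."""
    return bigrams[token].most_common(1)[0][0] if bigrams[token] else "<unk>"

print(predict_next("in"))      # 'range' (2 of 3 occurrences in the corpus)
print(predict_next("range"))   # '('
```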
We believe the research direction outlined in this interdisciplinary proposal opens a new and exciting area of computer science. This area will combine sophisticated statistical learning and advanced programming language techniques for building the next-generation statistical programming systems.
We expect the results of this proposal to have an immediate impact upon millions of developers worldwide, triggering a paradigm shift in the way tomorrow's software is built, as well as a long-lasting impact on scientific fields such as machine learning, natural language processing, programming languages and software engineering.
Max ERC Funding
1 500 000 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym BroadSem
Project Induction of Broad-Coverage Semantic Parsers
Researcher (PI) Ivan Titov
Host Institution (HI) THE UNIVERSITY OF EDINBURGH
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary In the last one or two decades, language technology has achieved a number of important successes, for example, producing functional machine translation systems and beating humans in quiz games. The key bottleneck which prevents further progress in these and many other natural language processing (NLP) applications (e.g., text summarization, information retrieval, opinion mining, dialog and tutoring systems) is the lack of accurate methods for producing meaning representations of texts. Accurately predicting such meaning representations on an open domain with an automatic parser is a challenging and unsolved problem, primarily because of language variability and ambiguity. The reason for the unsatisfactory performance is reliance on supervised learning (learning from annotated resources), with the amounts of annotation required for accurate open-domain parsing exceeding what is practically feasible. Moreover, representations defined in these resources typically do not provide abstractions suitable for reasoning.
In this project, we will induce semantic representations from large amounts of unannotated data (i.e. text which has not been labeled by humans) while guided by information contained in human-annotated data and other forms of linguistic knowledge. This will allow us to scale our approach to many domains and across languages. We will specialize meaning representations for reasoning by modeling relations (e.g., facts) appearing across sentences in texts (document-level modeling), across different texts, and across texts and knowledge bases. Learning to predict this linked data is closely related to learning to reason, including learning the notions of semantic equivalence and entailment. We will jointly induce semantic parsers (e.g., log-linear feature-rich models) and reasoning models (latent factor models) relying on this data, thus, ensuring that the semantic representations are informative for applications requiring reasoning.
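As a hedged illustration of the kind of latent factor model mentioned above (not necessarily the formulation the project will adopt), the sketch below scores (subject, relation, object) facts with a standard diagonal bilinear form over entity and relation embeddings; the embeddings are random placeholders rather than learned parameters.

```python
# Illustrative sketch of a latent factor model over (subject, relation, object)
# facts: each entity and relation gets a low-dimensional embedding, and a fact
# is scored with a diagonal bilinear (DistMult-style) form. Embeddings here are
# random placeholders; in practice they would be learned from linked data.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {name: rng.normal(size=dim) for name in ["Edinburgh", "Scotland", "UK"]}
relations = {name: rng.normal(size=dim) for name in ["located_in", "capital_of"]}

def score(subj: str, rel: str, obj: str) -> float:
    """Higher score = the model considers the fact more plausible."""
    return float(np.sum(entities[subj] * relations[rel] * entities[obj]))

print(score("Edinburgh", "located_in", "Scotland"))
print(score("Scotland", "located_in", "Edinburgh"))  # identical: this form is symmetric
```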
Max ERC Funding
1 457 185 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym BUNGEE-TOOLS
Project Building Next-Generation Computational Tools for High Resolution Neuroimaging Studies
Researcher (PI) Juan Eugenio Iglesias
Host Institution (HI) UNIVERSITY COLLEGE LONDON
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Recent advances in magnetic resonance (MR) acquisition technology are providing us with images of the human brain of increasing detail and resolution. While these images hold promise to greatly increase our understanding of such a complex organ, the neuroimaging community relies on tools (e.g. SPM, FSL, FreeSurfer) which, being over a decade old, were designed to work at much lower resolutions. These tools do not consider brain substructures that are visible in present-day scans, and this inability to capitalize on the vast improvement of MR is hampering progress in the neuroimaging field.
In this ambitious project, which lies at the nexus of medical histology, neuroscience, biomedical imaging, computer vision and statistics, we propose to build a set of next-generation computational tools that will enable neuroimaging studies to take full advantage of the increased resolution of modern MR technology. The core of the tools will be an ultra-high resolution probabilistic atlas of the human brain, built upon multimodal data combining histology and ex vivo MR. The resulting atlas will be used to analyze in vivo brain MR scans, which will require the development of Bayesian segmentation methods beyond the state of the art.
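To illustrate the Bayesian segmentation idea at its simplest (a sketch of our own, not the project's method, with invented numbers), the code below combines an atlas prior with a Gaussian intensity likelihood to obtain a voxel-wise posterior over tissue labels:

```python
# Minimal sketch of atlas-based Bayesian segmentation at a single voxel: the
# posterior label probability is proportional to the atlas prior times a
# Gaussian intensity likelihood. All labels and numbers are illustrative.
import numpy as np

labels = ["CSF", "grey matter", "white matter"]
atlas_prior = np.array([0.10, 0.55, 0.35])     # prior from a probabilistic atlas
means = np.array([30.0, 80.0, 120.0])          # per-label intensity means
stds = np.array([10.0, 12.0, 10.0])            # per-label intensity std devs

def posterior(intensity: float) -> np.ndarray:
    likelihood = np.exp(-0.5 * ((intensity - means) / stds) ** 2) / stds
    unnormalized = atlas_prior * likelihood
    return unnormalized / unnormalized.sum()

p = posterior(95.0)
print(dict(zip(labels, np.round(p, 3))))       # most mass on grey/white matter
```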
The developed tools, which will be made freely available to the scientific community, will enable the analysis of MR data at a superior level of structural detail, opening completely new opportunities of research in neuroscience. Therefore, we expect the tools to have a tremendous impact on the quest to understand the human brain (in health and in disease), and ultimately on public health and the economy.
Max ERC Funding
1 450 075 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CHAMELEON
Project Intuitive editing of visual appearance from real-world datasets
Researcher (PI) Diego Gutierrez Pérez
Host Institution (HI) UNIVERSIDAD DE ZARAGOZA
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Computer-generated imagery is now ubiquitous in our society, spanning fields such as games and movies, architecture, engineering, or virtual prototyping, while also helping create novel ones such as computational materials. With the increase in computational power and the improvement of acquisition techniques, there has been a paradigm shift in the field towards data-driven techniques, which has yielded an unprecedented level of realism in visual appearance. Unfortunately, this leads to a series of problems, identified in this proposal: First, there is a disconnect between the mathematical representation of the data and any meaningful parameters that humans understand; the captured data is machine-friendly, but not human-friendly. Second, the many different acquisition systems lead to heterogeneous formats and very large datasets. And third, real-world appearance functions are usually nonlinear and high-dimensional. As a result, visual appearance datasets are increasingly unfit for editing operations, which limits the creative process for scientists, engineers, artists and practitioners in general. There is an immense gap between the complexity, realism and richness of the captured data, and the flexibility to edit such data.
We believe that the current research path leads to a fragmented space of isolated solutions, each tailored to a particular dataset and problem. We propose a research plan at the theoretical, algorithmic and application levels, putting the user at the core. We will learn key relevant appearance features in terms humans understand, from which intuitive, predictable editing spaces, algorithms, and workflows will be defined. In order to ensure usability and foster creativity, we will also extend our research to efficient simulation of visual appearance, exploiting the extra dimensionality of the captured datasets. Achieving our goals will finally enable us to reach the true potential of real-world captured datasets in many aspects of society.
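As a naive, purely illustrative stand-in for the editing spaces described above (the project explicitly targets perceptually meaningful rather than merely statistical dimensions), the sketch below projects synthetic high-dimensional appearance samples onto a few principal components, tweaks one coordinate, and reconstructs the edited sample:

```python
# Illustrative sketch only: projecting high-dimensional captured appearance
# samples onto a few principal components to obtain a compact editing space,
# then reconstructing an edited sample. The data is synthetic; the project
# aims at perceptually meaningful, not merely statistical, editing dimensions.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(200, 1000))           # 200 captured appearance samples
mean = samples.mean(axis=0)
U, S, Vt = np.linalg.svd(samples - mean, full_matrices=False)

k = 5                                            # size of the editing space
coords = (samples - mean) @ Vt[:k].T             # low-dimensional coordinates

edited = coords[0].copy()
edited[0] += 2.0                                 # tweak one editing dimension
reconstruction = mean + edited @ Vt[:k]          # back to the full representation
print(reconstruction.shape)                      # (1000,)
```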
Max ERC Funding
1 629 519 €
Duration
Start date: 2016-11-01, End date: 2021-10-31
Project acronym CIRCUS
Project An end-to-end verification architecture for building Certified Implementations of Robust, Cryptographically Secure web applications
Researcher (PI) Karthikeyan Bhargavan
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The security of modern web applications depends on a variety of critical components including cryptographic libraries, Transport Layer Security (TLS), browser security mechanisms, and single sign-on protocols. Although these components are widely used, their security guarantees remain poorly understood, leading to subtle bugs and frequent attacks.
Rather than fixing one attack at a time, we advocate the use of formal security verification to identify and eliminate entire classes of vulnerabilities in one go. With the aid of my ERC starting grant, I have built a team that has already achieved landmark results in this direction. We built the first TLS implementation with a cryptographic proof of security. We discovered high-profile vulnerabilities such as the recent Triple Handshake and FREAK attacks, both of which triggered critical security updates to all major web browsers and TLS libraries.
So far, our security theorems only apply to carefully-written standalone reference implementations. CIRCUS proposes to take on the next great challenge: verifying the end-to-end security of web applications running in mainstream software. The key idea is to identify the core security components of web browsers and servers and replace them by rigorously verified components that offer the same functionality but with robust security guarantees.
Our goal is ambitious and there are many challenges to overcome, but we believe this is an opportune time for this proposal. In response to the Snowden reports, many cryptographic libraries and protocols are currently being audited and redesigned. Standards bodies and software developers are inviting researchers to help analyse their designs and code. Responding to their call requires a team of researchers who are willing to deal with the messy details of nascent standards and legacy code, and at the same time prove strong security theorems based on precise cryptographic assumptions. We are able, we are willing, and the time is now.
Max ERC Funding
1 885 248 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym CoBCoM
Project Computational Brain Connectivity Mapping
Researcher (PI) Rachid DERICHE
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary One third of the burden of all diseases in Europe is due to diseases affecting the brain. Although exceptional progress has been made in exploring it during the past decades, the brain is still terra incognita and calls for specific research efforts to better understand its architecture and functioning.
CoBCoM is our response to this great challenge of modern science with the overall goal to develop a joint Dynamical Structural-Functional Brain Connectivity Network (DSF-BCN) solidly grounded on advanced and integrated methods for diffusion Magnetic Resonance Imaging (dMRI) and Electro & Magneto-Encephalography (EEG & MEG).
To take up this grand challenge and achieve new frontiers for brain connectivity mapping, we will develop a new generation of computational models and methods for identifying and characterizing the structural and functional connectivities that will be at the heart of the DSF-BCN. Our strategy is to break with the tradition of contributing incrementally and separately to structure or function, and to develop a global approach involving strong interactions between structural and functional connectivities. To overcome the limited view of the brain provided by any single imaging modality, our models will be developed under a rigorous computational framework integrating complementary non-invasive imaging modalities: dMRI, EEG and MEG.
CoBCoM will push the state of the art in these modalities far forward, developing innovative models and ground-breaking processing tools to ultimately provide a joint DSF-BCN solidly grounded on a detailed mapping of brain connectivity, both in space and time.
Capitalizing on the strengths of dMRI, MEG & EEG methodologies and building on the biophysical and mathematical foundations of our new generation of computational models, CoBCoM will be applied to high-impact diseases, and its ground-breaking computational nature and added clinical value will open new perspectives in neuroimaging.
Max ERC Funding
2 469 123 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym COLORAMAP
Project Constrained Low-Rank Matrix Approximations: Theoretical and Algorithmic Developments for Practitioners
Researcher (PI) Nicolas Benoit P Gillis
Host Institution (HI) UNIVERSITE DE MONS
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Low-rank matrix approximation (LRA) techniques such as principal component analysis (PCA) are powerful tools for the representation and analysis of high dimensional data, and are used in a wide variety of areas such as machine learning, signal and image processing, data mining, and optimization. Without any constraints and using the least squares error, LRA can be solved via the singular value decomposition. However, in practice, this model is often not suitable mainly because (i) the data might be contaminated with outliers, missing data and non-Gaussian noise, and (ii) the low-rank factors of the decomposition might have to satisfy some specific constraints. Hence, in recent years, many variants of LRA have been introduced, using different constraints on the factors and using different objective functions to assess the quality of the approximation; e.g., sparse PCA, PCA with missing data, independent component analysis and nonnegative matrix factorization. Although these new constrained LRA models have become very popular and standard in some fields, there is still a significant gap between theory and practice. In this project, our goal is to reduce this gap by attacking the problem in an integrated way making connections between LRA variants, and by using four very different but complementary perspectives: (1) computational complexity issues, (2) provably correct algorithms, (3) heuristics for difficult instances, and (4) application-oriented aspects. This unified and multi-disciplinary approach will enable us to understand these problems better, to develop and analyze new and existing algorithms and to then use them for applications. Our ultimate goal is to provide practitioners with new tools and to allow them to decide which method to use in which situation and to know what to expect from it.
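A minimal numerical sketch of the contrast drawn in the summary: the unconstrained least-squares LRA is solved exactly by the truncated SVD, while a constrained variant such as nonnegative matrix factorization has to be attacked with iterative heuristics (here the classic Lee-Seung multiplicative updates); data, rank and iteration count are arbitrary.

```python
# Unconstrained least-squares LRA is solved exactly by the truncated SVD
# (Eckart-Young), whereas a constrained variant such as NMF is approached with
# iterative heuristics -- here the classic Lee-Seung multiplicative updates.
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(50, 40)))            # nonnegative data matrix
r = 5                                            # target rank

# Unconstrained LRA: truncated SVD gives the optimal rank-r approximation.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_svd = U[:, :r] @ np.diag(S[:r]) @ Vt[:r]

# Constrained LRA (NMF): multiplicative updates, a simple local heuristic.
W = np.abs(rng.normal(size=(50, r)))
H = np.abs(rng.normal(size=(r, 40)))
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

print("SVD error:", np.linalg.norm(X - X_svd))   # provably minimal for rank r
print("NMF error:", np.linalg.norm(X - W @ H))   # at least as large as the SVD error
```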
Max ERC Funding
1 291 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31