Project acronym AMPLIFY
Project Amplifying Human Perception Through Interactive Digital Technologies
Researcher (PI) Albrecht Schmidt
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Current technical sensor systems offer capabilities that are superior to human perception. Cameras can capture a spectrum that is wider than visible light, high-speed cameras can show movements that are invisible to the human eye, and directional microphones can pick up sounds at long distances. The vision of this project is to lay a foundation for the creation of digital technologies that provide novel sensory experiences and new perceptual capabilities for humans that are natural and intuitive to use. In a first step, the project will assess the feasibility of creating artificial human senses that provide new perceptual channels to the human mind, without increasing the experienced cognitive load. A particular focus is on creating intuitive and natural control mechanisms for amplified senses using eye gaze, muscle activity, and brain signals. Through the creation of a prototype that provides mildly unpleasant stimulations in response to perceived information, the feasibility of implementing an artificial reflex will be experimentally explored. The project will quantify the effectiveness of new senses and artificial perceptual aids compared to the baseline of unaugmented perception. The overall objective is to systematically research, explore, and model new means for increasing the human intake of information in order to lay the foundation for new and improved human senses enabled through digital technologies and to enable artificial reflexes. The ground-breaking contributions of this project are (1) to demonstrate the feasibility of reliably implementing amplified senses and new perceptual capabilities, (2) to prove the possibility of creating an artificial reflex, (3) to provide an example implementation of amplified cognition that is empirically validated, and (4) to develop models, concepts, components, and platforms that will enable and ease the creation of interactive systems that measurably increase human perceptual capabilities.
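A minimal sketch, in Python, of the kind of closed sensing-to-stimulation loop an artificial reflex implies: a sensor reading crossing a threshold triggers an immediate, mildly unpleasant stimulus without deliberate decision-making. The sensor, actuator and threshold below are hypothetical placeholders, not components of the project.

    # Illustrative sketch only: a toy "artificial reflex" loop.
    import random
    import time

    def read_proximity_sensor() -> float:
        """Hypothetical sensor: distance to the nearest obstacle in metres."""
        return random.uniform(0.0, 2.0)

    def trigger_stimulation(intensity: float) -> None:
        """Hypothetical actuator: delivers a brief tactile stimulus."""
        print(f"stimulate(intensity={intensity:.2f})")

    REFLEX_THRESHOLD_M = 0.3  # react when an obstacle is closer than 30 cm

    def reflex_loop(cycles: int = 10, period_s: float = 0.01) -> None:
        for _ in range(cycles):
            distance = read_proximity_sensor()
            if distance < REFLEX_THRESHOLD_M:
                # Reflex path: respond immediately, bypassing any deliberate decision.
                trigger_stimulation(intensity=1.0 - distance / REFLEX_THRESHOLD_M)
            time.sleep(period_s)

    if __name__ == "__main__":
        reflex_loop()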
Max ERC Funding
1 925 250 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym APEG
Project Algorithmic Performance Guarantees: Foundations and Applications
Researcher (PI) Susanne ALBERS
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary Optimization problems are ubiquitous in computer science. Almost every problem involves the optimization of some objective function. However, a major part of these problems cannot be solved to optimality. Therefore, algorithms that achieve provably good performance guarantees are of immense importance. Considerable progress has already been made, but great challenges remain: some fundamental problems are not well understood. Moreover, for central problems arising in new applications, no solutions are known at all.
The goal of APEG is to significantly advance the state of the art on algorithmic performance guarantees. Specifically, the project has two missions: First, it will develop new algorithmic techniques, breaking new ground in the areas of online algorithms, approximation algorithms and algorithmic game theory. Second, it will apply these techniques to solve fundamental problems that are central in these algorithmic disciplines. APEG will attack long-standing open problems, some of which have been unresolved for several decades. Furthermore, it will formulate and investigate new algorithmic problems that arise in modern applications. The research agenda encompasses a broad spectrum of classical and timely topics including (a) resource allocation in computer systems, (b) data structuring, (c) graph problems, with relations to Internet advertising, (d) complex networks and (e) massively parallel systems. In addition to basic optimization objectives, the project will also study the new performance metric of energy minimization in computer systems.
Overall, APEG pursues cutting-edge algorithms research, focusing on both foundational problems and applications. Any progress promises to be a breakthrough or significant contribution.
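As a hedged illustration of what a provable performance guarantee looks like (textbook material, not part of the proposal), consider the classic ski-rental problem: the break-even strategy is an online algorithm whose cost is guaranteed to be within a factor of 2 of the optimal offline cost, whatever the input.

    # Illustrative sketch: ski rental and its 2-competitive break-even strategy.
    def break_even_cost(days_skied: int, buy_price: int, rent_price: int = 1) -> int:
        """Cost of the online strategy: rent until the break-even point, then buy."""
        threshold = buy_price // rent_price
        if days_skied < threshold:
            return days_skied * rent_price              # never reached break-even
        return (threshold - 1) * rent_price + buy_price  # rented, then bought

    def offline_optimum(days_skied: int, buy_price: int, rent_price: int = 1) -> int:
        """Optimal cost with full knowledge of the future."""
        return min(days_skied * rent_price, buy_price)

    if __name__ == "__main__":
        buy = 10
        worst_ratio = max(
            break_even_cost(d, buy) / offline_optimum(d, buy) for d in range(1, 100)
        )
        print(f"empirical competitive ratio: {worst_ratio:.2f}  (theory: < 2)")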
Max ERC Funding
2 404 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym ARCA
Project Analysis and Representation of Complex Activities in Videos
Researcher (PI) Juergen Gall
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The goal of the project is to automatically analyse human activities observed in videos. Any solution to this problem will allow the development of novel applications. It could be used to create short videos that summarize daily activities to support patients suffering from Alzheimer's disease. It could also be used for education, e.g., by providing a video analysis for a trainee in the hospital that shows whether the tasks have been correctly executed.
The analysis of complex activities in videos, however, is very challenging since activities vary in temporal duration between minutes and hours, involve interactions with several objects that change their appearance and shape, e.g., food during cooking, and are composed of many sub-activities, which can happen at the same time or in various orders.
While the majority of recent works on action recognition focus on developing better feature encoding techniques for classifying sub-activities in short video clips of a few seconds, this project moves forward and aims to develop a higher-level representation of complex activities to overcome the limitations of current approaches. This includes the handling of large time variations and the ability to recognize and locate complex activities in videos. To this end, we aim to develop a unified model that provides detailed information about the activities and sub-activities in terms of time and spatial location, as well as the involved poses and motions, objects, and their transformations.
Another aspect of the project is to learn a representation from videos that is not tied to a specific source of videos or limited to a specific application. Instead, we aim to learn a representation that is invariant to a perspective change, e.g., from a third-person perspective to an egocentric perspective, and that can be applied to various modalities such as videos or depth data without the need to collect massive training data for all modalities. In other words, we aim to learn the essence of activities.
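A minimal sketch, not the project's model, of a hierarchical activity representation with temporally localised, possibly overlapping sub-activities and the objects they involve; all field names and the example activity are illustrative.

    # Illustrative sketch: a complex activity decomposed into timed sub-activities.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SubActivity:
        label: str                  # e.g. "crack egg"
        start_s: float              # start time in the video, seconds
        end_s: float                # end time in the video, seconds
        objects: List[str] = field(default_factory=list)

    @dataclass
    class ComplexActivity:
        label: str                  # e.g. "prepare scrambled eggs"
        sub_activities: List[SubActivity] = field(default_factory=list)

        def duration_s(self) -> float:
            if not self.sub_activities:
                return 0.0
            return (max(s.end_s for s in self.sub_activities)
                    - min(s.start_s for s in self.sub_activities))

        def active_at(self, t: float) -> List[str]:
            """Sub-activities overlapping time t (they may run concurrently)."""
            return [s.label for s in self.sub_activities if s.start_s <= t <= s.end_s]

    cooking = ComplexActivity("prepare scrambled eggs", [
        SubActivity("crack egg", 2.0, 6.5, ["egg", "bowl"]),
        SubActivity("whisk", 6.0, 20.0, ["bowl", "whisk"]),   # overlaps "crack egg"
        SubActivity("fry", 25.0, 180.0, ["pan", "stove"]),
    ])
    print(cooking.duration_s(), cooking.active_at(6.2))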
Max ERC Funding
1 499 875 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym CIRCUS
Project An end-to-end verification architecture for building Certified Implementations of Robust, Cryptographically Secure web applications
Researcher (PI) Karthikeyan Bhargavan
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The security of modern web applications depends on a variety of critical components including cryptographic libraries, Transport Layer Security (TLS), browser security mechanisms, and single sign-on protocols. Although these components are widely used, their security guarantees remain poorly understood, leading to subtle bugs and frequent attacks.
Rather than fixing one attack at a time, we advocate the use of formal security verification to identify and eliminate entire classes of vulnerabilities in one go. With the aid of my ERC Starting Grant, I have built a team that has already achieved landmark results in this direction. We built the first TLS implementation with a cryptographic proof of security. We discovered high-profile vulnerabilities such as the recent Triple Handshake and FREAK attacks, both of which triggered critical security updates to all major web browsers and TLS libraries.
So far, our security theorems only apply to carefully-written standalone reference implementations. CIRCUS proposes to take on the next great challenge: verifying the end-to-end security of web applications running in mainstream software. The key idea is to identify the core security components of web browsers and servers and replace them by rigorously verified components that offer the same functionality but with robust security guarantees.
Our goal is ambitious and there are many challenges to overcome, but we believe this is an opportune time for this proposal. In response to the Snowden reports, many cryptographic libraries and protocols are currently being audited and redesigned. Standards bodies and software developers are inviting researchers to help analyse their designs and code. Responding to their call requires a team of researchers who are willing to deal with the messy details of nascent standards and legacy code, and at the same time prove strong security theorems based on precise cryptographic assumptions. We are able, we are willing, and the time is now.
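For illustration only (not the project's verified code), a toy handshake state machine that rejects out-of-order protocol messages; attacks in the FREAK family exploited implementations that accepted messages in states where they should have been refused, and enforcing this kind of message-ordering discipline is one of the guarantees that verification can establish. States and message names are simplified placeholders.

    # Illustrative sketch: a protocol state machine that aborts on unexpected messages.
    from enum import Enum, auto

    class State(Enum):
        START = auto()
        HELLO_SENT = auto()
        KEYS_EXCHANGED = auto()
        FINISHED = auto()

    # Allowed transitions: (current state, incoming message) -> next state
    TRANSITIONS = {
        (State.START, "client_hello"): State.HELLO_SENT,
        (State.HELLO_SENT, "key_exchange"): State.KEYS_EXCHANGED,
        (State.KEYS_EXCHANGED, "finished"): State.FINISHED,
    }

    class Handshake:
        def __init__(self) -> None:
            self.state = State.START

        def receive(self, message: str) -> None:
            try:
                self.state = TRANSITIONS[(self.state, message)]
            except KeyError:
                # Unexpected message for the current state: abort, never accept silently.
                raise ValueError(f"unexpected {message!r} in state {self.state.name}")

    hs = Handshake()
    hs.receive("client_hello")
    hs.receive("key_exchange")
    hs.receive("finished")          # reaches FINISHED
    # hs.receive("client_hello")    # would raise: out-of-order message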
Max ERC Funding
1 885 248 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym CoBCoM
Project Computational Brain Connectivity Mapping
Researcher (PI) Rachid DERICHE
Host Institution (HI) INSTITUT NATIONAL DE RECHERCHE ENINFORMATIQUE ET AUTOMATIQUE
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary One third of the total disease burden in Europe is due to diseases affecting the brain. Although exceptional progress has been made in exploring it over the past decades, the brain is still terra incognita and calls for specific research efforts to better understand its architecture and functioning.
CoBCoM is our response to this great challenge of modern science with the overall goal to develop a joint Dynamical Structural-Functional Brain Connectivity Network (DSF-BCN) solidly grounded on advanced and integrated methods for diffusion Magnetic Resonance Imaging (dMRI) and Electro & Magneto-Encephalography (EEG & MEG).
To take up this grand challenge and achieve new frontiers for brain connectivity mapping, we will develop a new generation of computational models and methods for identifying and characterizing the structural and functional connectivities that will be at the heart of the DSF-BCN. Our strategy is to break with the tradition of contributing incrementally and separately to structure or function, and to develop a global approach involving strong interactions between structural and functional connectivities. To overcome the limited view of the brain provided by any single imaging modality, our models will be developed within a rigorous computational framework integrating complementary non-invasive imaging modalities: dMRI, EEG and MEG.
CoBCoM will push the state of the art in these modalities far forward, developing innovative models and ground-breaking processing tools to ultimately provide a joint DSF-BCN solidly grounded on a detailed mapping of brain connectivity, both in space and time.
Capitalizing on the strengths of the dMRI, MEG & EEG methodologies and building on the biophysical and mathematical foundations of our new generation of computational models, CoBCoM will be applied to high-impact diseases, and its ground-breaking computational nature and added clinical value will open new perspectives in neuroimaging.
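A minimal sketch, on simulated data, of how structural connectivity (e.g. normalised streamline counts from dMRI tractography) and functional connectivity (e.g. correlations between regional time courses, as derivable from EEG/MEG) can be represented as matrices over a common parcellation and combined into one weighted network. The simple weighted sum is a hypothetical placeholder, not the DSF-BCN model.

    # Illustrative sketch: combining structural and functional connectivity matrices.
    import numpy as np

    n_regions = 4
    rng = np.random.default_rng(0)

    # Structural connectivity: symmetric, non-negative, zero diagonal, normalised.
    structural = rng.random((n_regions, n_regions))
    structural = (structural + structural.T) / 2
    np.fill_diagonal(structural, 0)
    structural /= structural.max()

    # Functional connectivity: correlations between simulated regional time courses.
    time_courses = rng.standard_normal((n_regions, 200))
    functional = np.corrcoef(time_courses)

    # A naive joint network weighting both modalities (placeholder scheme).
    alpha = 0.5
    joint = alpha * structural + (1 - alpha) * np.abs(functional)
    np.fill_diagonal(joint, 0)
    print(np.round(joint, 2))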
Max ERC Funding
2 469 123 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CoVeCe
Project Coinduction for Verification and Certification
Researcher (PI) Damien Gabriel Jacques Pous
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Software and hardware bugs cost companies and administrations hundreds of millions of euros every year. Formal methods like verification provide automatic means of finding some of these bugs. Certification, using proof assistants like Coq or Isabelle/HOL, makes it possible to guarantee the absence of bugs (up to a certain point).
These two kinds of tools are crucial for designing safer programs and machines. Unfortunately, state-of-the-art tools are not yet satisfactory. Verification tools often face state-explosion problems and require more efficient algorithms; certification tools need more automation: they currently require too much time and expertise, even for basic tasks that could be handled easily through verification.
In recent work with Bonchi, we have shown that an extremely simple idea from concurrency theory could give rise to algorithms that are often exponentially faster than the algorithms currently used in verification tools.
My claim is that this idea could scale to richer models, revolutionising existing verification tools and providing algorithms for problems whose decidability is still open.
Moreover, the expected simplicity of those algorithms will make it possible to implement them inside certification tools such as Coq, providing powerful automation based on verification techniques. In the end, we will thus deliver not only efficient and certified verification tools that go beyond the state of the art, but also the ability to use such tools inside the Coq proof assistant, to alleviate the cost of certification tasks.
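As a hedged illustration of the coinductive style of reasoning referred to above (standard textbook material, not the project's contribution): checking DFA language equivalence by building a bisimulation on the fly, in the spirit of Hopcroft and Karp. The up-to techniques developed with Bonchi accelerate exactly this kind of loop.

    # Illustrative sketch: coinductive, on-the-fly check of DFA language equivalence.
    from collections import deque

    def dfa_equivalent(delta1, acc1, start1, delta2, acc2, start2, alphabet):
        """delta: dict (state, letter) -> state; acc: set of accepting states."""
        relation = set()                      # candidate bisimulation built so far
        todo = deque([(start1, start2)])
        while todo:
            p, q = todo.popleft()
            if (p, q) in relation:
                continue
            if (p in acc1) != (q in acc2):    # one accepts, the other does not
                return False
            relation.add((p, q))
            for a in alphabet:
                todo.append((delta1[(p, a)], delta2[(q, a)]))
        return True

    # Two DFAs over {a, b}, both accepting the words with an even number of 'a's.
    d1 = {("e", "a"): "o", ("e", "b"): "e", ("o", "a"): "e", ("o", "b"): "o"}
    d2 = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 2, (1, "b"): 1, (2, "a"): 1, (2, "b"): 2}
    print(dfa_equivalent(d1, {"e"}, "e", d2, {0, 2}, 0, ["a", "b"]))  # True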
Max ERC Funding
1 407 413 €
Duration
Start date: 2016-04-01, End date: 2021-03-31
Project acronym CSP-Infinity
Project Homogeneous Structures, Constraint Satisfaction Problems, and Topological Clones
Researcher (PI) Manuel Bodirsky
Host Institution (HI) TECHNISCHE UNIVERSITAET DRESDEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The complexity of constraint satisfaction problems (CSPs) is a field in rapid development, and involves central questions in graph homomorphisms, finite model theory, reasoning in artificial intelligence, and, last but not least, universal algebra. In previous work, it was shown that a substantial part of the results and tools for the study of the computational complexity of CSPs can be generalised to infinite domains when the constraints are definable over a homogeneous structure. There are many computational problems, in particular in temporal and spatial reasoning, that can be modelled in this way, but not over finite domains. Also in finite model theory and descriptive complexity, CSPs over infinite domains arise systematically as problems in monotone fragments of existential second-order logic.
In this project, we will advance in three directions:
(a) Further develop the universal-algebraic approach for CSPs over homogeneous structures. E.g., provide evidence for a universal-algebraic tractability conjecture for such CSPs.
(b) Apply the universal-algebraic approach. In particular, classify the complexity of all problems in guarded monotone SNP, a logic discovered independently in finite model theory and ontology-based data access.
(c) Investigate the complexity of CSPs over those infinite domains that are most relevant in computer science, namely the integers, the rationals, and the reals. Can we adapt the universal-algebraic approach to this setting?
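A minimal example, added here for illustration only, of an infinite-domain CSP of the kind mentioned in (c): variables range over the rationals and every constraint has the form x < y. The instance is satisfiable exactly when the constraint graph is acyclic, so a topological sort decides it in polynomial time, and no finite domain could express the same problem.

    # Illustrative sketch: a point-order CSP over the rationals, decided by topological sort.
    from graphlib import TopologicalSorter, CycleError

    def satisfiable(constraints):
        """constraints: iterable of (x, y) meaning x < y, over variables valued in Q."""
        graph = {}
        for x, y in constraints:
            graph.setdefault(y, set()).add(x)   # x must come before y
            graph.setdefault(x, set())
        try:
            order = list(TopologicalSorter(graph).static_order())
            # Any strictly increasing rational assignment along `order` satisfies all constraints.
            return True, {v: i for i, v in enumerate(order)}
        except CycleError:
            return False, None

    print(satisfiable([("x", "y"), ("y", "z")]))  # satisfiable, e.g. x=0 < y=1 < z=2
    print(satisfiable([("x", "y"), ("y", "x")]))  # unsatisfiable: x < y < x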
Max ERC Funding
1 416 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym EPoCH
Project Exploring and Preventing Cryptographic Hardware Backdoors: Protecting the Internet of Things against Next-Generation Attacks
Researcher (PI) Christof PAAR
Host Institution (HI) RUHR-UNIVERSITAET BOCHUM
Call Details Advanced Grant (AdG), PE6, ERC-2015-AdG
Summary The digital landscape is currently undergoing an evolution towards the Internet of Things. The IoT comes with a dramatically increased threat potential, as attacks can endanger human life and can lead to a massive loss of privacy of (European) citizens. A particularly dangerous class of attacks manipulates the cryptographic algorithms in the underlying hardware. Backdoors in the cryptography of IoT devices can lead to system-wide loss of security. This proposal has the ambitious goal to comprehensively understand and counter low-level backdoor attacks. The required research consists of two major modules:
1) The development of an encompassing understanding of how hardware manipulations of cryptographic functions can actually be performed, and what the consequences are for system security. Exploring attacks is fundamental for designing strong countermeasures, analogous to the role of cryptanalysis in cryptology.
2) The development of hardware countermeasures that provide systematic protection against malicious manipulations. In contrast to detection-based methods which dominate the literature, our approach will be pro-active. We will develop solutions for instances of important problems, including hardware reverse engineering and hardware hiding. Little is known about the limits of and optimum approaches to both problems in specific settings.
Beyond the prevention of hardware Trojans, the research will have applications in IP protection and will spark research in the theoretical computer science community.
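Purely as a software analogy (real hardware Trojans act at the gate or dopant level, and nothing here is taken from the proposal), the following sketch shows the effect such a manipulation can have: a key generator whose entropy is silently reduced produces keys that look uniform but can be brute-forced by anyone who knows the backdoor.

    # Illustrative sketch: a backdoored key generator with silently reduced entropy.
    import hashlib
    import secrets

    def honest_keygen(bits: int = 128) -> bytes:
        return secrets.token_bytes(bits // 8)

    def trojaned_keygen(bits: int = 128, hidden_entropy_bits: int = 16) -> bytes:
        # Only `hidden_entropy_bits` of real randomness; hashing makes the key look uniform.
        seed = secrets.randbits(hidden_entropy_bits)
        return hashlib.sha256(seed.to_bytes(4, "big")).digest()[: bits // 8]

    def brute_force(key: bytes, hidden_entropy_bits: int = 16, bits: int = 128) -> int:
        # Whoever knows the backdoor recovers the seed in at most 2**16 hash evaluations.
        for seed in range(2 ** hidden_entropy_bits):
            if hashlib.sha256(seed.to_bytes(4, "big")).digest()[: bits // 8] == key:
                return seed
        raise ValueError("seed not found")

    key = trojaned_keygen()
    print("recovered seed:", brute_force(key))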
Max ERC Funding
2 498 286 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym FACTORY
Project New paradigms for latent factor estimation
Researcher (PI) Cédric Févotte
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Data is often available in matrix form, in which columns are samples, and processing such data often entails finding an approximate factorisation of the matrix into two factors. The first factor yields recurring patterns characteristic of the data. The second factor describes the proportions in which each data sample combines these patterns. Latent factor estimation (LFE) is the problem of finding such a factorisation, usually under given constraints. LFE appears under other domain-specific names such as dictionary learning, low-rank approximation, factor analysis or latent semantic analysis. It is used for tasks such as dimensionality reduction, unmixing, soft clustering, coding or matrix completion in very diverse fields.
In this project, I propose to explore three new paradigms that push the frontiers of traditional LFE. First, I want to break beyond the ubiquitous Gaussian assumption, a practical choice that too rarely complies with the nature and geometry of the data. Estimation in non-Gaussian models is more difficult, but recent work in audio and text processing has shown that it pays off in practice. Second, in traditional settings the data matrix is often a collection of features computed from raw data. These features are computed with generic off-the-shelf transforms that loosely preprocess the data, setting a limit to performance. I propose a new paradigm in which an optimal low-rank-inducing transform is learnt together with the factors in a single step. Third, I show that the dominant deterministic approach to LFE should be reconsidered, and I propose a novel statistical estimation paradigm, based on the marginal likelihood, with enhanced capabilities. The new methodology is applied to real-world problems with societal impact in audio signal processing (speech enhancement, music remastering), remote sensing (Earth observation, cosmic object discovery) and data mining (multimodal information retrieval, user recommendation).
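A minimal sketch of one classical non-Gaussian latent factor model of the kind alluded to above: non-negative matrix factorisation under the Kullback-Leibler divergence with Lee and Seung's multiplicative updates. This is textbook material included for illustration, not the new methodology proposed here.

    # Illustrative sketch: KL-divergence NMF with multiplicative updates.
    import numpy as np

    def nmf_kl(V, rank, n_iter=200, eps=1e-9, seed=0):
        """Factorise V ~= W @ H with W, H >= 0 under the KL divergence."""
        rng = np.random.default_rng(seed)
        F, N = V.shape
        W = rng.random((F, rank)) + eps
        H = rng.random((rank, N)) + eps
        ones = np.ones_like(V)
        for _ in range(n_iter):
            WH = W @ H + eps
            H *= (W.T @ (V / WH)) / (W.T @ ones)   # update activations
            WH = W @ H + eps
            W *= ((V / WH) @ H.T) / (ones @ H.T)   # update patterns
        return W, H

    # Toy data: 20 samples (columns), each a non-negative mixture of 3 patterns.
    rng = np.random.default_rng(1)
    V = np.maximum(rng.random((8, 3)) @ rng.random((3, 20)), 1e-6)
    W, H = nmf_kl(V, rank=3)
    print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))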
Max ERC Funding
1 931 776 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym FOVEDIS
Project Formal specification and verification of distributed data structures
Researcher (PI) Constantin Enea
Host Institution (HI) UNIVERSITE PARIS DIDEROT - PARIS 7
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The future of computing technology relies on fast access, transformation, and exchange of data across large-scale networks such as the Internet. The design of software systems that support high-frequency parallel accesses to high-quantity data is a fundamental challenge. As more scalable alternatives to traditional relational databases, distributed data structures (DDSs) form the basis of a wide range of automated services, now and for the foreseeable future.
This proposal aims to improve our understanding of the theoretical foundations of DDSs. The design and the usage of DDSs are based on new principles, for which we currently lack rigorous engineering methodologies. Specifically, we lack design procedures based on precise specifications, and automated reasoning techniques for enhancing the reliability of the engineering process.
The targeted breakthrough of this proposal is developing automated formal methods for rigorous engineering of DDSs. A first objective is to define coherent formal specifications that provide precise requirements at design time and explicit guarantees during their usage. Then, we will investigate practical programming principles, compatible with these specifications, for building applications that use DDSs. Finally, we will develop efficient automated reasoning techniques for debugging or validating DDS implementations against their specifications. The principles underlying automated reasoning are also important for identifying best practices in the design of these complex systems to increase confidence in their correctness. The developed methodologies based on formal specifications will thus benefit both the conception and automated validation of DDS implementations and the applications that use them.
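A minimal sketch, not the project's formalism, of a distributed data structure with a precise specification: a grow-only counter whose replicas converge to the same value after exchanging states, because the merge operation is commutative, associative and idempotent. Such explicit guarantees are the kind of requirement a formal specification would pin down.

    # Illustrative sketch: a grow-only counter (G-Counter) replica with state merging.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class GCounter:
        replica_id: str
        counts: Dict[str, int] = field(default_factory=dict)

        def increment(self, amount: int = 1) -> None:
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

        def value(self) -> int:
            return sum(self.counts.values())

        def merge(self, other: "GCounter") -> None:
            # Pointwise maximum: merge is commutative, associative, idempotent.
            for rid, c in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), c)

    a, b = GCounter("a"), GCounter("b")
    a.increment(3)
    b.increment(2)
    a.merge(b)
    b.merge(a)
    assert a.value() == b.value() == 5   # replicas converge to the same value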
Max ERC Funding
1 300 000 €
Duration
Start date: 2016-05-01, End date: 2021-04-30