Project acronym 3D Reloaded
Project 3D Reloaded: Novel Algorithms for 3D Shape Inference and Analysis
Researcher (PI) Daniel Cremers
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary Despite their amazing success, we believe that computer vision algorithms have only scratched the surface of what can be done in terms of modeling and understanding our world from images. We believe that novel image analysis techniques will be a major enabler and driving force behind next-generation technologies, enhancing everyday life and opening up radically new possibilities. And we believe that the key to achieving this is to develop algorithms for reconstructing and analyzing the 3D structure of our world.
In this project, we will focus on three lines of research:
A) We will develop algorithms for 3D reconstruction from standard color cameras and from RGB-D cameras. In particular, we will promote real-time-capable direct and dense methods. In contrast to the classical two-stage approach of sparse feature-point-based motion estimation and subsequent dense reconstruction, these methods optimally exploit all color information to jointly estimate dense geometry and camera motion (a minimal sketch of the underlying photometric objective is given after this summary).
B) We will develop algorithms for 3D shape analysis, including rigid and non-rigid matching, decomposition and interpretation of 3D shapes. We will focus on algorithms which are optimal or near-optimal. One of the major computational challenges lies in generalizing existing 2D shape analysis techniques to shapes in 3D and 4D (temporal evolutions of 3D shape).
C) We will develop shape priors for 3D reconstruction. These can be learned from sample shapes or acquired during the reconstruction process. For example, when reconstructing a larger office, algorithms may exploit the geometric self-similarity of the scene, storing a model of a chair only once and reusing it for its multiple instances rather than storing it multiple times.
Advancing the state of the art in geometric reconstruction and geometric analysis will have a profound impact well beyond computer vision. We strongly believe that we have the necessary competence to pursue this project. Preliminary results have been well received by the community.
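The contrast drawn in A) can be made concrete with a minimal sketch of the photometric objective behind direct, dense methods: every pixel contributes an intensity difference between a reference image and a second image warped according to the current depth and camera-motion estimates. The sketch below is illustrative only (plain Python/NumPy, nearest-neighbour lookups, no robust weighting or coarse-to-fine optimization); all function and parameter names are placeholders, not the project's actual pipeline.

    import numpy as np

    def photometric_cost(I_ref, I_cur, depth, pose, K):
        """Sum of squared intensity differences between the reference image and
        the current image warped into the reference view.

        I_ref, I_cur : (H, W) grayscale images
        depth        : (H, W) depth map of the reference view (the dense geometry)
        pose         : (R, t) rigid-body motion from reference to current camera
        K            : (3, 3) camera intrinsics
        """
        R, t = pose
        H, W = I_ref.shape
        K_inv = np.linalg.inv(K)
        cost = 0.0
        for v in range(H):
            for u in range(W):
                # back-project the pixel to a 3D point using its depth
                p = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))
                # transform into the current camera and project back to the image
                q = K @ (R @ p + t)
                if q[2] <= 0:
                    continue  # point behind the camera: no residual
                u2, v2 = q[0] / q[2], q[1] / q[2]
                if 0 <= u2 < W - 1 and 0 <= v2 < H - 1:
                    # photometric residual at this pixel (nearest-neighbour lookup)
                    r = I_ref[v, u] - I_cur[int(round(v2)), int(round(u2))]
                    cost += r * r
        return cost

    # Example: identical images, identity pose, constant depth give zero cost.
    rng = np.random.default_rng(0)
    I0 = rng.random((8, 8))
    print(photometric_cost(I0, I0.copy(), depth=np.ones((8, 8)),
                           pose=(np.eye(3), np.zeros(3)), K=np.eye(3)))  # 0.0

A direct, dense method would minimize such a cost jointly over the depth map and the camera pose, whereas the classical two-stage pipeline first estimates the pose from sparse feature matches and only then densifies the geometry.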
Max ERC Funding
2 000 000 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary 4D reconstruction, i.e., camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, 4D capture of the complex, diverse real world outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They will capture space-time coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They will automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields. It enables new applications with profound societal reach and significant economic impact, e.g., for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
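The self-refinement idea behind the 4D Real World Reconstruction Loop can be illustrated, in heavily simplified form, by a toy alternating-refinement loop: a shared model is repeatedly used to reconstruct unlabeled input and is then adapted so that it re-explains that input better. The sketch below is a generic analogy in plain NumPy (a 1-D shape with per-frame scale parameters), not the project's actual architecture; all quantities are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for unlabeled real-world input: noisy, differently scaled
    # observations of an unknown underlying shape (here a 1-D curve).
    true_shape = np.sin(np.linspace(0.0, np.pi, 64))
    frames = [s * true_shape + rng.normal(0.0, 0.01, 64) for s in (0.8, 1.0, 1.2, 0.9)]

    # The "model" starts uninformed and is refined purely from the unlabeled frames.
    template = np.ones(64)

    for _ in range(20):                                  # the reconstruction loop
        # Reconstruction step: fit a per-frame parameter with the current model.
        scales = [(f @ template) / (template @ template) for f in frames]
        # Adaptation step: refine the shared model so it re-explains the input better.
        template = np.mean([f / s for f, s in zip(frames, scales)], axis=0)

    # Up to the inherent scale ambiguity, the adapted model matches the true shape.
    print(np.allclose(template / np.linalg.norm(template),
                      true_shape / np.linalg.norm(true_shape), atol=0.02))  # True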
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym ALUNIF
Project Algorithms and Lower Bounds: A Unified Approach
Researcher (PI) Rahul Santhanam
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary One of the fundamental goals of theoretical computer science is to understand the possibilities and limits of efficient computation. This quest has two dimensions. The theory of algorithms focuses on finding efficient solutions to problems, while computational complexity theory aims to understand when and why problems are hard to solve. These two areas have different philosophies and use different sets of techniques. However, in recent years there have been indications of deep and mysterious connections between them.
In this project, we propose to explore and develop the connections between algorithmic analysis and complexity lower bounds in a systematic way. On the one hand, we plan to use complexity lower bound techniques as inspiration to design new and improved algorithms for Satisfiability and other NP-complete problems, as well as to analyze existing algorithms better. On the other hand, we plan to strengthen implications yielding circuit lower bounds from non-trivial algorithms for Satisfiability, and to derive new circuit lower bounds using these stronger implications.
This project has the potential for massive impact in both areas, algorithms and computational complexity. Improved algorithms for Satisfiability could lead to improved SAT solvers, and the new analytical tools would lead to a better understanding of existing heuristics. Complexity lower bound questions are fundamental but notoriously difficult, and new lower bounds would open the way to unconditionally secure cryptographic protocols and derandomization of probabilistic algorithms. More broadly, this project aims to initiate a greater dialogue between the two areas, with an exchange of ideas and techniques that leads to accelerated progress in both, as well as a deeper understanding of the nature of efficient computation.
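For reference, the trivial algorithm for Satisfiability enumerates all 2^n assignments, as in the sketch below (CNF input encoded DIMACS-style as lists of signed integer literals); "non-trivial algorithms" are those that beat this exhaustive baseline, and it is known that sufficiently strong improvements for circuit satisfiability already imply new circuit lower bounds. The example formula is invented for illustration.

    from itertools import product

    def brute_force_sat(num_vars, clauses):
        """Decide satisfiability of a CNF formula by exhaustive search, O(2^n * m).

        clauses: list of clauses, each a list of non-zero ints; literal k means
        variable |k| with the sign of k as polarity (DIMACS-style encoding).
        """
        for bits in product((False, True), repeat=num_vars):
            if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
                   for clause in clauses):
                return True   # found a satisfying assignment
        return False          # all 2^n assignments falsify some clause

    # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(brute_force_sat(3, [[1, 2], [-1, 2], [-2, 3]]))  # True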
Max ERC Funding
1 274 496 €
Duration
Start date: 2014-03-01, End date: 2019-02-28
Project acronym AMPLIFY
Project Amplifying Human Perception Through Interactive Digital Technologies
Researcher (PI) Albrecht Schmidt
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Current technical sensor systems offer capabilities that are superior to human perception. Cameras can capture a spectrum that is wider than visible light, high-speed cameras can show movements that are invisible to the human eye, and directional microphones can pick up sounds at long distances. The vision of this project is to lay a foundation for the creation of digital technologies that provide novel sensory experiences and new perceptual capabilities for humans that are natural and intuitive to use. In a first step, the project will assess the feasibility of creating artificial human senses that provide new perceptual channels to the human mind, without increasing the experienced cognitive load. A particular focus is on creating intuitive and natural control mechanisms for amplified senses using eye gaze, muscle activity, and brain signals. Through the creation of a prototype that provides mildly unpleasant stimulations in response to perceived information, the feasibility of implementing an artificial reflex will be experimentally explored. The project will quantify the effectiveness of new senses and artificial perceptual aids compared to the baseline of unaugmented perception. The overall objective is to systematically research, explore, and model new means for increasing the human intake of information in order to lay the foundation for new and improved human senses enabled through digital technologies and to enable artificial reflexes. The ground-breaking contributions of this project are (1) to demonstrate the feasibility of reliably implementing amplified senses and new perceptual capabilities, (2) to prove the possibility of creating an artificial reflex, (3) to provide an example implementation of amplified cognition that is empirically validated, and (4) to develop models, concepts, components, and platforms that will enable and ease the creation of interactive systems that measurably increase human perceptual capabilities.
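The artificial-reflex prototype is described above only in outline; a hypothetical software sketch of such a sensor-to-stimulus loop is given below (read a perceived signal, trigger a brief stimulus as soon as a threshold is crossed, with a refractory period). The sensor and actuator here are simulated stand-ins, not an actual device API.

    import time

    def artificial_reflex_loop(read_signal, trigger_stimulus, threshold, refractory_s=1.0):
        """Minimal reflex arc in software: map a perceived signal directly to a
        stimulus, without a deliberate decision by the user."""
        last_fired = float("-inf")
        for value in read_signal():                  # stream of sensor readings
            now = time.monotonic()
            if value > threshold and now - last_fired >= refractory_s:
                trigger_stimulus(value)              # immediate, reflex-like response
                last_fired = now

    # Simulated stand-ins for a real sensor and a real (mildly unpleasant) actuator.
    def fake_sensor():
        for v in (0.1, 0.2, 0.9, 0.3, 0.95, 0.1):
            yield v

    artificial_reflex_loop(fake_sensor, lambda v: print(f"stimulus! signal={v}"),
                           threshold=0.8, refractory_s=0.0)
    # prints: stimulus! signal=0.9  and  stimulus! signal=0.95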
Max ERC Funding
1 925 250 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym AVS-ISS
Project Analysis, Verification, and Synthesis for Infinite-State Systems
Researcher (PI) Joel Olivier Ouaknine
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary The central objective of this project is to investigate key algorithmic verification questions concerning two fundamental mathematical structures used to model and analyse infinite-state systems, namely discrete linear dynamical systems and counter automata, in both ordinary and parametric form. Motivated especially by applications to software model checking (more specifically the termination of linear loops and predicate abstraction computations), as well as parametric real-time reasoning and the verification of Markov chains, we will focus on model-checking, module-checking, and synthesis problems for linear dynamical systems and one-counter automata against various fragments and extensions of Linear Temporal Logic (LTL) specifications. The key deliverables will be novel verification algorithms along with a map of the complexity landscape. A second objective is then to transfer algorithmic insights into practical verification methodologies and tools, in collaboration with colleagues in academia and industrial research laboratories.
We will build on a series of recent advances and breakthroughs in these areas (some of them from the PI’s team) to attack a range of specific algorithmic problems. We believe that this line of research will not only result in fundamental theoretical contributions and insights in their own right – potentially answering mathematical questions that have been open for years or even decades – but will also impact the practice of formal verification and lead to new and more powerful methods and tools for the use of engineers and programmers.
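To ground two of the notions above: a discrete linear dynamical system repeatedly applies a fixed matrix to a state vector, and the "termination of linear loops" asks whether a loop guarded by a linear condition on that state eventually stops. The sketch below merely simulates such a loop for a concrete, invented matrix and guard; deciding termination in general, rather than simulating up to a bound, is exactly the kind of question whose algorithmic status the project studies.

    import numpy as np

    def run_linear_loop(A, c, x0, max_steps=1000):
        """Simulate the linear loop  'while c.x >= 0: x := A x'  from state x0.

        Returns the number of iterations until the guard fails, or None if the
        loop is still running after max_steps (simulation cannot prove
        non-termination; that is the hard decision problem).
        """
        x = np.array(x0, dtype=float)
        for step in range(max_steps):
            if c @ x < 0:
                return step          # guard violated: the loop terminates here
            x = A @ x
        return None                  # inconclusive: possibly non-terminating

    A = np.array([[0.0, 1.0],
                  [-1.0, 1.0]])      # update matrix with eigenvalues on the unit circle
    c = np.array([1.0, 0.0])         # guard: first coordinate is non-negative
    print(run_linear_loop(A, c, x0=[1.0, 0.0]))  # 2: the loop body ran twice before the guard failed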
Max ERC Funding
1 834 975 €
Duration
Start date: 2015-08-01, End date: 2020-07-31
Project acronym BIGBAYES
Project Rich, Structured and Efficient Learning of Big Bayesian Models
Researcher (PI) Yee Whye Teh
Host Institution (HI) THE CHANCELLOR, MASTERS AND SCHOLARS OF THE UNIVERSITY OF OXFORD
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary As datasets grow ever larger in scale, complexity and variety, there is an increasing need for powerful machine learning and statistical techniques that are capable of learning from such data. Bayesian nonparametrics is a promising approach to data analysis that is increasingly popular in machine learning and statistics. Bayesian nonparametric models are highly flexible models with infinite-dimensional parameter spaces that can be used to directly parameterise and learn about functions, densities, conditional distributions etc, and have been successfully applied to regression, survival analysis, language modelling, time series analysis, and visual scene analysis among others. However, to successfully use Bayesian nonparametric models to analyse the high-dimensional and structured datasets now commonly encountered in the age of Big Data, we will have to overcome a number of challenges. Namely, we need to develop Bayesian nonparametric models that can learn rich representations from structured data, and we need computational methodologies that can scale effectively to the large and complex models of the future. We will ground our developments in relevant applications, particularly to natural language processing (learning distributed representations for language modelling and compositional semantics) and genetics (modelling genetic variations arising from population, genealogical and spatial structures).
Max ERC Funding
1 918 092 €
Duration
Start date: 2014-05-01, End date: 2019-04-30
Project acronym CODA
Project Custom-Made Ontology Based Data Access
Researcher (PI) Carsten Lutz
Host Institution (HI) UNIVERSITAET BREMEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary The emerging and vibrant area of ontology-based data access (OBDA) is currently establishing itself as an important paradigm for processing incomplete and heterogeneous data. The goal of the CODA project is to make OBDA radically more useful for real-world applications by taking a ground-breaking new perspective on its foundations, algorithms, and tools. The project will rest on an ultimately fine-grained complexity analysis that makes it possible to identify islands of tractability inside practically important ontology and query languages that are otherwise intractable. Based on these islands, novel OBDA querying tools will be developed that are custom-made for ontologies from applications, in the sense that high computational cost is incurred only when unavoidable for the concrete ontology used ('pay as you go' behaviour). The key deliverables of the project are a set of tailor-made OBDA querying tools that form a precision tool belt for real-world OBDA applications, theoretical results regarding the structure and computational complexity of important islands of tractability, efficient algorithms that put these results to work in practice, and optimization techniques and heuristics that support the algorithms in the tools developed. We will also collect and make available a library of case studies for evaluating OBDA tools. The project is both timely and essential. It is timely because our economy and society are currently experiencing a revolution in data processing and availability, and dealing with incompleteness and heterogeneity is one of the major arising challenges. The project is essential because it has now become apparent that current OBDA tools cannot satisfy industry requirements. In particular, they do not adequately support the limited use of expressive features ('a little bit of disjunction'), which intuitively should not result in high computational cost but with current technology often does.
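For readers unfamiliar with OBDA, the toy sketch below illustrates the basic idea (it is not one of the project's custom-made tools): a small ontology of concept inclusions is used to complete incomplete data before a query is answered, so that facts only implied by the ontology still appear among the certain answers. The ontology, data, and query are invented examples.

    def saturate(facts, inclusions):
        """Forward-chain simple concept inclusions (A subsumed-by B) over unary facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for (sub, sup) in inclusions:
                for (concept, individual) in list(facts):
                    if concept == sub and (sup, individual) not in facts:
                        facts.add((sup, individual))   # derive the implied fact
                        changed = True
        return facts

    # Toy ontology: every Professor is a Teacher, every Teacher is an Employee.
    ontology = [("Professor", "Teacher"), ("Teacher", "Employee")]
    data = [("Professor", "anna"), ("Employee", "bob")]

    # Query: who is certainly an Employee? The data alone only mentions bob,
    # but the ontology makes anna a certain answer as well.
    completed = saturate(data, ontology)
    print(sorted(ind for (concept, ind) in completed if concept == "Employee"))
    # ['anna', 'bob']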
Max ERC Funding
1 922 115 €
Duration
Start date: 2015-08-01, End date: 2020-07-31
Project acronym CompDB
Project The Computational Database for Real World Awareness
Researcher (PI) Thomas NEUMANN
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary Two major hardware trends have a significant impact on the architecture of database management systems (DBMSs): First, main memory sizes continue to grow significantly. Machines with 1TB of main memory and more are readily available at a relatively low price. Second, the number of cores in a system continues to grow, from currently 64 and more to hundreds in the near future.
This trend offers radically new opportunities for both business and science. It promises to deliver information-at-your-fingertips, i.e., large volumes of data can be analyzed and deeply explored online, in parallel to regular transaction processing. Currently, deep data exploration is performed outside of the database system, which necessitates huge data transfers. This slows processing to the point that real-time interactive exploration is impossible. The new hardware capabilities now make it possible to build a true computational database system that integrates deep exploration functionality at the source of the data. This will lead to a drastic shift in how users interact with data, as interactive data exploration becomes possible at a massive scale for the first time.
Unfortunately, traditional DBMSs are simply not capable to tackle these new challenges.
Traditional techniques like interpreted code execution for query processing become a severe bottleneck in the presence of such massive parallelism, causing poor utilization of the hardware. I pursue a radically different approach: Instead of adapting the traditional, disk-based approaches, I am integrating a new just-in-time compilation framework into the in-memory database that directly exploits the abundant, parallel hardware for large-scale data processing and exploration. By explicitly utilizing the many available cores, I will be able to build a powerful computational database engine that scales across the entire spectrum of data processing – from transactional to analytical to exploration workflows – far beyond traditional architectures.
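The contrast between interpreted query execution and compilation can be illustrated with a toy in Python (a real engine of this kind would emit native code rather than Python source): the predicate of a query is turned into a specialized function once, and that function is then run over the data. Names and data are invented.

    def compile_filter(column, op, constant):
        """'Compile' a predicate like  price > 100  into a specialized function.
        A toy stand-in for just-in-time query compilation (real engines emit
        native code instead of Python)."""
        src = f"def _pred(row):\n    return row[{column!r}] {op} {constant!r}\n"
        namespace = {}
        exec(src, namespace)          # generate the specialized code at query time
        return namespace["_pred"]

    rows = [{"item": "a", "price": 50}, {"item": "b", "price": 150},
            {"item": "c", "price": 200}]

    pred = compile_filter("price", ">", 100)           # compiled once per query
    print([row["item"] for row in rows if pred(row)])  # ['b', 'c']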
Max ERC Funding
1 918 750 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym CSP-Infinity
Project Homogeneous Structures, Constraint Satisfaction Problems, and Topological Clones
Researcher (PI) Manuel Bodirsky
Host Institution (HI) TECHNISCHE UNIVERSITAET DRESDEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The complexity of constraint satisfaction problems (CSPs) is a field in rapid development, and involves central questions in graph homomorphisms, finite model theory, reasoning in artificial intelligence, and, last but not least, universal algebra. In previous work, it was shown that a substantial part of the results and tools for the study of the computational complexity of CSPs can be generalised to infinite domains when the constraints are definable over a homogeneous structure. There are many computational problems, in particular in temporal and spatial reasoning, that can be modelled in this way, but not over finite domains. Also in finite model theory and descriptive complexity, CSPs over infinite domains arise systematically as problems in monotone fragments of existential second-order logic.
In this project, we will advance in three directions:
(a) Further develop the universal-algebraic approach for CSPs over homogeneous structures. E.g., provide evidence for a universal-algebraic tractability conjecture for such CSPs.
(b) Apply the universal-algebraic approach. In particular, classify the complexity of all problems in guarded monotone SNP, a logic discovered independently in finite model theory and ontology-based data access.
(c) Investigate the complexity of CSPs over those infinite domains that are most relevant in computer science, namely the integers, the rationals, and the reals. Can we adapt the universal-algebraic approach to this setting?
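A simple example of an infinite-domain CSP of the kind targeted in (c): a conjunction of constraints x < y over the rationals is satisfiable exactly when the constraint digraph is acyclic, so this particular CSP is decidable in polynomial time. The sketch below checks that condition; the instance is invented for illustration.

    def satisfiable_over_Q(variables, less_than):
        """Decide whether a conjunction of constraints  x < y  (over the rationals)
        is satisfiable: it is iff the constraint digraph is acyclic."""
        graph = {v: [] for v in variables}
        for x, y in less_than:
            graph[x].append(y)

        WHITE, GREY, BLACK = 0, 1, 2
        colour = {v: WHITE for v in variables}

        def has_cycle(v):
            colour[v] = GREY
            for w in graph[v]:
                if colour[w] == GREY or (colour[w] == WHITE and has_cycle(w)):
                    return True
            colour[v] = BLACK
            return False

        return not any(colour[v] == WHITE and has_cycle(v) for v in variables)

    # x < y, y < z, z < x has no solution over Q; dropping z < x makes it satisfiable.
    print(satisfiable_over_Q(["x", "y", "z"], [("x", "y"), ("y", "z"), ("z", "x")]))  # False
    print(satisfiable_over_Q(["x", "y", "z"], [("x", "y"), ("y", "z")]))              # True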
Max ERC Funding
1 416 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym D-SynMA
Project Distributed Synthesis: from Single to Multiple Agents
Researcher (PI) Nir PITERMAN
Host Institution (HI) GOETEBORGS UNIVERSITET
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary Computing is changing from living on our desktops and in dedicated devices to being everywhere. In phones, sensors, appliances, and robots, computers (from now on, devices) are everywhere and affect all aspects of our lives. Techniques to make them safe and reliable are being investigated and are starting to emerge and consolidate. However, these techniques only enable devices to work in isolation or to co-exist; we currently do not have techniques that enable the development of real autonomous collaboration between devices. Such techniques will revolutionize all usage of devices and, as a consequence, our lives. Manufacturing, supply chains, transportation, infrastructures, and earth and space exploration would all be transformed by techniques that enable the development of collaborating devices.
When considering isolated (and co-existing) devices, reactive synthesis – the automatic production of plans from high-level specifications – is emerging as a viable tool for the development of robots and reactive software. This is especially important in the context of safety-critical systems, where assurances are required and systems need to have guarantees on performance. The techniques being developed today to support robust, assured, reliable, and adaptive devices rely on a major change in the focus of reactive synthesis. The revolution of correct-by-construction systems built from specifications is occurring and is being pushed forward.
However, to take this approach forward so that it also works for real collaboration between devices, theoretical frameworks that enable distributed synthesis are required. Such foundations will enable the correct-by-construction revolution to unleash its potential and allow a multiplicative increase of utility through cooperative computation.
d-SynMA will take distributed synthesis to this new frontier by considering novel interaction and communication concepts that create an adaptable framework for the correct-by-construction application of collaborating devices.
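Reactive synthesis, as invoked above, is commonly reduced to solving two-player games between the system and its environment. The sketch below solves the simplest such game, a safety game (compute the states from which the system can avoid a set of bad states forever), and is meant only to make the single-agent baseline concrete; it does not describe d-SynMA's distributed setting. States, edges, and the bad set are invented.

    def safe_states(states, sys_states, edges, bad):
        """Solve a safety game: return the states from which the system can avoid
        `bad` forever. `sys_states` are states where the system picks the successor;
        elsewhere the environment does. `edges` maps each state to its successors."""
        # Iteratively remove states from which staying safe cannot be enforced.
        safe = set(states) - set(bad)
        changed = True
        while changed:
            changed = False
            for s in list(safe):
                succ = [t for t in edges[s] if t in safe]
                can_stay = bool(succ) if s in sys_states else len(succ) == len(edges[s])
                if not can_stay:
                    safe.discard(s)      # staying within the safe set cannot be guaranteed here
                    changed = True
        return safe

    # Tiny example: from s0 the system can choose s1 (safe loop) instead of s2 (bad).
    edges = {"s0": ["s1", "s2"], "s1": ["s0"], "s2": ["s2"]}
    print(sorted(safe_states(["s0", "s1", "s2"], {"s0"}, edges, bad={"s2"})))  # ['s0', 's1']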
Max ERC Funding
1 871 272 €
Duration
Start date: 2018-05-01, End date: 2023-04-30