Project acronym 3D Reloaded
Project 3D Reloaded: Novel Algorithms for 3D Shape Inference and Analysis
Researcher (PI) Daniel Cremers
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary Despite their amazing success, we believe that computer vision algorithms have only scratched the surface of what can be done in terms of modeling and understanding our world from images. We believe that novel image analysis techniques will be a major enabler and driving force behind next-generation technologies, enhancing everyday life and opening up radically new possibilities. And we believe that the key to achieving this is to develop algorithms for reconstructing and analyzing the 3D structure of our world.
In this project, we will focus on three lines of research:
A) We will develop algorithms for 3D reconstruction from standard color cameras and from RGB-D cameras. In particular, we will promote real-time-capable direct and dense methods. In contrast to the classical two-stage approach of sparse feature-point based motion estimation and subsequent dense reconstruction, these methods optimally exploit all color information to jointly estimate dense geometry and camera motion.
B) We will develop algorithms for 3D shape analysis, including rigid and non-rigid matching, decomposition and interpretation of 3D shapes. We will focus on algorithms which are optimal or near-optimal. One of the major computational challenges lies in generalizing existing 2D shape analysis techniques to shapes in 3D and 4D (temporal evolutions of 3D shape).
C) We will develop shape priors for 3D reconstruction. These can be learned from sample shapes or acquired during the reconstruction process. For example, when reconstructing a larger office, algorithms may exploit the geometric self-similarity of the scene, storing the model of a chair only once and reusing it across its multiple instances rather than storing each instance separately.
Advancing the state of the art in geometric reconstruction and geometric analysis will have a profound impact well beyond computer vision. We strongly believe that we have the necessary competence to pursue this project. Preliminary results have been well received by the community.
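To make the direct, dense formulation in A) concrete, the following minimal Python sketch (an assumed pinhole model with nearest-neighbour sampling and invented variable names; not project code) evaluates the photometric residual that such methods minimize jointly over camera motion and, in full systems, over dense depth as well.
    import numpy as np

    def photometric_residuals(I_ref, D_ref, I_tgt, K, R, t):
        """Warp every reference pixel into the target view using depth D_ref and a
        candidate rigid motion (R, t); return per-pixel intensity differences."""
        H, W = I_ref.shape
        v, u = np.mgrid[0:H, 0:W]                                  # pixel grid
        pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])     # homogeneous pixel coordinates
        P = np.linalg.inv(K) @ pix * D_ref.ravel()                 # back-project to 3D (reference frame)
        Q = R @ P + t[:, None]                                     # rigid motion into the target frame
        q = K @ Q                                                  # project into the target image
        u2, v2 = q[0] / q[2], q[1] / q[2]
        ok = (q[2] > 0) & (u2 >= 0) & (u2 <= W - 1) & (v2 >= 0) & (v2 <= H - 1)
        warped = I_tgt[np.round(v2[ok]).astype(int), np.round(u2[ok]).astype(int)]
        return warped - I_ref.ravel()[ok]                          # residuals r(R, t); the direct cost is sum(r**2)

    # Toy check: identical images and the true (identity) motion give zero cost.
    rng = np.random.default_rng(0)
    I = rng.random((48, 64))
    K = np.array([[60.0, 0.0, 32.0], [0.0, 60.0, 24.0], [0.0, 0.0, 1.0]])
    r = photometric_residuals(I, np.full((48, 64), 2.0), I, K, np.eye(3), np.zeros(3))
    print(float((r ** 2).sum()))
In a complete pipeline this cost would be minimized over the pose (and the per-pixel depths) with a Gauss-Newton-type solver on an image pyramid; the sketch only evaluates it for one candidate pose.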
Max ERC Funding
2 000 000 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary 4D reconstruction, i.e., camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, 4D capture of the complex, diverse real world outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They will capture space-time-coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They will automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields. It enables new applications of profound social pervasion and significant economic impact, e.g., for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
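A hedged, purely illustrative sketch of the loop structure described above (the 1-D toy scene, the linear predictor and the least-squares refiner are invented stand-ins, not the project's methods): a learned predictor and a model-based refiner improve each other while consuming only unlabeled input.
    import numpy as np

    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0])                  # hidden mapping observation -> model parameter

    def observe(n):
        """Unlabeled stream: observations x whose true parameters x @ true_w stay hidden."""
        return rng.normal(size=(n, 2))

    def model_based_refine(x, y_init, steps=50, lr=0.1):
        """Stand-in for a model-based fitter: improve y by descending a fitting energy
        E(y) = ||y - x @ true_w||^2 that only the 'physical model' can evaluate."""
        y = y_init.copy()
        for _ in range(steps):
            y -= lr * 2.0 * (y - x @ true_w)
        return y

    w = np.zeros(2)                                 # learned predictor, initially uninformed
    for _ in range(20):                             # the reconstruction loop
        x = observe(256)                            # unlabeled real-world input
        y_pred = x @ w                              # 1) learned prediction
        y_ref = model_based_refine(x, y_pred)       # 2) model-based refinement
        w, *_ = np.linalg.lstsq(x, y_ref, rcond=None)   # 3) refined output becomes the training target
    print(np.round(w, 3))                           # approaches [ 2. -1.] without any labels
Only the control flow matters here: refined outputs of the model-based component serve as training targets for the learned component, so no manual labels ever enter the loop.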
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym AdaptoSCOPE
Project Using cis-regulatory mutations to highlight polygenic adaptation in natural plant systems
Researcher (PI) Juliette de Meaux
Host Institution (HI) UNIVERSITAET ZU KOELN
Call Details Consolidator Grant (CoG), LS8, ERC-2014-CoG
Summary The goal of this project is to demonstrate that novel aspects of the molecular basis of Darwinian adaptation can be discovered if the polygenic basis of adaptation is taken into account. This project will use the genome-wide distribution of cis-regulatory variants to discover the molecular pathways that are optimized during adaptation via the accumulation of small-effect mutations. Current approaches include scans for outlier genes with strong population genetics signatures of selection, or large-effect QTL associated with fitness. They can only reveal a small subset of the molecular changes recruited along adaptive paths. Here, instead, the distribution of small-effect mutations will be used to make inferences on the targets of polygenic adaptation across divergent populations in each of the two closely related species, A. thaliana and A. lyrata. These species are both found at diverse latitudes and show signs of local adaptation to climatic differences. Mutations affecting cis-regulation will be identified in leaves of plants exposed to various temperature regimes triggering phenotypic responses of adaptive relevance. Their distribution in clusters of functionally connected genes will be quantified. The phylogeographic differences in the distribution of the mutations will be used to disentangle neutral from adaptive clusters of functionally connected genes in each of the two species. This project will identify the molecular pathways subjected collectively to natural selection and provide a completely novel view of adaptive landscapes. It will further examine whether local adaptation occurs by convergent evolution of molecular systems in plants. This approach has the potential to find broad applications in ecology and agriculture.
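As a hedged illustration of one analysis step mentioned above, quantifying whether cis-regulatory variants concentrate in a cluster of functionally connected genes, the following Python sketch runs a simple permutation test on invented counts (gene numbers, cluster size and the enrichment statistic are assumptions, not the project's pipeline).
    import numpy as np

    rng = np.random.default_rng(2)
    n_genes = 5000
    variants_per_gene = rng.poisson(0.5, n_genes)            # cis-regulatory variants per gene (toy data)
    cluster = rng.choice(n_genes, size=100, replace=False)   # hypothetical functionally connected cluster
    variants_per_gene[cluster] += rng.poisson(0.7, 100)      # simulate enrichment within that cluster

    observed = variants_per_gene[cluster].sum()
    null = np.array([
        variants_per_gene[rng.choice(n_genes, size=100, replace=False)].sum()
        for _ in range(10_000)                               # same-size random gene sets
    ])
    p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
    print(f"observed={observed}, null mean={null.mean():.1f}, p={p_value:.4f}")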
Max ERC Funding
1 683 120 €
Duration
Start date: 2015-09-01, End date: 2020-08-31
Project acronym AMPLIFY
Project Amplifying Human Perception Through Interactive Digital Technologies
Researcher (PI) Albrecht Schmidt
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary Current technical sensor systems offer capabilities that are superior to human perception. Cameras can capture a spectrum that is wider than visible light, high-speed cameras can show movements that are invisible to the human eye, and directional microphones can pick up sounds at long distances. The vision of this project is to lay a foundation for the creation of digital technologies that provide novel sensory experiences and new perceptual capabilities for humans that are natural and intuitive to use. In a first step, the project will assess the feasibility of creating artificial human senses that provide new perceptual channels to the human mind, without increasing the experienced cognitive load. A particular focus is on creating intuitive and natural control mechanisms for amplified senses using eye gaze, muscle activity, and brain signals. Through the creation of a prototype that provides mildly unpleasant stimulations in response to perceived information, the feasibility of implementing an artificial reflex will be experimentally explored. The project will quantify the effectiveness of new senses and artificial perceptual aids compared to the baseline of unaugmented perception. The overall objective is to systematically research, explore, and model new means for increasing the human intake of information in order to lay the foundation for new and improved human senses enabled through digital technologies and to enable artificial reflexes. The ground-breaking contributions of this project are (1) to demonstrate the feasibility of reliably implementing amplified senses and new perceptual capabilities, (2) to prove the possibility of creating an artificial reflex, (3) to provide an example implementation of amplified cognition that is empirically validated, and (4) to develop models, concepts, components, and platforms that will enable and ease the creation of interactive systems that measurably increase human perceptual capabilities.
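A minimal, purely illustrative sketch of the amplification idea (the ultrasound distance sensor, the mapping and the threshold are hypothetical, not the project's prototypes): a signal outside human perception is translated into a perceivable feedback intensity, with a hard threshold playing the role of an artificial reflex.
    import random

    def read_ultrasound_cm():
        """Hypothetical sensor stub; a real system would query actual hardware."""
        return random.uniform(5.0, 300.0)

    def feedback_intensity(distance_cm, max_range_cm=300.0):
        """Map distance to a 0..1 vibration intensity: closer obstacles feel stronger."""
        return max(0.0, min(1.0, 1.0 - distance_cm / max_range_cm))

    REFLEX_THRESHOLD_CM = 20.0   # assumed: anything closer triggers an immediate, unmissable cue

    for _ in range(5):
        d = read_ultrasound_cm()
        if d < REFLEX_THRESHOLD_CM:
            print(f"{d:6.1f} cm -> REFLEX: strong stimulus, bypassing the gradual feedback channel")
        else:
            print(f"{d:6.1f} cm -> vibration intensity {feedback_intensity(d):.2f}")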
Max ERC Funding
1 925 250 €
Duration
Start date: 2016-07-01, End date: 2021-06-30
Project acronym AVS-ISS
Project Analysis, Verification, and Synthesis for Infinite-State Systems
Researcher (PI) Joel Olivier Ouaknine
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary The central objective of this project is to investigate key algorithmic verification questions concerning two fundamental mathematical structures used to model and analyse infinite-state systems, namely discrete linear dynamical systems and counter automata, in both ordinary and parametric form. Motivated especially by applications to software model checking (more specifically the termination of linear loops and predicate abstraction computations), as well as parametric real-time reasoning and the verification of Markov chains, we will focus on model-checking, module-checking, and synthesis problems for linear dynamical systems and one-counter automata against various fragments and extensions of Linear Temporal Logic (LTL) specifications. The key deliverables will be novel verification algorithms along with a map of the complexity landscape. A second objective is then to transfer algorithmic insights into practical verification methodologies and tools, in collaboration with colleagues in academia and industrial research laboratories.
We will build on a series of recent advances and breakthroughs in these areas (some of which from the PI’s team) to attack a range of specific algorithmic problems. We believe that this line of research will not only result in fundamental theoretical contributions and insights in their own right—potentially answering mathematical questions that have been open for years or even decades—but will also impact the practice of formal verification and lead to new and more powerful methods and tools for the use of engineers and programmers.
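To make the objects of study concrete, here is a small illustrative Python sketch (the matrix, guard and iteration bound are arbitrary assumptions): a linear loop 'while c·x >= 0: x := A·x' is a discrete linear dynamical system with a guard, and for a single rational starting point one can only semi-decide termination by unrolling; deciding such questions uniformly over all inputs is the kind of algorithmic problem targeted by the project.
    from fractions import Fraction as F

    A = [[F(2), F(-1)],        # update matrix of the loop body (example values)
         [F(1), F(0)]]
    c = [F(1), F(0)]           # guard: the first coordinate must stay non-negative

    def step(x):
        return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

    def terminates_within(x, bound):
        """Return the first iteration at which the guard fails, or None if none is found."""
        for k in range(bound):
            if sum(ci * xi for ci, xi in zip(c, x)) < 0:
                return k
            x = step(x)
        return None

    print(terminates_within([F(1), F(3)], 100))   # -> 1: for this input the guard fails after one update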
Max ERC Funding
1 834 975 €
Duration
Start date: 2015-08-01, End date: 2020-07-31
Project acronym CAUSALPATH
Project Next Generation Causal Analysis: Inspired by the Induction of Biological Pathways from Cytometry Data
Researcher (PI) Ioannis Tsamardinos
Host Institution (HI) PANEPISTIMIO KRITIS
Call Details Consolidator Grant (CoG), PE6, ERC-2013-CoG
Summary Discovering the causal mechanisms of a complex system of interacting components is necessary in order to control it. Computational Causal Discovery (CD) is a field that offers the potential to discover causal relations under certain conditions from observational data alone or with a limited number of interventions/manipulations.
An important, challenging biological problem that may take decades of experimental work is the induction of biological cellular pathways; pathways are informal causal models indispensable in biological research and drug design. Recent exciting advances in flow/mass cytometry biotechnology allow the generation of large-sample datasets containing measurements on single cells, thus making the problem of pathway learning amenable to CD methods.
CAUSALPATH builds upon and further advances recent breakthrough developments in CD methods to enable the induction of biological pathways from cytometry and other omics data. As a testbed problem we focus on the differentiation of human T-cells; these are involved in autoimmune and inflammatory diseases, as well as cancer, and are thus targets of new drug development for a range of chronic diseases. The biological problem serves as our proving ground for developing novel general formalisms, practical algorithms, and useful tools, pointing to fundamental CD problems: the presence of feedback cycles, the presence of latent confounding variables, CD from time-course data, Integrative Causal Analysis (INCA) of heterogeneous datasets, and others.
Three features complement CAUSALPATH’s approach: (A) methods development will co-evolve with biological wet-lab experiments periodically testing the algorithmic postulates, (B) Open-source tools will be developed for the non-expert, and (C) Commercial exploitation of the results will be sought out.
CAUSALPATH brings together an interdisciplinary team committed to this vision. It builds upon the PI’s group’s recent important results on INCA algorithms.
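As a hedged illustration of the statistical primitive underlying constraint-based causal discovery (the synthetic chain and the Gaussian partial-correlation test are textbook stand-ins, not the project's algorithms): on data generated from X -> Y -> Z, X and Z are dependent but become independent once Y is conditioned on, and it is exactly such (in)dependences that CD methods use to orient pathway edges.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000
    X = rng.normal(size=n)
    Y = 0.8 * X + rng.normal(scale=0.5, size=n)    # X causes Y
    Z = 0.8 * Y + rng.normal(scale=0.5, size=n)    # Y causes Z

    def partial_corr(a, b, given):
        """Correlation of a and b after regressing out 'given' (a Gaussian surrogate for a CI test)."""
        M = np.column_stack([np.ones_like(given), given])
        ra = a - M @ np.linalg.lstsq(M, a, rcond=None)[0]
        rb = b - M @ np.linalg.lstsq(M, b, rcond=None)[0]
        return float(np.corrcoef(ra, rb)[0, 1])

    print(f"corr(X, Z)     = {np.corrcoef(X, Z)[0, 1]:.3f}")   # clearly non-zero: X and Z are dependent
    print(f"corr(X, Z | Y) = {partial_corr(X, Z, Y):.3f}")     # close to 0: independent given Y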
Max ERC Funding
1 724 000 €
Duration
Start date: 2015-01-01, End date: 2019-12-31
Project acronym CODA
Project Custom-Made Ontology Based Data Access
Researcher (PI) Carsten Lutz
Host Institution (HI) UNIVERSITAET BREMEN
Call Details Consolidator Grant (CoG), PE6, ERC-2014-CoG
Summary The emerging and vibrant area of ontology-based data access (OBDA) is currently establishing itself as an important paradigm for processing incomplete and heterogeneous data. The goal of the CODA project is to make OBDA radically more useful for real-world applications by taking a ground-breaking new perspective on its foundations, algorithms, and tools. The project will rest on an ultimately fine-grained complexity analysis that makes it possible to identify islands of tractability inside practically important ontology and query languages that are otherwise intractable. Based on these islands, novel OBDA querying tools will be developed that are custom-made for ontologies from applications in the sense that high computational cost is incurred only when unavoidable for the concrete ontology used ('pay as you go' behaviour). The key deliverables of the project are a set of tailor-made OBDA querying tools that form a precision tool belt for real-world OBDA applications, theoretical results regarding the structure and computational complexity of important islands of tractability, efficient algorithms that put these results to work in practice, and optimization techniques and heuristics that support the algorithms in the tools developed. We will also collect and make available a library of case studies for evaluating OBDA tools. The project is both timely and essential. It is timely because our economy and society are currently experiencing a revolution in data processing and availability, and dealing with incompleteness and heterogeneity is one of the major arising challenges. The project is essential because it has now become apparent that current OBDA tools cannot satisfy industry requirements. In particular, they do not adequately support the limited use of expressive features ('a little bit of disjunction') which intuitively should not result in high computational cost, but with current technology often does.
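A toy sketch of the OBDA principle (class names, axioms and data are invented for illustration and do not represent the CODA tools): a query over ontology vocabulary is rewritten, using simple inclusion axioms, into queries that the incomplete data can answer directly.
    # Ontology: every Professor is a Teacher; every Lecturer is a Teacher.
    axioms = {"Teacher": ["Professor", "Lecturer"]}          # superclass -> list of subclasses

    # Incomplete data: nobody is stored explicitly as a 'Teacher'.
    data = {"Professor": {"ada"}, "Lecturer": {"alan"}, "Student": {"grace"}}

    def rewrite(cls):
        """All classes whose instances are entailed to be instances of 'cls'."""
        seen, todo = set(), [cls]
        while todo:
            c = todo.pop()
            if c not in seen:
                seen.add(c)
                todo.extend(axioms.get(c, []))
        return seen

    def certain_answers(cls):
        return set().union(*(data.get(c, set()) for c in rewrite(cls)))

    print(certain_answers("Teacher"))   # ada and alan are certain answers despite the incomplete data
The 'pay as you go' idea is that the cost of such query answering should grow only with the expressive features the concrete ontology actually uses.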
Max ERC Funding
1 922 115 €
Duration
Start date: 2015-08-01, End date: 2020-07-31
Project acronym CompDB
Project The Computational Database for Real World Awareness
Researcher (PI) Thomas NEUMANN
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2016-COG
Summary Two major hardware trends have a significant impact on the architecture of database management systems (DBMSs): First, main memory sizes continue to grow significantly. Machines with 1TB of main memory and more are readily available at a relatively low price. Second, the number of cores in a system continues to grow, from currently 64 and more to hundreds in the near future.
This trend offers radically new opportunities for both business and science. It promises to allow for information-at-your-fingertips, i.e., large volumes of data can be analyzed and deeply explored online, in parallel to regular transaction processing. Currently, deep data exploration is performed outside of the database system, which necessitates huge data transfers. This impedes processing to the point that real-time interactive exploration is impossible. These new hardware capabilities now make it possible to build a true computational database system that integrates deep exploration functionality at the source of the data. This will lead to a drastic shift in how users interact with data, as interactive data exploration becomes possible at a massive scale for the first time.
Unfortunately, traditional DBMSs are simply not capable of tackling these new challenges.
Traditional techniques like interpreted code execution for query processing become a severe bottleneck in the presence of such massive parallelism, causing poor utilization of the hardware. I pursue a radically different approach: instead of adapting the traditional, disk-based approaches, I am integrating a new just-in-time compilation framework into the in-memory database that directly exploits the abundant, parallel hardware for large-scale data processing and exploration. By explicitly utilizing the available cores, I will be able to build a powerful computational database engine that scales the entire spectrum of data processing - from transactional to analytical to exploration workflows - far beyond traditional architectures.
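A hedged Python toy of the compilation idea (a production engine would generate machine code rather than Python source; the predicate and data are invented): interpreting a small expression tree for every tuple is contrasted with generating one specialized filter function up front, so that query execution runs without per-row interpretation overhead.
    import random
    import time

    rows = [(random.randint(0, 100), random.random()) for _ in range(300_000)]

    # Interpreted execution: walk a small expression tree for every row.
    plan = ("and", ("gt", 0, 50), ("lt", 1, 0.5))   # column 0 > 50 AND column 1 < 0.5
    def eval_node(node, row):
        op = node[0]
        if op == "and":
            return eval_node(node[1], row) and eval_node(node[2], row)
        if op == "gt":
            return row[node[1]] > node[2]
        if op == "lt":
            return row[node[1]] < node[2]
        raise ValueError(op)

    # "Compiled" execution: generate specialized code once, then run it with no per-row dispatch.
    namespace = {}
    exec("def compiled_filter(row): return row[0] > 50 and row[1] < 0.5", namespace)
    compiled_filter = namespace["compiled_filter"]

    for name, predicate in [("interpreted", lambda r: eval_node(plan, r)), ("compiled", compiled_filter)]:
        t0 = time.perf_counter()
        matches = sum(1 for r in rows if predicate(r))
        print(f"{name:11s}: {matches} rows in {time.perf_counter() - t0:.3f}s")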
Max ERC Funding
1 918 750 €
Duration
Start date: 2017-06-01, End date: 2022-05-31
Project acronym CSP-Infinity
Project Homogeneous Structures, Constraint Satisfaction Problems, and Topological Clones
Researcher (PI) Manuel Bodirsky
Host Institution (HI) TECHNISCHE UNIVERSITAET DRESDEN
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The complexity of constraint satisfaction problems (CSPs) is a field in rapid development, and involves central questions in graph homomorphisms, finite model theory, reasoning in artificial intelligence, and, last but not least, universal algebra. In previous work, it was shown that a substantial part of the results and tools for the study of the computational complexity of CSPs can be generalised to infinite domains when the constraints are definable over a homogeneous structure. There are many computational problems, in particular in temporal and spatial reasoning, that can be modelled in this way, but not over finite domains. Also in finite model theory and descriptive complexity, CSPs over infinite domains arise systematically as problems in monotone fragments of existential second-order logic.
In this project, we will advance in three directions:
(a) Further develop the universal-algebraic approach for CSPs over homogeneous structures. E.g., provide evidence for a universal-algebraic tractability conjecture for such CSPs.
(b) Apply the universal-algebraic approach. In particular, classify the complexity of all problems in guarded monotone SNP, a logic discovered independently in finite model theory and ontology-based data-access.
(c) Investigate the complexity of CSPs over those infinite domains that are most relevant in computer science, namely the integers, the rationals, and the reals. Can we adapt the universal-algebraic approach to this setting?
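For a concrete, deliberately simple example of a CSP over an infinite domain (assumed for illustration, not drawn from the project): conjunctions of constraints x < y interpreted over the rational numbers cannot be solved by enumerating domain values, yet satisfiability reduces to acyclicity of the constraint graph, which the following Python sketch checks in polynomial time.
    def satisfiable_over_Q(constraints):
        """constraints: pairs (x, y) meaning x < y over the rationals; satisfiable iff no directed cycle."""
        graph = {}
        for x, y in constraints:
            graph.setdefault(x, set()).add(y)
            graph.setdefault(y, set())
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {v: WHITE for v in graph}
        def has_cycle(v):
            colour[v] = GREY
            for w in graph[v]:
                if colour[w] == GREY or (colour[w] == WHITE and has_cycle(w)):
                    return True
            colour[v] = BLACK
            return False
        return not any(colour[v] == WHITE and has_cycle(v) for v in graph)

    print(satisfiable_over_Q([("a", "b"), ("b", "c")]))              # True: a < b < c has rational solutions
    print(satisfiable_over_Q([("a", "b"), ("b", "c"), ("c", "a")]))  # False: it would force a < a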
Max ERC Funding
1 416 250 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym DeciGUT
Project A Grand Unified Theory of Decidability in Logic-Based Knowledge Representation
Researcher (PI) Sebastian Rudolph
Host Institution (HI) TECHNISCHE UNIVERSITAET DRESDEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary "Logic-based knowledge representation (KR) constitutes a vital area of IT. The field inspires and guides scientific and technological developments enabling intelligent management of large and complex knowledge resources. Elaborate languages for specifying knowledge (so-called ontology languages) and querying it have been defined and standardized. Algorithms for automated reasoning and intelligent querying over knowledge resources are being developed, implemented and practically deployed on a wide scale.
Thereby, decidability investigations play a pivotal role to characterize what reasoning or querying tasks are at all computationally solvable.
Past decades have seen a proliferation of new decidable formalisms for KR, dominated by two major paradigms: description logics and rule-based approaches, most notably existential rules. Recently, these research lines have started to converge and first progress has been made toward identifying commonalities among the various formalisms. Still, the underlying principles for establishing their decidability remain disparate, ranging from proof-theoretic notions to model-theoretic ones.
DeciGUT will accomplish a major breakthrough in the field by establishing a "Grand Unified Theory" of decidability. We will provide a novel, powerful model-theoretic criterion inspired by advanced graph-theoretic notions. We will prove that the criterion indeed ensures decidability and that it subsumes most of (if not all) currently known decidable formalisms in the KR field.
We will exploit our results toward the definition of novel decidable KR languages of unprecedented expressivity. We will ultimately extend our framework to encompass more advanced KR features beyond standard first-order logic, such as counting and non-monotonic aspects.
Our research will draw from and significantly impact the scientific fields of AI, Database Theory and Logic, but also give rise to drastically improved practical information management technology.
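As a hedged illustration of why decidability is delicate for existential rules (the rule, facts and naive chase below are a textbook toy, not DeciGUT machinery): the rule person(x) -> EXISTS y: parent(x, y) AND person(y) forces the chase to invent an unbounded chain of fresh nulls, so reasoning procedures cannot simply run the chase to completion.
    from itertools import count

    facts = {("person", "alice")}
    fresh = count()

    def chase_step(facts):
        """One chase round: give every person without a recorded parent a fresh one (standard-chase style)."""
        new = set()
        for fact in list(facts):
            if fact[0] != "person":
                continue
            x = fact[1]
            if not any(f[0] == "parent" and f[1] == x for f in facts | new):
                y = f"_n{next(fresh)}"               # fresh labelled null witnessing the existential
                new |= {("parent", x, y), ("person", y)}
        return new

    for _ in range(4):                               # bounded by hand; left unbounded, this never stops
        new = chase_step(facts)
        if not new:
            break
        facts |= new
    print(sorted(facts))                             # an ever-growing chain alice -> _n0 -> _n1 -> ...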
Max ERC Funding
1 814 937 €
Duration
Start date: 2018-10-01, End date: 2023-09-30