Project acronym 4DRepLy
Project Closing the 4D Real World Reconstruction Loop
Researcher (PI) Christian THEOBALT
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary 4D reconstruction, i.e. camera-based dense reconstruction of dynamic scenes, is a grand challenge in computer graphics and computer vision. Despite great progress, capturing the complex, diverse real world in 4D outside a studio is still far from feasible. 4DRepLy builds a new generation of high-fidelity 4D reconstruction (4DRecon) methods. They will be the first to efficiently capture all types of deformable objects (humans and other types) in crowded real-world scenes with a single color or depth camera. They capture space-time coherent deforming geometry, motion, high-frequency reflectance and illumination at unprecedented detail, and will be the first to handle difficult occlusions, topology changes and large groups of interacting objects. They automatically adapt to new scene types, yet deliver models with meaningful, interpretable parameters. This requires far-reaching contributions: First, we develop groundbreaking new plasticity-enhanced model-based 4D reconstruction methods that automatically adapt to new scenes. Second, we develop radically new machine-learning-based dense 4D reconstruction methods. Third, these model- and learning-based methods are combined in two revolutionary new classes of 4DRecon methods: 1) advanced fusion-based methods and 2) methods with deep architectural integration. Both 1) and 2) are automatically designed in the 4D Real World Reconstruction Loop, a revolutionary new design paradigm in which 4DRecon methods refine and adapt themselves while continuously processing unlabeled real-world input. This overcomes the previously unbreakable scalability barrier to real-world scene diversity, complexity and generality. This paradigm shift opens up a new research direction in graphics and vision and has far-reaching relevance across many scientific fields. It enables new applications of profound societal reach and significant economic impact, e.g. for visual media and virtual/augmented reality, and for future autonomous and robotic systems.
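The model-based ingredient of such methods can be illustrated in miniature: fit a parametric template (here just a global translation plus regularised per-point displacements) to an observed point cloud by gradient descent on a data term plus a smoothness prior. This is a toy sketch on synthetic data, not the project's actual method:

    import numpy as np

    rng = np.random.default_rng(0)

    # Template geometry (the "model") and a synthetic observation of it,
    # produced by an unknown translation plus small non-rigid noise.
    template = rng.random((200, 3))
    true_shift = np.array([0.3, -0.1, 0.2])
    observation = template + true_shift + 0.01 * rng.standard_normal(template.shape)

    # Model parameters: a global translation t and per-point displacements D.
    t = np.zeros(3)
    D = np.zeros_like(template)
    LAMBDA, LR = 10.0, 0.05            # regularisation weight and step size

    for step in range(500):
        residual = template + t + D - observation          # data term residuals
        grad_t = 2.0 * residual.sum(axis=0)
        grad_D = 2.0 * residual + 2.0 * LAMBDA * D          # smoothness keeps D small
        t -= LR * grad_t / len(template)
        D -= LR * grad_D

    print("recovered translation:", np.round(t, 3))         # close to (0.3, -0.1, 0.2)
    print("mean fitting error   :",
          np.linalg.norm(template + t + D - observation, axis=1).mean())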
Max ERC Funding
1 977 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym AMAREC
Project Amenability, Approximation and Reconstruction
Researcher (PI) Wilhelm WINTER
Host Institution (HI) WESTFAELISCHE WILHELMS-UNIVERSITAET MUENSTER
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary Algebras of operators on Hilbert spaces were originally introduced as the right framework for the mathematical description of quantum mechanics. In modern mathematics the scope has much broadened due to the highly versatile nature of operator algebras. They are particularly useful in the analysis of groups and their actions. Amenability is a finiteness property which occurs in many different contexts and which can be characterised in many different ways. We will analyse amenability in terms of approximation properties, in the frameworks of abstract C*-algebras, of topological dynamical systems, and of discrete groups. Such approximation properties will serve as bridging devices between these setups, and they will be used to systematically recover geometric information about the underlying structures. When passing from groups, and more generally from dynamical systems, to operator algebras, one loses information, but one gains new tools to isolate and analyse pertinent properties of the underlying structure. We will mostly be interested in the topological setting, and in the associated C*-algebras. Amenability of groups or of dynamical systems then translates into the completely positive approximation property. Systems of completely positive approximations store all the essential data about a C*-algebra, and sometimes one can arrange the systems so that one can directly read off such information. For transformation group C*-algebras, one can achieve this by using approximation properties of the underlying dynamics. To some extent one can even go back, and extract dynamical approximation properties from completely positive approximations of the C*-algebra. This interplay between approximation properties in topological dynamics and in noncommutative topology carries a surprisingly rich structure. It connects directly to the heart of the classification problem for nuclear C*-algebras on the one hand, and to central open questions on amenable dynamics on the other.
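For reference, the two notions the abstract links can be stated precisely; the following are the standard definitions and the standard bridging theorem, not specific to this proposal:

    \paragraph{Amenability (F{\o}lner condition).} A discrete group $G$ is amenable
    if and only if for every finite $S \subseteq G$ and every $\varepsilon > 0$ there is a
    finite non-empty $F \subseteq G$ with $|gF \,\triangle\, F| \le \varepsilon\,|F|$
    for all $g \in S$.

    \paragraph{Completely positive approximation property.} A $C^*$-algebra $A$ is nuclear
    if and only if there exist finite-dimensional $C^*$-algebras $F_\lambda$ and completely
    positive contractive maps
    \[
      \varphi_\lambda : A \to F_\lambda, \qquad \psi_\lambda : F_\lambda \to A,
      \qquad \text{such that } \psi_\lambda \circ \varphi_\lambda \to \mathrm{id}_A
      \text{ in point-norm.}
    \]

    \paragraph{Bridge.} A discrete group $G$ is amenable if and only if its reduced group
    $C^*$-algebra $C^*_r(G)$ is nuclear.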
Max ERC Funding
1 596 017 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym ANTICIPATE
Project Anticipatory Human-Computer Interaction
Researcher (PI) Andreas BULLING
Host Institution (HI) UNIVERSITAET STUTTGART
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Even after three decades of research on human-computer interaction (HCI), current general-purpose user interfaces (UI) still lack the ability to attribute mental states to their users, i.e. they fail to understand users' intentions and needs and to anticipate their actions. This drastically restricts their interactive capabilities.
ANTICIPATE aims to establish the scientific foundations for a new generation of user interfaces that pro-actively adapt to users' future input actions by monitoring their attention and predicting their interaction intentions, thereby significantly improving the naturalness, efficiency, and user experience of the interactions. Realising this vision of anticipatory human-computer interaction requires groundbreaking advances in everyday sensing of user attention from eye and brain activity. We will further pioneer methods to predict entangled user intentions and forecast interactive behaviour with fine temporal granularity during interactions in everyday stationary and mobile settings. Finally, we will develop fundamental interaction paradigms that enable anticipatory UIs to pro-actively adapt to users' attention and intentions in a mindful way. The new capabilities will be demonstrated in four challenging cases: 1) mobile information retrieval, 2) intelligent notification management, 3) autism diagnosis and monitoring, and 4) computer-based training.
Anticipatory human-computer interaction offers a strong complement to existing UI paradigms that only react to user input post-hoc. If successful, ANTICIPATE will deliver the first important building blocks for implementing Theory of Mind in general-purpose UIs. As such, the project has the potential to drastically improve the billions of interactions we perform with computers every day, to trigger a wide range of follow-up research in HCI as well as adjacent areas within and outside computer science, and to act as a key technical enabler for new applications, e.g. in healthcare and education.
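To make "predicting interaction intentions" concrete, a minimal baseline is a first-order Markov model over UI events that proposes the most likely next action. The event names below are made up for illustration; the project targets far richer attention- and intention-aware predictors:

    from collections import defaultdict, Counter

    # Toy interaction log: sequences of UI events (hypothetical event names).
    sessions = [
        ["open_mail", "read", "reply", "send"],
        ["open_mail", "read", "archive"],
        ["open_mail", "read", "reply", "send"],
        ["search", "open_doc", "edit", "save"],
    ]

    # First-order Markov model: count transitions between consecutive events.
    transitions = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            transitions[prev][nxt] += 1

    def predict_next(event):
        """Return the most likely next event after `event`, or None if unseen."""
        counts = transitions.get(event)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

    print(predict_next("read"))       # -> 'reply' (2 of 3 observed continuations)
    print(predict_next("open_doc"))   # -> 'edit'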
Max ERC Funding
1 499 625 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym AV-SMP
Project Algorithmic Verification of String Manipulating Programs
Researcher (PI) Anthony LIN
Host Institution (HI) TECHNISCHE UNIVERSITAET KAISERSLAUTERN
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Strings are among the most fundamental and most commonly used data types in virtually all modern programming languages, especially with the rapidly growing popularity of scripting languages (e.g. JavaScript and Python). Programs written in such languages tend to perform heavy string manipulations, which are complex to reason about and can easily lead to programming mistakes. In some cases, such mistakes can have serious consequences; in client-side web applications, for example, they can enable cross-site scripting (XSS) attacks that lead to a security breach by a malicious user.
The central objective of the proposed project is to develop novel verification algorithms for analysing the correctness (esp. with respect to safety and termination properties) of programs with string variables, and transform them into robust verification tools. To meet this key objective, we will make fundamental breakthroughs on both theoretical and tool implementation challenges. On the theoretical side, we address two important problems: (1) design expressive constraint languages over strings (in combination with other data types like integers) that permit decidability with good complexity, and (2) design generic semi-algorithms for verifying string programs that have strong theoretical performance guarantees. On the implementation side, we will address the challenging problem of designing novel implementation methods that can substantially speed up the basic string analysis procedures in practice. Finally, as a proof of concept, we will apply our technologies to two key application domains: (1) automatic detection of XSS vulnerabilities in web applications, and (2) automatic grading systems for a programming course.
The project will not only make fundamental theoretical contributions — potentially solving long-standing open problems in the area — but also yield powerful methods that can be used in various applications.
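A toy illustration of why reasoning about string-manipulating code is hard: a naive sanitizer that strips "<script>" once can be bypassed by a crafted input, and finding such inputs by enumeration does not scale, which is exactly the kind of question a string constraint solver is meant to answer symbolically. Sketch (plain Python, no solver involved):

    def sanitize(s: str) -> str:
        """Naive sanitizer: remove the first occurrence of '<script>' (buggy on purpose)."""
        return s.replace("<script>", "", 1)

    # A crafted input whose sanitized form still contains '<script>':
    attack = "<scr<script>ipt>alert(1)"
    print(sanitize(attack))                 # -> '<script>alert(1)'  (sanitizer bypassed)
    print("<script>" in sanitize(attack))   # -> True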
Max ERC Funding
1 496 687 €
Duration
Start date: 2017-11-01, End date: 2022-10-31
Project acronym BigEarth
Project Accurate and Scalable Processing of Big Data in Earth Observation
Researcher (PI) Begüm Demir
Host Institution (HI) TECHNISCHE UNIVERSITAT BERLIN
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary During the last decade, a huge number of earth observation (EO) satellites with optical and Synthetic Aperture Radar sensors onboard have been launched, and advances in satellite systems have increased the amount, variety and spatial/spectral resolution of EO data. This has led to massive EO data archives containing huge numbers of remote sensing (RS) images, from which mining and retrieving useful information are challenging. In view of that, content-based image retrieval (CBIR) has attracted great attention in the RS community. However, existing RS CBIR systems have limitations regarding: i) the characterization of the high-level semantic content and spectral information present in RS images, and ii) large-scale RS CBIR problems, since their search mechanisms are time-consuming and not scalable for operational applications. The BigEarth project aims to develop highly innovative feature extraction and content-based retrieval methods and tools for RS images, which can significantly improve the state of the art both in theory and in the tools currently available. To this end, very important scientific and practical problems will be addressed by focusing on the main challenges of Big EO data in RS image characterization, indexing and search from massive archives. In particular, novel methods and tools will be developed, aiming to: 1) characterize and exploit the high-level semantic content and spectral information present in RS images; 2) extract features directly from compressed RS images; 3) achieve accurate and scalable RS image indexing and retrieval; and 4) integrate feature representations of different RS image sources into a unified form of feature representation. Moreover, a benchmark archive with a large number of multi-source RS images will be constructed. From an application point of view, the developed methodologies and tools will have a significant impact on many EO data applications, such as the accurate and scalable retrieval of specific man-made structures and burned forest areas.
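The basic CBIR pipeline the abstract builds on (describe each image by a feature vector, index the vectors, answer queries by nearest-neighbour search) can be sketched in a few lines. The "archive" below is random data standing in for RS tiles, and the histogram descriptor is deliberately simplistic; the project targets far richer semantic and spectral features and scalable indexing:

    import numpy as np

    rng = np.random.default_rng(0)

    def features(image, bins=16):
        """Per-band intensity histogram, L1-normalised: a deliberately simple descriptor."""
        hists = [np.histogram(image[..., b], bins=bins, range=(0.0, 1.0))[0]
                 for b in range(image.shape[-1])]
        f = np.concatenate(hists).astype(float)
        return f / f.sum()

    # Hypothetical archive: 1000 random 4-band 'images' (stand-ins for RS tiles).
    archive = rng.random((1000, 32, 32, 4))
    index = np.stack([features(img) for img in archive])   # the search index

    def retrieve(query_image, k=5):
        """Return indices of the k archive images closest to the query in feature space."""
        q = features(query_image)
        dists = np.linalg.norm(index - q, axis=1)
        return np.argsort(dists)[:k]

    print(retrieve(archive[42]))   # the query itself should rank first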
Max ERC Funding
1 491 479 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym CGinsideNP
Project Complexity Inside NP - A Computational Geometry Perspective
Researcher (PI) Wolfgang MULZER
Host Institution (HI) FREIE UNIVERSITAET BERLIN
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Traditional complexity theory focuses on the dichotomy between P and NP-hard problems. Lately, it has become increasingly clear that this misses a major part of the picture. Results by the PI and others offer glimpses of a fascinating structure hiding inside NP: new computational problems that seem to lie between polynomial and NP-hard have been identified; new conditional lower bounds for problems with large polynomial running times have been found; long-held beliefs on the difficulty of problems in P have been overturned. Computational geometry plays a major role in these developments, providing some of the main questions and concepts.
We propose to explore this fascinating landscape inside NP from the perspective of computational geometry, guided by three complementary questions:
(A) What can we say about the complexity of search problems derived from existence theorems in discrete geometry? These problems offer a new perspective on complexity classes previously studied in algorithmic game theory (PPAD, PLS, CLS). Preliminary work indicates that they have the potential to answer long-standing open questions on these classes.
(B) Can we provide meaningful conditional lower bounds for geometric problems for which we have only algorithms with large polynomial running time? Prompted by a question raised by the PI and collaborators, such lower bounds were developed for the Fréchet distance. Are similar results possible for problems not related to distance measures? If so, this could dramatically extend the traditional theory based on 3SUM-hardness to a much more diverse and nuanced picture.
(C) Can we find subquadratic decision trees and faster algorithms for 3SUM-hard problems? After recent results by Pettie and Grønlund on 3SUM and by the PI and collaborators on the Fréchet distance, we have the potential to gain new insights on this large class of well-studied problems and to improve long-standing complexity bounds for them.
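For context, 3SUM is the problem referenced in (C): given n numbers, decide whether some three of them sum to zero. The classical quadratic-time algorithm below (sort once, then sweep with two pointers per anchor element) is the baseline against which "3SUM-hardness" reductions are measured; improving substantially on it for the whole class of 3SUM-hard problems is a central open question.

    def three_sum(numbers):
        """Classical O(n^2) algorithm: sort, then sweep with two pointers per anchor."""
        a = sorted(numbers)
        n = len(a)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            while lo < hi:
                s = a[i] + a[lo] + a[hi]
                if s == 0:
                    return (a[i], a[lo], a[hi])
                if s < 0:
                    lo += 1
                else:
                    hi -= 1
        return None

    print(three_sum([8, -25, 4, 10, -2, 17, -6]))   # -> (-25, 8, 17)
    print(three_sum([1, 2, 3]))                     # -> None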
Max ERC Funding
1 486 800 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym COCAN
Project Complexity and Condition in Algebra and Numerics
Researcher (PI) Peter BÜRGISSER
Host Institution (HI) TECHNISCHE UNIVERSITAT BERLIN
Call Details Advanced Grant (AdG), PE1, ERC-2017-ADG
Summary "This proposal connects three areas that are considered distant from each other: computational complexity, algebraic geometry, and numerics. In the last decade, it became clear that the fundamental questions of computational complexity (P vs NP) should be studied in algebraic settings, linking them to problems in algebraic geometry. Recent progress on this challenging and very difficult questions led to surprising progress in computational invariant theory, which we want to explore thoroughly. We expect this to lead to solutions of computational problems in invariant theory that currently are considered infeasible. The complexity of Hilbert's null cone (the set of ""singular objects'') appears of paramount importance here. These investigations will also shed new light on the foundational questions of algebraic complexity theory. As an essential new ingredient to achieve this, we will tackle the arising algebraic computational problems by means of approximate numeric computations, taking into account the concept of numerical condition.
A related goal of the proposal is to develop a theory of efficient and numerically stable algorithms in algebraic geometry that reflects the properties of structured systems of polynomial equations, possibly with singularities. While there are various heuristics, a satisfactory theory so far only exists for unstructured systems over the complex numbers (recent solution of Smale's 17th problem), which seriously limits its range of applications. In this framework, the quality of numerical algorithms is gauged by a probabilistic analysis that shows small average (or smoothed) running time. One of the main challenges here consists of a probabilistic study of random structured polynomial systems. We will also develop and analyze numerical algorithms for finding or describing the set of real solutions, e.g., in terms of their homology.
"
Max ERC Funding
2 297 163 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym DeciGUT
Project A Grand Unified Theory of Decidability in Logic-Based Knowledge Representation
Researcher (PI) Sebastian Rudolph
Host Institution (HI) TECHNISCHE UNIVERSITAET DRESDEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary "Logic-based knowledge representation (KR) constitutes a vital area of IT. The field inspires and guides scientific and technological developments enabling intelligent management of large and complex knowledge resources. Elaborate languages for specifying knowledge (so-called ontology languages) and querying it have been defined and standardized. Algorithms for automated reasoning and intelligent querying over knowledge resources are being developed, implemented and practically deployed on a wide scale.
Thereby, decidability investigations play a pivotal role to characterize what reasoning or querying tasks are at all computationally solvable.
Past decades have seen a proliferation of new decidable formalisms for KR, dominated by two major paradigms: description logics and rule-based approaches, most notably existential rules. Recently, these research lines have started to converge and first progress has been made toward identifying commonalities among the various formalisms. Still, the underlying principles for establishing their decidability remain disparate, ranging from proof-theoretic notions to model-theoretic ones.
DeciGUT will accomplish a major breakthrough in the field by establishing a ""Grand Unified Theory"" of decidability. We will provide a novel, powerful model-theoretic criterion inspired by advanced graph-theoretic notions. We will prove that the criterion indeed ensures decidability and that it subsumes most of (if not all) currently known decidable formalisms in the KR field.
We will exploit our results toward the definition of novel decidable KR languages of unprecedented expressivity. We will ultimately extend our framework to encompass more advanced KR features beyond standard first order logic such as counting and non-monotonic aspects.
Our research will draw from and significantly impact the scientific fields of AI, Database Theory and Logic, but also give rise to drastically improved practical information management technology."
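Existential rules, one of the two paradigms named above, can be made concrete with a tiny example: the rule "every person has a parent" is applied by the chase procedure, which invents fresh labelled nulls to satisfy the existential quantifier; this is precisely why termination, and hence decidability of reasoning, becomes subtle. A minimal sketch with a hypothetical fact base:

    import itertools

    # Facts are (predicate, argument...) tuples.
    facts = {("Person", "alice")}

    fresh = (f"_n{i}" for i in itertools.count())   # generator of labelled nulls

    def chase_step(facts):
        """Apply the existential rule  Person(x) -> exists y: parentOf(y, x) & Person(y)
        to every Person fact that does not yet have a parent; return the new facts."""
        new = set()
        for pred, *args in facts:
            if pred == "Person":
                x = args[0]
                has_parent = any(f[0] == "parentOf" and f[2] == x for f in facts)
                if not has_parent:
                    y = next(fresh)
                    new.add(("parentOf", y, x))
                    new.add(("Person", y))
        return new

    # Each step invents a new ancestor, so the naive chase never terminates on this rule.
    for step in range(2):
        facts |= chase_step(facts)
        print(sorted(facts))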
Max ERC Funding
1 814 937 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym DeeViSe
Project Deep Learning for Dynamic 3D Visual Scene Understanding
Researcher (PI) Bastian LEIBE
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary Over the past 5 years, deep learning has exerted a tremendous, transformational effect on the field of computer vision. However, deep neural networks (DNNs) can only realize their full potential when applied in an end-to-end manner, i.e., when every stage of the processing pipeline is differentiable with respect to the network’s parameters, such that all of those parameters can be optimized together. Such end-to-end learning solutions are still rare for computer vision problems, in particular for dynamic visual scene understanding tasks. Moreover, feed-forward processing, as done in most DNN-based vision approaches, is only a tiny fraction of what the human brain can do. Feedback processes, temporal information processing, and memory mechanisms form an important part of our human scene understanding capabilities. Those mechanisms are currently underexplored in computer vision.
The goal of this proposal is to remove this bottleneck and to design end-to-end deep learning approaches that can realize the full potential of DNNs for dynamic visual scene understanding. We will make use of the positive interactions and feedback processes between multiple vision modalities and combine them to work towards a common goal. In addition, we will impart deep learning approaches with a notion of what it means to move through a 3D world by incorporating temporal continuity constraints, as well as by developing novel deep associative and spatial memory mechanisms.
The results of this research will enable deep neural networks to reach significantly improved dynamic scene understanding capabilities compared to today’s methods. This will have an immediate positive effect for applications in need for such capabilities, most notably for mobile robotics and intelligent vehicles.
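The general shape of an end-to-end, temporally aware architecture of the kind argued for above can be sketched as a per-frame convolutional encoder whose features are aggregated by a recurrent memory over time. Layer sizes, shapes and the task head are illustrative only (PyTorch assumed); this is not the project's architecture:

    import torch
    import torch.nn as nn

    class TemporalSceneNet(nn.Module):
        """Per-frame convolutional encoder followed by a GRU memory over time."""
        def __init__(self, num_classes=10, feat_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
            )
            self.memory = nn.GRU(feat_dim, feat_dim, batch_first=True)
            self.head = nn.Linear(feat_dim, num_classes)

        def forward(self, video):                       # video: (B, T, 3, H, W)
            b, t = video.shape[:2]
            frames = video.flatten(0, 1)                # (B*T, 3, H, W)
            feats = self.encoder(frames).view(b, t, -1)
            temporal, _ = self.memory(feats)            # recurrent state carries the past
            return self.head(temporal)                  # per-frame predictions (B, T, C)

    logits = TemporalSceneNet()(torch.randn(2, 5, 3, 64, 64))
    print(logits.shape)                                 # torch.Size([2, 5, 10])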
Max ERC Funding
2 000 000 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym DLT
Project Deep Learning Theory: Geometric Analysis of Capacity, Optimization, and Generalization for Improving Learning in Deep Neural Networks
Researcher (PI) Guido Francisco MONTUFAR CUARTAS
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Deep Learning is one of the most vibrant areas of contemporary machine learning and one of the most promising approaches to Artificial Intelligence. Deep Learning drives the latest systems for image, text, and audio processing, as well as an increasing number of new technologies. The goal of this project is to make progress on key open problems in Deep Learning, specifically regarding the capacity, optimization, and regularization of these algorithms. The idea is to consolidate a theoretical basis that allows us to pin down the inner workings of the present success of Deep Learning and make it more widely applicable, in particular in situations with limited data and challenging problems in reinforcement learning. The approach is based on the geometry of neural networks and exploits innovative mathematics, drawing on information geometry and algebraic statistics. This is a timely and unique proposal that holds promise to vastly accelerate the progress of Deep Learning into new frontiers.
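One concrete way in which network geometry relates to capacity: a ReLU network partitions its input space into regions on which it is affine, each identified by an on/off activation pattern, and counting the patterns hit by random sample points gives a crude empirical lower bound on how many such regions a given architecture realises. A small numerical sketch with illustrative architectures and sample sizes:

    import numpy as np

    rng = np.random.default_rng(0)

    def count_activation_patterns(widths, n_samples=100_000, input_dim=2):
        """Empirical lower bound on the number of activation patterns of a random
        ReLU net: count distinct on/off patterns over random input points."""
        weights = []
        d = input_dim
        for w in widths:
            weights.append((rng.standard_normal((d, w)), rng.standard_normal(w)))
            d = w
        x = rng.uniform(-1.0, 1.0, size=(n_samples, input_dim))
        patterns = np.empty((n_samples, 0), dtype=bool)
        for W, b in weights:
            pre = x @ W + b
            patterns = np.hstack([patterns, pre > 0])   # record the activation pattern
            x = np.maximum(pre, 0.0)                    # ReLU
        return len({p.tobytes() for p in patterns})

    print(count_activation_patterns([8]))        # one hidden layer of width 8
    print(count_activation_patterns([4, 4]))     # deeper net with the same number of units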
Max ERC Funding
1 500 000 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym DYMO
Project Dynamic dialogue modelling
Researcher (PI) Milica GASIC
Host Institution (HI) HEINRICH-HEINE-UNIVERSITAET DUESSELDORF
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary With the prevalence of information technology in our daily lives, our ability to interact with machines in increasingly simplified and more human-like ways has become paramount. Information is becoming ever more abundant, but our access to it is limited not least by technological constraints. Spoken dialogue systems address this issue by providing an intelligent speech interface that facilitates swift, human-like acquisition of information.
The advantages of speech interfaces are already evident from the rise of personal assistants such as Siri, Google Assistant, Cortana or Amazon Alexa. In these systems, however, the user is limited to a simple query, and the systems attempt to provide an answer within one or two turns of dialogue. To date, significant parts of these systems are rule-based and do not readily scale to changes in the domain of operation. Furthermore, rule-based systems can be brittle when speech recognition errors occur.
The vision of this project is to develop novel dialogue models that provide natural human-computer interaction beyond simple information-seeking dialogues and that continuously evolve as they are being used by exploiting both dialogue and non-dialogue data. Building such robust and intelligent spoken dialogue systems poses serious challenges in artificial intelligence and machine learning. The project will tackle four bottleneck areas that require fundamental research: automated knowledge acquisition, optimisation of complex behaviour, realistic user models and sentiment awareness. Taken together, the proposed solutions have the potential to transform the way we access information in areas as diverse as e-commerce, government, healthcare and education.
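A core ingredient of statistical dialogue modelling is tracking a distribution (belief state) over possible user goals under noisy speech recognition. A minimal sketch for a single-slot toy domain (cuisine), with a hand-picked confusion probability; real systems track far richer states and learn the policy on top:

    # Belief tracking over a tiny goal set: which cuisine does the user want?
    goals = ["italian", "chinese", "indian"]
    belief = {g: 1.0 / len(goals) for g in goals}      # uniform prior

    def update(belief, asr_hypothesis, p_correct=0.7):
        """Bayesian update: the ASR hypothesis is right with prob. p_correct,
        otherwise the user meant one of the other goals uniformly at random."""
        posterior = {}
        for g in belief:
            likelihood = p_correct if g == asr_hypothesis else (1 - p_correct) / (len(belief) - 1)
            posterior[g] = likelihood * belief[g]
        z = sum(posterior.values())
        return {g: p / z for g, p in posterior.items()}

    # Two noisy observations of the same request sharpen the belief:
    belief = update(belief, "italian")
    belief = update(belief, "italian")
    print({g: round(p, 3) for g, p in belief.items()})
    # -> belief concentrates on 'italian'; a policy would now confirm or act on it.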
Max ERC Funding
1 499 956 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym ECHO
Project Practical Imaging and Inversion of Transient Light Transport
Researcher (PI) Matthias HULLIN
Host Institution (HI) RHEINISCHE FRIEDRICH-WILHELMS-UNIVERSITAT BONN
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary The automated analysis of visual data is a key enabler for industrial and consumer technologies and of immense economic and social importance. Its main challenge lies in the inherent ambiguity of images due to the very mechanism of image capture: light reaching a pixel on different paths or at different times is mixed irreversibly. Consequently, even after decades of extensive research, problems like deblurring or descattering, geometry/material estimation or motion tracking are still largely unsolved and will remain so in the foreseeable future.
Transient imaging (TI) tackles this problem by recording ultrafast optical echoes that unmix light contributions by their total path length. So far, TI has required high-end measurement setups. By introducing computational TI (CTI), we paved the way for lightweight capture of transient data using consumer hardware. We showed the potential of CTI in scenarios like robust range measurement, descattering and imaging of objects outside the line of sight, tasks that had been considered difficult to impossible so far.
The ECHO project is rooted in computer graphics and computational imaging. In it, we will overcome the practical limitations that are hampering a large-scale deployment of TI: the time required to capture data and to reconstruct the desired information, both on the order of seconds to minutes; the lack of dedicated image priors and of quality guarantees for the reconstruction; the limited accuracy and performance of forward models; and the lack of ground-truth data and benchmark methods.
Over the course of ECHO, we will pioneer advanced capture setups and strategies, signal formation models, priors and numerical methods, for the first time enabling real-time reconstruction and analysis of transient light transport in complex and dynamic scenes. The methodology developed in this far-reaching project will turn TI from a research technology into a family of practical tools that will immediately benefit many applications.
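The central idea, unmixing light by total path length, can be illustrated by simulating the time-resolved response of a single pixel that receives a direct surface return plus a later, weaker interreflection. Geometry, units and sensor parameters below are made up for illustration:

    import numpy as np

    C = 0.3            # speed of light in metres per nanosecond
    BIN_NS = 0.1       # temporal resolution of the sensor (nanoseconds per bin)

    # Hypothetical contributions reaching one pixel: (path length in metres, intensity).
    contributions = [
        (2.0 * 1.5, 1.00),          # direct bounce off a wall 1.5 m away (there and back)
        (2.0 * 1.5 + 1.2, 0.25),    # indirect bounce via a nearby object, 1.2 m extra path
    ]

    # In a conventional image these contributions are summed into a single value:
    steady_state = sum(i for _, i in contributions)

    # A transient sensor instead histograms them by arrival time, keeping them separable:
    n_bins = 200
    transient = np.zeros(n_bins)
    for path_len, intensity in contributions:
        t_ns = path_len / C
        transient[int(t_ns / BIN_NS)] += intensity

    print("steady-state pixel value:", steady_state)
    print("occupied time bins:", np.nonzero(transient)[0])   # two distinct echoes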
Max ERC Funding
1 525 840 €
Duration
Start date: 2018-12-01, End date: 2023-11-30
Project acronym ENGAGES
Project Next generation algorithms for grabbing and exploiting symmetry
Researcher (PI) Pascal Schweitzer
Host Institution (HI) TECHNISCHE UNIVERSITAET KAISERSLAUTERN
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Symmetry is a phenomenon that appears in many different contexts.
Algorithmic symmetry detection and exploitation is the concept of finding intrinsic symmetries of a given object and then using these symmetries to our advantage. Application areas of algorithmic symmetry detection and exploitation range from convolutional neural networks in machine learning to computer graphics, chemical databases and beyond.
In contrast to this widespread use, our understanding of the theoretical foundation (namely the graph isomorphism problem) is incomplete and current algorithmic symmetry tools are inadequate for big data applications. Hence, EngageS addresses these key challenges in the field using a systematic approach to the theory and practice of symmetry detection. It thereby also fixes the existing lack of interplay between theory and practice, which is part of the problem.
EngageS' main aims are to tackle the classical and descriptive complexity of the graph isomorphism problem and to design the next generation of symmetry detection algorithms. As key ideas for resolving the complexity, EngageS offers three new approaches to proving lower bounds and a new method to settle the descriptive complexity.
EngageS will also develop practical symmetry detection algorithms for big data, exploiting parallelism and memory hierarchies of modern machines, and will introduce the concept of and a road map to exploiting absence of symmetry. Overall EngageS will establish a comprehensive software library that will serve as a platform for integrated research on the algorithmic treatment of symmetry.
In summary, EngageS will develop fast, efficient and accessible symmetry detection tools that will be used to solve complex algorithmic problems in a range of fields including combinatorial algorithms, generation problems, and canonization.
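A basic subroutine inside practical graph-isomorphism and symmetry tools is colour refinement (the 1-dimensional Weisfeiler-Leman algorithm): vertices are iteratively recoloured by the multiset of their neighbours' colours until the colouring stabilises; vertices that end up in different classes can never be exchanged by a symmetry. A minimal sketch:

    from collections import Counter

    def color_refinement(adj):
        """1-dimensional Weisfeiler-Leman: refine vertex colours until stable.
        `adj` maps each vertex to a list of its neighbours."""
        colors = {v: 0 for v in adj}                      # start with a uniform colouring
        while True:
            signatures = {
                v: (colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
                for v in adj
            }
            # Re-index the signatures to obtain the next colouring.
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            new_colors = {v: palette[signatures[v]] for v in adj}
            if new_colors == colors:
                # Stable: vertices in different colour classes cannot be swapped
                # by any symmetry of the graph.
                return colors
            colors = new_colors

    # A 6-cycle: every vertex looks alike, so refinement cannot distinguish them.
    cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(color_refinement(cycle))          # all vertices share one colour

    # A path on 4 vertices: endpoints and inner vertices separate into two classes.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(color_refinement(path))           # {0: 0, 1: 1, 2: 1, 3: 0}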
Max ERC Funding
1 999 094 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym FairSocialComputing
Project Foundations for Fair Social Computing
Researcher (PI) Krishna GUMMADI
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Social computing represents a societal-scale symbiosis of humans and computational systems, where humans interact via and with computers, actively providing inputs to influence, and being influenced by, the outputs of the computations. Recently, several concerns have been raised about the unfairness of the social computations pervading our lives, ranging from the potential for discrimination in machine-learning-based predictive analytics and implicit biases in online search and recommendations to their general lack of transparency about what sensitive data about users they use and how they use them.
In this proposal, I propose ten fairness principles for social computations. They span all three main categories of organizational justice: distributive fairness (fairness of the outcomes or ends of computations), procedural fairness (fairness of the process or means of computations), and informational fairness (transparency of the outcomes and process of computations), and they cover a variety of unfairness perceptions about social computations.
I describe the fundamental and novel technical challenges that arise when applying these principles to social computations. These challenges relate to the operationalization (measurement), synthesis and analysis of fairness in computations. Tackling them requires methodologies from a number of sub-areas within CS, including learning, data mining, IR, game theory, privacy, and distributed systems.
I discuss our recent breakthroughs in tackling some of these challenges, particularly our idea of fairness constraints, a flexible mechanism that allows us to constrain learning models to synthesize fair computations that are non-discriminatory, the first of our ten principles. I outline our plans to build upon our results to tackle the challenges that arise from the other nine fairness principles. Successful execution of the proposal will provide the foundations for fair social computing in the future.
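The simplest distributive criterion, demographic parity, illustrates what "operationalizing" fairness means: the rate of positive decisions should not depend on a sensitive attribute. The toy check below uses made-up scores and group labels, and the per-group thresholds are only a crude way of equalising rates; they are not the fairness-constraints mechanism described above:

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical classifier scores and a binary sensitive attribute (group 0 / group 1).
    group = rng.integers(0, 2, size=2000)
    scores = rng.random(2000) + 0.10 * group          # scores drift upward for group 1

    def positive_rates(decisions, group):
        """Positive-decision rate per group; their gap measures demographic (dis)parity."""
        return {g: decisions[group == g].mean() for g in (0, 1)}

    # Unconstrained decision rule: one global threshold.
    decisions = scores > 0.5
    print("per-group acceptance rates:", positive_rates(decisions, group))

    # Crude repair: per-group thresholds chosen so both groups have ~50% acceptance.
    thresholds = {g: np.quantile(scores[group == g], 0.5) for g in (0, 1)}
    fair_decisions = scores > np.array([thresholds[g] for g in group])
    print("after equalising rates:    ", positive_rates(fair_decisions, group))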
Max ERC Funding
2 487 500 €
Duration
Start date: 2018-07-01, End date: 2023-06-30
Project acronym FRAPPANT
Project Formal Reasoning About Probabilistic Programs: Breaking New Ground for Automation
Researcher (PI) Joost Pieter KATOEN
Host Institution (HI) RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Probabilistic programs describe recipes for inferring statistical conclusions about data from a complex mixture of uncertain data and real-world observations. They can represent probabilistic graphical models far beyond the capabilities of Bayesian networks and are expected to have a major impact on machine intelligence.
Probabilistic programs are ubiquitous. They steer autonomous robots and self-driving cars, are key to describing security mechanisms, naturally encode randomised algorithms for solving NP-hard problems, and are rapidly encroaching on AI. Probabilistic programming aims to make probabilistic modeling and machine learning accessible to the programmer.
Probabilistic programs, though typically small, are hard to grasp, let alone to check automatically. Are they doing the right thing? What is their precision? These questions are notoriously hard — even the most elementary question, “does a program halt with probability one?”, is “more undecidable” than the halting problem — and can (if at all) be answered with statistical evidence only. Bugs thus occur easily, and hard guarantees are called for. The objective of this project is to enable predictable probabilistic programming. We do so by developing formal verification techniques.
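The following toy probabilistic program (written in plain Python for illustration, not in FRAPPANT's notation) shows why such questions resist testing: a symmetric random walk started at 1 terminates with probability one, yet its expected running time is infinite, so finite sampling can only ever provide statistical evidence, never a hard guarantee.

```python
# Toy probabilistic program (plain Python, illustrative only): a symmetric random walk
# started at x = 1, i.e. "while (x > 0) { x := x - 1 [1/2] x := x + 1 }".
# It halts with probability 1, but its expected running time is infinite, so any
# sampling-based check yields statistical evidence at best.
import random

def random_walk(budget=10**5):
    x, steps = 1, 0
    while x > 0 and steps < budget:
        x += random.choice((-1, 1))
        steps += 1
    return x == 0, steps          # (halted within the budget?, number of loop iterations)

runs = [random_walk() for _ in range(1000)]
print("halted within the step budget:", sum(h for h, _ in runs), "/ 1000")
print("empirical mean running time (heavy-tailed, unstable):",
      sum(s for _, s in runs) / len(runs))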
Whereas program correctness is pivotal in computer science, the formal verification of probabilistic programs is in its infancy. The project aims to fill this barren landscape by developing program analysis techniques, leveraging model checking, deductive verification, and static analysis. Challenging problems such as checking program equivalence, loop-invariant and parameter synthesis, program repair, program robustness and exact inference using weakest precondition reasoning will be tackled. The techniques will be evaluated in the context of probabilistic graphical models, randomised algorithms, and autonomous robots.
FRAPPANT will spearhead formally verifiable probabilistic programming.
Max ERC Funding
2 491 250 €
Duration
Start date: 2018-11-01, End date: 2023-10-31
Project acronym KAPIBARA
Project Homotopy Theory of Algebraic Varieties and Wild Ramification
Researcher (PI) Piotr ACHINGER
Host Institution (HI) INSTYTUT MATEMATYCZNY POLSKIEJ AKADEMII NAUK
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary The aim of the proposed research is to study the homotopy theory of algebraic varieties and other algebraically defined geometric objects, especially over fields other than the complex numbers. A noticeable emphasis will be put on fundamental groups and on K(pi, 1) spaces, which serve as building blocks for more complicated objects. The most important source of both motivation and methodology is my recent discovery of the K(pi, 1) property of affine schemes in positive characteristic and its relation to wild ramification phenomena.
The central goal is the study of étale homotopy types in positive characteristic, where we hope to use the aforementioned discovery to obtain new results beyond the affine case and a better understanding of the fundamental group of affine schemes. The latter goal is closely tied to Grothendieck's anabelian geometry program, which we would like to extend beyond its usual scope of hyperbolic curves.
There are two bridges going out of this central point. The first is the analogy between wild ramification and irregular singularities of algebraic integrable connections, which prompts us to translate our results to the latter setting, and to define a wild homotopy type whose fundamental group encodes the category of connections.
The second bridge is the theory of perfectoid spaces, allowing one to pass between characteristic p and p-adic geometry, which we plan to use to shed some new light on the homotopy theory of adic spaces. At the same time, we address the related question: when is the universal cover of a p-adic variety a perfectoid space? We expect a connection between this question and the Shafarevich conjecture and varieties with large fundamental group.
The last part of the project deals with varieties over the field of formal Laurent series over C, where we want to construct a Betti homotopy realization using logarithmic geometry. The need for such a construction is motivated by certain questions in mirror symmetry.
Max ERC Funding
1 007 500 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym MaMBoQ
Project Macroscopic Behavior of Many-Body Quantum Systems
Researcher (PI) Marcello PORTA
Host Institution (HI) EBERHARD KARLS UNIVERSITAET TUEBINGEN
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary This project is devoted to the analysis of large quantum systems. It is divided into two parts: Part A focuses on the transport properties of interacting lattice models, while Part B concerns the derivation of effective evolution equations for many-body quantum systems. The common theme is the concept of an emergent effective theory: simplified models capturing the macroscopic behavior of complex systems. Different systems might share the same effective theory, a phenomenon called universality. A central goal of mathematical physics is to validate these approximations and to understand the emergence of universality from first principles.
Part A: Transport in interacting condensed-matter systems. I will study charge and spin transport in 2d systems, such as graphene and topological insulators. These materials have attracted enormous interest because of their remarkable conduction properties. Neglecting many-body interactions, some of these properties can be explained mathematically. In real samples, however, electrons do interact. To deal with such complex systems, physicists often rely on uncontrolled expansions, numerical methods, or formal mappings onto exactly solvable models. The goal is to rigorously understand the effect of many-body interactions and to explain the emergence of universality.
Part B: Effective dynamics of interacting fermionic systems. I will work on the derivation of effective theories for interacting fermions in suitable scaling regimes. In the last 18 years there has been great progress on the rigorous validity of celebrated effective models, e.g. Hartree and Gross-Pitaevskii theory. A lot is known for interacting bosons, both for the dynamics and for the equilibrium low-energy properties. Much less is known for fermions. The goal is to fill this gap by proving the validity of some well-known fermionic effective theories, such as Hartree-Fock and BCS theory in the mean-field scaling, and the quantum Boltzmann equation in the kinetic scaling.
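For orientation only, the display below recalls the classical mean-field example behind the effective theories named in Part B: a weakly interacting N-body Hamiltonian and the Hartree equation that governs the limiting one-particle dynamics. These are standard textbook formulas (stated here for bosons, the simplest case), not results of the project.

```latex
% Classical mean-field (Hartree) example, stated for bosons for simplicity.
\[
  H_N \;=\; \sum_{j=1}^{N} \bigl(-\Delta_{x_j}\bigr)
        \;+\; \frac{1}{N}\sum_{1\le i<j\le N} V(x_i - x_j),
\qquad
  i\,\partial_t u_t \;=\; -\Delta u_t + \bigl(V * |u_t|^2\bigr)\,u_t .
\]
% As N grows, the one-particle reduced density of the Schrödinger evolution generated
% by H_N is approximated by the nonlinear Hartree dynamics for u_t.
```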
Max ERC Funding
982 625 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym NewtonStrat
Project Newton strata - geometry and representations
Researcher (PI) Eva VIEHMANN
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary The Langlands programme is a far-reaching web of conjectural or proven correspondences joining the fields of representation theory and of number theory. It is one of the centerpieces of arithmetic geometry, and has in the past decades produced many spectacular breakthroughs, for example the proof of Fermat’s Last Theorem by Taylor and Wiles.
The most successful approach to prove instances of Langlands’ conjectures is via algebraic geometry, by studying suitable moduli spaces such as Shimura varieties. Their cohomology carries actions both of a linear algebraic group (such as GLn) and a Galois group associated with the number field one is studying. A central tool in the study of the arithmetic properties of these moduli spaces is the Newton stratification, a natural decomposition based on the moduli description of the space. Recently the theory of Newton strata has seen two major new developments: Representation-theoretic methods and results have been successfully established to describe their geometry and cohomology. Furthermore, an adic version of the Newton stratification has been defined and is already of prime importance in new approaches within the Langlands programme.
This project aims at uniting these two novel developments to obtain new results in both contexts with direct applications to the Langlands programme, as well as a close relationship and dictionary between the classical and the adic stratifications. It is subdivided into three parts which mutually benefit from each other: Firstly we investigate the geometry of Newton strata in loop groups and Shimura varieties, and representations in their cohomology. Secondly, we study corresponding geometric and cohomological properties of adic Newton strata. Finally, we establish closer ties between the two contexts. Here we want to obtain analogues to results on one side for the other, but more importantly aim at a direct comparison that explains the similar behaviour directly.
Max ERC Funding
1 202 500 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym PANAMA
Project Probabilistic Automated Numerical Analysis in Machine learning and Artificial intelligence
Researcher (PI) Philipp HENNIG
Host Institution (HI) EBERHARD KARLS UNIVERSITAET TUEBINGEN
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Numerical tasks - integration, linear algebra, optimization, the solution of differential equations - form the computational basis of machine intelligence. Currently, human designers pick methods for these tasks from toolboxes. The generic algorithms assembled in such collections tend to be inefficient on any specific task, and can be unsafe when used incorrectly on problems they were not designed for. Research in numerical methods thus carries the potential for groundbreaking advancements in the performance and quality of AI.
Project PANAMA will develop a framework within which numerical methods can be constructed in an increasingly automated fashion; and within which numerical methods can assess their own suitability, and adapt both model and computations to the task, at runtime. The key tenet is that numerical methods, since they perform tractable computations to estimate a latent quantity, can themselves be interpreted explicitly as active inference agents; thus concepts from machine learning can be translated to the numerical domain. Groundwork for this paradigm - probabilistic numerics - has recently been developed into a rigorous mathematical framework by the PI and others. The proposed research will simultaneously deliver new general theory for the computations of learning machines, and concrete new algorithms for core areas of machine learning. In doing so, Project PANAMA will improve the efficiency and safety of artificial intelligence, addressing scientific, technological and societal challenges affecting Europeans today.
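As a minimal illustration of the probabilistic-numerics viewpoint (the kernel, its length scale, the toy integrand and the node placement below are assumptions for the demo, not choices made in PANAMA), Bayesian quadrature models an integrand with a Gaussian process, conditions on a few evaluations, and returns both an estimate of the integral and a variance that quantifies its own numerical error.

```python
# Illustrative Bayesian quadrature sketch: a Gaussian process prior on the integrand
# turns the numerical integration routine into an inference agent that reports its own
# uncertainty. Kernel, length scale and integrand are illustrative assumptions.
import numpy as np

def k(a, b, ell=0.2):                       # squared-exponential kernel
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

f = lambda x: np.sin(3 * x) + x             # toy integrand on [0, 1]
xn = np.linspace(0.05, 0.95, 7)             # a handful of evaluation nodes
yn = f(xn)

grid = np.linspace(0, 1, 2001)              # dense grid to approximate kernel integrals
w = np.trapz(k(grid, xn), grid, axis=0)                       # z_i = int k(x, x_i) dx
zz = np.trapz(np.trapz(k(grid, grid), grid, axis=0), grid)    # int int k(x, x') dx dx'

K = k(xn, xn) + 1e-10 * np.eye(len(xn))
alpha = np.linalg.solve(K, yn)
mean = w @ alpha                            # posterior mean of the integral
var = zz - w @ np.linalg.solve(K, w)        # posterior variance = model's error estimate

print(f"estimate  {mean:.6f} +/- {np.sqrt(max(var, 0.0)):.6f}")
print(f"reference {np.trapz(f(grid), grid):.6f}")
```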
Max ERC Funding
1 450 000 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym PaVeS
Project Parametrized Verification and Synthesis
Researcher (PI) Javier ESPARZA
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Parameterized systems consist of an arbitrary number of replicated agents with limited computational power, interacting to achieve common goals. They pervade computer science. Classical examples include families of digital circuits, distributed algorithms for leader election or byzantine agreement, routing algorithms, and multithreaded programs. Modern examples exhibit stochastic interaction between mobile agents, and include robot swarms, molecular computers, and cooperating ant colonies.
A parameterized system is in fact an infinite collection of systems, one for each number of agents. Current verification technology of industrial strength can only check correctness of a few instances of this collection. For example, model checkers can automatically prove a distributed algorithm correct for a small number of processes, but not for any number. While substantial progress has been made on the theory and applications of parameterized verification, in order to achieve large impact the field has to face three "grand challenges":
- Develop novel algorithms and tools for p-verification of classical p-systems that bypass the high complexity of current techniques.
- Develop the first algorithms and tools for p-verification of modern stochastic p-systems.
- Develop the first algorithms and tools for synthesis of correct-by-construction p-systems.
Addressing these challenges requires fundamentally new lines of attack. The starting point of PaVeS are two recent breakthroughs in the theory of Petri nets and Vector Addition Systems, one of them achieved by the PI and his co-authors. PaVeS will develop these lines into theory, algorithms, and tools for p-verification and p-synthesis, leading to a new generation of verifiers and synthesizers.
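To illustrate the kind of question hidden behind "p-verification", the sketch below runs the textbook backward-coverability procedure on a toy Petri net modelling a lock-protected critical section; the net, markings and token counts are made up for the example and have no connection to the project's algorithms.

```python
# Illustrative sketch (toy example, not a PaVeS algorithm): classic backward
# coverability for a Petri net / vector addition system with states. Transitions are
# (pre, post) vectors over the places; "can some reachable marking cover the target?"
# is the prototypical parameterized-verification query, since the initial marking can
# encode any number of identical agents as a token count.
def backward_coverable(transitions, init, target):
    """Return True iff a marking >= target is reachable from init."""
    dim = len(init)
    basis = {tuple(target)}                   # minimal elements of an upward-closed set
    while True:
        new = set()
        for m in basis:
            for pre, post in transitions:
                # minimal x with x >= pre and x - pre + post >= m
                new.add(tuple(pre[i] + max(m[i] - post[i], 0) for i in range(dim)))
        merged = minimize_basis(basis | new)
        if merged == basis:                   # backward fixed point reached
            break
        basis = merged
    return any(all(init[i] >= m[i] for i in range(dim)) for m in basis)

def minimize_basis(markings):
    keep = []
    for m in sorted(markings):                # componentwise <= implies lexicographic <=
        if not any(all(k[i] <= m[i] for i in range(len(m))) for k in keep):
            keep.append(m)
    return set(keep)

# places: (idle, critical, lock); "enter" consumes the lock, "leave" releases it
enter = ((1, 0, 1), (0, 1, 0))
leave = ((0, 1, 0), (1, 0, 1))
init = (5, 0, 1)                              # five agents, one lock token
print(backward_coverable([enter, leave], init, (0, 2, 0)))  # False: mutual exclusion holds
print(backward_coverable([enter, leave], init, (0, 1, 0)))  # True: one agent can enter
```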
Max ERC Funding
2 354 000 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym PTRCSP
Project Phase Transitions in Random Constraint Satisfaction Problems
Researcher (PI) Konstantinos PANAGIOTOU
Host Institution (HI) LUDWIG-MAXIMILIANS-UNIVERSITAET MUENCHEN
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary The systematic investigation of random discrete structures and processes was initiated by Erdős and Rényi in a seminal paper about random graphs in 1960. Since then the study of such objects has become an important topic that has remarkable applications not only in combinatorics, but also in computer science and statistical physics.
Random discrete objects have two striking characteristics. First, they often exhibit phase transitions, meaning that only small changes in some typically local control parameter result in dramatic changes of the global structure. Second, several statistics of the models concentrate, that is, although the support of the underlying distribution is large, the random variables usually take values in a small set only. A central topic is the investigation of the fine behaviour, namely the determination of the limiting distribution.
Although the current knowledge about random discrete structures is broad, there are many fundamental and long-standing questions with respect to the two key characteristics. In particular, apart from a small number of notable exceptions, several well-studied models undoubtedly exhibit phase transitions, but we are not able to understand them from a mathematical viewpoint nor to investigate their fine properties. The goal of the proposed project is to study some prominent open problems whose solution will significantly improve our general understanding of phase transitions and of the fine behaviour of random discrete structures. The objectives include the establishment of phase transitions in random constraint satisfaction problems and the analysis of the limiting distribution of central parameters, like the chromatic number in dense random graphs. All these problems are known to be difficult and fundamental, and the results of this project will open up new avenues for the study of random discrete objects, both sparse and dense.
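A small simulation (illustrative parameters; the Erdős-Rényi graph is the classical example rather than the constraint satisfaction problems targeted by the project) makes the phase-transition phenomenon tangible: in G(n, c/n), the largest connected component jumps from sublinear to linear size as the control parameter c crosses 1.

```python
# Illustrative simulation of a phase transition: the fraction of vertices in the largest
# component of an Erdos-Renyi random graph G(n, c/n), for several values of c.
# Parameters (n, the values of c, the seed) are illustrative assumptions.
import random

def largest_component_fraction(n, c, seed=0):
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    p = c / n
    for u in range(n):                        # O(n^2) sampling is fine for this demo
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

n = 1000
for c in (0.5, 0.9, 1.1, 1.5, 2.0):
    print(f"c = {c}: largest component ~ {largest_component_fraction(n, c):.3f} of the vertices")
```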
Max ERC Funding
1 219 462 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym QUADAG
Project Quadratic refinements in algebraic geometry
Researcher (PI) Marc Levine
Host Institution (HI) UNIVERSITAET DUISBURG-ESSEN
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary Enumerative geometry, the mathematics of counting numbers of solutions to geometric problems, and its modern descendants, Gromov-Witten theory, Donaldson-Thomas theory, quantum cohomology and many other related fields, analyze geometric problems by computing numerical invariants, such as intersection numbers or degrees of characteristic classes. This essentially algebraic approach has been successful mainly in the study of problems over the complex numbers and other algebraically closed fields. There has been progress in attacking enumerative problems over the real numbers; the methods are mainly non-algebraic. Arithmetic content underlying the numerical invariants is hidden when analyzed by these non-algebraic methods. Recent work by the PI and others has opened the door to a new, purely algebraic approach to enumerative geometry that recovers results in both the complex and real cases in one package and reveals this arithmetic content over arbitrary fields. Building on these new developments, the goals of this proposal are, firstly, to use motivic homotopy theory, algebraic geometry and symplectic geometry to develop new purely algebraic methods for handling enumerative problems over an arbitrary field, secondly, to apply these methods to central enumerative problems, recovering and unifying known results over both C and R, and thirdly, to use this new approach to reveal the hidden arithmetic nature of enumerative problems. In 2009 R. Pandharipande and I applied algebraic cobordism to prove the degree zero MNOP conjecture in Donaldson-Thomas theory. More recently, I have developed several aspects of the theory of quadratic invariants using motivic homotopy theory.
Max ERC Funding
2 124 663 €
Duration
Start date: 2019-09-01, End date: 2024-08-31
Project acronym REWOCRYPT
Project Theoretically-Sound Real-World Cryptography
Researcher (PI) Tibor JAGER
Host Institution (HI) UNIVERSITAET PADERBORN
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Novel technologies like Cloud Computing, Ubiquitous Computing, Big Data, Industry 4.0, and the Internet of Things come not only with a huge demand for practical and efficient cryptosystems, but also with many novel attack surfaces. The security properties required from cryptographic building blocks for these innovative applications go beyond classical security goals.
Modern theoretical cryptography has very successfully developed powerful techniques that enable the design and rigorous formal analysis of cryptosystems in theoretical security models. Now that these techniques are readily available, we have to take the next important step: the evolution of these techniques from idealized theoretical settings to the demands of real-world applications.
The REWOCRYPT project will tackle this main research challenge at the intersection of theoretical and real-world cryptography. It will provide a solid foundation for the design and mathematically rigorous security analysis of the next generation of cryptosystems that provably meet real-world security requirements and can safely be used to realize secure communication in trustworthy services and products for a modern interconnected society.
Max ERC Funding
1 498 638 €
Duration
Start date: 2019-04-01, End date: 2024-03-31
Project acronym Scan2CAD
Project Scan2CAD: Learning to Digitize the Real World
Researcher (PI) Matthias NIESSNER
Host Institution (HI) TECHNISCHE UNIVERSITAET MUENCHEN
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary "One of the most fundamental challenges in the digital age is to automatically create accurate, high-quality 3D representations of the world around us. This would have far-reaching impact, from enabling advances entertainment and immersive technologies (e.g., mixed reality) to medical applications and industrial manufacturing pipelines. Despite remarkable progress in scanning devices and 3D reconstruction algorithms, the resulting models remain highly impractical for display or use in virtual environments. This is due to the limited quality of these reconstructed 3D models, which is still far from the quality of assets designed by professional artists in countless of working hours. We believe that the key to addressing these shortcomings is understanding the design process of artist-created assets. We can then learn the correlation to real-world observations and replicate the process conditioned on these real-world input scans. In this proposal, we will answer the question ""How can we turn 3D scans into CAD-quality 3D assets?"".
"
Max ERC Funding
1 500 000 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym ScienceGraph
Project Knowledge Graph based Representation, Augmentation and Exploration of Scholarly Communication
Researcher (PI) Sören AUER
Host Institution (HI) GOTTFRIED WILHELM LEIBNIZ UNIVERSITAET HANNOVER
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Despite improved digital access to scientific publications in recent decades, the fundamental principles of scholarly communication remain unchanged and continue to be largely document-based. The document-oriented workflows in science have reached the limits of adequacy, as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiencies of peer review and the reproducibility crisis.
In ScienceGRAPH we aim to develop a novel model for representing, analysing, augmenting and exploiting scholarly communication in a knowledge-based way by expressing and linking scientific contributions and related artefacts through semantically rich, interlinked knowledge graphs. The model is based on deep semantic representation of scientific contributions, their manual, crowd-sourced and automatic augmentation and finally the intuitive exploration and interaction employing question answering on the resulting ScienceGRAPH base.
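A minimal sketch of what such a knowledge-based representation could look like (the vocabulary, identifiers and the reported result below are hypothetical placeholders, not the ScienceGRAPH schema): a single scientific contribution expressed as typed, interlinked triples instead of prose locked inside a PDF.

```python
# Minimal sketch (hypothetical vocabulary and identifiers, not the ScienceGRAPH schema):
# one scientific contribution represented as semantically typed, interlinked triples.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/sciencegraph/")   # placeholder namespace
g = Graph()

paper, contrib, problem, method = EX.paper42, EX.contribution1, EX.problemX, EX.methodY
g.add((paper, RDF.type, EX.Paper))
g.add((paper, EX.hasContribution, contrib))
g.add((contrib, EX.addresses, problem))
g.add((contrib, EX.usesMethod, method))
g.add((contrib, EX.reportsResult, Literal("F1 = 0.83 on benchmark Z (illustrative)")))

print(g.serialize(format="turtle"))
```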
Currently, knowledge graphs are still confined to representing encyclopaedic, factual information. ScienceGRAPH advances the state of the art by making it possible to represent complex, interdisciplinary scientific information, including fine-grained provenance preservation, discourse capture, evolution tracing and concept drift. Also, we will demonstrate that we can synergistically combine automated extraction and augmentation techniques with large-scale collaboration to reach an unprecedented level of knowledge graph breadth and depth.
As a result, we expect a paradigm shift in the methods of academic discourse towards knowledge-based information flows, which facilitate completely new ways of search and exploration. The efficiency and effectiveness of scholarly communication will increase significantly, since ambiguities are reduced, reproducibility is facilitated, redundancy is avoided, provenance and contributions can be better traced, and the interconnections of research contributions are made more explicit and transparent.
Max ERC Funding
1 996 250 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym SYZYGY
Project Syzygies, moduli and topological invariants of groups
Researcher (PI) Gavril FARKAS
Host Institution (HI) HUMBOLDT-UNIVERSITAET ZU BERLIN
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary This is a proposal aimed at harvesting interconnections between algebraic geometry and geometric group theory using syzygies. The impetus of the proposal is the recent breakthrough in which, inspired by rational homotopy theory, we introduced Koszul modules as novel homological objects establishing striking connections between algebraic geometry and geometric group theory. Deep statements in geometric group theory have startling counterparts in algebraic geometry and these connections led to a recent proof of Green's Conjecture for generic algebraic curves in arbitrary characteristic, as well as to a dramatically simpler proof in characteristic zero. Based on a dynamic view of mathematics in which ideas from one field trigger major developments in another, I propose to lead a group at HU Berlin dedicated to the following major themes, which are outlined in the proposal: (i) Find a solution to Green's Conjecture on the syzygies of an arbitrary smooth canonical curve of genus g. Find a full solution to the Prym-Green Conjecture on the syzygies of a general paracanonical algebraic curve of genus g. Formulate and prove a non-commutative Green's Conjecture for super algebraic curves. (ii) Compute the Kodaira dimension of the moduli space of curves in the transition case from unirationality to general type, when g is between 17 and 21. Construct the canonical model of the moduli space of curves and find its modular interpretation. (iii) Find algebro-geometric interpretations for Alexander invariants of the Torelli group of the mapping class group and that of the Torelli group of the free group. Understand the link between these invariants, the homotopy type and the cohomological dimension of the moduli space of curves. (iv) Get structural insight in the newly discovered topological version of Green's Conjecture involving the Alexander invariant of the group.
Max ERC Funding
2 147 202 €
Duration
Start date: 2020-03-01, End date: 2025-02-28
Project acronym TOROS
Project A Theory-Oriented Real-Time Operating System for Temporally Sound Cyber-Physical Systems
Researcher (PI) Björn BRANDENBURG
Host Institution (HI) MAX-PLANCK-GESELLSCHAFT ZUR FORDERUNG DER WISSENSCHAFTEN EV
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary "The TOROS project targets the challenge of implementing safety-critical cyber-physical systems (CPSs) on commodity multicore processors such that their temporal correctness can be certified in a formal, trustworthy manner.
While today it is in principle possible to construct a CPS in a temporally sound way, in practice this rarely happens because, with the current real-time foundations, the prerequisite investments in time, expertise, and resources are prohibitive.
This situation is caused in large parts by three fundamental shortcomings in the design of state-of-the-art real-time operating systems (RTOSs) and the applicable timing analyses: (i) current RTOSs expose primarily low-level mechanisms that suffer from accidental unpredictability, i.e., mechanisms that require too much expertise to be used and composed in a temporally sound way; (ii) most analyses rely on idealized worst-case execution-time assumptions that realistically cannot be satisfied on commodity multicore platforms; and (iii) the available real-time theory depends on often complex and tedious proofs, and cannot always be trusted to be sound.
As a result, formal timing analysis is rarely relied upon in the certification of CPSs in reality, and instead the use of ad-hoc, unsound "safety margins" prevails.
The TOROS project seeks to close this gap by moving the RTOS closer to analysis, the analysis closer to reality, and by ensuring that the analysis can be trusted.
Specifically, the TOROS project will
1. introduce a radically new, theory-oriented RTOS that by design ensures that the temporal behavior of any workload can be analyzed (even if the application developer is unaware of the relevant theory),
2. develop a matching novel timing analysis that allows for below-worst-case provisioning with analytically sound safety margins and yields meaningful probabilistic response-time guarantees, and
3. mechanize and verify all supporting timing analysis with the Coq proof assistant.
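For context, the sketch below shows the classical uniprocessor response-time analysis for fixed-priority tasks, the textbook baseline against which TOROS's below-worst-case, probabilistic analysis should be contrasted; the task parameters are illustrative and the analysis itself is standard material, not a project result.

```python
# Classical fixed-priority response-time analysis (textbook baseline, NOT the
# below-worst-case probabilistic analysis TOROS proposes). Each task has a worst-case
# execution time C and period T (deadline = period); tasks are listed highest priority
# first. R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j, by iteration.
from math import ceil

def response_time(i, tasks):
    """Worst-case response time of task i (index 0 = highest priority); tasks = [(C, T), ...]."""
    C, T = tasks[i]
    R = C
    while True:
        R_next = C + sum(ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_next == R or R_next > T:   # fixed point reached, or deadline missed
            return R_next
        R = R_next

tasks = [(1, 4), (2, 6), (3, 12)]       # illustrative (C, T) pairs
for i, (C, T) in enumerate(tasks):
    R = response_time(i, tasks)
    print(f"task {i}: R = {R}, deadline {T}, {'ok' if R <= T else 'MISS'}")
```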
Max ERC Funding
1 499 813 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym TRANSHOLOMORPHIC
Project New transversality techniques in holomorphic curve theories
Researcher (PI) Chris M WENDL
Host Institution (HI) HUMBOLDT-UNIVERSITAET ZU BERLIN
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary "In the study of symplectic and contact manifolds, a decisive role has been played by the theory of pseudoholomorphic curves, introduced by Gromov in 1985. One major drawback of this theory is the fundamental conflict between ""genericity"" and ""symmetry"", which for instance causes moduli spaces of holomorphic curves to be singular or have the wrong dimension whenever multiply covered curves are present. Most traditional solutions to this problem involve abstract perturbations of the Cauchy-Riemann equation, but recently there has been progress in tackling the transversality problem more directly, leading in particular to a proof of the ""super-rigidity"" conjecture on symplectic Calabi-Yau 6-manifolds. The overriding goal of the proposed project is to unravel the full implications of these new transversality techniques for problems in symplectic topology and neighboring fields. Examples of applications to be explored include: (1) Understanding the symplectic field theory of unit cotangent bundles for manifolds with negative or nonpositive curvature, with applications to the nearby Lagrangian conjecture and dynamical questions in Riemannian geometry; (2) Developing a comprehensive bifurcation theory for Reeb orbits and holomorphic curves in symplectic cobordisms, leading e.g. to a proof that planar contact structures are ""quasiflexible""; (3) Completing the analytical foundations of Hutchings's embedded contact homology (ECH), a 3-dimensional holomorphic curve theory with important applications to dynamics and symplectic embedding problems; (4) Developing new refinements of the Gromov-Witten invariants based on super-rigidity and bifurcation theory; (5) Defining higher-dimensional analogues of ECH; (6) Proving integrality relations in the setting of 6-dimensional symplectic cobordisms, analogous to the Gopakumar-Vafa formula for Calabi-Yau 3-folds."
Max ERC Funding
1 624 500 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym TrueBrainConnect
Project Advancing the non-invasive assessment of brain communication in neurological disease
Researcher (PI) Stefan HAUFE
Host Institution (HI) CHARITE - UNIVERSITAETSMEDIZIN BERLIN
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Pathological communication between different brain regions has been implicated in various neurological disorders. However, the computational tools for assessing such communication from neuroimaging data are not sufficiently developed. The goal of TrueBrainConnect is to establish brain connectivity analysis using non-invasive electrophysiology as a practical and reliable neuroscience tool. To achieve this, we will develop novel signal processing and machine learning techniques that address shortcomings in state-of-the-art reconstruction and localization of neural activity from sensor data, the estimation of genuine neural interactions, the prediction of external (e.g., clinical) variables from estimated neural interactions, and the interpretation of the resulting models. These techniques will be thoroughly validated and then made publicly available. We will use the TrueBrainConnect methodology to characterize the neural bases underlying dementia and Parkinson's disease (PD), two of the most pressing neurological health challenges of our time. In collaboration with clinical experts, we will address practically relevant issues such as how to determine the onset of 'freezing' episodes in PD patients, and how to detect different variants and precursors of dementia. The outcome of TrueBrainConnect will be a versatile methodology allowing researchers, for the first time, to reliably estimate and anatomically localize important types of interactions between different brain structures in humans within known confidence bounds. The proposed clinical applications will improve our understanding of the studied diseases and will lay the foundation for the development of novel diagnostic markers for these diseases.
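One established measure of the "genuine neural interactions" mentioned above is the imaginary part of coherency, which discounts zero-phase-lag coupling of the kind produced by volume conduction; the sketch below (toy signals, illustrative frequency band, and not the project's final methodology) shows the effect.

```python
# Illustrative sketch (toy signals, standard measure; not TrueBrainConnect's methods):
# the imaginary part of coherency between two signals ignores zero-phase-lag coupling,
# which in EEG/MEG typically reflects volume conduction rather than a genuine interaction.
import numpy as np
from scipy.signal import csd

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)

source = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
x = source + 0.3 * rng.normal(size=t.size)                               # sensor 1
y_mixed = 0.8 * source + 0.3 * rng.normal(size=t.size)                   # pure zero-lag mixing
y_lag = np.roll(source, int(0.02 * fs)) + 0.3 * rng.normal(size=t.size)  # genuine 20 ms lag

def imaginary_coherency(a, b):
    f, Pab = csd(a, b, fs=fs, nperseg=1024)
    _, Paa = csd(a, a, fs=fs, nperseg=1024)
    _, Pbb = csd(b, b, fs=fs, nperseg=1024)
    coherency = Pab / np.sqrt(Paa * Pbb)
    band = (f >= 8) & (f <= 12)                                           # alpha band, illustrative
    return float(np.mean(np.abs(np.imag(coherency[band]))))

print("zero-lag mixing (discounted):", imaginary_coherency(x, y_mixed))
print("lagged interaction (detected):", imaginary_coherency(x, y_lag))
```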
Max ERC Funding
1 499 875 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym TUgbOAT
Project Towards Unification of Algorithmic Tools
Researcher (PI) Piotr SANKOWSKI
Host Institution (HI) UNIWERSYTET WARSZAWSKI
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary Over the last 50 years, extensive algorithmic research has given rise to a plethora of fundamental results. These results have equipped us with increasingly better solutions to a number of core problems. However, many of these solutions are incomparable. The main reason is that many cutting-edge algorithmic results are very specialized in their applicability: often they are limited to a particular parameter range or rely on different assumptions.
A natural question arises: is it possible to obtain a single “one to rule them all” algorithm for core problems such as matchings and maximum flow? In other words, can we unify our algorithms? That is, can we develop an algorithmic framework that enables us to combine a number of existing, only “conditionally” optimal, algorithms into a single all-around optimal solution? Such results would not only unify the landscape of algorithmic theory but would also greatly enhance the impact of these cutting-edge developments on the real world. After all, algorithms and data structures are the basic building blocks of every computer program. Currently, however, using cutting-edge algorithms optimally requires extensive expertise and a thorough understanding of both the underlying implementation and the characteristics of the input data.
Hence, the need for such unified solutions is critical from both a theoretical and a practical perspective. However, obtaining such algorithmic unification poses serious theoretical challenges. We believe that some of the recent advances in algorithms provide us with an opportunity to make serious progress towards solving these challenges in the context of several fundamental algorithmic problems. This project should be seen as the start of a systematic study of the unification of algorithmic tools, with the aim of removing the need to look “under the hood” while still guaranteeing optimal performance independently of the particular use case.
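To make the unification idea concrete, the sketch below hides two maximum-bipartite-matching routines behind a single entry point that dispatches on simple input statistics, so the caller never looks under the hood. This is only an assumed illustration of the concept: the threshold, the function names, and the placeholder "fast" routine are hypothetical and do not represent the project's framework.

```python
# Toy illustration of algorithm unification (not the TUgbOAT framework):
# one interface dispatches to a specialized matching routine based on
# input characteristics; thresholds and names are made up for the example.

def matching_augmenting_paths(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm: simple, fine for small or dense inputs."""
    match_r = [-1] * n_right     # match_r[v] = left vertex matched to right vertex v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            size += 1
    return size

def matching_large_sparse(adj, n_left, n_right):
    """Placeholder for a routine preferred on large sparse inputs.

    A real unified library would dispatch to an O(E*sqrt(V)) method
    (e.g., Hopcroft–Karp) here; this sketch simply reuses the simple one.
    """
    return matching_augmenting_paths(adj, n_left, n_right)

def maximum_matching(adj, n_left, n_right):
    """Single entry point: choose a sub-algorithm from input statistics."""
    n_edges = sum(len(a) for a in adj)
    density = n_edges / max(1, n_left * n_right)
    if density < 0.1 and n_left + n_right > 1000:   # hypothetical threshold
        return matching_large_sparse(adj, n_left, n_right)
    return matching_augmenting_paths(adj, n_left, n_right)

# Example: left vertices 0..2 connected to right vertices as listed.
adj = [[0, 1], [0], [1, 2]]
print(maximum_matching(adj, 3, 3))   # -> 3
```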
Max ERC Funding
1 510 800 €
Duration
Start date: 2018-09-01, End date: 2023-08-31