Project acronym BirNonArchGeom
Project Birational and non-archimedean geometries
Researcher (PI) Michael TEMKIN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Resolution of singularities is one of the classical, central and difficult areas of algebraic geometry, with a century-long history of intensive research and contributions from such great names as Zariski, Hironaka and Abhyankar. Nowadays, desingularization of schemes in characteristic zero is very well understood, while semistable reduction of morphisms and desingularization in positive characteristic are still waiting for major breakthroughs. In addition to the classical techniques, with their triumph in characteristic zero, modern resolution of singularities includes de Jong's method of alterations, toroidal methods, formal-analytic and non-archimedean methods, etc.
The aim of the proposed research is to study nearly all directions in resolution of singularities and semistable reduction, as well as the wild ramification phenomena, which are probably the main obstacle to transferring methods from characteristic zero to positive characteristic.
The methods of algebraic and non-archimedean geometries are intertwined in the proposal, though algebraic geometry is somewhat dominant, especially due to the new stack-theoretic techniques. It seems very probable that a closer symbiosis between birational and non-archimedean geometries will be one of the by-products of this research.
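As background only (a textbook example, not taken from the proposal), the simplest instance of resolution of singularities is the cuspidal cubic, which is desingularized by a single blow-up of the plane at the origin:

    C = \{\, y^2 = x^3 \,\} \subset \mathbb{A}^2, \qquad \text{blow up the origin and work in the chart } (x, y) = (x, x s).
    \text{Pullback of the equation: } \; x^2 s^2 = x^3 \;\Longleftrightarrow\; x^2\,(s^2 - x) = 0,
    \text{i.e. the exceptional divisor } \{x = 0\} \text{ (with multiplicity 2) together with the strict transform } \widetilde{C} = \{\, s^2 = x \,\}.
    \widetilde{C} \text{ is smooth and maps isomorphically onto } C \text{ away from the cusp, so } \widetilde{C} \to C \text{ is a resolution.}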
Max ERC Funding
1 365 600 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate-change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ensuring fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general-purpose fault resilience.
Efficient general-purpose solutions (e.g., via error-correcting codes) revolutionized memory and communication devices more than two decades ago, enabling programmers to effectively disregard the very likely memory and communication errors. The time has come for a similar paradigm shift in the computing domain. I argue that exciting recent advances in error-correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
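As a purely illustrative contrast to checkpoint-restart (and not a method proposed in the project), the classical Huang-Abraham checksum scheme shows how an application-specific, error-correcting-code-style technique makes dense matrix multiplication fault-tolerant at low overhead. The NumPy sketch below is a minimal version.

    import numpy as np

    def abft_matmul(A, B):
        """Algorithm-based fault tolerance via Huang-Abraham checksums (minimal sketch).
        A checksum row is appended to A and a checksum column to B; the product of the
        augmented matrices then carries checksums that expose a corrupted entry of C."""
        Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])       # (m+1) x n
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])       # n x (p+1)
        Cc = Ac @ Br                                            # augmented product
        C = Cc[:-1, :-1]                                        # the actual result
        row_ok = np.allclose(Cc[-1, :-1], C.sum(axis=0))        # checksum row intact?
        col_ok = np.allclose(Cc[:-1, -1], C.sum(axis=1))        # checksum column intact?
        return C, (row_ok and col_ok)

A single corrupted entry of C violates exactly one row checksum and one column checksum, which both detects the fault and locates it, so the entry can be recomputed instead of restarting the whole run.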
Max ERC Funding
1 824 467 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym HOLI
Project Deep Learning for Holistic Inference
Researcher (PI) Amir Globerson
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural nets. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between them. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem.
First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
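To make the top-down choice of a coherent interpretation concrete, here is a minimal, hypothetical sketch (not the proposed algorithms): per-entity local scores and pairwise coherence scores, which in the project would come from learned deep models, are combined, and the jointly highest-scoring interpretation is returned by exhaustive search over a small candidate set.

    from itertools import product

    def holistic_decode(local_scores, pair_scores):
        """Return the jointly most coherent labeling of n entities.
        local_scores[i][a] is the local score of assigning label a to entity i;
        pair_scores[(i, j)][(a, b)] is the coherence of that pair of assignments.
        Exhaustive search, so only suitable for tiny illustrative instances."""
        n = len(local_scores)
        labels = range(len(local_scores[0]))
        best, best_score = None, float("-inf")
        for assign in product(labels, repeat=n):
            score = sum(local_scores[i][assign[i]] for i in range(n))
            score += sum(pair_scores.get((i, j), {}).get((assign[i], assign[j]), 0.0)
                         for i in range(n) for j in range(i + 1, n))
            if score > best_score:
                best, best_score = assign, score
        return best, best_score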
Max ERC Funding
1 932 500 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym LiftMatch
Project Lifting Methods for Global Matching Problems
Researcher (PI) Yaron LIPMAN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary This proposal presents a research program aimed at breaking new ground in theoretical, algorithmic and practical aspects of the notoriously difficult matching problems. The main goal is developing a unified algorithmic framework for matching problems that enjoys theoretical guarantees and high practical value for "real-life" applications.
The core methodological aspect is the modelling of matching problems as high-dimensional, lifted convex problems that can be efficiently approximated. We present several case studies of this methodology, constructing efficient algorithms for approximating the solutions of important instances of the matching problem. The results already demonstrate state-of-the-art performance and are backed up by a novel theoretical analysis proving tightness and correctness of the suggested convexifications for certain classes of non-trivial input.
This proposal has a "high risk, high impact" profile. The "high-risk" aspect comes from the hardness of general matching problems, which, when faithfully represented, are NP-hard. However, as we demonstrate, combining convex optimization tools in a careful way that takes computational complexity into account is likely to push forward the limits of current algorithmic solutions. The "high-impact" aspect will be immediate: as matching problems exist in almost every aspect of scientific research, practical generic algorithms for matching problems are likely to have a significant influence.
We believe that the inroads and preliminary results presented in this proposal already provide strong evidence for the potential success of the suggested research program.
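As a toy illustration of the lifting/convexification idea in its simplest instance (linear assignment; this is standard material, not the proposal's constructions), one relaxes the set of permutation matrices to its convex hull, the doubly stochastic matrices, and solves the resulting linear program; by the Birkhoff-von Neumann theorem this relaxation is tight, so the optimum is attained at a permutation. The sketch below assumes SciPy is available.

    import numpy as np
    from scipy.optimize import linprog

    def lp_matching(C):
        """Convex relaxation of min-cost bipartite matching: minimize <C, X> over
        doubly stochastic X (row and column sums equal to 1, entries in [0, 1]).
        For this linear objective the relaxation is tight, so the LP optimum is a
        permutation matrix."""
        n = C.shape[0]
        A_eq = np.zeros((2 * n, n * n))
        for i in range(n):
            A_eq[i, i * n:(i + 1) * n] = 1.0    # row i of X sums to 1
            A_eq[n + i, i::n] = 1.0             # column i of X sums to 1
        b_eq = np.ones(2 * n)
        res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
        return res.x.reshape(n, n)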
Max ERC Funding
1 675 640 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym PCPHDX
Project Probabilistically Checkable Proofs, Agreement Tests, and High Dimensional Expanders
Researcher (PI) Irit DVEER DINUR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary PCPs capture a striking local to global phenomenon in which a global object such as an NP witness can be checked using local constraints, and its correctness is guaranteed even if only a fraction of the constraints are satisfied.
PCPs are tightly related to hardness of approximation. The relation is essentially due to the fact that exact optimization problems can be reduced to their approximation counterparts through this local to global connection.
We view this local-to-global connection as a type of high-dimensional expansion, akin to the relatively new notions of high-dimensional expansion (such as coboundary and cosystolic expansion) that have recently been introduced in the literature. We propose to study PCPs and high-dimensional expansion together. We describe a concrete notion of “agreement expansion” and propose a systematic study of this question. We show how progress on agreement-expansion questions is directly related to some of the most important open questions in PCPs, such as the unique games conjecture and the problem of constructing linear-size PCPs.
We also propose to study the phenomenon of high dimensional expansion more broadly and to investigate its relation and applicability to questions in computational complexity that go beyond PCPs, in particular for hardness amplification and for derandomizing direct product constructions.
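For orientation, the local-to-global phenomenon described above is captured formally by the PCP theorem (a standard statement, quoted here as background rather than as part of the proposal):

    \mathrm{NP} \;=\; \mathrm{PCP}\big[\,O(\log n),\; O(1)\,\big],
    \text{i.e. every NP statement admits a proof that a probabilistic verifier checks by reading only } O(1) \text{ bits of the proof,}
    \text{using } O(\log n) \text{ random coins: correct proofs of true statements are always accepted, while every purported proof}
    \text{of a false statement is rejected with probability at least } 1/2.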
Max ERC Funding
1 512 035 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym PolSymAGA
Project Polarity and Central-Symmetry in Asymptotic Geometric Analysis
Researcher (PI) Shiri ARTSTEIN
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Asymptotic Geometric Analysis is a relatively new field, the young finite-dimensional cousin of Banach space theory, functional analysis and classical convexity. It concerns the geometric study of high, but finite, dimensional objects, where the disorder of many parameters and many dimensions is "regularized" by convexity assumptions.
The proposed research is composed of several connected innovative studies at the frontier of Asymptotic Geometric Analysis, pertaining to a deeper understanding of two fundamental notions: Polarity and Central-Symmetry.
While the main drive comes from Asymptotic Convex Geometry, the applications extend throughout many mathematical fields, from analysis, probability and symplectic geometry to combinatorics and computer science. The project will concern: the polarity map for functions, functional covering numbers, measures of symmetry, Godbersen's conjecture, Mahler's conjecture, Minkowski billiard dynamics and caustics.
My research objectives are twofold. First, to progress towards a solution of the open research questions described in the proposal, which I consider to be pivotal in the field, including Mahler's conjecture, Viterbo's conjecture and Godbersen's conjecture. Some of these questions have already been studied intensively, and the solution is yet to be found; progress toward solving them would be of high significance. Secondly, as the studies in this proposal lie at the meeting point of several mathematical fields, and use Asymptotic Geometric Analysis to address major questions in other fields, such as symplectic geometry and optimal transport theory, my second goal is to deepen these connections, creating a powerful framework that will lead to a deeper understanding, and to the formulation and resolution of interesting questions that are currently unattainable.
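For concreteness, the polarity and Mahler-volume notions referred to above are the following standard ones (included only as background):

    K^{\circ} \;=\; \{\, y \in \mathbb{R}^n \;:\; \langle x, y \rangle \le 1 \ \text{ for all } x \in K \,\}, \qquad s(K) \;=\; \mathrm{vol}(K)\,\mathrm{vol}(K^{\circ}).
    \text{Mahler's conjecture: for every centrally symmetric convex body } K \subset \mathbb{R}^n, \quad s(K) \;\ge\; \frac{4^n}{n!},
    \text{with conjectured equality for the cube and its polar, the cross-polytope; the matching upper bound } s(K) \le s(B_2^n)
    \text{is the Blaschke--Santal\'o inequality.}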
Max ERC Funding
1 514 125 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym VERICOMP
Project Foundations of Verifiable Computing
Researcher (PI) Guy ROTHBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Proof systems allow a weak verifier to ascertain the correctness of complex computational statements. Efficiently-verifiable proof systems are fundamental objects in the study of computation, and have led to some of the deepest and most celebrated insights in cryptography and in complexity theory.
The vast and rich literature on proof systems focuses primarily on proving the correctness of intractable statements, e.g. ones that are NP-complete. While the verification can be efficient, the proofs themselves cannot be generated in polynomial time. This limits the applicability of such proof systems, both from a theoretical perspective and in their real-world impact. This proposal aims to obtain a comprehensive understanding of proof systems with polynomial-time proof generation, to explore their practical applicability, and to investigate their connections with foundational questions in cryptography and in complexity theory.
Our study will focus primarily on interactive proof systems for tractable computations. The proposed research aims to revolutionize our understanding of these foundational objects by providing a complete and tight characterization of the complexity of proving and verifying general statements, by achieving breakthroughs in the study of related proof-system notions, such as cryptographic arguments, and by building a fine-grained “algorithmic” theory of proof systems for central polynomial-time computational problems.
Our research will leverage these advances towards diverse applications: from real-world security challenges, such as verifying the correctness of computations performed by the cloud and cryptographic “proofs of work”, to a complexity-theoretic understanding of the complexity of approximating problems in P and of solving them on random instances.
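As a minimal, classical illustration of a weak verifier checking an outsourced computation (Freivalds' check, included as background rather than as one of the proposal's protocols), a claimed matrix product can be verified in quadratic time per trial instead of being recomputed in cubic time:

    import numpy as np

    def freivalds_verify(A, B, C, trials=20):
        """Probabilistically check the claim C == A @ B for integer matrices.
        Each trial costs O(n^2) via matrix-vector products; a wrong C passes all
        trials with probability at most 2**(-trials)."""
        n = C.shape[0]
        for _ in range(trials):
            r = np.random.randint(0, 2, size=(n, 1))   # random 0/1 challenge vector
            if not np.array_equal(A @ (B @ r), C @ r):
                return False                            # discrepancy caught
        return True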
Max ERC Funding
1 882 460 €
Duration
Start date: 2019-01-01, End date: 2023-12-31