Project acronym BANDWIDTH
Project The cost of limited communication bandwidth in distributed computing
Researcher (PI) Keren CENSOR-HILLEL
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Distributed systems underlie many modern technologies, a prime example being the Internet. The ever-increasing abundance of distributed systems necessitates that their design and use be backed by strong theoretical foundations.
A major challenge that distributed systems face is the lack of a central authority, which brings many aspects of uncertainty into the environment, in the form of unknown network topology or unpredictable dynamic behavior. A practical restriction of distributed systems, which is at the heart of this proposal, is the limited bandwidth available for communication between the network components.
A central family of distributed tasks is that of local tasks, informally described as tasks that can be solved by sending information across only a relatively small number of hops. A cornerstone example is the need to break symmetry and provide better utilization of resources, which can be achieved by the task of producing a valid coloring of the nodes given some small number of colors. Amazingly, there are still huge gaps between the known upper and lower bounds for the complexity of many local tasks. This holds even under the powerful assumption of unlimited bandwidth. While some known algorithms do use small messages, the complexity gaps are even larger than in the unlimited-bandwidth case. This is not a mere coincidence: the existing theoretical infrastructure is provably incapable of giving stronger lower bounds for many local tasks under limited bandwidth.
This proposal zooms in on this crucial blind spot in the current literature on the theory of distributed computing, namely the study of local tasks under limited bandwidth. The goal of this research is to produce fast algorithms for fundamental distributed local tasks under restricted bandwidth, as well as to understand their limitations by proving lower bounds.
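For concreteness, the sketch below simulates one round of a textbook randomized (Delta+1)-coloring procedure in a synchronous message-passing network, where each message is just a color identifier (O(log n) bits). It illustrates the kind of local, small-message task discussed above and is not an algorithm from the proposal; the toy graph, palette size, and conflict rule are generic illustrative choices.

```python
import random

def random_coloring_round(adj, color, delta, rng=random):
    """One synchronous round: every uncolored node proposes a random color from
    {0..delta} not used by its already-colored neighbors, announces it to its
    neighbors, and keeps it only if no neighbor proposed the same color."""
    proposals = {}
    for v in adj:
        if color[v] is None:
            taken = {color[u] for u in adj[v] if color[u] is not None}
            proposals[v] = rng.choice([c for c in range(delta + 1) if c not in taken])
    for v, c in proposals.items():
        if all(proposals.get(u) != c for u in adj[v]):
            color[v] = c
    return color

# Toy instance: a 4-node graph with maximum degree 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
color = {v: None for v in adj}
rounds = 0
while any(c is None for c in color.values()):
    color = random_coloring_round(adj, color, delta=3)
    rounds += 1
print(rounds, color)   # a valid (delta+1)-coloring after a few rounds
```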
Max ERC Funding
1 486 480 €
Duration
Start date: 2018-06-01, End date: 2023-05-31
Project acronym BeyondA1
Project Set theory beyond the first uncountable cardinal
Researcher (PI) Assaf Shmuel Rinot
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary We propose to establish a research group that will unveil the combinatorial nature of the second uncountable cardinal. This includes its Ramsey-theoretic, order-theoretic, graph-theoretic and topological features. Among others, we will be directly addressing fundamental problems due to Erdos, Rado, Galvin, and Shelah.
While some of these problems are old and well-known, an unexpected series of breakthroughs from the last three years suggests that now is a promising point in time to carry out such a project. Indeed, within a short period, four previously unattainable problems concerning the second uncountable cardinal were successfully tackled: by Aspero on a club-guessing problem of Shelah, by Krueger on the club-isomorphism problem for Aronszajn trees, by Neeman on the isomorphism problem for dense sets of reals, and by the PI on the Souslin problem. Each of these results was obtained through the development of a completely new technical framework, and these frameworks could now pave the way for the solution of some major open questions.
A goal of the highest risk in this project is the discovery of a consistent (possibly, parameterized) forcing axiom that will (preferably, simultaneously) provide structure theorems for stationary sets, linearly ordered sets, trees, graphs, and partition relations, as well as the refutation of various forms of club-guessing principles, all at the level of the second uncountable cardinal. In comparison, at the level of the first uncountable cardinal, a forcing axiom due to Foreman, Magidor and Shelah achieves exactly that.
To approach our goals, the proposed project is divided into four core areas: Uncountable trees, Ramsey theory on ordinals, Club-guessing principles, and Forcing Axioms. There is a rich bilateral interaction between any pair of the four different cores, but the proposed division will allow an efficient allocation of manpower, and will increase the chances of parallel success.
Max ERC Funding
1 362 500 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym BirNonArchGeom
Project Birational and non-archimedean geometries
Researcher (PI) Michael TEMKIN
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Resolution of singularities is one of the classical, central and difficult areas of algebraic geometry, with a century-long history of intensive research and contributions by such great names as Zariski, Hironaka and Abhyankar. Nowadays, desingularization of schemes of characteristic zero is very well understood, while semistable reduction of morphisms and desingularization in positive characteristic are still waiting for major breakthroughs. In addition to the classical techniques, with their triumph in characteristic zero, modern resolution of singularities includes de Jong's method of alterations, toroidal methods, formal analytic and non-archimedean methods, etc.
The aim of the proposed research is to study nearly all directions in resolution of singularities and semistable reduction, as well as the wild ramification phenomena, which are probably the main obstacle to transferring methods from characteristic zero to positive characteristic.
The methods of algebraic and non-archimedean geometries are intertwined in the proposal, though algebraic geometry is somewhat dominant, especially due to the new stack-theoretic techniques. It seems very probable that an increased symbiosis between birational and non-archimedean geometries will be one of the by-products of this research.
Max ERC Funding
1 365 600 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym CombiCompGeom
Project Combinatorial Aspects of Computational Geometry
Researcher (PI) Natan Rubin
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The project focuses on the interface between computational and combinatorial geometry.
Geometric problems emerge in a variety of computational fields that interact with the physical world.
The performance of geometric algorithms is determined by the description complexity of their underlying combinatorial structures. Hence, most theoretical challenges faced by computational geometry are of a distinctly combinatorial nature.
In the past two decades, computational geometry has been revolutionized by the powerful combination of random sampling techniques with the abstract machinery of geometric arrangements. These insights were used, in turn, to establish state-of-the-art results in combinatorial geometry. Nevertheless, a number of fundamental problems remained open and resisted numerous attempts to solve them.
Motivated by the recent breakthrough results, in which the PI played a central role, we propose two exciting lines of study with the potential to change the landscape of this field.
The first research direction concerns the complexity of Voronoi diagrams -- arguably the most common structures in computational geometry.
The second direction concerns combinatorial and algorithmic aspects of geometric intersection structures, including some fundamental open problems in geometric transversal theory. Many of these questions are motivated by geometric variants of general covering and packing problems, and all efficient approximation schemes for them must rely on the intrinsic properties of geometric graphs and hypergraphs.
Any progress in responding to these challenges will constitute a major breakthrough in both computational and combinatorial geometry.
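To make the notion of description complexity concrete, here is a small illustration using SciPy's standard planar Voronoi construction; the random point set and the reported quantities are illustrative assumptions only and are not tied to the proposal's open problems, which concern far more general sites, metrics and dimensions.

```python
import numpy as np
from scipy.spatial import Voronoi

# In the plane, the Voronoi diagram of n sites has O(n) vertices, edges and
# cells; the questions alluded to above concern much more general settings.
rng = np.random.default_rng(0)
points = rng.random((200, 2))          # 200 random sites in the unit square
vor = Voronoi(points)

print("sites:           ", len(points))
print("Voronoi vertices:", len(vor.vertices))
print("Voronoi edges:   ", len(vor.ridge_vertices))
```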
Max ERC Funding
1 303 750 €
Duration
Start date: 2016-09-01, End date: 2021-08-31
Project acronym CSG
Project C° symplectic geometry
Researcher (PI) Lev Buhovski
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2017-STG
Summary "The objective of this proposal is to study ""continuous"" (or C^0) objects, as well as C^0 properties of smooth objects, in the field of symplectic geometry and topology. C^0 symplectic geometry has seen spectacular progress in recent years, drawing attention of mathematicians from various background. The proposed study aims to discover new fascinating C^0 phenomena in symplectic geometry.
One circle of questions concerns symplectic and Hamiltonian homeomorphisms. Recent studies indicate that these objects possess both rigidity and flexibility, appearing in surprising and counter-intuitive ways. Our understanding of symplectic and Hamiltonian homeomorphisms is far from being satisfactory, and here we intend to study questions related to action of symplectic homeomorphisms on submanifolds. Some other questions are about Hamiltonian homeomorphisms in relation to the celebrated Arnold conjecture. The PI suggests to study spectral invariants of continuous Hamiltonian flows, which allow to formulate the C^0 Arnold conjecture in higher dimensions. Another central problem that the PI will work on is the C^0 flux conjecture.
A second circle of questions is about the Poisson bracket operator, and its functional-theoretic properties. The first question concerns the lower bound for the Poisson bracket invariant of a cover, conjectured by L. Polterovich who indicated relations between this problem and quantum mechanics. Another direction aims to study the C^0 rigidity versus flexibility of the L_p norm of the Poisson bracket. Despite a recent progress in dimension two showing rigidity, very little is known in higher dimensions. The PI proposes to use combination of tools from topology and from hard analysis in order to address this question, whose solution will be a big step towards understanding functional-theoretic properties of the Poisson bracket operator."
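For orientation (this is the standard textbook definition, not a result of the proposal): on $\mathbb{R}^{2n}$ with coordinates $(q_1,\dots,q_n,p_1,\dots,p_n)$ and the standard symplectic form, the Poisson bracket of two smooth functions $f,g$ is

$$\{f,g\} \;=\; \sum_{i=1}^{n}\left(\frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}-\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}\right),$$

and the rigidity questions above ask, roughly speaking, how quantities such as $\|\{f,g\}\|_{L_p}$ behave under $C^0$-small perturbations of $f$ and $g$.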
Max ERC Funding
1 345 282 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym DeepInternal
Project Going Deep and Blind with Internal Statistics
Researcher (PI) Michal IRANI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE6, ERC-2017-ADG
Summary Unsupervised visual inference can often be performed by exploiting the internal redundancy inside a single visual datum (an image or a video). The strong repetition of patches inside a single image/video provides a powerful data-specific prior for solving a variety of vision tasks in a “blind” manner: (i) Blind in the sense that sophisticated unsupervised inferences can be made with no prior examples or training; (ii) Blind in the sense that complex ill-posed Inverse-Problems can be solved, even when the forward degradation is unknown.
While the above fully unsupervised approach achieved impressive results, it relies on internal data alone, hence cannot enjoy the “wisdom of the crowd” which Deep-Learning (DL) so wisely extracts from external collections of images, yielding state-of-the-art (SOTA) results. Nevertheless, DL requires huge amounts of training data, which restricts its applicability. Moreover, some internal image-specific information, which is clearly visible, remains unexploited by today's DL methods. One such example is shown in Fig.1.
We propose to combine the power of these two complementary approaches – unsupervised Internal Data Recurrence and Deep Learning – to obtain the best of both worlds. If successful, this will have several important outcomes, including:
• A wide range of low-level & high-level inferences (image & video).
• A continuum between Internal & External training – a platform to explore theoretical and practical tradeoffs between amount of available training data and optimal Internal-vs-External training.
• Enable totally unsupervised DL when no training data are available.
• Enable supervised DL with modest amounts of training data.
• New applications, disciplines and domains, which are enabled by the unified approach.
• A platform for substantial progress in video analysis (which has been lagging behind so far due to the strong reliance on exhaustive supervised training data).
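As a toy illustration of the internal-statistics prior described above (a hedged sketch of the general patch-recurrence idea, not code from the proposal): small patches of a natural image tend to recur, almost as-is, inside a downscaled copy of the same image, so a patch can be "cleaned" by replacing it with its internal nearest neighbour. The patch size, the 2x downscaling, and the averaging of overlapping estimates below are illustrative choices.

```python
import numpy as np

def extract_patches(img, k):
    """All k x k patches of a 2-D float array, flattened to rows."""
    H, W = img.shape
    return np.stack([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1) for j in range(W - k + 1)])

def internal_nn_restore(img, k=5):
    """Replace each patch by its nearest neighbour among the patches of a
    2x-downscaled copy of the same image, averaging overlapping estimates."""
    H, W = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    coarse = img[:H, :W].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
    bank = extract_patches(coarse, k)
    out = np.zeros_like(img, dtype=float)
    cnt = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0] - k + 1):
        for j in range(img.shape[1] - k + 1):
            p = img[i:i + k, j:j + k].ravel()
            nn = bank[np.argmin(((bank - p) ** 2).sum(axis=1))]
            out[i:i + k, j:j + k] += nn.reshape(k, k)
            cnt[i:i + k, j:j + k] += 1
    return out / np.maximum(cnt, 1)

# Stand-in grayscale image (values in [0,1]); a real use would load an image.
noisy = np.clip(np.random.rand(64, 64) * 0.1 + 0.5, 0, 1)
restored = internal_nn_restore(noisy)
```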
Max ERC Funding
2 466 940 €
Duration
Start date: 2018-05-01, End date: 2023-04-30
Project acronym DELPHI
Project Computing Answers to Complex Questions in Broad Domains
Researcher (PI) Jonathan Berant
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary The explosion of information around us has democratized knowledge and transformed its availability for people around the world. Still, since information is mediated through automated systems, access is bounded by their ability to understand language.
Consider an economist asking “What fraction of the top-5 growing countries last year raised their CO2 emissions?”. While the required information is available, answering such complex questions automatically is not possible. Current question answering systems can answer simple questions in broad domains, or complex questions in narrow domains. However, broad and complex questions are beyond the reach of the state of the art. This is because systems are unable to decompose questions into their parts and to find the relevant information in multiple sources. Further, as answering such questions is hard for people, collecting large datasets to train such models is prohibitive.
In this proposal I ask: Can computers answer broad and complex questions that require reasoning over multiple modalities? I argue that by synthesizing the advantages of symbolic and distributed representations the answer will be “yes”. My thesis is that symbolic representations are suitable for meaning composition, as they provide interpretability, coverage, and modularity. Complementarily, distributed representations (learned by neural nets) excel at capturing the fuzziness of language. I propose a framework where complex questions are symbolically decomposed into sub-questions, each of which is answered with a neural network, and the final answer is computed from all of the gathered information.
This research tackles foundational questions in language understanding. What is the right representation for reasoning in language? Can models learn to perform complex actions in the face of a paucity of data? Moreover, my research, if successful, will transform how we interact with machines, and define a role for them as research assistants in science, education, and our daily life.
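A toy sketch of the compose-from-parts structure described above (the hard-coded plan, the placeholder country names and the dummy answerer are illustrative assumptions, not components of the actual system): the complex question is broken into atomic sub-questions, each sub-question is delegated to a (here stubbed) neural answerer, and the final answer is computed symbolically from the parts.

```python
from typing import Callable, List

Answerer = Callable[[str], List[str]]   # stand-in for a trained neural reader

def answer_fraction_question(answerer: Answerer) -> float:
    """Plan for: 'What fraction of the top-5 growing countries last year
    raised their CO2 emissions?' The decomposition is hard-coded here; a real
    system would produce it with a learned semantic parser."""
    top5 = answerer("Which were the top-5 growing countries last year?")
    raised = [c for c in top5
              if answerer(f"Did {c} raise its CO2 emissions last year?") == ["yes"]]
    return len(raised) / len(top5)

def dummy(question: str) -> List[str]:
    """Placeholder answerer over fictional entities, standing in for a neural module."""
    if question.startswith("Which"):
        return ["Country A", "Country B", "Country C", "Country D", "Country E"]
    return ["yes"] if "Country A" in question or "Country C" in question else ["no"]

print(answer_fraction_question(dummy))   # 0.4 with the dummy answers
```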
Max ERC Funding
1 499 375 €
Duration
Start date: 2019-04-01, End date: 2024-03-31
Project acronym DIFFOP
Project Nonlinear Data and Signal Analysis with Diffusion Operators
Researcher (PI) Ronen TALMON
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Nowadays, extensive collection and storage of massive data sets have become routine in multiple disciplines and in everyday life. These large amounts of intricate data often make arithmetic on data samples and basic comparisons between them problematic, raising new challenges to traditional data analysis objectives such as filtering and prediction. Furthermore, the availability of such data constantly pushes the boundaries of data analysis to new emerging domains, ranging from neuronal and social network analysis to multimodal sensor fusion. The combination of evolved data and new domains drives a fundamental change in the field of data analysis. Indeed, many classical model-based techniques have become obsolete since their models do not embody the richness of the collected data. Today, one notable avenue of research is the development of nonlinear techniques that pass from the data directly to representations, without deriving models in closed form. The vast majority of such existing data-driven methods operate directly on the data, a hard task by itself when the data are large and elaborate. The goal of this research is to develop a fundamentally new methodology for high dimensional data analysis with diffusion operators, making use of recent transformative results in manifold and geometry learning. More concretely, shifting the focus from processing the data samples themselves and considering instead structured data through the lens of diffusion operators will introduce new powerful “handles” to data, capturing their complexity efficiently. We will study the basic theory behind this nonlinear analysis, develop new operators for this purpose, and devise efficient data-driven algorithms. In addition, we will explore how our approach can be leveraged for devising efficient solutions to a broad range of open real-world data analysis problems, involving intrinsic representations, sensor fusion, time-series analysis, network connectivity inference, and domain adaptation.
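For readers unfamiliar with diffusion operators, the sketch below builds the classical diffusion-maps operator in the style of Coifman and Lafon (a Gaussian affinity kernel normalized into a row-stochastic Markov matrix) and embeds the data with its leading nontrivial eigenvectors. This is the standard construction that the proposal takes as a starting point, not the new operators it aims to develop; the kernel bandwidth and embedding dimension are arbitrary illustrative choices.

```python
import numpy as np

def diffusion_map(X, eps, dim=2, t=1):
    """Classical diffusion-maps embedding of the rows of X (a sketch)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-D2 / eps)                                  # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                   # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)                          # P is not symmetric in general
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1); scale by eigenvalues^t.
    return (vals[1:dim + 1] ** t) * vecs[:, 1:dim + 1]

X = np.random.default_rng(0).normal(size=(300, 10))        # placeholder data
embedding = diffusion_map(X, eps=5.0, dim=2)
print(embedding.shape)                                      # (300, 2)
```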
Max ERC Funding
1 260 000 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym EffectiveTG
Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics
Researcher (PI) Gal BINYAMINI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary Tame geometry studies structures in which every definable set has a finite geometric complexity. The study of tame geometry spans several interrelated mathematical fields, including semialgebraic, subanalytic, and o-minimal geometry. The past decade has seen the emergence of a spectacular link between tame geometry and arithmetic following the discovery of the fundamental Pila-Wilkie counting theorem and its applications in unlikely diophantine intersections. The P-W theorem itself relies crucially on the Yomdin-Gromov theorem, a classical result of tame geometry with fundamental applications in smooth dynamics.
It is natural to ask whether the complexity of a tame set can be estimated effectively in terms of the defining formulas. While a large body of work is devoted to answering such questions in the semialgebraic case, surprisingly little is known concerning more general tame structures - specifically those needed in recent applications to arithmetic. The nature of the link between tame geometry and arithmetic is such that any progress toward effectivizing the theory of tame structures will likely lead to effective results in the domain of unlikely intersections. Similarly, a more effective version of the Yomdin-Gromov theorem is known to imply important consequences in smooth dynamics.
The proposed research will approach effectivity in tame geometry from a fundamentally new direction, bringing to bear methods from the theory of differential equations which have until recently never been used in this context. Toward this end, our key goals will be to gain insight into the differential algebraic and complex analytic structure of tame sets; and to apply this insight in combination with results from the theory of differential equations to effectivize key results in tame geometry and its applications to arithmetic and dynamics. I believe that my preliminary work in this direction amply demonstrates the feasibility and potential of this approach.
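For orientation, the counting theorem referred to above can be stated as follows (a standard formulation, recalled here for context rather than quoted from the proposal). For a set $X \subseteq \mathbb{R}^n$ definable in an o-minimal structure, write $X^{\mathrm{alg}}$ for the union of all connected, positive-dimensional semialgebraic subsets of $X$, and $X^{\mathrm{trans}} = X \setminus X^{\mathrm{alg}}$. The Pila-Wilkie theorem asserts that for every $\varepsilon > 0$ there is a constant $c = c(X,\varepsilon)$ such that

$$\#\{\, x \in X^{\mathrm{trans}} \cap \mathbb{Q}^n \;:\; H(x) \le T \,\} \;\le\; c\,T^{\varepsilon} \qquad \text{for all } T \ge 1,$$

where $H$ denotes the standard height of a rational point. The effectivity questions raised above ask, roughly, how such constants depend on the defining formulas.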
Max ERC Funding
1 155 027 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym ErgComNum
Project Ergodic theory and additive combinatorics
Researcher (PI) Tamar Ziegler
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE1, ERC-2015-CoG
Summary The last decade has witnessed a new spring for dynamical systems. The field - initiated by Poincare in the study of the N-body problem - has become essential in the understanding of seemingly far off fields such as combinatorics, number theory and theoretical computer science. In particular, ideas from ergodic theory played an important role in the resolution of long standing open problems in combinatorics and number theory. A striking example is the role of dynamics on nilmanifolds in the recent proof of Hardy-Littlewood estimates for the number of solutions to systems of linear equations of finite complexity in the prime numbers. The interplay between ergodic theory, number theory and additive combinatorics has proved very fruitful; it is a fast growing area in mathematics attracting many young researchers. We propose to tackle central open problems in the area.
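One classical instance of the interplay described above (recalled here for context; it predates the Hardy-Littlewood-type results mentioned in the summary) is Furstenberg's multiple recurrence theorem, which, via the correspondence principle, yields Szemeredi's theorem on arithmetic progressions: for every measure-preserving system $(X,\mathcal{B},\mu,T)$, every $A \in \mathcal{B}$ with $\mu(A) > 0$, and every $k \ge 1$,

$$\liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \mu\!\left(A \cap T^{-n}A \cap T^{-2n}A \cap \cdots \cap T^{-kn}A\right) \;>\; 0.$$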
Max ERC Funding
1 342 500 €
Duration
Start date: 2016-05-01, End date: 2021-04-30
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, devising innovative engineering solutions, and many other compute-intensive problems. However, the future of supercomputing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failures), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers.
The challenge of ascertaining fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general purpose fault resilience.
Efficient general-purpose solutions (e.g., via error correcting codes) revolutionized memory and communication devices over two decades ago, enabling programmers to effectively disregard the otherwise highly likely memory and communication errors. The time has come for a similar paradigm shift in the computing regimen. I argue that exciting recent advances in error correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
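As a small illustration of algorithm-level fault resilience in the spirit described above (this is the classical checksum technique of algorithm-based fault tolerance, shown for context; it is not the mechanism proposed here): appending a checksum row and column to the operands of a matrix multiplication lets a corrupted entry of the product be detected from inconsistent row and column sums, at a modest overhead.

```python
import numpy as np

def abft_matmul(A, B, rtol=1e-8):
    """Checksum-protected matrix multiply (classical ABFT sketch).

    An extra row of column sums is appended to A and an extra column of row
    sums to B; a corrupted entry of the product makes the checksums inconsistent."""
    Ac = np.vstack([A, A.sum(axis=0)])                    # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])     # row-checksum column
    C = Ac @ Br
    rows_ok = np.allclose(C[:-1, :-1].sum(axis=0), C[-1, :-1], rtol=rtol)
    cols_ok = np.allclose(C[:-1, :-1].sum(axis=1), C[:-1, -1], rtol=rtol)
    return C[:-1, :-1], rows_ok and cols_ok

rng = np.random.default_rng(0)
A, B = rng.random((4, 3)), rng.random((3, 5))
C, clean = abft_matmul(A, B)
assert clean and np.allclose(C, A @ B)

# Simulate a bit flip in the product and re-check the column checksums.
Ac = np.vstack([A, A.sum(axis=0)])
Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
C_full = Ac @ Br
C_full[1, 2] += 1.0                                       # injected fault
print(np.allclose(C_full[:-1, :-1].sum(axis=0), C_full[-1, :-1]))   # False: fault detected
```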
Max ERC Funding
1 824 467 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym HARMONIC
Project Discrete harmonic analysis for computer science
Researcher (PI) Yuval FILMUS
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Boolean function analysis is a topic of research at the heart of theoretical computer science. It studies functions on n input bits (for example, functions computed by Boolean circuits) from a spectral perspective, by treating them as real-valued functions on the group Z_2^n, and using techniques from Fourier and functional analysis. Boolean function analysis has been applied to a wide variety of areas within theoretical computer science, including hardness of approximation, learning theory, coding theory, and quantum complexity theory.
Despite its immense usefulness, Boolean function analysis has limited scope, since it is only appropriate for studying functions on {0,1}^n (a domain known as the Boolean hypercube). Discrete harmonic analysis is the study of functions on domains possessing richer algebraic structure such as the symmetric group (the group of all permutations), using techniques from representation theory and Sperner theory. The considerable success of Boolean function analysis suggests that discrete harmonic analysis could likewise play a central role in theoretical computer science.
The goal of this proposal is to systematically develop discrete harmonic analysis on a broad variety of domains, with an eye toward applications in several areas of theoretical computer science. We will generalize classical results of Boolean function analysis beyond the Boolean hypercube, to domains such as finite groups, association schemes (a generalization of finite groups), the quantum analog of the Boolean hypercube, and high-dimensional expanders (high-dimensional analogs of expander graphs). Potential applications include a quantum PCP theorem and two outstanding open questions in hardness of approximation: the Unique Games Conjecture and the Sliding Scale Conjecture. Beyond these concrete applications, we expect that the fundamental results we prove will have many other applications that are hard to predict in advance.
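To fix notation for the hypercube case that the proposal generalizes away from: a function $f:\{0,1\}^n \to \mathbb{R}$ has a Fourier expansion $f = \sum_{S \subseteq [n]} \hat f(S)\,\chi_S$ with characters $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$, and the coefficients can be computed with a fast Walsh-Hadamard transform. The sketch below does exactly this for a 3-bit majority function; it is a standard textbook computation included only to make the Boolean-hypercube setting concrete.

```python
import numpy as np

def fourier_coeffs(f_vals):
    """Fourier (Walsh-Hadamard) coefficients of f: {0,1}^n -> R.

    f_vals[x] is f at the point whose bits are the binary digits of x.
    Returns hat_f indexed by subsets S encoded as bitmasks:
    hat_f[S] = E_x[ f(x) * (-1)^{popcount(x & S)} ]."""
    a = np.array(f_vals, dtype=float)
    n = a.size.bit_length() - 1
    assert a.size == 1 << n, "length must be a power of two"
    h = 1
    while h < a.size:                       # in-place fast Walsh-Hadamard transform
        for i in range(0, a.size, 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / a.size                       # normalize: expectation over x

# Majority on 3 bits, encoded in {0,1}: maj(x) = 1 iff at least two bits are 1.
maj = [int(bin(x).count("1") >= 2) for x in range(8)]
print(fourier_coeffs(maj))
```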
Max ERC Funding
1 473 750 €
Duration
Start date: 2019-03-01, End date: 2024-02-29
Project acronym HD-App
Project New horizons in homogeneous dynamics and its applications
Researcher (PI) Uri SHAPIRA
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Starting Grant (StG), PE1, ERC-2017-STG
Summary We present a large variety of novel lines of research in Homogeneous Dynamics with emphasis on the dynamics of the diagonal group. Both new and classical applications are suggested, most notably to
• Number Theory
• Geometry of Numbers
• Diophantine approximation.
Emphasis is given to applications in
• Diophantine properties of algebraic numbers.
The proposal consists of four sections.
(1) In the first section we discuss questions pertaining to topological and distributional aspects of periodic orbits of the diagonal group in the space of lattices in Euclidean space. These objects encode deep information regarding Diophantine properties of algebraic numbers. We demonstrate how these questions are closely related to, and may help solve, some of the central open problems in the geometry of numbers and Diophantine approximation.
(2) In the second section we discuss Minkowski's conjecture regarding integral values of products of linear forms. For over a century this central conjecture has resisted a general solution, and a novel and promising strategy for its resolution is presented.
(3) In the third section, a novel conjecture regarding limiting distribution of infinite-volume-orbits is presented, in analogy with existing results regarding finite-volume-orbits. Then, a variety of applications and special cases are discussed, some of which give new results regarding classical concepts such as continued fraction expansion of rational numbers.
(4) In the last section we suggest a novel strategy to attack one of the most notorious open problems in Diophantine approximation, namely: Do cubic numbers have unbounded continued fraction expansion? This novel strategy leads us to embark on a systematic study of an area of homogeneous dynamics which has not been studied yet, namely the dynamics on the space of rank-k discrete subgroups of R^n (identified up to scaling).
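For context on the question in item (4), here is a standard numerical illustration (not part of the proposed strategy): the continued fraction expansion of a real number is computed by repeatedly taking integer parts and reciprocals, and whether the partial quotients of cubic irrationals such as 2^(1/3) are unbounded is precisely the open problem referred to above.

```python
from fractions import Fraction
import math

def continued_fraction(x, terms=12):
    """First partial quotients of x; exact (and finite) for Fraction inputs."""
    out = []
    for _ in range(terms):
        a = math.floor(x)
        out.append(a)
        frac = x - a
        if frac == 0:          # rational x: the expansion terminates
            break
        x = 1 / frac
    return out

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16] -- a famous approximant of pi
# Floating-point arithmetic limits how many terms below are reliable.
print(continued_fraction(2 ** (1 / 3)))         # a cubic irrational
```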
Max ERC Funding
1 432 730 €
Duration
Start date: 2018-10-01, End date: 2023-09-30
Project acronym HIEXP
Project High Dimensional Expanders, Ramanujan Complexes and Codes
Researcher (PI) Alex LUBOTZKY
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2015-AdG
Summary "Expander graphs have been playing a fundamental role in many areas of computer science. During the last 15 years they have also found important and unexpected applications in pure mathematics. The goal of the current research is to develop systematically high-dimensional (HD) theory of expanders, i.e., simplicial complexes and hypergraphs which resemble in dimension d, the role of expander graphs for d = 1. There are several motivations for developing such a theory, some from pure mathematics and some from computer science. For example, Ramanujan complexes (the HD versions of the ""optimal"" expanders, the Ramanujan graphs) have already been useful for extremal hypergraph theory. One of the main goals of this research is to use them to solve other problems, such as Gromov's problem: are there bounded degree simplicial complexes with the topological overlapping property (""topological expanders""). Other directions of HD expanders have applications in property testing, a very important subject in theoretical computer science. Moreover they can be a tool for the construction of locally testable codes, an important question of theoretical and practical importance in the theory of error correcting codes. In addition, the study of these simplicial complexes suggests new quantum error correcting codes (QECC). It is hoped that it will lead to such codes which are also low density parity check (LDPC). The huge success and impact of the theory of expander graphs suggests that the high dimensional theory will also bring additional unexpected applications beside those which can be foreseen as of now."
Max ERC Funding
1 592 500 €
Duration
Start date: 2016-08-01, End date: 2021-07-31
Project acronym HOLI
Project Deep Learning for Holistic Inference
Researcher (PI) Amir Globerson
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural nets. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between those. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem.
First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
Max ERC Funding
1 932 500 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups,
and propose to tackle a wide range of problems in the area, including the central open problems.
One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\mathbb{R} / \mathbb{Z}$.
A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term.
One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics.
Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
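For reference, a hedged paraphrase (ours, not the proposal's wording) of the conjecture just cited: with $T_2 x = 2x \bmod 1$ and $T_3 x = 3x \bmod 1$ acting on $\mathbb{R}/\mathbb{Z}$, Furstenberg's Conjecture asserts that every Borel probability measure that is invariant and ergodic under both $T_2$ and $T_3$ is either Lebesgue measure or supported on a finite set of rationals; the Rudolph-Johnson theorem settles the positive-entropy case, which is why the zero entropy case singled out above is the one that remains open.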
Max ERC Funding
2 090 625 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym iEXTRACT
Project Information Extraction for Everyone
Researcher (PI) Yoav Goldberg
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Staggering amounts of information are stored in natural language documents, rendering them unavailable to data-science techniques. Information Extraction (IE), a subfield of Natural Language Processing (NLP), aims to automate the extraction of structured information from text, yielding datasets that can be queried, analyzed and combined to provide new insights and drive research forward.
Despite tremendous progress in NLP, IE systems remain mostly inaccessible to non-NLP-experts who can greatly benefit from them. This stems from the current methods for creating IE systems: the dominant machine-learning (ML) approach requires technical expertise and large amounts of annotated data, and does not provide the user control over the extraction process. The previously dominant rule-based approach unrealistically requires the user to anticipate and deal with the nuances of natural language.
I aim to remedy this situation by revisiting rule-based IE in light of advances in NLP and ML. The key idea is to cast IE as a collaborative human-computer effort, in which the user provides domain-specific knowledge, and the system is in charge of solving various domain-independent linguistic complexities, ultimately allowing the user to query
unstructured texts via easily structured forms.
More specifically, I aim to develop:
(a) a novel structured representation that abstracts much of the complexity of natural language;
(b) algorithms that derive these representations from texts;
(c) an accessible rule language to query this representation;
(d) AI components that infer the user extraction intents, and based on them promote relevant examples and highlight extraction cases that require special attention.
The ultimate goal of this project is to democratize NLP and bring advanced IE capabilities directly to the hands of
domain-experts: doctors, lawyers, researchers and scientists, empowering them to process large volumes of data and
advance their profession.
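As a hedged toy illustration (ours, in Python) of the kind of user-facing querying described above, the sketch below matches a form-like rule with one free slot against an abstracted predicate-argument representation; the representation, the rule syntax and all names (run_rule, sentences) are hypothetical and are not the project's design.

from typing import Dict, List

# Each sentence is abstracted into a (predicate, {role: filler}) tuple.
sentences = [
    ("treat", {"agent": "aspirin", "patient": "headache"}),
    ("cause", {"agent": "aspirin", "patient": "nausea"}),
]

def run_rule(rule: Dict[str, str], facts) -> List[Dict[str, str]]:
    # Return the bindings of free slots (values starting with "?") for every fact
    # whose predicate and fixed role fillers match the rule.
    out = []
    for pred, roles in facts:
        if rule.get("predicate") not in (None, pred):
            continue
        fixed_ok = all(roles.get(k) == v for k, v in rule.items()
                       if k != "predicate" and not v.startswith("?"))
        if fixed_ok:
            out.append({v: roles.get(k) for k, v in rule.items()
                        if k != "predicate" and v.startswith("?")})
    return out

# "What does aspirin treat?" expressed as a form with one free slot.
print(run_rule({"predicate": "treat", "agent": "aspirin", "patient": "?x"}, sentences))
# -> [{'?x': 'headache'}]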
Max ERC Funding
1 499 354 €
Duration
Start date: 2019-05-01, End date: 2024-04-30
Project acronym LiftMatch
Project Lifting Methods for Global Matching Problems
Researcher (PI) Yaron LIPMAN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary "This proposal presents a research program aimed at breaking new ground in theoretical, algorithmic and practical aspects of the notoriously difficult matching problems. The main goal is developing a unified algorithmic framework for matching problems that enjoys theoretical guarantees and high practical value for ``real-life'' applications.
The core methodological aspect is the modelling of matching problems as high-dimensional, lifted convex problems that can be efficiently approximated. We present several case-studies of this methodology, constructing efficient algorithms for approximating the solutions of important instances of the matching problem. The results already demonstrate state of the art performance and are backed-up with novel theoretical analysis proving tightness and correctness of the suggested convexifications for certain classes of non-trivial input.
This proposal has ``high risk - high impact'' profile: The ``high-risk"" aspect of this proposal comes from the hardness of general matching problems which, when faithfully represented, are NP-hard. However, as we demonstrate, combining convex optimization tools in a careful way, that takes into account computational complexity, is likely to push forward the limits of current algorithmic solutions. The ``High-impact"" will be immediate: As matching problems exist in almost every aspect of scientific research, practical generic algorithms for matching problems are likely to have a significant influence.
We believe that the inroads and preliminary results presented in this proposal already provide strong evidence for the potential success of the suggested research program."
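As a generic illustration of the lifting-and-convexification idea (ours; not necessarily one of the proposal's constructions): matching two graphs with adjacency matrices $A$ and $B$ can be posed as $\min_{X \in \Pi_n} \|AX - XB\|_F^2$ over the set $\Pi_n$ of $n \times n$ permutation matrices, which is intractable in general; relaxing the domain to the Birkhoff polytope $\mathrm{DS}_n = \{X \ge 0 : X\mathbf{1} = \mathbf{1},\ X^{\top}\mathbf{1} = \mathbf{1}\}$, the convex hull of $\Pi_n$, yields an efficiently solvable convex program, and lifting to higher-dimensional variables (for instance, semidefinite relaxations in $XX^{\top}$) trades extra dimensions for tighter relaxations at additional computational cost.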
Max ERC Funding
1 675 640 €
Duration
Start date: 2018-03-01, End date: 2023-02-28
Project acronym LightCrypt
Project New Directions in Lightweight Cryptanalysis
Researcher (PI) Nathan Keller
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Over the next five years, fifty billion new smart devices will be connected to the Internet of Things (IoT), creating a revolution in the way we interact with our environment. Such resource constrained devices will require lightweight cryptography to protect them and us from bad actors. Unfortunately, such schemes can be highly vulnerable: Two notable examples are the encryption schemes used in GSM cellular phones and in car remote controls - both broken by the PI. We claim that it is not sufficient to adjust the current design and analysis tools to the constrained environment. Instead, we must establish a new research methodology, aiming directly at the problems arising in the 'lightweight realm'.
We plan to concentrate on four main directions. First, we will go 'a level up' to study the security of generic lightweight building blocks in order to find the minimal number of operations required to transition from insecure to secure designs. Second, when considering specific ciphers we will pursue practical low complexity attacks, which are more relevant to the lightweight realm than standard theoretical attacks. Third, we will pursue new directions toward establishing 'white-box cryptography' – a central challenge in IoT cryptography. Finally, we will explore further applications of discrete analysis to lightweight cryptography, trying to establish rigorous conditions under which the standard cryptanalytic techniques apply in order to avoid unnecessarily pessimistic security estimates.
For the near future, we hope that our research will make it possible to detect and fix weaknesses in existing lightweight ciphers before they can be exploited by the 'bad guys'. Looking further ahead, we hope to understand how to design new secure lightweight ciphers for the billions of IoT devices to come.
Max ERC Funding
1 487 500 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym LocalOrder
Project Localization and Ordering Phenomena in Statistical Physics, Probability Theory and Combinatorics
Researcher (PI) Ron Peled
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2015-STG
Summary Mathematical statistical physics has seen spectacular progress in recent years. Existing problems which were previously unattainable were solved, opening a way to approach some of the classical open questions in the field. The proposed research focuses on phenomena of localization and long-range order in physical systems of large size, identifying several fundamental questions lying at the interface of Statistical Physics, Probability Theory and Combinatorics.
One circle of questions concerns the fluctuation behavior of random surfaces, where the PI has recently proved delocalization in two dimensions answering a 1975 question of Brascamp, Lieb and Lebowitz. A main goal of the research is to establish some of the long-standing universality conjectures for random surfaces. This study is also tied to the localization features of random operators, such as random Schrodinger operators and band matrices, as well as those of reinforced random walks. The PI intends to develop this connection further to bring the state-of-the-art to the conjectured thresholds.
A second circle of questions regards long-range order in high-dimensional systems. This phenomenon is predicted to encompass many models of statistical physics but rigorous results are quite limited. A notable example is the PI’s proof of Kotecky’s 1985 conjecture on the rigidity of proper 3-colorings in high dimensions. The methods used in this context are not limited to high dimensions and were recently used by the PI to prove the analogue for the loop O(n) model of Polyakov’s 1975 prediction that the 2D Heisenberg model and its higher spin versions exhibit exponential decay of correlations at any temperature.
Lastly, statistical physics methods are proposed for solving purely combinatorial problems. The PI has applied this approach successfully to solve questions of existence and asymptotics for combinatorial structures and intends to develop it further to answer some of the tantalizing open questions in the field.
Max ERC Funding
1 136 904 €
Duration
Start date: 2016-01-01, End date: 2020-12-31
Project acronym LogCorrelatedFields
Project Extremes in logarithmically correlated fields
Researcher (PI) Ofer Zeitouni
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE1, ERC-2015-AdG
Summary The proposed research deals with the extremes of logarithmically correlated fields, in both the Gaussian and non-Gaussian setups. Examples of such fields are branching random walks, the (discrete) two dimensional Gaussian free field, the set of points left uncovered by a random walk on the two dimensional torus at times close to the cover time of the torus, the (absolute) values of the characteristic polynomial of random matrices, Ginzburg-Landau models, and more. The proposal builds on recent progress in the study of the maximum and of the extremal process of the two dimensional Gaussian free field, which was made possible by Gaussian comparisons and the introduction of a refined version of the second moment method. The proposed research will develop the tools needed for building a general and flexible theory applicable to general logarithmically correlated fields. Applications to the multiplicative chaos will also be considered.
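A toy simulation (ours, in Python) of one of the examples listed above, the maximum of a binary branching random walk with standard Gaussian increments; the function name brw_max and all parameter choices are ours, and the printed constant $\sqrt{2\ln 2}\,n$ is only the first-order prediction for the maximum at depth $n$.

import math
import random

def brw_max(depth: int) -> float:
    # Maximum particle position after `depth` generations of binary branching,
    # each child adding an independent standard Gaussian step to its parent.
    positions = [0.0]
    for _ in range(depth):
        positions = [x + random.gauss(0.0, 1.0) for x in positions for _ in range(2)]
    return max(positions)

n = 12                                   # 2**12 = 4096 leaves, kept small on purpose
samples = [brw_max(n) for _ in range(20)]
print(sum(samples) / len(samples))       # empirical mean of the maximum
print(math.sqrt(2 * math.log(2)) * n)    # first-order prediction (ignores the log correction)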
Max ERC Funding
1 292 500 €
Duration
Start date: 2016-06-01, End date: 2021-12-31
Project acronym MPM
Project Modern Pattern Matching
Researcher (PI) Ely Porat
Host Institution (HI) BAR ILAN UNIVERSITY
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The advances in technology over the last decade and the massive amount of data passing through the internet have intrigued and challenged computer scientists, as the old models of computation used before this era are now less relevant or too slow. New computational models have been suggested to tackle these technological advances. In the most basic sense, these modern models allow one to scan the input only once, possibly with small auxiliary memory. Moreover, modern techniques have been introduced, such as sparse recovery, which has proven to be a very useful tool for dealing with modern challenges, and the very popular notion of conditional lower bounds, which has provided evidence of hardness for various algorithmic tasks based on widely believed conjectures.
Pattern matching plays a crucial role in many computing applications that can be seen in day to day life. However, its research community has only recently started gaining insight on what can be done in modern models, and is lagging behind in this respect. In particular, there are no algorithms for pattern matching problems that have utilized ideas from sparse recovery, and only recently has there been progress in proving conditional lower bounds for string problems. Furthermore, conditional lower bounds suffer from the lack of hardness conjectures which address time/space tradeoffs.
This proposal will close this gap for many important pattern matching problems within the new models of computation, and will be the first to utilize modern algorithmic techniques, such as sparse recovery, and adapting them into the pattern matching world. Furthermore, this proposal will focus on developing a theory for proving conditional time/space lower bounds, based on new hardness conjectures. This will greatly influence not only the pattern matching sub-field, but the entire algorithmic field at large.
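For concreteness, a small classical illustration (ours, in Python) of what scanning the input only once with small auxiliary memory looks like for exact pattern matching, via a Karp-Rabin style rolling hash; this is textbook material rather than one of the proposal's algorithms, and the name stream_match is ours.

def stream_match(pattern: str, stream, prime: int = (1 << 61) - 1, base: int = 256):
    # Yield end-positions at which `pattern` occurs, reading `stream` one character
    # at a time and keeping only O(|pattern|) state: a sliding window and two hashes.
    m = len(pattern)
    p_hash = 0
    for c in pattern:
        p_hash = (p_hash * base + ord(c)) % prime
    top = pow(base, m - 1, prime)            # weight of the character leaving the window

    window, w_hash = [], 0
    for i, c in enumerate(stream):
        window.append(c)
        w_hash = (w_hash * base + ord(c)) % prime
        if len(window) > m:
            out = window.pop(0)
            w_hash = (w_hash - ord(out) * top * base) % prime
        if len(window) == m and w_hash == p_hash and "".join(window) == pattern:
            yield i

print(list(stream_match("aba", iter("abacabab"))))  # -> [2, 6]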
Max ERC Funding
1 994 609 €
Duration
Start date: 2017-07-01, End date: 2022-06-30
Project acronym NLPRO
Project Natural Language Programming: Turning Text into Executable Code
Researcher (PI) Reut Tsarfaty
Host Institution (HI) THE OPEN UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary Can we program computers in our native tongue? This idea, termed natural language programming, has attracted attention almost since the inception of computers themselves.
From the point of view of software engineering (SE), efforts to program in natural language (NL) have relied thus far on controlled natural languages (CNL) -- small unambiguous fragments of English with strict grammars and limited expressivity. Is it possible to replace CNLs with truly natural, human language?
From the point of view of natural language processing (NLP), current technology successfully extracts static information from NL texts. However, human-like NL understanding goes far beyond such extraction -- it requires dynamic interpretation processes which affect, and are affected by, the environment, update states and lead to action. So, is it possible to endow computers with this kind of dynamic NL understanding?
These two questions are fundamental to SE and NLP, respectively, and addressing each requires a huge leap forward in the respective field. In this proposal I argue that the solutions to these seemingly separate challenges are actually closely intertwined, and that one community's challenge is the other community's stepping stone for a huge leap, and vice versa. Specifically, I propose to view executable programs in SE as semantic structures in NLP, and use them as the basis for broad-coverage dynamic semantic parsing.
My ambitious, cross-disciplinary goal is to develop a new NL compiler based on this novel approach to NL semantics. The NL compiler will accept an NL description as input and return an executable system as output. Moreover, it will continuously improve its NL understanding capacity via online learning that will feed on verification, simulation, synthesis or user feedback. Such dynamic, ever-improving, NL compilers will have vast applications in AI, SE, robotics and cognitive computing and will fundamentally change the way humans and computers interact.
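A hypothetical illustration (ours, in Python) of the controlled-natural-language baseline that the summary contrasts with: a rigid template grammar compiled into executable code, which fails on anything outside its small fragment. The grammar and the name compile_cnl are invented for illustration only.

import re

def compile_cnl(command: str):
    # Compile a tiny controlled-English command into a Python callable.
    m = re.fullmatch(r"add (\d+) and (\d+)", command.strip().lower())
    if m:
        a, b = int(m.group(1)), int(m.group(2))
        return lambda: a + b
    raise ValueError("outside the controlled fragment")  # no robustness to free natural language

print(compile_cnl("Add 2 and 3")())           # -> 5
# compile_cnl("please sum two with three")    # would raise: CNLs have limited expressivity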
Max ERC Funding
1 449 375 €
Duration
Start date: 2016-08-01, End date: 2022-07-31
Project acronym PATHWISE
Project Pathwise methods and stochastic calculus in the path towards understanding high-dimensional phenomena
Researcher (PI) Ronen ELDAN
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE1, ERC-2018-STG
Summary Concepts from the theory of high-dimensional phenomena play a role in several areas of mathematics, statistics and computer science. Many results in this theory rely on tools and ideas originating in adjacent fields, such as transportation of measure, semigroup theory and potential theory. In recent years, a new symbiosis with the theory of stochastic calculus is emerging.
In a few recent works, by developing a novel approach of pathwise analysis, my coauthors and I managed to make progress in several central high-dimensional problems. This emerging method relies on the introduction of a stochastic process which allows one to associate quantities and properties related to the high-dimensional object of interest to corresponding notions in stochastic calculus, thus making the former tractable through the analysis of the latter.
We propose to extend this approach towards several long-standing open problems in high dimensional probability and geometry. First, we aim to explore the role of convexity in concentration inequalities, focusing on three central conjectures regarding the distribution of mass on high dimensional convex bodies: the Kannan-Lovász-Simonovits (KLS) conjecture, the variance conjecture and the hyperplane conjecture, as well as emerging connections with quantitative central limit theorems, entropic jumps and stability bounds for the Brunn-Minkowski inequality. Second, we are interested in dimension-free inequalities in Gaussian space and on the Boolean hypercube: isoperimetric and noise-stability inequalities and robustness thereof, transportation-entropy and concentration inequalities, regularization properties of the heat-kernel and L_1 versions of hypercontractivity. Finally, we are interested in developing new methods for the analysis of Gibbs distributions with a mean-field behavior, related to the new theory of nonlinear large deviations, and towards questions regarding interacting particle systems and the analysis of large networks.
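For reference, a hedged recollection (ours) of the first conjecture named above: the Kannan-Lovász-Simonovits conjecture asserts that there is a universal constant $c > 0$ such that every isotropic log-concave probability measure $\mu$ on $\mathbb{R}^n$ satisfies $\mu^{+}(\partial A) \ge c \cdot \min(\mu(A), 1-\mu(A))$ for every Borel set $A$, where $\mu^{+}$ denotes the boundary measure; informally, up to a universal constant, half-spaces are the worst isoperimetric cuts of such measures.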
Max ERC Funding
1 308 188 €
Duration
Start date: 2019-01-01, End date: 2023-12-31
Project acronym PCPABF
Project Challenging Computational Infeasibility: PCP and Boolean functions
Researcher (PI) Shmuel Avraham Safra
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE6, ERC-2018-ADG
Summary Computer Science, in particular Analysis of Algorithms and Computational-Complexity theory, classifies algorithmic problems into feasible ones and those that cannot be solved efficiently. Many fundamental problems were shown to be NP-hard; therefore, unless P=NP, they are infeasible.
Consequently, research efforts shifted towards approximation algorithms, which find close-to-optimal solutions for NP-hard optimization problems.
The PCP Theorem and its application to infeasibility of approximation establish that, unless P=NP, there are no efficient approximation algorithms for numerous classical problems; research that won the authors --the PI included-- the 2001 Godel prize.
To show infeasibility of approximation of some fundamental problems, however, a stronger PCP was postulated in 2002, namely, Khot's Unique-Games Conjecture.
It has transformed our understanding of optimization problems, prompted the development of new tools aimed at refuting it, and motivated sophisticated new techniques aimed at proving it.
Recently Khot, Minzer (a student of the PI) and the PI proved a related conjecture: the 2-to-2-Games conjecture (our paper just won Best Paper award at FOCS'18). In light of that progress, recognized by the community as half the distance towards the Unique-Games conjecture, resolving the Unique-Games conjecture seems much more likely.
A field that plays a crucial role in this progress is Analysis of Boolean-functions.
For the recent breakthrough we had to dive deep into expansion properties of the Grassmann-graph.
The insight was subsequently applied to achieve much awaited progress on fundamental properties of the Johnson-graph.
With the emergence of cloud-computing, cryptocurrency, public-ledger and Blockchain technologies, the PCP methodology has found new and exciting applications.
This framework governs SNARKs, which is a new, emerging technology, and the ZCASH technology on top of Blockchain.
This is a thriving research area, but also an extremely vibrant High-Tech sector.
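For reference, a standard formulation (recalled by us, not quoted from the proposal) of the theorem underlying the above: the PCP Theorem states that $\mathrm{NP} = \mathrm{PCP}[O(\log n), O(1)]$, i.e., every NP statement admits a proof format that a probabilistic verifier can check by tossing $O(\log n)$ coins and reading only $O(1)$ bits of the proof, accepting valid proofs of true statements with certainty and rejecting every purported proof of a false statement with constant probability; hardness of approximation follows by interpreting the verifier's tests as the constraints of an optimization problem.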
Max ERC Funding
2 059 375 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym PCPHDX
Project Probabilistically Checkable Proofs, Agreement Tests, and High Dimensional Expanders
Researcher (PI) Irit DVEER DINUR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2017-COG
Summary PCPs capture a striking local to global phenomenon in which a global object such as an NP witness can be checked using local constraints, and its correctness is guaranteed even if only a fraction of the constraints are satisfied.
PCPs are tightly related to hardness of approximation. The relation is essentially due to the fact that exact optimization problems can be reduced to their approximation counterparts through this local to global connection.
We view this local to global connection as a type of high dimensional expansion, akin to relatively new notions of high dimensional expansion (such as coboundary and cosystolic expansion) that have been introduced in the literature recently. We propose to study PCPs and high dimensional expansion together. We describe a concrete notion of “agreement expansion” and propose a systematic study of this question. We show how progress on agreement expansion questions is directly related to some of the most important open questions in PCPs such as the unique games conjecture, and the problem of constructing linear size PCPs.
We also propose to study the phenomenon of high dimensional expansion more broadly and to investigate its relation and applicability to questions in computational complexity that go beyond PCPs, in particular for hardness amplification and for derandomizing direct product constructions.
Max ERC Funding
1 512 035 €
Duration
Start date: 2018-02-01, End date: 2023-01-31
Project acronym PolSymAGA
Project Polarity and Central-Symmetry in Asymptotic Geometric Analysis
Researcher (PI) Shiri ARTSTEIN
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Consolidator Grant (CoG), PE1, ERC-2017-COG
Summary Asymptotic Geometric Analysis is a relatively new field, the young finite dimensional cousin of Banach Space theory, functional analysis and classical convexity. It concerns the {\em geometric} study of high, but finite, dimensional objects, where the disorder of many parameters and many dimensions is "regularized" by convexity assumptions.
The proposed research is composed of several connected innovative studies in the frontier of Asymptotic Geometric Analysis, pertaining to the deeper understanding of two fundamental notions: Polarity and Central-Symmetry.
While the main drive comes from Asymptotic Convex Geometry, the applications extend throughout many mathematical fields from analysis, probability and symplectic geometry to combinatorics and computer science. The project will concern: The polarity map for functions, functional covering numbers, measures of Symmetry, Godbersen's conjecture, Mahler's conjecture, Minkowski billiard dynamics and caustics.
My research objectives are twofold. First, to progress towards a solution of the open research questions described in the proposal, which I consider to be pivotal in the field, including Mahler's conjecture, Viterbo's conjecture and Godbersen's conjecture. Some of these questions have already been studied intensively, and the solution is yet to be found; progress toward solving them would be of high significance. Secondly, as the studies in this proposal lie at the meeting point of several mathematical fields, and use Asymptotic Geometric Analysis in order to address major questions in other fields, such as Symplectic Geometry and Optimal transport theory, my second goal is to deepen these connections, creating a powerful framework that will lead to a deeper understanding, and to the formulation and resolution of interesting questions that are currently unattainable.
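For reference, a hedged recollection (ours) of one of the conjectures listed above: Mahler's conjecture, in its centrally symmetric form, states that every symmetric convex body $K \subset \mathbb{R}^n$, with polar body $K^{\circ} = \{y : \langle x, y \rangle \le 1 \text{ for all } x \in K\}$, satisfies $\mathrm{vol}(K)\,\mathrm{vol}(K^{\circ}) \ge 4^n/n!$, the value attained by the cube and its polar, the cross-polytope.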
Max ERC Funding
1 514 125 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym ProDIS
Project Provenance for Data-Intensive Systems
Researcher (PI) Daniel Deutch
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary In the context of data-intensive systems, data provenance captures the way in which data is used, combined and manipulated by the system. Provenance information can for instance be used to reveal whether data was illegitimately used, to reason about hypothetical data modifications, to assess the trustworthiness of a computation result, or to explain the rationale underlying the computation.
As data-intensive systems constantly grow in use, in complexity and in the size of data they manipulate, provenance tracking becomes of paramount importance. In its absence, it is next to impossible to follow the flow of data through the system. This in turn is extremely harmful for the quality of results, for enforcing policies, and for the public trust in the systems.
Despite important advancements in research on data provenance, and its possible revolutionary impact, it is unfortunately uncommon for practical data-intensive systems to support provenance tracking. The goal of the proposed research is to develop models, algorithms and tools that facilitate provenance tracking for a wide range of data-intensive systems, that can be applied to large-scale data analytics, allowing to explain and reason about the computation that took place.
Towards this goal, we will address the following main objectives: (1) supporting provenance for modern data analytics frameworks such as data exploration and data science, (2) overcoming the computational overhead incurred by provenance tracking, (3) the development of user-friendly, provenance-based analysis tools and (4) experimental validation based on the development of prototype tools and benchmarks.
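As a toy sketch (ours, in Python) of the basic idea: provenance annotations naming source rows are propagated through a small select-join query, so that every output row can be explained by the input rows that produced it. Real provenance models (for example, semiring provenance) are far more general, and all names below are hypothetical.

orders = [      # (row_id, customer, item)
    ("o1", "alice", "book"),
    ("o2", "bob", "pen"),
]
customers = [   # (row_id, customer, country)
    ("c1", "alice", "IL"),
    ("c2", "bob", "FR"),
]

def join_with_provenance(orders, customers, country):
    # Items ordered by customers from `country`, each tagged with its source row ids.
    out = []
    for oid, cust_o, item in orders:
        for cid, cust_c, ctry in customers:
            if cust_o == cust_c and ctry == country:
                out.append({"item": item, "provenance": {oid, cid}})
    return out

print(join_with_provenance(orders, customers, "IL"))
# -> one row: item 'book', whose provenance is the pair of source rows {'o1', 'c1'}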
Max ERC Funding
1 306 250 €
Duration
Start date: 2018-12-01, End date: 2023-11-30
Project acronym RANDGEOM
Project Random Geometry
Researcher (PI) Asaf Nachmias
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE1, ERC-2015-STG
Summary The objective of this proposal is an investigation of the geometric structure of random spaces that arise in critical models of statistical physics. The proposal is motivated by inspiring yet non-rigorous predictions from the physics community and the models studied are some of the most popular models in contemporary probability theory such as percolation, random planar maps and random walks.
One set of problems are on the topic of random planar maps and quantum gravity, a thriving field on the intersection of probability, statistical physics, combinatorics and complex analysis. Our goal is to develop a rigorous theory of these maps viewed as surfaces (rather than metric spaces) via their circle packing. The circle packing structure was recently used by the PI and Gurel-Gurevich to show that these maps are a.s. recurrent, resolving a major conjecture in this area. Among other consequences, this research will hopefully lead to progress on the most important open problem in this field: a rigorous proof of the mysterious KPZ correspondence, a conjectural formula from the physics literature allowing to compute dimensions of certain random sets in the usual square lattice from the corresponding dimension in the random geometry. Such a program will hopefully lead to the solution of the most central problems in two-dimensional statistical physics, such as finding the typical displacement of the self-avoiding walk, proving conformal invariance for percolation on the square lattice and many others.
Another set of problems is investigating aspects of universality in critical percolation in various high-dimensional graphs. These graphs include lattices in dimension above 6, Cayley graphs of finitely generated non-amenable groups and also finite graphs such as the complete graph, the Hamming hypercube and expanders. It is believed that critical percolation on these graphs is universal in the sense that the resulting percolated clusters exhibit the same mean-field geometry.
Max ERC Funding
1 286 150 €
Duration
Start date: 2016-01-01, End date: 2020-12-31
Project acronym RandomZeroSets
Project Zero sets of random functions
Researcher (PI) Mikhail SODIN
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE1, ERC-2015-AdG
Summary "The proposed research is focused on zero sets of random functions.
This is a rapidly growing area that lies at the crossroads of analysis,
probability theory and mathematical physics. Various instances of zero
sets of random functions have been used to model different phenomena
in quantum chaos, complex analysis, real algebraic geometry, and
theory of random point processes.
The proposal consists of three parts. The first one deals with asymptotic
topology of zero sets of smooth random functions of several real variables.
This can be viewed as a statistical counterpart of the first half of Hilbert's 16th
problem. At the same time, it is closely related to percolation theory.
In the second and third parts, we turn to zero sets of random analytic functions
of one complex variable. The zero sets studied in the second part provide one
of few natural instances of a homogeneous point process with suppressed
fluctuations and strong short-range interactions. These point processes have
many features, which are in striking contrast with the ones of the Poisson point
process. One of these features is the coexistence of different Gaussian scaling
limits for different linear statistics.
The third part deals with zeroes of Taylor series with random and pseudo-random
coefficients. Studying these zero sets should shed light on the relation between
the distribution of coefficients of a Taylor series and the distribution of its zeroes,
which is still ""terra incognita'' of classical complex analysis."
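A toy numerical illustration (ours, in Python) of the objects in the third part: the zeroes of a truncated Taylor series with independent standard complex Gaussian coefficients, computed with NumPy; it is a classical fact that, as the degree grows, such zeroes cluster near the unit circle, which the last line checks empirically.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# Independent standard complex Gaussian coefficients xi_0, ..., xi_n.
xi = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)

zeros = np.roots(xi[::-1])                 # numpy.roots expects the leading coefficient first
radii = np.abs(zeros)
print(len(zeros))                          # n = 200 zeroes
print(np.mean(np.abs(radii - 1.0) < 0.1))  # fraction of zeroes within 0.1 of the unit circle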
Max ERC Funding
1 658 750 €
Duration
Start date: 2016-10-01, End date: 2021-09-30
Project acronym REACT
Project Realizable Advanced Cryptography
Researcher (PI) Zvika BRAKERSKI
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary In a free society, there is persistent tension between utility and privacy. Citizens have the basic right to keep their personal information private. However, sometimes keeping our data private could significantly reduce our ability to use it to benefit ourselves or society. This tension is multiplied many times over in our modern data-driven society, where data is processed by remote algorithms.
State-of-the-art research suggests that new advanced cryptographic primitives can mitigate this tension. These include computing on encrypted data via fully homomorphic encryption, fine-grained access control over encrypted data via attribute-based encryption, and, most recently, general-purpose program obfuscation, which on paper can solve many of cryptography's long-standing problems. However, these primitives are largely either too complicated or not sufficiently well founded to be considered for real-world applications.
Project REACT will apply foundational theoretical study towards removing the barriers between advanced cryptographic primitives and reality. My viewpoint, supported by my prior research success, is that orders-of-magnitude improvements in efficiency and security require foundational theoretical study, rather than a focus on optimizations or heuristics. My projection is that progress in this direction will both allow for realistic future implementations of these primitives, reducing said tension, and contribute to basic cryptographic study by opening new avenues for future research.
To achieve this goal, I will pursue the following objectives: (i) studying the computational complexity of the underlying hardness assumptions, specifically lattice-based ones, to better understand the level of security we can expect of the proposed primitives; (ii) simplifying and extending the LWE/trapdoor paradigm that underlies many of the new primitives, and that I find incomplete; (iii) constructing cryptographic graded encoding schemes and obfuscators.
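For concreteness, objective (ii) refers to the Learning With Errors (LWE) paradigm. The following toy, insecure secret-key encryption of a single bit (my own illustrative sketch, with made-up parameters, not a construction from the proposal) shows the basic LWE encrypt/decrypt pattern: the message is hidden in the high-order part of a noisy inner product.
# Illustrative toy LWE-style secret-key bit encryption (NOT secure; parameters
# are for demonstration only). Decryption recovers the bit because the noise
# is much smaller than q/4.
import random

n, q = 64, 3329            # dimension and modulus (toy choices)
rng = random.Random(0)

def keygen():
    return [rng.randrange(q) for _ in range(n)]          # secret s in Z_q^n

def encrypt(s, m):
    a = [rng.randrange(q) for _ in range(n)]             # uniform vector a
    e = rng.randrange(-4, 5)                             # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q   # = e + m*(q//2) mod q
    return 1 if q // 4 < d < 3 * q // 4 else 0           # closer to q/2 -> bit 1

s = keygen()
assert all(decrypt(s, encrypt(s, m)) == m for m in (0, 1, 1, 0))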
Max ERC Funding
1 493 803 €
Duration
Start date: 2017-10-01, End date: 2022-09-30
Project acronym RMAST
Project Random Models in Arithmetic and Spectral Theory
Researcher (PI) Zeev Rudnick
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), PE1, ERC-2017-ADG
Summary The proposal studies deterministic problems in the spectral theory of the Laplacian and in analytic number theory by using random models. I propose two projects in spectral theory on this theme, both with a strong arithmetic ingredient: the first concerns minimal gaps between the eigenvalues of the Laplacian, where I seek a fit with the corresponding quantities for the various conjectured universality classes (Poisson/GUE/GOE); the second concerns curvature measures of nodal lines of eigenfunctions of the Laplacian, where I seek to determine the size of the curvature measures in the large-eigenvalue limit. The third project originates in analytic number theory and concerns the angular distribution of prime ideals in number fields, function field analogues, and connections with Random Matrix Theory; here I raise new conjectures and problems on a very classical subject and aim to resolve them, at least in the function field setting.
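As a rough illustration of the two universality classes named above (my own sketch, not from the proposal), the snippet below contrasts nearest-neighbour spacings of i.i.d. (Poisson) points with bulk eigenvalue spacings of a random Hermitian (GUE-type) matrix, where level repulsion makes small gaps rare.
# Illustrative sketch: Poisson spacings are exponential, while GUE spacings
# exhibit level repulsion, so very small gaps are much rarer.
import numpy as np

rng = np.random.default_rng(0)
N = 400

# Poisson model: i.i.d. uniform points with unit mean spacing after sorting.
poisson = np.sort(rng.uniform(0, N, N))
gaps_poisson = np.diff(poisson)

# GUE-type model: eigenvalues of a random Hermitian matrix; keep the central
# bulk only and rescale to unit mean spacing (a crude unfolding).
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / 2
eig = np.sort(np.linalg.eigvalsh(H))
gaps_gue = np.diff(eig[N // 4: 3 * N // 4])
gaps_gue /= gaps_gue.mean()

print("fraction of spacings below 0.1:",
      np.mean(gaps_poisson < 0.1), "(Poisson) vs", np.mean(gaps_gue < 0.1), "(GUE)")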
Max ERC Funding
1 840 625 €
Duration
Start date: 2019-02-01, End date: 2024-01-31
Project acronym SensStabComp
Project Sensitivity, Stability, and Computation
Researcher (PI) Gil KALAI
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Call Details Advanced Grant (AdG), PE1, ERC-2018-ADG
Summary Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas.
The first goal of our proposed project is to push forward the mathematical theory of noise stability and noise sensitivity for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is the application of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods.
Our third goal is to find applications toward central long-standing problems in combinatorics, probability and the theory of computing.
The fourth goal of our project is to further develop the "argument against quantum computers", which is based on the insight that noisy intermediate-scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it by studying connections of the theory with various areas of theoretical computer science and with the theory of social choice.
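For readers unfamiliar with the central definition, here is an illustrative Monte Carlo estimate (my own sketch, not from the proposal) of the noise stability of the majority function, Stab_rho(f) = E[f(x) f(y)], where y is obtained from x by flipping each coordinate independently with probability (1 - rho)/2; as the number of bits grows this converges to Sheppard's formula 1 - (2/pi) arccos(rho).
# Illustrative sketch: Monte Carlo estimate of the noise stability of majority.
import numpy as np

rng = np.random.default_rng(0)
n, rho, trials = 1001, 0.8, 5000

x = rng.choice([-1, 1], size=(trials, n))
flip = rng.random((trials, n)) < (1 - rho) / 2
y = np.where(flip, -x, x)

maj = lambda z: np.sign(z.sum(axis=1))     # n is odd, so the sum is never zero
estimate = np.mean(maj(x) * maj(y))
print(estimate, "vs", 1 - 2 / np.pi * np.arccos(rho))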
Max ERC Funding
1 754 500 €
Duration
Start date: 2019-06-01, End date: 2024-05-31
Project acronym SIREN
Project Securing Internet Routing from the Ground Up
Researcher (PI) Michael Schapira
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary The Internet is made up of tens of thousands of smaller networks, called Autonomous Systems (ASes), ranging from multinational corporations to small businesses and schools, e.g., Google, Deutsche Telekom, AT&T, and Hebrew U. Routing between ASes is handled by the Border Gateway Protocol (BGP), which is the glue that holds the Internet together. Alarmingly, despite the Internet's critical societal and economic role, BGP routing is dangerously vulnerable to configuration errors and attacks, and, consequently, every year or so a major Internet outage makes the news.
To remedy BGP’s many security vulnerabilities, researchers and practitioners have invested much effort into designing security solutions for BGP routing. Yet, despite over a decade of Herculean efforts, many technological, political, and economic hurdles hinder, and possibly even prevent, deployment. I argue that the reasons for this are deeply rooted in today’s centralized, top-down, hierarchical paradigm for securing Internet routing. The aim of the planned research project is to put forth and explore a radically new paradigm for securing routing on the Internet. The proposed alternative roadmap for securing the Internet consists of two steps:
1) Jumpstarting BGP security: A novel approach to routing security that bypasses the obstacles facing today’s agenda. Specifically, the proposed design will be flat, decentralized, fully automated, avoid dependency on a single root-of-trust, and not require modifying/replacing legacy BGP routers.
2) A long-term vision for Internet routing: Leveraging the vast computational resources in modern datacenters, and research on Secure Multi-Party Computation, to outsource routing to a small number of entities while retaining flexibility, autonomy and privacy.
I believe that, put together, these can lead to a more secure Internet in the short-run, and outline a promising, yet uncharted, new direction for the future of Internet routing.
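As a toy illustration of the underlying vulnerability (a deliberate oversimplification of BGP, entirely my own and not the project's design): if ASes simply prefer whichever origin of a prefix is reachable in the fewest AS hops, then a false origin announcement attracts traffic from every AS that is closer to it than to the legitimate origin.
# Illustrative sketch of a prefix hijack on a tiny, hypothetical AS topology.
from collections import deque

links = {               # small undirected AS-level graph (made up)
    1: [2, 3], 2: [1, 4], 3: [1, 4, 5], 4: [2, 3, 6], 5: [3, 6], 6: [4, 5],
}

def hops_from(src):
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in links[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

legit, hijacker = 1, 6
d_legit, d_hijack = hops_from(legit), hops_from(hijacker)
diverted = [a for a in links if a not in (legit, hijacker)
            and d_hijack[a] < d_legit[a]]
print("ASes whose traffic is diverted to the hijacker:", diverted)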
Max ERC Funding
1 468 200 €
Duration
Start date: 2016-02-01, End date: 2021-01-31
Project acronym SpeedInfTradeoff
Project Speed-Information Tradeoffs: Beyond Quasi-Entropy Analysis
Researcher (PI) Nir Yosef Ailon
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Call Details Consolidator Grant (CoG), PE6, ERC-2015-CoG
Summary The starting point of this research proposal is a recent result by the PI, making progress on a half-century-old, notoriously open problem. In the mid 1960s, Tukey and Cooley discovered the Fast Fourier Transform, an algorithm for performing one of the most important linear transformations in science and engineering, the (discrete) Fourier transform, in time complexity O(n log n).
In spite of its importance, a super-linear lower bound has been elusive for many years, with only very limited results. Very recently the PI managed to show that, roughly speaking, a faster Fourier transform must result in information loss, in the form of reduced numerical accuracy. The result can be seen as a type of computational uncertainty principle, whereby faster computation increases uncertainty in the data. The mathematical argument is established by defining a type of matrix quasi-entropy, generalizing Shannon's measure of information (entropy) to "quasi-probabilities" (which can be negative, greater than 1, or even complex).
This result, which is not believed to be tight, does not close the book on Fourier complexity. More importantly, the vision proposed by the PI here reaches far beyond Fourier computation. The computation-information tradeoff underlying the result suggests a novel view of complexity theory as a whole. We can now revisit some classic complexity-theoretic problems with a fresh view. Examples of these problems include a better understanding of the complexity of polynomial multiplication, integer multiplication, auto-correlation and cross-correlation computation, dimensionality reduction via the Fast Johnson-Lindenstrauss Transform (FJLT; also discovered and developed by the PI), and large-scale linear algebra (linear regression, Principal Component Analysis - PCA, compressed sensing, matrix multiplication), as well as binary functions such as integer multiplication.
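For reference, the contrast between the O(n^2) definition of the discrete Fourier transform and the O(n log n) FFT can be seen in a few lines (an illustrative sketch of mine; the accuracy question addressed by the result concerns what any sufficiently fast algorithm must give up, which this snippet does not measure).
# Illustrative sketch: the O(n^2) matrix definition of the DFT versus the
# O(n log n) FFT; both compute the same transform.
import time
import numpy as np

def naive_dft(x):
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)   # n x n Fourier matrix
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)

t0 = time.perf_counter(); y_naive = naive_dft(x); t1 = time.perf_counter()
y_fft = np.fft.fft(x); t2 = time.perf_counter()

print("max difference:", np.max(np.abs(y_naive - y_fft)))
print("naive: %.3fs, fft: %.6fs" % (t1 - t0, t2 - t1))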
Max ERC Funding
1 515 801 €
Duration
Start date: 2016-06-01, End date: 2021-05-31
Project acronym SVIS
Project Supervised Verification of Infinite-State Systems
Researcher (PI) Sharon Shoham Buchbinder
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary Modern society relies more and more on computing for managing complex safety-critical tasks (e.g., in medicine, avionics, economy). Correctness of computerized systems is therefore crucial, as incorrect behaviors might lead to disastrous outcomes. Still, correctness is a big open problem without satisfactory theoretical or practical solutions. This has dramatic effects on the security of our lives. The current practice in industry mainly employs testing, which detects bugs but cannot ensure their absence. In contrast, formal verification can provide formal correctness guarantees. Unfortunately, existing formal methods are still limited in their applicability to real-world systems, since most of them are either too automatic, hindered by the undecidability of verification, or too manual, relying on substantial human effort. This proposal will introduce a new methodology of supervised verification, based on which we will develop novel approaches that can be used to formally verify certain realistic systems. This will be achieved by dividing the verification process into tasks that are well suited for automation and tasks that are best done by a human supervisor, and finding a suitable mode of interaction between the human and the machine. Supervised verification represents a conceptual leap as it is centered around automatic procedures but complements them with human guidance; it therefore breaks the classical pattern of automated verification and creates new opportunities, both practical and theoretical. In addition to opening the way for developing tractable verification algorithms, it can be used to prove lower and upper bounds on the asymptotic complexity of verification by explicitly distilling the human's role. The approaches developed by this research will significantly improve system correctness and agility. At the same time, they will reduce the cost of testing, increase developers' productivity, and improve the overall understanding of computerized systems.
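As a toy illustration of the intended division of labor (my own sketch, not the proposal's methodology): the human supervisor supplies a candidate inductive invariant, and the machine checks initiation, consecution, and safety automatically, here by brute force over a small finite-state system.
# Illustrative sketch: automatically checking a human-supplied invariant for a
# tiny transition system over bounded counters.
N = 64
states = [(x, y) for x in range(N) for y in range(N)]
init = lambda s: s == (0, 0)
step = lambda s: ((s[0] + 1) % N, (s[1] + 2) % N)     # the system's transition
safe = lambda s: s[1] % 2 == 0                        # property to verify

inv = lambda s: s[1] == (2 * s[0]) % N                # human-supplied invariant

assert all(inv(s) for s in states if init(s))          # initiation
assert all(inv(step(s)) for s in states if inv(s))     # consecution
assert all(safe(s) for s in states if inv(s))          # invariant implies safety
print("candidate invariant is inductive and implies the property")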
Max ERC Funding
1 499 528 €
Duration
Start date: 2018-04-01, End date: 2023-03-31
Project acronym TheoryDL
Project Practically Relevant Theory of Deep Learning
Researcher (PI) Shai Shalev-shwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Call Details Starting Grant (StG), PE6, ERC-2015-STG
Summary One of the most significant recent developments in applied machine learning has been the resurgence of "deep learning", usually in the form of artificial neural networks. The empirical success of deep learning is stunning, and deep-learning-based systems have already led to breakthroughs in computer vision and speech recognition. In contrast, from the theoretical point of view, by and large we do not understand why deep learning is at all possible, since most state-of-the-art theoretical results show that deep learning is computationally hard.
Bridging this gap is a great challenge, since it requires proficiency in several theoretical fields (algorithms, complexity, and statistics) and, at the same time, a good understanding of real-world practical problems and the ability to conduct applied research. We believe that a good theory must lead to better practical algorithms. It should also broaden the applicability of learning in general, and of deep learning in particular, to new domains. Such a practically relevant theory may also lead to a fundamental paradigm shift in the way we currently analyze the complexity of algorithms.
Previous works by the PI and his colleagues and students have provided novel ways to analyze the computational complexity of learning algorithms and to understand the tradeoffs between data and computational time. In this proposal, in order to bridge the gap between theory and practice, I suggest a departure from worst-case analyses and the development of a more optimistic, data-dependent theory with "grey" components. Success will lead to a breakthrough in our understanding of learning at large, with significant potential for impact on the field of machine learning and its applications.
Max ERC Funding
1 342 500 €
Duration
Start date: 2016-02-01, End date: 2021-01-31
Project acronym THUNDEEP
Project A Theory for Understanding, Designing, and Training Deep Learning Systems
Researcher (PI) Ohad SHAMIR
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Starting Grant (StG), PE6, ERC-2017-STG
Summary The rise of deep learning, in the form of artificial neural networks, has been the most dramatic and important development in machine learning over the past decade. Much more than a merely academic topic, deep learning is currently being widely adopted in industry, placed inside commercial products, and is expected to play a key role in anticipated technological leaps such as autonomous driving and general-purpose artificial intelligence. However, our scientific understanding of deep learning is woefully incomplete. Most methods to design and train these systems are based on rules-of-thumb and heuristics, and there is a drastic theory-practice gap in our understanding of why these systems work in practice. We believe this poses a significant risk to the long-term health of the field, as well as an obstacle to widening the applicability of deep learning beyond what has been achieved with current methods.
Our goal is to tackle head-on this important problem, and develop principled tools for understanding, designing, and training deep learning systems, based on rigorous theoretical results.
Our approach is to focus on three inter-related sources of performance loss in neural network learning: their optimization error (that is, how to train a given network in a computationally efficient manner); their estimation error (how to ensure that training a network on a finite training set will yield good performance on future examples); and their approximation error (how architectural choices of the network affect the type of functions it can compute). For each of these problems, we show how recent advances allow us to effectively approach them, and we describe concrete preliminary results and ideas, which will serve as starting points and indicate the feasibility of this challenging project.
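One standard way to make this three-way split precise (a textbook excess-risk decomposition, stated here only for illustration and not quoted from the proposal) is:
\[
R(\hat{h}) - R^{*}
= \underbrace{R(\hat{h}) - R(\hat{h}_{\mathrm{ERM}})}_{\text{optimization error}}
+ \underbrace{R(\hat{h}_{\mathrm{ERM}}) - \inf_{h \in \mathcal{H}} R(h)}_{\text{estimation error}}
+ \underbrace{\inf_{h \in \mathcal{H}} R(h) - R^{*}}_{\text{approximation error}},
\]
where \(R\) is the population risk, \(R^{*}\) the Bayes risk, \(\mathcal{H}\) the class of functions realizable by the chosen architecture, \(\hat{h}_{\mathrm{ERM}}\) an empirical risk minimizer over \(\mathcal{H}\), and \(\hat{h}\) the network actually returned by the training algorithm.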
Max ERC Funding
1 442 360 €
Duration
Start date: 2018-09-01, End date: 2023-08-31
Project acronym UncertainENV
Project The Power of Randomization in Uncertain Environments
Researcher (PI) Shiri Chechik
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Starting Grant (StG), PE6, ERC-2018-STG
Summary Much of the research on the foundations of graph algorithms is carried out under the assumption that the algorithm has full knowledge of the input data.
In spite of the theoretical appeal and simplicity of this setting, the assumption that the algorithm has full knowledge does not always hold.
Indeed, uncertainty and partial knowledge arise in many settings.
One example is where the data is very large, in which case even reading the entire data once is infeasible, and sampling is required.
Another example is where data changes occur over time (e.g., social networks where information is fluid).
A third example is where processing of the data is distributed over computation nodes, and each node has only local information.
Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge, and it is often used to simplify the algorithm and to speed up its running time.
However, physical computers are deterministic machines, and obtaining true randomness can be a hard task to achieve.
Therefore, a central line of research is focused on the derandomization of algorithms that rely on randomness.
The challenge of derandomization also arises in settings where the algorithm has some degree of uncertainty.
In fact, in many cases of uncertainty the challenge and motivation of derandomization are even stronger.
Randomization by itself adds another layer of uncertainty, because different results may be attained in different runs of the algorithm.
In addition, in many cases of uncertainty randomization comes with additional assumptions on the model itself, and therefore weakens the guarantees of the algorithm.
In this proposal I will investigate the power of randomization in uncertain environments.
I will focus on two fundamental areas of graph algorithms with uncertainty.
The first area relates to dynamic algorithms and the second area concerns distributed graph algorithms.
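As an illustration of the role randomization plays in the second area (a standard Luby-style construction, simulated centrally; my own sketch, not from the proposal): in each round every remaining node draws a random number and joins the maximal independent set (MIS) if its draw beats those of all its remaining neighbours; the winners and their neighbours then drop out.
# Illustrative sketch: centralized simulation of a Luby-style randomized MIS.
import random

def luby_mis(adj, rng):
    alive = set(adj)
    mis = set()
    while alive:
        draw = {v: rng.random() for v in alive}
        winners = {v for v in alive
                   if all(draw[v] > draw[u] for u in adj[v] if u in alive)}
        mis |= winners
        alive -= winners | {u for v in winners for u in adj[v]}
    return mis

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}  # toy graph
mis = luby_mis(adj, random.Random(0))
print("independent:", all(u not in adj[v] for v in mis for u in mis))
print("maximal:", all(v in mis or any(u in mis for u in adj[v]) for v in adj))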
Max ERC Funding
1 500 000 €
Duration
Start date: 2019-10-01, End date: 2024-09-30
Project acronym VERICOMP
Project Foundations of Verifiable Computing
Researcher (PI) Guy ROTHBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Consolidator Grant (CoG), PE6, ERC-2018-COG
Summary Proof systems allow a weak verifier to ascertain the correctness of complex computational statements. Efficiently-verifiable proof systems are fundamental objects in the study of computation, and have led to some of the deepest and most celebrated insights in cryptography and in complexity theory.
The vast and rich literature on proof systems focuses primarily on proving the correctness of intractable statements, e.g. ones that are NP-complete. While the verification can be efficient, the proofs themselves cannot be generated in polynomial time. This limits the applicability of such proof systems, both from a theoretical perspective and in their real-world impact. This proposal aims to obtain a comprehensive understanding of proof systems with polynomial-time proof generation, to explore their practical applicability, and to investigate their connections with foundational questions in cryptography and in complexity theory.
Our study will focus primarily on interactive proof systems for tractable computations. The proposed research aims to revolutionize our understanding of these foundational objects by providing a complete and tight characterization of the complexity of proving and verifying general statements, by achieving breakthroughs in the study of related proof-system notions, such as cryptographic arguments, and by building a fine-grained “algorithmic” theory of proof systems for central polynomial-time computational problems.
Our research will leverage these advances towards diverse applications: from real-world security challenges, such as verifying the correctness of computations performed by the cloud and cryptographic “proofs of work”, to a complexity-theoretic understanding of the complexity of approximating problems in P and of solving them on random instances.
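A classical example in this spirit, included only as an illustration (Freivalds' check, not a result of this project): a verifier can check a claimed matrix product C = A·B in O(n^2) time per repetition with one-sided error, far faster than recomputing the product itself.
# Illustrative sketch: verifying a delegated matrix multiplication. A "cloud"
# claims C = A @ B; the verifier tests A(Br) == Cr for random 0/1 vectors r,
# catching a wrong C with probability at least 1/2 per repetition.
import numpy as np

def freivalds(A, B, C, reps=20, rng=None):
    rng = rng or np.random.default_rng()
    n = C.shape[1]
    for _ in range(reps):
        r = rng.integers(0, 2, size=n)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                     # definitely wrong
    return True                              # wrong with prob. <= 2**-reps

rng = np.random.default_rng(0)
A, B = rng.integers(0, 10, (200, 200)), rng.integers(0, 10, (200, 200))
C = A @ B
print(freivalds(A, B, C, rng=rng))           # honest prover: accepted
C_bad = C.copy(); C_bad[0, 0] += 1
print(freivalds(A, B, C_bad, rng=rng))       # cheating prover: rejected w.h.p.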
Max ERC Funding
1 882 460 €
Duration
Start date: 2019-01-01, End date: 2023-12-31